My Research Vision
You may be familiar with the renowned scientist, or at least with the film, "Oppenheimer". In fact, I believe we are now living through the Oppenheimer period of Artificial Intelligence. Unlike the atomic bomb, whose use remains under human control, foundation models are a genuinely uncontrollable force. As we rely more and more on AI-based applications, we must confront their potential risks. AI may gradually surpass human cognition, and the disconnect between theory and application makes it hard to impose limitations, rendering the technology uncontrollable. Yet this enlightenment stage also presents us with the best opportunity to address these challenges: we can strive to make AI interpretable and controllable. Only when theory and application truly merge can we ensure a positive future for AI and humanity.
While the world is addicted to marveling at the effectiveness of LLMs, most people overlook their risks. Deep learning itself is a black box, and as the box grows larger and more data is thrown in, we can no longer infer from its outputs what it has become. Only a few people are clearly aware of this, and they often lack the theoretical ability to address it (most of those who do have the ability are unwilling to commit sincerely to this long-term effort, because they simply want to profit from foundation models). Meanwhile, there has been plenty of so-called interpretability work, but it is presented through visualization or similar means, which can serve only as motivation rather than as genuine interpretability. The essence of a method's interpretability must be supported by Natural Science!
The ultimate objective of my PhD is to achieve a truly interpretable and controllable Artificial General Intelligence (AGI) framework!
—— 2023.08.30 by Xinyu after watching the film "Oppenheimer"