Research Vision
You may be familiar with the renowned scientist Oppenheimer, or with the film named after him. I believe we are now living through the Oppenheimer period of Artificial Intelligence. Unlike the atomic bomb, whose use remains under human control, foundation models are a genuinely uncontrollable force. As we grow increasingly reliant on AI-based applications, we must confront their potential risks: because theory lags so far behind application, these systems may gradually surpass human cognition, making them hard to constrain and the technology ultimately uncontrollable. Yet this early, formative stage also offers our best opportunity to address these challenges. We can strive to make AI interpretable and controllable. Only when theory and application truly merge can we ensure a positive future for AI and humanity.
While the world is busy marveling at the power of LLMs, most people overlook their risks. Deep learning is itself a black box, and as the box grows larger and we feed it more data, we lose any reliable way to understand its nature. Only a few people are aware of this problem, and they often lack the theoretical tools to address it. Meanwhile, many of those with the necessary expertise are reluctant to commit to the long, difficult work of fundamental research, choosing instead to chase short-term gains from foundation models. Yet the interpretability of any method must ultimately rest on solid principles from the natural sciences; without that foundation, what we call "understanding" is little more than an illusion.
The ultimate objective of my PhD is to achieve a truly interpretable and controllable framework!
—— 2023.08.30 by Xinyu after watching the film "Oppenheimer"