Creating a common language

A lot has changed in the 15 years since Kaiming He was pursuing his PhD.

“When you are doing your PhD, there is a high barrier between different disciplines and subjects, and there was even a notable separation within computer science,” He recalls. “The person sitting next to me could be working on something I couldn’t understand at all.”

In the seven months since he joined the MIT Schwarzman College of Computing as the Douglas Ross (1954) Career Development Professor of Software Technology in the Department of Electrical Engineering and Computer Science, He says he has seen something he believes is “very rare in human scientific history”: a lowering of the barriers between different scientific fields.

“There is no way I could ever fully understand high-energy physics, chemistry, or the frontier of biology research, but now we are seeing something that is helping us break down these walls,” He says, “and that is the creation of a common language that has emerged in AI.”

Building the AI bridge

According to He, this shift began around 2012 with the “deep learning revolution,” the moment when it was recognized that this family of machine-learning methods based on neural networks was so powerful that it could be applied far more broadly.

“At that point, computer vision — helping computers to see and perceive the world as humans do — began to grow very rapidly, because it turns out you can apply the same methodology to many different problems and many different areas,” He explains. “So the computer vision community quickly grew, because these different subfields could now speak a common language and share a common set of tools.”

From there, He says, the trend spread into other areas of computer science, including natural language processing, speech recognition, and robotics, laying the groundwork for ChatGPT and progress toward artificial general intelligence (AGI).

“All of this has happened over the last decade, bringing us to a newly emerging trend that I am really excited about, and that is seeing AI methodologies propagate into other scientific disciplines,” He says.

One of the best-known examples, He notes, is AlphaFold, an AI program developed by Google DeepMind that predicts protein structures.

“It’s a very different scientific discipline, a fundamentally different problem, but people are also using the same set of AI tools, the same methodology, to solve these problems,” He says, “and I think that is just the beginning.”

The future of AI in research

Since arriving at MIT in February 2024, He says he has been talking with professors in nearly every department. On some days he finds himself in conversation with two or more professors from very different fields.

“I certainly don’t fully understand their area of research, but they will introduce some background, and then we can start talking about deep learning, machine learning, [and] neural network models with respect to their problems,” He says. “In this sense, these AI tools are like a common language between these scientific areas: the machine learning tools ‘translate’ their terminology and concepts into terms I can understand, so I can learn about their problems, share my experience, and sometimes propose solutions or opportunities for them to explore.”

Extending AI into other scientific domains holds significant promise, from using video analysis to predict weather and climate to speeding up the research cycle and cutting the costs of developing new drugs.

While AI tools offer clear benefits to He’s colleagues in the sciences, he also points to the reciprocal effect those fields can have, and have had, on the development and advancement of AI itself.

“Scientists provide new problems and challenges that help us keep improving these tools,” He says. “But it is also important to recognize that many of today’s AI tools stem from earlier scientific fields — for example, artificial neural networks were inspired by biological observations, and diffusion models for image generation drew on concepts from physics.”

“Science and AI are not separate subjects. We have been pursuing the same objectives from different viewpoints, and now we are converging.”

And what better place for this convergence to happen than MIT.

“It is not surprising that MIT was able to see this change earlier than many other institutions,” He says. “[The MIT Schwarzman College of Computing] has created an environment that connects different people, letting them sit together, talk, collaborate, and exchange ideas while speaking the same language — and I am seeing this start to happen.”

As for when the barriers will fully come down, He notes that this is a long-term effort that will not happen overnight.

“Decades ago, computers were considered high technology and you needed specialized knowledge to understand them, but now everyone uses a computer,” He says. “I expect that in ten years or so, everyone will be using some kind of AI in some way for their research — it will just be part of their basic tools, their basic language, and they will be able to use AI to solve their problems.”
