The Easiest Way to Achieve Artificial General Intelligence: Coming up with scientific definitions of ambiguous terms like consciousness and sentience can spur progress but mislead the public.

Dear friends,

As I wrote in an earlier letter, whether AI is sentient or conscious is a philosophical question rather than a scientific one, since there is no widely agreed-upon definition of, or test for, these terms. While it is tempting to “solve” this problem by coming up with precise definitions and well-defined tests for whether a system meets them, I worry that poor execution will lead to premature declarations that AI has met such criteria and generate unnecessary hype.

Take the concept of self-awareness, which refers to conscious knowledge of one’s own self. Suppose we define a robot as self-aware if it can recognize itself in the mirror, which seems like a natural test of a robot’s awareness of itself. Given this definition — and given that it’s not very hard to build a robot that recognizes itself — we would be well on the path to hype claiming that AI is now self-aware.

This example isn’t a prediction about the future. It actually happened about 10 years ago, when many media sources breathlessly reported that a robot “Passes Mirror Test, Is Therefore Self-Aware … conclusively proving that robots are intelligent and self-aware.”

While bringing clarity to ambiguous definitions is one way for science to make progress, the practical challenge is that many people already have beliefs about what it means for something to be self-aware, sentient, or conscious, or to have a soul. There isn’t widespread agreement on these terms. For example, do all living things have souls? How about a bacterium or virus?

So even if someone comes up with a reasonable new scientific definition, many people — unaware of the new definition — will still understand the term based on their prior understanding. Then, when media outlets start talking about how AI has met the definition, people won’t recognize that the hype refers to a narrow objective (like a robot recognizing itself in the mirror). Instead, they’ll think that AI has accomplished what they generally associate with words like sentience.

Because of this, I have mixed feelings about attempts to come up with new definitions of artificial general intelligence (AGI). I believe that most people, including me, currently think of AGI as AI that can carry out any intellectual task that a human can. With this definition, I think we’re still at least decades away from AGI. This creates a temptation to define it using a lower bar, which would make it easier to declare success: The easiest way to achieve AGI might be to redefine what the term means! 

Should we work to clarify the meanings of ambiguous terms that relate to intelligence? In some cases, developing a careful definition and getting widespread agreement behind it could set a clear milestone for AI and help move the field forward. But in other cases, I’m satisfied to avoid the risk of unnecessary hype and leave it to the philosophers.

Keep learning!

Andrew

P.S. LLMOps is a rapidly developing field that takes ideas from MLOps (machine learning operations) and specializes them for building and deploying LLM-based applications. In our new course, “LLMOps,” taught by Google Cloud’s Erwin Huizenga, you’ll learn how to use automation and experiment tracking to speed up development. Specifically, you’ll develop an LLMOps pipeline to automate LLM fine-tuning. By building a tuning pipeline and tracking the experiment artifacts — including the parameters, inputs, outputs, and experimental results — you can reduce manual steps in the development process, resulting in a more efficient workflow. Sign up here!
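
The course is built around managed tooling, but the core idea fits in a few lines of plain Python. The sketch below is mine, not from the course, and every name in it is hypothetical: tune_llm stands in for a real fine-tuning call, and track_experiment saves each run’s parameters, outputs, and timestamps as a JSON artifact so that bookkeeping isn’t done by hand.

```python
import json
import time
from pathlib import Path

def tune_llm(base_model, train_data, learning_rate, epochs):
    # Hypothetical stand-in for a real fine-tuning call (for example,
    # a managed tuning job). Returns a placeholder evaluation metric.
    return {"eval_loss": 0.42}

def track_experiment(run_name, params, tune_fn, out_dir="runs"):
    # Run one tuning experiment and write its parameters, outputs,
    # and timestamps to a JSON artifact, so nothing is logged by hand.
    record = {"run": run_name, "params": params, "started": time.time()}
    record["result"] = tune_fn(**params)
    record["finished"] = time.time()
    out = Path(out_dir)
    out.mkdir(exist_ok=True)
    (out / f"{run_name}.json").write_text(json.dumps(record, indent=2))
    return record["result"]

result = track_experiment(
    run_name="tune-v1",
    params={
        "base_model": "my-base-llm",
        "train_data": "train.jsonl",
        "learning_rate": 3e-4,
        "epochs": 3,
    },
    tune_fn=tune_llm,
)
```

Because every run leaves behind the same artifact, comparing experiments becomes a matter of reading files rather than reconstructing what was tried; a real pipeline would swap in an actual tuning API and a proper tracking service, but the workflow is the same.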
