Artificial General Intelligence: Hope or Hype?

Dear friends,

I’ve always thought that building artificial general intelligence — a system that can learn to perform any mental task that a typical human can — is one of the grandest challenges of our time. In fact, nearly 17 years ago, I co-organized a NeurIPS workshop on building human-level AI.

Artificial general intelligence (AGI) was a controversial topic back then and remains so today. But recent progress in self-supervised learning, which learns from unlabeled data, makes me nostalgic for the time when a larger percentage of deep learning researchers — even though it was a very small group — focused on algorithms that might play a role in mimicking how the human brain learns.

Obviously, AGI would have extraordinary value. At the same time, it’s a highly technical topic, which makes it challenging for laypeople, and even experts, to judge which approaches are feasible and worthwhile. Over the years, the combination of AGI’s immense potential value and its technical complexity has tempted entrepreneurs to start businesses on the argument that, even with only a 1 percent chance of success, the company could be enormously valuable. Around a decade ago, this led to a huge amount of hype around AGI, generated sometimes by entrepreneurs promoting their companies and sometimes by business titans who bought into the hype.

Of course, AGI doesn’t exist yet, and there’s no telling if or when it will. The volume of hype around it has made many respectable scientists shy away from talking about it. I’ve seen this in other disciplines as well. For decades, overoptimistic hopes that cold fusion would soon generate cheap, unlimited, safe electricity have been dashed repeatedly. For a time, even responsible scientists risked their reputations simply by talking about it.

The hype around AGI has died down compared to a few years ago. That makes me glad, because it creates a better environment for doing the work required to make progress toward it. I continue to believe that some combination of learning algorithms, likely yet to be invented, will get us there someday. Sometimes I wonder whether scaling up certain existing unsupervised learning algorithms, such as self-taught learning and self-supervised learning, would allow neural networks to learn more complex patterns. Or, to go further out on a limb, scaling up sparse coding algorithms that learn sparse feature representations. I also look forward to a foundation model that can learn rich representations of the world from hundreds of thousands of hours of video.
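To make the sparse coding idea concrete, here is a minimal sketch of inferring a sparse code for a single signal with ISTA (iterative soft-thresholding). Everything in it (the random dictionary D, the penalty lam, the step size, and the iteration count) is an illustrative assumption, not a reference implementation:

```python
import numpy as np

# Sparse coding inference: given a fixed dictionary D, find a code `a`
# minimizing 0.5 * ||x - D @ a||^2 + lam * ||a||_1 via ISTA.
rng = np.random.default_rng(0)
n_features, n_atoms = 64, 256
D = rng.standard_normal((n_features, n_atoms))
D /= np.linalg.norm(D, axis=0)           # unit-norm dictionary atoms
x = rng.standard_normal(n_features)      # one input signal (synthetic)
lam = 0.1                                # sparsity penalty (assumed)

step = 1.0 / np.linalg.norm(D, 2) ** 2   # 1 / Lipschitz constant of the gradient
a = np.zeros(n_atoms)
for _ in range(200):
    grad = D.T @ (D @ a - x)             # gradient of the quadratic term
    z = a - step * grad
    a = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft-threshold

print(f"nonzero coefficients: {np.count_nonzero(a)} of {n_atoms}")
```

The soft-thresholding step is what yields sparsity: most coefficients land exactly at zero, so each input is explained by a handful of dictionary atoms, which is the sparse feature representation mentioned above. In a full system, the dictionary itself would also be learned from unlabeled data rather than drawn at random.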

If you dream of making progress toward AGI yourself, I encourage you to keep dreaming! Maybe some readers of The Batch will one day make significant contributions toward this grand challenge.

Keep learning!

Andrew
