A Smoldering Conflict Flares
The debate between AI symbolism and connectionism, explained

A year-long Twitter feud breathed fresh life into a decades-old argument over AI’s direction.

What happened: Gary Marcus, a New York University professor, author, entrepreneur, and standard bearer of logic-based AI, waged a tireless Twitter campaign to knock deep learning off its pedestal and promote other AI approaches.

Driving the story: Marcus’ incessant tweets reignited an old dispute between so-called symbolists, who insist that rule-based algorithms are crucial to cognition, and connectionists, who believe that wiring together enough neurons with the right loss function is the best available path to machine intelligence. Marcus needled AI practitioners to reacquaint themselves with the symbolist approach lest connectionism’s limitations precipitate a collapse in funding known as an AI winter. The argument prompted sobering assessments of AI’s future and culminated in a live debate on December 23 between Marcus and deep learning pioneer and Université de Montréal professor Yoshua Bengio. The conversation was remarkably civil, and both participants acknowledged the need for collaboration between partisans on both sides.

  • Marcus kicked off his offensive in December 2018 by challenging deep learning proponents over what he termed their “imperialist” attitude. He went on to goad Facebook’s Yann LeCun, a deep learning pioneer, to choose a side: Did he place his faith in pure deep learning, or was there a place for good old-fashioned AI?
  • OpenAI made headlines in October with a hybrid model. Its five-fingered robot hand solved a Rubik’s Cube through a combination of deep reinforcement learning and Kociemba’s algorithm (see the sketch after this list). While Marcus pointed out that Kociemba’s algorithm, not deep learning, computed the solution, others asserted that the robot could have learned this skill with further training.
  • Microsoft stepped into the breach in December with what it calls neurosymbolic AI, a set of model architectures intended to bridge the gap between neural and symbolic representations.
  • As the year drew to a close, the NeurIPS conference highlighted soul searching in the AI community. “All of the models that we have learned how to train are about passing a test or winning a game with a score, [but] so many things that intelligences do aren’t covered by that rubric at all,” Google researcher Blaise Agüera y Arcas stated in a keynote.
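
To make the hybrid concrete, here is a minimal sketch of that division of labor: a symbolic planner (the role Kociemba’s algorithm played in OpenAI’s system) chooses the move sequence, while a learned policy (the part trained with deep reinforcement learning) executes each move. The names plan_moves and LearnedController are hypothetical stand-ins, and the plan is hard-coded so the sketch runs on its own; OpenAI’s actual system is far more involved.

```python
# Illustrative sketch of a symbolic-planner-plus-learned-controller hybrid.
# Names and the dummy plan are placeholders, not OpenAI's code.

from typing import List


def plan_moves(cube_state: str) -> List[str]:
    """Symbolic step: compute a solve sequence for the given cube state.

    In OpenAI's system this role was played by Kociemba's algorithm; a real
    implementation could call a dedicated solver. Here we return a fixed
    dummy plan so the sketch is self-contained.
    """
    return ["R", "U", "R'", "U'"]


class LearnedController:
    """Connectionist step: stand-in for a policy trained with deep RL
    to execute a single face turn with the robot hand."""

    def execute(self, move: str) -> None:
        # A trained policy would map observations to joint commands here.
        print(f"executing move {move} with the learned policy")


def solve(cube_state: str) -> None:
    controller = LearnedController()
    for move in plan_moves(cube_state):   # symbolic planning
        controller.execute(move)          # neural execution


if __name__ == "__main__":
    solve("scrambled-cube-state")
```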

Behind the news: Animosity between the symbolists and connectionists dates back more than half a century. Perceptrons, a 1969 broadside against early neural networks, helped trigger the first AI winter. The second, nearly two decades later, came about partly because symbolic AI relied on Lisp machines that became obsolete with the advent of personal computers. Neural nets began to gain ground in the 1990s and achieved dominance amid the last decade’s explosion of computing power and data.

Where things stand: We look forward to exciting times ahead as connectionists and symbolists put their heads together, or until one faction wipes out the other.
