Neuroevolution, which combines neural networks with ideas drawn from Darwinian evolution, is gaining momentum. Its advocates claim that it can achieve faster, better results by generating a succession of new models, each slightly different from its predecessors, rather than relying on a single purpose-built model.

What’s new: Evolutionary strategies racked up a number of successes in the past year. They contributed to DeepMind’s AlphaStar, which can beat 99.8 percent of StarCraft II players, and to models that bested human experts in the videogames Montezuma’s Revenge and Pitfall. An article in Quanta surveys the field, focusing on neuroevolution pioneer and Uber senior researcher Kenneth Stanley.

How it works: Traditionally, evolutionary approaches have been used to generate algorithms that solve a specific problem or perform best on a particular task. The best solutions are randomly mutated to find variations that improve performance. Neuroevolution applies random mutations to neural network weights and sometimes to activation functions, hyperparameters, or architectures. Good models emerge over many iterations, sometimes crossing traits between distinct behavioral niches.
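As a rough illustration, the loop below evolves the weights of a tiny network in plain Python/NumPy. The XOR task, network size, mutation scale, and selection scheme are illustrative assumptions of ours, not details of any system mentioned in this article.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy task (assumed for illustration): fit XOR with a 2-4-1 network.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 1, 1, 0], dtype=float)
H = 4
N_PARAMS = 2 * H + H + H + 1  # W1 + b1 + W2 + b2, flattened into one genome

def fitness(theta):
    # Unpack the flat genome into network weights and run a forward pass.
    W1 = theta[:2 * H].reshape(2, H)
    b1 = theta[2 * H:3 * H]
    W2 = theta[3 * H:4 * H]
    b2 = theta[-1]
    pred = np.tanh(X @ W1 + b1) @ W2 + b2
    return -np.mean((pred - y) ** 2)  # higher is better

# Evolve: score the population, keep the fittest, refill with mutants.
pop = rng.normal(size=(50, N_PARAMS))
for generation in range(500):
    scores = np.array([fitness(ind) for ind in pop])
    elite = pop[np.argsort(scores)[-10:]]                      # top 10 survive
    mutants = elite[rng.integers(0, 10, size=40)]              # clone parents
    mutants = mutants + 0.1 * rng.normal(size=mutants.shape)   # random mutation
    pop = np.vstack([elite, mutants])

print("best fitness:", max(fitness(ind) for ind in pop))
```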

  • Uber AI Labs developed an algorithm called Paired Open-Ended Trailblazer (POET) and used it to evolve populations of virtual bipedal robots as well as obstacle courses for the robots to master (shown in the animation above). As the bots learned to walk over, say, hills, the algorithm randomly moved them to environments where they encountered trenches. Agent-obstacle pairs are mutated, ranked for fitness and novelty, and then interbred. Ultimately, the agents learn skills they couldn’t learn through direct optimization. A simplified sketch follows this list.
  • DeepMind used evolutionary techniques along with deep learning and reinforcement learning to sharpen AlphaStar’s StarCraft II skills. The researchers bred models not to defeat one another outright but to employ off-kilter tactics and exploit weak points encountered in previous matches. The resulting model proved to be robust against a wide variety of strategies.
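The sketch below gives a schematic, much-simplified version of a POET-style loop. The number-valued agents and environments, the toy fitness function, and the population cap are stand-ins of ours, and the novelty ranking and interbreeding described above are omitted for brevity.

```python
import random

def evaluate(agent, env):
    # Toy stand-in for fitness: the agent (a number) should match the
    # difficulty of its environment (also a number).
    return -abs(agent - env)

def mutate(x, scale=0.5):
    return x + random.gauss(0, scale)

pairs = [(0.0, 0.0)]  # (agent, environment), starting from one easy niche

for step in range(100):
    # Periodically mutate an environment to spawn a new challenge.
    if step % 10 == 0:
        agent, env = random.choice(pairs)
        pairs.append((agent, mutate(env, scale=1.0)))

    # Let each agent improve a little within its paired environment.
    pairs = [(max([agent, mutate(agent)], key=lambda a: evaluate(a, env)), env)
             for agent, env in pairs]

    # Transfer: if an agent from another niche handles this environment
    # better, it takes over the niche, so skills cross between pairs.
    agents = [a for a, _ in pairs]
    pairs = [(max(agents, key=lambda a: evaluate(a, env)), env)
             for _, env in pairs]

    # Keep the population bounded, favoring the fittest pairs.
    pairs = sorted(pairs, key=lambda p: evaluate(*p), reverse=True)[:10]
```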

Yes, but: Evolutionary strategies require huge amounts of computation, even by the power-hungry standards of deep learning. Weights and other variables evolve randomly, so finding good models can take a long time. The random path itself is a drawback. Although researchers may set out to solve one problem, the evolutionary process may lead in other directions before wending its way back to the intended path — if it ever does.

Why it matters: Neuroevolution is a radical departure from the usual way of building and training neural networks and, by some accounts, a useful complement to it. Evolutionary approaches assign a far larger role to randomness, and beneficial random changes can compound over generations to find solutions or generate networks more effective than a human would have designed.

We’re thinking: Randomized search algorithms are a powerful approach to optimization, but their relation to biological evolution has been a subject of debate. With rising computational power and more complex challenges, such algorithms, whether evolutionary or not, may be poised to play a larger role.
