Simulation Substitutes for Data

The future of machine learning may depend less on amassing ground-truth data than on simulating the environments in which models will operate.
What happened: Deep learning works like magic with enough high-quality data. When examples are scarce, though, researchers are using simulation to fill the gap.

Driving the story: In 2019, models trained in simulated environments accomplished feats more complex and varied than previous work in that area. In reinforcement learning, DeepMind’s AlphaStar achieved Grandmaster status in the complex strategy game StarCraft II — able to beat 99.8 percent of human players — through tens of thousands of virtual years competing in a virtual league. OpenAI Five similarly trained a team of five neural nets to best world champions of Dota 2. But those models learned in a virtual world to act in a virtual world. Other researchers transferred skills learned in simulations to the real world.

  • OpenAI’s Dactyl robot hand spent the simulated equivalent of 13,000 years in virtual reality developing the dexterity required to manipulate a Rubik’s Cube. Then it applied those skills to a physical cube, solving the puzzle in 60 percent of tries when unscrambling the colored sides required 15 or fewer twists. Its success rate dropped to 20 percent when solving the puzzle required more moves.
  • Researchers at Caltech trained a recurrent neural network to differentiate overlapping and simultaneous earthquakes by simulating seismic waves rippling across California and Japan and using the simulations as training data.
  • Aurora, the self-driving vehicle startup backed by Amazon, runs hundreds of simulations in parallel to train its models to navigate urban environments. Amazon is training Alexa’s conversational abilities, its delivery drones, and its fulfillment-center robots in similar ways.
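
A key technique behind sim-to-real transfer like Dactyl's is domain randomization: physics parameters are resampled every training episode so the learned policy can't overfit to any one simulated world. The sketch below is purely illustrative; the parameter names and ranges are hypothetical, not OpenAI's actual values.

```python
import random

def sample_episode_params(rng: random.Random) -> dict:
    """Sample fresh physics parameters for one simulated training episode.

    Randomizing quantities like these forces a policy to succeed across
    many slightly different worlds, so it is more likely to cope with the
    one real world. All names and ranges here are made-up examples.
    """
    return {
        "cube_mass_kg": rng.uniform(0.05, 0.15),
        "surface_friction": rng.uniform(0.5, 1.5),   # multiplier on nominal
        "motor_strength": rng.uniform(0.8, 1.2),     # multiplier on nominal
    }

rng = random.Random(0)  # seeded for reproducibility
episodes = [sample_episode_params(rng) for _ in range(1000)]
print(len(episodes), "randomized episodes generated")
```

Each episode the simulator would be reconfigured with a freshly sampled dictionary before the policy is rolled out.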

Where things stand: Simulation environments like Facebook’s AI Habitat, Google’s Behavior Suite for Reinforcement Learning, and OpenAI’s Gym offer resources for mastering tasks like optimizing textile production lines, filling in blank spots in 3D imagery, and detecting objects in noisy environments. On the horizon, models could explore molecular simulations to learn how to design drugs with desired properties.
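
Environments like these typically expose a small reset/step interface: the agent observes a state, acts, and receives a reward from the simulator. The toy below is a hypothetical stand-alone example of that convention, not code from any of the libraries named above.

```python
class ToyLineEnv:
    """A minimal simulated environment with a Gym-style reset/step API.

    An agent starts at position 0 on a number line and must reach `goal`.
    This is an illustrative toy, but the (observation, reward, done, info)
    return shape mirrors the convention OpenAI's Gym popularized.
    """

    def __init__(self, goal: int = 3):
        self.goal = goal
        self.pos = 0

    def reset(self) -> int:
        self.pos = 0
        return self.pos  # initial observation

    def step(self, action: int):
        self.pos += action  # action: +1 (right) or -1 (left)
        done = self.pos == self.goal
        reward = 1.0 if done else 0.0
        return self.pos, reward, done, {}

env = ToyLineEnv(goal=3)
obs = env.reset()
done = False
steps = 0
while not done:
    obs, reward, done, _ = env.step(+1)  # trivial "always move right" policy
    steps += 1
print(steps)  # → 3
```

A learning agent would replace the fixed `+1` policy with one that improves from the reward signal over many simulated episodes, exactly the loop that AlphaStar and OpenAI Five ran at enormous scale.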
