Dear friends,

I’m thrilled to announce the first data-centric AI competition! I invite you to participate.

For decades, model-centric AI competitions, in which the dataset is held fixed while you iterate on the code, have driven our field forward. But deep learning has matured to the point that, for many applications, an open-source model works just fine — if we can prepare the right data to train it. What we urgently need now are methods, tools, and platforms for getting the data we need efficiently and systematically.

This competition, a collaboration between Landing AI and DeepLearning.AI, offers an opportunity to develop methods for improving data.

In the grand tradition of MNIST, the dataset assembled by Yann LeCun and his colleagues that has driven much model-centric progress, this competition will use a new dataset called Roman MNIST. It’s a noisy collection of handwritten Roman numerals, provided as a starting point for building a better dataset for this task.

Can you develop a dataset that results in the best performance on this problem?

The competition will end on September 4, 2021 — the birthday of John McCarthy, who coined the term artificial intelligence. The winners will be invited to join me at a private roundtable event to share ideas about how to grow the data-centric movement, and I will highlight their work here in The Batch.

I’m grateful to Chris Re at Stanford and D Sculley at Google for advising us on this competition, and to everyone who contributed their thoughts on social media.

There will be more data-centric AI competitions in the future. But if you join this one with me, you’ll be able to tell your friends that you were there at the very beginning of the data-centric AI movement! You’ll find further information here.

Keep preparing data!

Andrew

News


Computers Making Computers

A neural network wrote the blueprint for upcoming computer chips that will accelerate deep learning itself.

What’s new: Google engineers used a reinforcement learning system to arrange the billions of minuscule transistors in an upcoming version of its Tensor Processing Unit (TPU) chips optimized for computing neural networks. The system generated the design in six hours rather than the usual span of weeks, as detailed in Nature.

Key insight: Designing a chip is like playing a board game. A silicon wafer’s area resembles the board, components such as macros and the netlist that wires them together resemble the pieces, and evaluation metrics resemble the victory conditions. Reinforcement learning (RL) excels at such challenges: think of DeepMind’s AlphaGo, the RL system that in 2015 became the first computer program to beat a Go master on a full-size board without a handicap.

How it works: Google introduced its approach in a paper published last year.

  • The authors pretrained a graph neural network for 48 hours on a dataset of 10,000 chip designs, generating transferable representations of chips.
  • Although the pretraining was supervised, the loss function was based on RL. The input was the state associated with a given design, and the label was the reward for reduced wire length and congestion.
  • They fine-tuned the system for 6 hours using reinforcement learning, as sketched below.
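
The bullets above describe a two-stage recipe: supervised pretraining of a model that scores placements, followed by RL fine-tuning of a placement policy. Below is a deliberately tiny, self-contained sketch of that recipe on a toy grid. The netlist, reward, and network sizes are invented for illustration; Google’s system uses a graph neural network over real netlists and a richer reward based on wire length and congestion.

```python
# Toy illustration only: the grid, netlist, reward, and networks below are invented
# and far simpler than the graph-neural-network system described in the article.
import torch
import torch.nn as nn

GRID = 8                   # toy 8x8 placement canvas
N_MACROS = 6               # toy number of blocks to place
NETS = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5)]   # which macros are wired together

def reward(positions):
    """Negative total Manhattan wire length over the toy netlist (higher is better)."""
    wire = sum((positions[a] - positions[b]).abs().sum() for a, b in NETS)
    return -wire.float()

# Stage 1: supervised pretraining of a value model that predicts a placement's reward.
value_net = nn.Sequential(nn.Linear(N_MACROS * 2, 64), nn.ReLU(), nn.Linear(64, 1))
opt_v = torch.optim.Adam(value_net.parameters(), lr=1e-3)
for _ in range(2000):
    placement = torch.randint(0, GRID, (N_MACROS, 2))
    loss = (value_net(placement.float().flatten()).squeeze() - reward(placement)) ** 2
    opt_v.zero_grad()
    loss.backward()
    opt_v.step()

# Stage 2: REINFORCE fine-tuning of a policy that places macros one at a time,
# with the pretrained value model serving as a baseline.
policy = nn.Sequential(nn.Linear(N_MACROS * 3, 128), nn.ReLU(), nn.Linear(128, GRID * GRID))
opt_p = torch.optim.Adam(policy.parameters(), lr=1e-3)
for _ in range(3000):
    positions = torch.zeros(N_MACROS, 2, dtype=torch.long)
    log_probs = []
    for i in range(N_MACROS):
        turn = torch.zeros(N_MACROS)
        turn[i] = 1.0                                   # which macro is being placed
        state = torch.cat([positions.float().flatten(), turn])
        dist = torch.distributions.Categorical(logits=policy(state))
        cell = dist.sample()
        positions[i, 0], positions[i, 1] = divmod(int(cell), GRID)
        log_probs.append(dist.log_prob(cell))
    advantage = reward(positions) - value_net(positions.float().flatten()).squeeze().detach()
    opt_p.zero_grad()
    (-advantage * torch.stack(log_probs).sum()).backward()
    opt_p.step()

print("final placement:", positions.tolist(), "reward:", reward(positions).item())
```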

Results: The researchers compared their system’s output to that of a human team who had designed an existing TPU. Their approach completed the task in a fraction of the time, and it either matched or outperformed the human team with respect to chip area, wire length, and power consumption.

Behind the news: Google introduced the first TPU in 2015, and today the chips power Google services like search and translation and are available to developers via Google Cloud. Launched last month, the fourth-generation TPU can train a ResNet-50 on ImageNet in 1.82 minutes.

Why it matters: AI-powered chip design could cut the cost of bespoke chips, leading to an explosion of special-purpose processing for all kinds of uses.

We’re thinking: Reinforcement learning is hot, and we’ve seen companies announce “RL” results that would be described more accurately as supervised learning. But this appears to be a genuine use of RL ideas, and it’s great to see this much-hyped approach used in a valuable commercial application.


Covid-19 over a graph

AI Against Covid: Progress Report

A new report assessed how AI has helped address Covid-19 and where it has fallen short.

What’s new: Machine learning systems haven’t lived up to their promise in some areas, but in others they’ve made a substantial impact, biomedical engineer Maxime Nauwynck wrote in The Gradient, an online journal of machine learning.

Application areas: The author surveyed only systems specifically designed or adapted to fight Covid-19.

  • Clinical Applications: In the pandemic’s early months, hundreds of research papers described systems allegedly capable of diagnosing the illness from lung scans. Few made it into clinical practice. Most were tripped up by poorly constructed public datasets, unexplainable output, or inadequate quality control.
  • Epidemiology: Early AI models were hobbled by lack of data, but public health officials in the U.S. and UK ultimately developed ensemble systems to track the disease’s spread and anticipate its impacts.
  • Treatments: The FDA granted emergency approval to treatments developed by biomedicine startups BenevolentAI and AbCellera. Both companies used AI to aid drug discovery. Moderna credits AI with helping it develop one of the first vaccines with extraordinary speed.
  • Information: Chatbots helped overburdened health workers in China and the U.S. manage the deluge of patient questions, appointment scheduling, and other services.
  • Public Safety: Computer vision systems are helping cities and businesses monitor social distancing. In France, systems detect whether individuals are wearing masks in public places.

Behind the news: AI-powered health monitoring systems from BlueDot and Healthmap made headlines early last year when they reported a novel disease outbreak in the Wuhan area one week before the World Health Organization issued its first warnings.

Why it matters: While AI is no panacea, this inventory makes clear that the technology has made significant contributions to the fight against Covid-19.

We’re thinking: When new technology meets a previously unknown illness, there are bound to be hits and misses. The successes should help us prepare for — or, better yet, avoid — the next contagion.


A MESSAGE FROM DEEPLEARNING.AI


Coming soon! “Machine Learning Modeling Pipelines in Production,” Course 3 in the Machine Learning Engineering for Production (MLOps) Specialization, launches on Coursera on June 30, 2021. Pre-enroll now!


Two images showing the process of turning handwriting into text

The Writing, Not the Doodles

Systems designed to turn handwriting into text typically work best on pages with a consistent layout, such as a single column unbroken by drawings, diagrams, or extraneous symbols. A new system removes that requirement.

What’s new: Sumeet Singh and Sergey Karayev of Turnitin, a company that detects plagiarism, created a general-purpose image-to-sequence model that converts handwriting into text regardless of page layout, ignoring extraneous elements such as sketches, equations, and scratched-out deletions.

Key insight: Handwriting recognition systems typically use separate models to segment pages into blocks of words and turn the writing into text. Neural networks allow an end-to-end approach. Convolutional neural networks are good at processing images, and transformers are good at extracting information from sequences. A CNN can create representations of text in an image, and a transformer can turn those representations into text.
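
To make that insight concrete, here is a minimal sketch of an image-to-sequence model of this kind: a ResNet-34 trunk encodes the page into a grid of feature vectors, and a transformer decoder attends to that grid while emitting characters one at a time. The vocabulary size, layer counts, and other hyperparameters are invented for illustration and are not the authors’ exact architecture.

```python
# Illustrative sketch only: hyperparameters and vocabulary are invented, not the authors' setup.
import torch
import torch.nn as nn
import torchvision

class HandwritingToText(nn.Module):
    def __init__(self, vocab_size=100, d_model=256, max_len=512):
        super().__init__()
        trunk = torchvision.models.resnet34(weights=None)
        self.encoder = nn.Sequential(*list(trunk.children())[:-2])   # drop avgpool and fc
        self.project = nn.Conv2d(512, d_model, kernel_size=1)        # 512 ResNet channels -> d_model
        self.embed = nn.Embedding(vocab_size, d_model)
        self.pos = nn.Embedding(max_len, d_model)
        layer = nn.TransformerDecoderLayer(d_model, nhead=8, batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, num_layers=4)
        self.out = nn.Linear(d_model, vocab_size)

    def forward(self, images, tokens):
        # images: (B, 3, H, W) page scans; tokens: (B, T) previously emitted characters
        feats = self.project(self.encoder(images))             # (B, d_model, H/32, W/32)
        memory = feats.flatten(2).transpose(1, 2)               # (B, H/32 * W/32, d_model)
        positions = torch.arange(tokens.size(1), device=tokens.device)
        tgt = self.embed(tokens) + self.pos(positions)          # (B, T, d_model)
        causal = torch.triu(torch.full((tokens.size(1),) * 2, float("-inf"),
                                       device=tokens.device), diagonal=1)
        decoded = self.decoder(tgt, memory, tgt_mask=causal)    # attend to image features
        return self.out(decoded)                                # (B, T, vocab_size) logits

# Toy forward pass on a fake page and a fake partial transcription.
model = HandwritingToText()
logits = model(torch.randn(2, 3, 256, 256), torch.randint(0, 100, (2, 20)))
print(logits.shape)   # torch.Size([2, 20, 100])
```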

How it works: The system feeds pages through an encoder based on a 34-layer ResNet followed by a transformer-based decoder.

  • The researchers trained the system on five datasets including the IAM database of handwritten forms and Free Form Answers, which comprises scans of STEM-test answers including equations, tables, and drawings.
  • They augmented IAM by collaging words and lines at random and generated synthetic data by superimposing text from Wikipedia in various fonts and sizes on different background colors. In addition, they augmented examples by adding noise and changing brightness, contrast, scale, and rotation at random, as in the sketch below.
  • The data didn’t include labels for sketches, equations, and scratched-out deletions, so the system learned to ignore them. The variety of layouts encouraged the system to learn to transcribe text regardless of other elements.
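
As a rough illustration of the augmentation steps above, the sketch below renders synthetic text onto a randomly colored background, then randomly perturbs brightness, contrast, scale, and rotation and adds noise. The specific parameters and the Pillow-based renderer are guesses for illustration, not the authors’ pipeline.

```python
# Illustrative sketch only: parameters and the Pillow-based renderer are invented.
import random
import torch
from PIL import Image, ImageDraw, ImageFont
import torchvision.transforms as T

def synthesize_page(text, size=(512, 256)):
    """Render text on a randomly colored background as a stand-in for a real scan."""
    background = tuple(random.randint(180, 255) for _ in range(3))
    page = Image.new("RGB", size, background)
    draw = ImageDraw.Draw(page)
    draw.text((random.randint(5, 50), random.randint(5, 50)), text,
              fill=(0, 0, 0), font=ImageFont.load_default())
    return page

augment = T.Compose([
    T.ColorJitter(brightness=0.4, contrast=0.4),                        # lighting variation
    T.RandomAffine(degrees=5, scale=(0.8, 1.2)),                        # small rotation and rescale
    T.ToTensor(),
    T.Lambda(lambda x: (x + 0.05 * torch.randn_like(x)).clamp(0, 1)),   # additive noise
])

sample = augment(synthesize_page("the quick brown fox jumps over the lazy dog"))
print(sample.shape)   # torch.Size([3, 256, 512])
```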

Results: On IAM, the authors’ system achieved a character error rate of 6.3 percent, while an LSTM designed for 2D input achieved 7.9 percent. On Free Form Answers, it achieved a character error rate of 7.6 percent. Among Microsoft’s Cognitive Services, Google’s Cloud Vision, and Mathpix, the best achieved 14.4 percent.

Why it matters: End-to-end approaches to deep learning have been overhyped. But, given the large amount of data, including easily synthesized data, available for handwriting recognition, this task is an excellent candidate for end-to-end learning.

We’re thinking: But can it decipher your doctor’s scrawl?


A self-riding bicycle

A Bicycle Built for Zero

Self-driving cars, get ready to share the road with self-riding bikes.

What’s new: Beijing-based machine learning researcher Zhihui Peng built a riderless bike that stays upright, navigates, and avoids collisions, Synced Review reported. You can watch Peng’s video presentation here.

How he did it: Peng calls his design eXtremely Unnatural Auto-Navigation (Xuan).

  • The bike’s sensors include a depth-sensing camera, lidar, and accelerometer. Battery-powered motors keep it rolling, turn the handlebars, and spin a gyroscope that maintains its balance.
  • Obstacle avoidance, path planning, and object-following models run on a Huawei Ascend 310 processor mounted behind the seat. Peng developed them using Huawei’s Ascend software stack and used Robot Operating System (ROS) to manage communications between the bike’s subsystems, as in the sketch below.
  • The bike steered itself through several tests. It remained balanced even when it hit another object and when Peng put a bag on its handlebars.
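
For readers curious how ROS ties subsystems like these together, here is a minimal, hypothetical publish/subscribe sketch: one node publishes steering-angle commands while listening for obstacle reports from another subsystem. Topic names, message types, and rates are invented; the project’s actual node layout isn’t described at this level of detail.

```python
# Hypothetical sketch: topic names, message types, and rates are invented.
import rospy
from std_msgs.msg import Float32, String

def on_obstacle(msg):
    # A real path planner would react here; this demo just logs the report.
    rospy.loginfo("obstacle report: %s", msg.data)

def main():
    rospy.init_node("bike_steering_demo")
    steer_pub = rospy.Publisher("/bike/steering_angle", Float32, queue_size=10)
    rospy.Subscriber("/bike/obstacles", String, on_obstacle)
    rate = rospy.Rate(20)                     # 20 Hz control loop
    while not rospy.is_shutdown():
        steer_pub.publish(Float32(data=0.0))  # hold the handlebars straight
        rate.sleep()

if __name__ == "__main__":
    main()
```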

Behind the news: Peng was inspired by a 2016 April Fools’ Day prank played by Google. In a video that announced “Google’s Self-Driving Bike,” the company made it appear as though a two-wheeler had driven itself through the streets of Amsterdam.

Why it matters: Self-driving bikes aren’t necessarily a joke. A self-driving motorcycle helped attract attention to the 2004 Darpa Grand Challenge, which kick-started the current self-driving movement. Peng’s contraption is a DIY project, but it may prefigure summonable e-bikes, autonomous food deliveries, or steering control for long-distance cyclists who need a break from the handlebars.

We’re thinking: We look forward to the self-pedaling unicycle.


Subscribe to The Batch

Stay updated with weekly AI News and Insights delivered to your inbox