Not Your Father’s GPU

Intel released two chips designed for AI.

Intel, which dominates the market for general-purpose processors, is shipping its long-awaited AI chips.

What happened: The chip giant announced that two so-called neural network processors are available to data-center customers.

How they work: One of the new chips is intended for training deep learning models, the other for inference. They’re designed to balance computational horsepower, communications speed, and memory capacity.

  • The NNP-T1000, also called Spring Crest, takes on the Nvidia GPUs that process many AI training workloads. The new chip focuses on matrix multiplication, linear algebra, and convolution, the operations at the heart of training (see the first sketch after this list). It’s designed to scale out efficiently from small clusters to supercomputers and comes in the form of a card that plugs into a rack.
  • The NNP-I1000, also known as Spring Hill, is a modified version of Intel’s latest 10th-generation Core design that trades parts of that architecture for specialized inference engines. Running ResNet-50, it scores competitively on the MLPerf benchmark against Nvidia’s T4 inference chip (the second sketch after this list gives a rough feel for that kind of test). It comes in the form of a sleeve that plugs into a general-purpose server.
  • At a separate event, Intel announced its first data-center GPU, known as Ponte Vecchio, scheduled for delivery in 2021 — a direct shot at Nvidia’s market.
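
To make the training workload concrete, here’s a minimal PyTorch sketch (our own illustration, not Intel’s code) of the matrix multiplication and convolution operations that chips like the NNP-T are built to accelerate:

```python
import torch

# Illustrative only: the core operations a deep learning training
# accelerator is designed to speed up.
x = torch.randn(64, 512)                       # a batch of activations
w = torch.randn(512, 256, requires_grad=True)  # a dense layer's weights
y = x @ w                                      # matrix multiplication

imgs = torch.randn(64, 3, 224, 224)            # a batch of images
conv = torch.nn.Conv2d(3, 64, kernel_size=7, stride=2, padding=3)
feats = conv(imgs)                             # convolution

# The backward pass that drives training is dominated by the same
# kinds of operations.
loss = y.sum() + feats.sum()
loss.backward()
```

Training at scale means running operations like these, forward and backward, over enormous batches and many chips at once, which is why scaling out efficiently matters.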
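
The inference comparison comes from benchmark numbers. Here’s a rough, hypothetical timing loop in the spirit of an MLPerf-style ResNet-50 test; the real benchmark uses a standardized harness, trained weights, and accuracy targets, so treat this only as a sketch of what such a test exercises:

```python
import time
import torch
from torchvision.models import resnet50

model = resnet50().eval()              # random weights are fine for raw timing
batch = torch.randn(8, 3, 224, 224)    # batch size 8 is an arbitrary choice

with torch.no_grad():
    for _ in range(3):                 # warm-up runs before measuring
        model(batch)
    runs = 10
    start = time.perf_counter()
    for _ in range(runs):
        model(batch)
    elapsed = time.perf_counter() - start

print(f"throughput: {runs * batch.shape[0] / elapsed:.1f} images/sec")
```

Throughput and latency figures measured under controlled conditions like these are what make cross-chip comparisons such as Spring Hill versus the T4 meaningful.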

Behind the news: While Intel chips process most AI inference in data centers, Nvidia leads in the GPUs that accelerate AI training. In 2016, Intel acquired Nervana, the startup whose designs underpin the new NNP line. Since then, the field has become crowded: specialized designs have proliferated at startups like Cerebras and tech giants like Google, while Qualcomm has been building inference capability into chips for low-power devices like smartphones.

Why it matters: There’s no such thing as too much processing power for machine learning. The faster we can train models, the more data we can absorb and the faster we can develop new network architectures and applications. And the faster users can run our models, the more value we can deliver. Chip makers recognize that AI is the future: neural networks’ voracious appetite for processing power will likely drive silicon sales for years.

We’re thinking: Large cloud providers are consolidating computation, and that’s having a big impact on the chip business. Their concentrated buying power puts them in a strong position to demand lower prices. They also want alternative sources of deep learning chips, so they’ll buy from several vendors rather than relying on the market leader alone. All this is playing out against a backdrop of rapidly growing AI workloads. Expect intense competition in the years ahead.
