Dear friends,

I’m writing this in Orlando, Florida, where I just spoke at the A3 Business Forum, a group that works to advance industrial automation through AI, robotics, and other tools. This was my first large conference since the pandemic started, and it was good to get out and meet more people (taking appropriate health precautions, of course).

I was heartened by the number of AI people at A3. I met entrepreneurs working on computer vision systems for warehouse logistics (for example, finding and moving packages automatically), automated inspection (which I spoke about), controlling fleets of mobile robots, and building factory simulations.

Some trends that I took away from the conference:

  • Many attendees observed that manufacturing and industrial automation are still in an early phase of adopting cloud computing and AI, and the number of viable use cases is still small but growing.
  • Several CEOs commented on the high cost of customizing systems for different environments and seemed to be considering vertical platforms — where the customer does the customization — as a promising solution.
  • Some executives in manufacturing and AI told me about overhyped AI applications that had failed and poisoned the well for other teams now trying to follow. This speaks to the importance of avoiding hype.
  • The supply-chain disruptions you read about in the news are real! I heard many stories about nearly finished products that would have shipped months ago if they weren’t missing a part. It made me feel grateful that, in the software world, we can easily supply as many copies as a customer wishes to purchase.

I was pleased to find, in an audience of manufacturing professionals, many learners taking online AI courses. On the flip side, I’m enjoying the opportunity to learn the lingo and techniques of industrial automation. And there is much for all of us to learn! For example, despite having developed and implemented sophisticated computer vision algorithms, many AI practitioners don’t yet appreciate the importance of imaging system design — to make sure your image data is of high quality — as part of building a practical system.

Applied AI is inherently interdisciplinary. Melonee Wise, an old friend and roboticist who recently sold her company Fetch Robotics, gave me permission to share that her biggest regret was taking too long to bring in someone with warehouse experience. Let’s approach our work with an awareness that knowledge of other fields is critical to building useful systems. Stay curious and . . .

Keep learning!

Andrew

News

Chips at Risk

The hardware that runs the latest AI systems faces rising uncertainty as models grow larger and more computationally intensive.

What’s new: The U.S. Commerce Department sounded an alarm over bottlenecks in the availability of semiconductor chips, the integrated circuits at the heart of virtually all digital devices. The supply of advanced microprocessors that drive cutting-edge AI is vulnerable, The New York Times reported.

How it works: Geopolitical tensions, rising costs, and supply-chain disruptions threaten the supply of AI chips.

  • Geopolitical tensions. Amid friction over trade, security, and dominance in high-tech, the U.S. has hobbled China’s ability to manufacture chips. In recent years, the U.S. has restricted trade with companies that make crucial chip-fabrication tools. A new round of U.S. sanctions targets China’s effort to build its own manufacturing equipment. Meanwhile, China is asserting its sovereignty over Taiwan, home of Taiwan Semiconductor Manufacturing Company (TSMC), which manufactures AI chips for Amazon, Google, and Nvidia as well as chip-design startups like Cerebras and Graphcore.
  • Rising costs. Expanding the capacity to make such chips is extraordinarily expensive. A plant under construction by U.S. chip leader Intel may cost as much as $100 billion. Last year, TSMC raised its prices for advanced chips by 10 percent, the largest such price hike in a decade.
  • Supply-chain disruptions. A recent government report found that, while the Covid-19 pandemic drove up demand for semiconductors, a panoply of disasters — including blackouts, fires, shutdowns, and storms — curtailed supply. U.S. lawmakers are pushing legislation that would fund U.S.-based manufacturing plants such as Intel’s and other measures intended to boost the national semiconductor industry, such as easing immigration rules.

Why it matters: So far, the post-pandemic semiconductor shortage mostly has affected chips that rely on older manufacturing methods, such as those used in automobiles, medical devices, radio-frequency identification, and optical sensors. As AI grows ever more hungry for processing power, a sustained shortage of advanced chips could be a significant barrier to progress in the field and beyond.
We’re thinking: International cooperation generally fosters prosperity. In AI, it's essential to progress.


A Kinder, Gentler Language Model

OpenAI unveiled a more reliable successor to its GPT-3 natural language model.

What’s new: InstructGPT is a version of GPT-3 fine-tuned to minimize harmful, untruthful, and biased output. It's available via an application programming interface at a cost between $0.0008 and $0.06 per thousand tokens depending on speed and capability.
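At those rates, generating or processing one million tokens would cost roughly $0.80 to $60 depending on the model tier, a back-of-the-envelope figure based on the published per-thousand-token prices.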

How it works: The developers improved the quality of GPT-3’s output using a combination of supervised learning and reinforcement learning from human feedback, in which humans rank a model’s potential outputs and a reinforcement learning algorithm rewards the model for producing material similar to high-ranking outputs.

  • The training dataset began with prompts created by hired contractors, some of them based on input from GPT-3 users, such as “tell me a story about a frog” or “explain the moon landing to a six-year-old in a few sentences.” The developers split the prompts into three sets and created responses for each set in a different way.
  • Human writers wrote responses to the first set of prompts. The developers fine-tuned a pretrained GPT-3, which would become InstructGPT, to generate the human-written response to each prompt.
  • The next step was to train a model to assign higher rewards to better responses. Given the second set of prompts, the fine-tuned model generated multiple responses, which human raters ranked. Given a prompt and two of its responses, a reward model (another pretrained GPT-3) learned to compute a higher reward for the higher-ranked response and a lower reward for the other.
  • The developers used the third set of prompts to further fine-tune the language model using the reinforcement learning method Proximal Policy Optimization (PPO). Given a prompt, the language model generated a response, and the reward model granted a commensurate reward. PPO used the rewards to update the language model, as in the sketch below.
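
To make these two learning stages concrete, here is a minimal PyTorch-style sketch. It is not OpenAI’s code: `reward_model` and `advantage` are hypothetical stand-ins, the ranking loss follows the standard pairwise formulation, and PPO is reduced to its clipped surrogate objective.

```python
# Illustrative sketch of the two learned objectives described above (not OpenAI's code).
import torch
import torch.nn.functional as F

def reward_model_loss(reward_model, prompt, preferred, rejected):
    """Reward modeling: score the human-preferred response above the
    less-preferred one via a pairwise ranking loss."""
    r_preferred = reward_model(prompt, preferred)   # scalar score
    r_rejected = reward_model(prompt, rejected)     # scalar score
    return -F.logsigmoid(r_preferred - r_rejected).mean()

def ppo_clipped_loss(new_logprob, old_logprob, advantage, eps=0.2):
    """PPO fine-tuning: the clipped surrogate loss. Here `advantage` is
    assumed to be derived from the reward model's score for a generated response."""
    ratio = torch.exp(new_logprob - old_logprob)    # policy probability ratio
    unclipped = ratio * advantage
    clipped = torch.clamp(ratio, 1.0 - eps, 1.0 + eps) * advantage
    return -torch.min(unclipped, clipped).mean()    # minimize the negative objective
```

The published approach includes additional terms, such as a penalty that keeps the fine-tuned model from drifting too far from the original, which are omitted here for brevity.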

Results: InstructGPT outperformed GPT-3 on TruthfulQA, which measures how often a model answers questions truthfully, 0.413 to 0.224 (higher is better). It also beat GPT-3 on RealToxicityPrompts, which tests a model’s propensity to produce toxic language, 0.196 to 0.233 (lower is better). Contractors rated InstructGPT’s output higher than GPT-3’s even though InstructGPT has only 1.3 billion parameters, over 100 times fewer than GPT-3’s 175 billion.

Behind the news: GPT-3’s training dataset, in particular the massive quantities of text scraped from the web, has been linked to output that stereotypes certain social groups, denigrates women, and encourages self-harm. OpenAI previously tried to detoxify GPT-3 by fine-tuning it on PALMS, a dataset curated according to measures of human rights and human equality.

Why it matters: OpenAI’s language models have powered educational tools, virtual therapists, writing aids, role-playing games, and much more. Social biases, misinformation, and toxicity in such contexts are unhelpful at best, harmful at worst. A system that avoids such flaws is likely to be both less dangerous and more useful.

We’re thinking: Makers of foundation models, general-purpose models that can be fine-tuned for specialized applications, have a special responsibility to make sure their work doesn’t contain flaws that proliferate in fine-tuned versions. OpenAI’s ongoing effort to improve GPT-3 is a hopeful sign that the AI industry can manage such models responsibly.


A MESSAGE FROM DEEPLEARNING.AI

Join us on February 16, 2022, for a live, interactive session! Learn the top skills needed for a career in machine learning and artificial intelligence. Find out how to transition your career into these areas.


Fake Faces Are Good Training Data

Collecting and annotating a dataset of facial portraits is a big job. New research shows that synthetic data can work just as well.

What's new: A team led by Erroll Wood and Tadas Baltrušaitis at Microsoft used a 3D model to generate an effective training set for face parsing, the task of recognizing facial features pixel by pixel. The FaceSynthetics dataset comprises 100,000 diverse synthetic portraits in which each pixel is labeled according to the part of the face it belongs to.

Key insight: Face datasets annotated with facial features are expensive and time-consuming to build. Beyond the ethical issues that arise in collecting pictures of people, they require that every pixel of every image be labeled. Creating high-quality synthetic images can be similarly difficult, since a digital artist must design each face individually. A controllable 3D model can ease the burden of producing and labeling realistic portraits.

How it works: The authors used a high-quality 3D model of a face, comprising over 7,000 polygons and vertices as well as four joints, that changes shape depending on parameters defining a unique identity, expression, and pose. They fit the model to the average face derived from 500 scans of people with diverse backgrounds.

  • Given the average face, the authors derived the identity, pose, and expression from each of the 500 scans. They added further expressions from a dataset of 27,000 expression parameters. Meanwhile, artists produced a library of skin textures, facial expressions, facial hair, clothing, and accessories.
  • To create novel faces, the authors fit a distribution to match that of the real-world identity parameters and sampled from it. Then they applied elements from the library to render 100,000 face images.
  • They trained a U-Net encoder-decoder to classify each pixel as belonging to the right or left eye, right or left eyebrow, top or bottom lip, head or facial hair, neck, eyeglasses, and so on. The loss function minimized the difference between predicted and ground-truth labels.
  • Given real-life faces from the Helen dataset, the authors used the U-Net to classify each pixel. Then, given the U-Net's output, they trained a second U-Net to transform the predicted classifications to better match the human labels. This label adaptation step helped the system’s output match biases in the human-annotated test data (for example, where a nose ends and the rest of the face begins). A simplified sketch of both training stages appears after this list.
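
To illustrate the pipeline’s two training stages, here is a rough PyTorch sketch. The `unet` and `adapter` modules, the class count, and the tensor shapes are assumptions for illustration, not the authors’ implementation.

```python
# Rough sketch of face parsing plus label adaptation (hypothetical models and shapes).
import torch
import torch.nn.functional as F

NUM_CLASSES = 19  # assumed number of face-part classes (eyes, brows, lips, hair, ...)

def parsing_step(unet, synthetic_images, synthetic_labels, optimizer):
    """Stage 1: train a U-Net on synthetic portraits, where every pixel's
    class label comes for free from the 3D rendering pipeline."""
    logits = unet(synthetic_images)                    # (B, NUM_CLASSES, H, W)
    loss = F.cross_entropy(logits, synthetic_labels)   # labels: (B, H, W) class indices
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

def label_adaptation_step(adapter, frozen_unet, real_images, human_labels, optimizer):
    """Stage 2: a second U-Net maps the first network's predictions toward the
    conventions of human annotators on real photos (e.g., where the nose ends)."""
    with torch.no_grad():
        predicted = frozen_unet(real_images).softmax(dim=1)  # soft per-pixel classes
    adapted_logits = adapter(predicted)                # (B, NUM_CLASSES, H, W)
    loss = F.cross_entropy(adapted_logits, human_labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```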

Results: The authors compared their system to a U-Net trained using images in Helen. Their system recognized the part of the face each pixel belonged to with an overall F1 score (a number between 0 and 1 that represents the balance between precision and recall, higher is better) of 0.920. The comparison model scored 0.916. This result fell somewhat short of the state of the art, EAGRNet, which achieved an F1 score of 0.932 in the same task.
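
For reference, F1 is the harmonic mean of precision and recall, so it is high only when both are high. A quick illustration (the numbers are made up, not from the paper):

```python
def f1(precision, recall):
    """Harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

print(f1(0.95, 0.89))  # 0.919 -- rewards a balance between precision and recall
```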

Why it matters: Synthetic data is handy when the real thing is hard to come by. Beyond photorealistic, annotated faces, the authors’ method can produce similarly high-quality ultraviolet and depth images. It can also generate and label images outside the usual data distribution in a controllable way.

We're thinking: The authors generated an impressive diversity of realistic faces and expressions, but hair, clothing, and accessories were limited to a library of 512 discrete hairstyles, 30 items of clothing, and 54 accessories. We look forward to work that enables a 3D model to render these features as well.


Roadblocks to Regulation

Most U.S. state agencies use AI without limits or oversight. An investigative report probed reasons why efforts to rein them in have made little headway.

What’s new: Since 2018, nearly every proposed bill aimed at studying or controlling how state agencies use automated decision systems, or ADS, has failed to be enacted, according to The Markup, a nonprofit investigative tech-journalism site. Insiders blame big tech.

Why it hasn’t happened: Reporters interviewed lawmakers and lobbyists about dozens of stalled bills. They found that bureaucracy and lobbying have played major roles in blocking legislation.

  • Bureaucratic roadblocks: Lawmakers reported difficulty finding out from government agencies which AI tools they were using and how. This is partly because the agencies declined to cooperate and partly because lawmakers don’t understand the technology well enough to probe the full range of potential uses.
  • Industry resistance: Tech companies and their lobbyists have stymied passage of bills by arguing that their provisions are overly broad and would affect non-AI systems like traffic light cameras, DNA tests, and gunshot analysis. In California, an alliance of 26 tech groups derailed a bill that would have required contractors to submit an impact report when making a bid. They argued that the legislation would limit participation, discourage innovation, and raise costs for taxpayers.

Behind the news: Although U.S. states are mostly free to use AI, several of them impose limits on private companies.

  • Last year, New York City passed a law that requires private employers to audit automated hiring systems for gender and racial bias before putting them to use. The law, which goes into effect in 2023, also requires employers to notify candidates when they automate hiring and to offer an alternative.
  • A Colorado law set to take effect in 2023 will ban insurance companies from using algorithms that discriminate against potential customers based on factors including age, race, and religion. The law also establishes a framework for evaluating whether an insurance algorithm is biased.
  • Last year, Facebook paid $650 million to Illinois residents to settle claims that it had violated a 2008 state law that limits how companies can obtain and use personal information, in this case image data used by its now-defunct face recognition feature.

Why it matters: China, the European Union, and the United Kingdom have announced laws designed to rein in AI’s influence in business, society, and other domains. The lack of such limits makes the U.S. an outlier. On one hand, this leaves the authorities free to experiment and perhaps discover productive use cases. On the other, it invites abuse, or simply a lack of quality control over a technology that has great potential for both good and ill.

We’re thinking: Regulation done badly is a drag on progress. Done right, though, it can prevent harm, level the playing field for innovators, and ensure that benefits are widespread. The AI community should push back against special interests that stymie regulation that would be good for society, even when we stand to profit.
