Dear friends,

On Halloween, the veil lifts between the spirit and AI worlds, allowing the two to pass through one another. The resulting paranormal — or, as AI practitioners call it, paragaussian — phenomena raise questions like these:

What do you call it when it takes repeated practice to make a scary jack-o’-lantern?
A learning carve.

Responsible AI requires being candid about what it can do. Who’s the best person to help with this?
Dr. Frank-enstein.

[Illustration: Andrew Ng dressed as a panda holding a bag full of candy]

The ghost of a machine learning engineer visited a museum and defaced all the paintings. Why?

She was implementing image wreck-ognition.

On Halloween night, when kids in costume go from house to house and only get unpopped popcorn, what do you call it?

Kernel trick, or treat.

Keep spooking!
Andrew

P.S. When my daughter Nova was six months old, I bought her a panda stuffed animal. She liked it, and after many panda-related requests, guess what my Halloween costume is? The lesson for me is: Be careful what presents you give, lest they lead to panda-monium.

Be Very Afraid . . .

Something Wicked This Way Comes

The days grow short, trees shed their leaves, and shadows loom in the failing light. Halloween is upon us, and once again we’re beset by thoughts that all is not well in our world. We sense, lurking in the dusk, the presence of weaponized drones that attack of their own volition, disease-carrying models that breed like rats, algorithms that drive people mad with power. Let us step boldly into the darkness and lift a flaming PyTorch to light the way.


Don’t Be Evil

Tech companies generally try to be (or to appear to be) socially responsible. Would some rather let AI’s negative impacts slide?

The fear: Companies with the know-how to apply AI at scale dominate the information economy. This gives them an overpowering incentive to release harmful products and services, jettison internal checks and balances, buy or lie their way out of regulations, and ignore the trail of damage in their wake.

Horror stories: When you move fast and break things, things get broken.

  • Documents leaked by a former Facebook product manager have prompted scrutiny from the company’s oversight board and government officials. The leaks reveal, among other things, that the social network’s XCheck program exempts many politicians, celebrities, and journalists from its content moderation policies, enabling them to spread misinformation and incitements to violence with impunity.
  • Google parted acrimoniously with Timnit Gebru, former co-lead of its Ethical AI team, after she produced research critical of the company’s natural language models. Soon afterward, it fired her colleague Margaret Mitchell. Observers have said the company’s ethical AI effort is “in limbo.”
  • Tesla, whose self-driving features have been implicated in numerous accidents, is recruiting beta testers for its next-generation software. Applicants must allow the company to monitor their driving, and the company says it accepts only drivers who demonstrate perfect safety — but Twitter posts revealed that it accepted a low-scoring investor. The U.S. National Highway Traffic Safety Administration has opened an investigation into the software’s role in 11 crashes with emergency vehicles.

Is a corporate dystopia inevitable? So far, most government moves to regulate AI have been more bark than bite.

  • The European Union proposed tiers of restriction based on how much risk an algorithm poses to society. But critics say the proposal defines risk too narrowly and lacks mechanisms for holding companies accountable.
  • U.S. lawmakers have summoned Big Tech executives to testify on their companies’ roles in numerous controversies, but regulations have gained little traction — possibly due to the vast sums of money the companies spend on lobbying.

Facing the fear: Some tech giants have demonstrated an inability to restrain themselves, strengthening arguments in favor of regulating AI. At the same time, AI companies themselves must publicly define acceptable impacts and establish regular independent audits to detect and mitigate harm. Ultimately, AI practitioners who build, deploy, and distribute the technology are responsible for ensuring that their work brings a substantial net benefit.


Killer Robots Are Here

War is already bad enough. What happens when human combatants are replaced by machines?

The fear: Autonomous weapons will become an inevitable aspect of warfare. AI that can’t reliably tell friend from foe will strike mistaken targets, kill civilians, and attack enemies who have surrendered. Systems trained to react to threats quickly will escalate conflicts. Humans won’t be held accountable for automated atrocities.

Horror stories: While world leaders debate the ethics of fully autonomous weapons, killer robots are already on the march.

  • Last spring, the Libyan Government of National Accord reportedly used autonomous quadcopters to attack retreating insurgents. The drones identify targets using face and object recognition, dive toward enemy combatants, and detonate an onboard explosive on impact.
  • In January, an expert panel convened by the U.S. government advised that the military has a “moral imperative” to pursue research into autonomous weapons.
  • The U.S. Defense Advanced Research Projects Agency (DARPA) recently tested swarms of autonomous air- and ground-based drones designed to locate and attack people hiding in buildings.

Quivering in your (combat) boots? Efforts to automate weaponry have a long history. Lately, AI has found its way into command and control systems. It’s not too late to establish an international ban on autonomous weapons, but the door is closing fast.

  • Leaders of 30 countries support a global ban on autonomous weapons. China, Russia, and the U.S. have blocked the effort so far.
  • Drones are relatively inexpensive, and AI systems are becoming easier to develop. There’s little to stop a determined enemy from using them.
  • The short film Slaughterbots (2017) dramatized the ease with which autonomous weapons could be used to crack down on political opponents, journalists, and dissidents.

Facing the fear: Countries need ways to defend themselves. An effective ban on autonomous weapons must start with a clear line between what is and isn’t acceptable. Machine learning engineers should play a key role in drawing it.


New Models Inherit Old Flaws

Is AI becoming inbred?

The fear: The best models increasingly are fine-tuned versions of a small number of so-called foundation models that were pretrained on immense quantities of data scraped from the web. The web is a repository of much that’s noble in humanity — but also much that’s lamentable, including social biases, ignorance, and cruelty. Consequently, while the fine-tuned models may attain state-of-the-art performance, they also exhibit a penchant for prejudice, misinformation, pornography, violence, and other undesirable traits.
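To make the pattern concrete, here’s a minimal sketch of the pretrain-then-fine-tune workflow, assuming the Hugging Face transformers and datasets libraries; the checkpoint, dataset, and training settings are illustrative choices rather than recommendations.

```python
# Minimal sketch of the pretrain-then-fine-tune workflow using the
# Hugging Face transformers and datasets libraries. The checkpoint,
# dataset, and hyperparameters are illustrative assumptions.
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

# Start from weights pretrained on web-scale text; whatever the model
# absorbed during pretraining comes along for the ride.
checkpoint = "bert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(checkpoint, num_labels=2)

# Fine-tune on a small labeled dataset (IMDB movie reviews here).
dataset = load_dataset("imdb")
tokenized = dataset.map(
    lambda batch: tokenizer(batch["text"], truncation=True, padding="max_length"),
    batched=True,
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="finetuned", num_train_epochs=1,
                           per_device_train_batch_size=8),
    train_dataset=tokenized["train"].shuffle(seed=42).select(range(2000)),
)
trainer.train()
```

Note that nothing in this recipe inspects the pretrained weights themselves, which is how behavior learned from web data can slip into a downstream system unnoticed.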

Horror stories: Over 100 Stanford University researchers jointly published a paper that outlines some of the many ways foundation models could cause problems in fine-tuned implementations.

  • A foundation model may amplify biases in the data used for fine-tuning.
  • Engineers may train a foundation model on private data, then license the work to others who create systems that inadvertently expose personal details.
  • Malefactors could use a foundation model to fine-tune a system to, say, generate fake news articles.

How firm is the foundation? The Stanford paper stirred controversy as critics took issue with the authors’ definition of a foundation model and questioned the role of large, pretrained models in the future of AI. Stanford opened a center to study the issue.

Facing the fear: It’s not practical to expect every user of a foundation model to audit it fully for everything that might go wrong. We need research centers like Stanford’s — in both public and private institutions — to investigate the effects of AI systems, how harmful capabilities originate, and how they spread.


A MESSAGE FROM DEEPLEARNING.AI

DeepLearning.AI has updated the Natural Language Processing Specialization with new and improved content. We partnered with Hugging Face to create lectures and labs to give you more hands-on experience with transformer models! Enroll now


Democracies Embrace Surveillance

What if AI-enabled monitoring isn’t just for dictators and despots?

The fear: Under the pretext of maintaining law and order, even countries founded on a commitment to individual rights allow police to take advantage of smart-city infrastructure and smart-home devices. The ability to spy on citizens is rife with moral hazards and opens the door to authoritarian control.

Horror stories: Law enforcement agencies worldwide have found AI-driven surveillance irresistible. Reports of deals between police and vendors portend further invasive practices to come.

  • In the U.S., thousands of state and local police officers have used Clearview AI to identify faces without obtaining permission from their superiors (or people whose photos trained the system).
  • Flock Safety, a U.S. maker of license plate readers, offers access to a nationwide network of cameras. Over 400 police agencies had signed on as of late 2019.
  • A London face recognition system draws on cameras throughout the city to alert nearby police officers when it identifies a person of interest.
  • Police in India allegedly have used face recognition to target protestors of a controversial citizenship law. Legal inquiries have raised questions about the system’s accuracy.

Panopticon now? Most Americans believe that, in the hands of law enforcement, face recognition will make society safer. Yet such systems are notoriously prone to misuse, inaccuracy, and bias. Several U.S. cities and states have passed laws that restrict or ban police use of face recognition, and others are considering similar legislation. The European Parliament recently passed a nonbinding resolution calling for a ban on the practice.

Facing the fear: Society should guarantee basic rights to privacy. That said, the impulse to ban face recognition carries its own danger. Ceding AI development to repressive regimes risks a proliferation of systems that enable repressive uses. Instead, elected leaders should establish rules to ensure that such systems are transparent, auditable, explainable, and secure.


Artistry Is Obsolete

Is human creativity being replaced by the synthetic equivalent?

The fear: AI is cranking out increasingly sophisticated visual, musical, and literary works. AI-generated media will flood the market, squeezing out human artists and depriving the world of their creativity.

Horror stories: The most compelling AI-generated art today requires people who curate a system’s inputs and outputs to ensure that automated creations have a recognizable aesthetic character. Tomorrow is up for grabs.

  • Music is increasingly automated. At the frontier, there’s the singing, composing, marimba-playing robot Shimon; the computer-assisted completion of Beethoven's unfinished Tenth Symphony; and OpenAI’s Jukebox, which synthesizes alternate-reality hits by everyone from Elvis Presley to Rage Against the Machine.
  • AI is transforming words into images. In a typical setup, CLIP, a model that matches text with images, receives a text description and directs a generative adversarial network (GAN) to produce an image that fits. (A code sketch of this setup follows this list.) Digital artist Martin O’Leary used this technique to turn Samuel Taylor Coleridge’s epic poem “Kubla Khan” into a scrolling montage.
  • Multimedia artist Ross Goodwin loaded a laptop with an LSTM trained to convert images to words, attached it to camera output, and instructed it to compose prose while he drove across the country. The resulting novel, called 1 The Road, garnered acclaim.
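
For the curious, here’s a minimal sketch of the CLIP-guided setup described above, assuming OpenAI’s open-source clip package and plain PyTorch. The “generator” is a toy stand-in for the pretrained GAN (BigGAN, VQGAN, and the like) that real pipelines use, and the prompt, learning rate, and step count are purely illustrative.

```python
# Minimal sketch of CLIP-guided image generation: adjust a latent image so
# that CLIP rates it a better and better match for a text prompt.
# Assumes OpenAI's open-source `clip` package; the "generator" here is a toy
# stand-in for the pretrained GAN a real pipeline would use.
import torch
import clip

device = "cuda" if torch.cuda.is_available() else "cpu"
clip_model, _ = clip.load("ViT-B/32", device=device)
clip_model.float()  # keep everything in float32 so gradients flow to the latent

# Toy "generator": a learnable 224x224 RGB tensor squashed into [0, 1].
# Swapping in a real GAN's generator (and optimizing its latent vector)
# yields the setup described above.
latent = torch.randn(1, 3, 224, 224, device=device, requires_grad=True)

prompt = clip.tokenize(["a stately pleasure-dome amid caverns of ice"]).to(device)
with torch.no_grad():
    text_features = clip_model.encode_text(prompt)
    text_features = text_features / text_features.norm(dim=-1, keepdim=True)

optimizer = torch.optim.Adam([latent], lr=0.05)
for step in range(200):
    image = torch.sigmoid(latent)                   # candidate image
    image_features = clip_model.encode_image(image)
    image_features = image_features / image_features.norm(dim=-1, keepdim=True)
    loss = -(image_features * text_features).sum()  # maximize cosine similarity
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

The key design choice is that CLIP itself is never updated; it only supplies the gradient signal that tells the generator how closely its output matches the text.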

The end of art history? AI-generated art has edged its way into both fine-art and commercial worlds.

  • In 2018, a GAN-produced portrait sold at auction for $432,500.
  • Companies like Soundraw enable video producers, YouTube creators, and Spotify artists to generate custom music from a web page.
  • Booksby.ai sells novels written by a recurrent neural network that was fine-tuned on the Project Gutenberg database of classic books. A GAN produces the covers, and a regression model trained on data from Amazon.com prices them.

Facing the fear: AI makes a wonderful complement to human creativity, producing variations, offering alternatives, or supplying a starting point for traditional artistic exploration. On the other hand, the best current models can produce output that, to an untrained eye or ear, comes close to human artworks. And they’re only going to get better.
