Dear friends,

Last week, I mentioned that one difference between traditional software and AI products is the problem of unclear technical feasibility. In short, it can be hard to tell whether it’s practical to build a particular AI system. That’s why it’s worthwhile to quickly assess technical feasibility before committing resources to build a full product.

If you have no data or only a handful of examples (enough to get a sense of the problem specification but too few to train an algorithm), consider the following principles:

  • For problems that involve unstructured data (images, audio, text), if even humans can’t perform the task, it will be very hard for AI to do it.
  • A literature review or analysis of what other teams (including competitors) have done may give you a sense of what’s feasible.

If you have a small amount of data, training on that data might give you some signals. At the proof-of-concept stage, often the training and test sets are drawn from the same distribution. In that case:

  • If your system is unable to do well on the training set, that’s a strong sign that the input features x do not contain enough information to predict y. If you can’t improve the input features x, this problem will be hard to crack.
  • If the system does well on the training set but not the test set, there’s still hope. Plotting a learning curve to extrapolate how performance might improve with a larger dataset (see the sketch after this list) and benchmarking human-level performance (HLP) can give a better sense of feasibility.
  • If the system does well on the test set, the question remains open whether it will generalize to real-world data.
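
To make the learning-curve idea concrete, here’s a minimal sketch using scikit-learn. The dataset, model, and HLP line are placeholders (a synthetic dataset, logistic regression, and an assumed 95 percent human-level accuracy); the point is to see whether test performance is still climbing as the training set grows.

```python
# Minimal learning-curve sketch. The dataset, model, and HLP value are
# placeholders -- substitute your own small dataset and model.
import numpy as np
import matplotlib.pyplot as plt
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import learning_curve

X, y = make_classification(n_samples=500, n_features=20, random_state=0)

# Train on growing subsets; score on held-out folds drawn from the same
# distribution, as is typical at the proof-of-concept stage.
sizes, train_scores, test_scores = learning_curve(
    LogisticRegression(max_iter=1000), X, y,
    train_sizes=np.linspace(0.1, 1.0, 5), cv=5)

plt.plot(sizes, train_scores.mean(axis=1), label="training accuracy")
plt.plot(sizes, test_scores.mean(axis=1), label="test accuracy")
plt.axhline(0.95, linestyle="--", label="human-level performance (assumed)")
plt.xlabel("number of training examples")
plt.ylabel("accuracy")
plt.legend()
plt.show()
```

If the test curve is still rising at the right edge of the plot, collecting more data is likely to help; if it has flattened well below HLP, the problem may lie in the input features.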

If you’re building a product to serve multiple customers (say, a system to help different hospitals process medical records) and each customer will input data from a different distribution (say, each hospital has a different way of coding medical records), getting data from a few hospitals will also help you assess technical feasibility.
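
One hypothetical way to run that check is a leave-one-hospital-out evaluation: train on data from all but one hospital and test on the held-out one. In the sketch below, the data loading is a placeholder (random arrays stand in for each hospital’s labeled records), so the names and shapes are assumptions.

```python
# Hypothetical leave-one-hospital-out check; random arrays stand in for
# each hospital's labeled records.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
hospitals = {name: (rng.normal(size=(100, 10)), rng.integers(0, 2, 100))
             for name in ["hospital_a", "hospital_b", "hospital_c"]}

for held_out in hospitals:
    # Train on every hospital except the held-out one.
    X_train = np.vstack([X for h, (X, y) in hospitals.items() if h != held_out])
    y_train = np.concatenate([y for h, (X, y) in hospitals.items() if h != held_out])
    X_test, y_test = hospitals[held_out]
    model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    print(held_out, accuracy_score(y_test, model.predict(X_test)))
```

A large gap between within-hospital and held-out-hospital accuracy is a warning that each new customer’s data distribution may require extra adaptation work.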

Given the heightened technical risk of building AI products, when AI Fund (DeepLearning.AI’s sister company that supports startups) evaluates a company, it pays close attention to the team’s technical expertise. Teams with deeper technical expertise are much more likely to work through whatever technical risks the business faces.

Keep learning!

Andrew

News


Flight Paths Optimized

An AI system is helping aircraft avoid bad weather, restricted airspace, and clogged runways.

What’s new: Alaska Airlines will route all its flights using a system from Airspace Intelligence called Flyways.

How it works: The system evaluates weather data, federal airspace closures, and the routes of all planned and active flights in the U.S. to find the most efficient paths for aircraft to reach their destinations.

  • In a six-month trial last year, Alaska dispatchers accepted one-third of the system’s recommendations, shaving off an average of 5.3 minutes from 63 percent of flights. That saved an estimated 480,000 gallons of fuel, reducing the airline’s carbon dioxide emissions by 4,600 tons.
  • The system constantly monitors each plane’s route while it’s in the air, sending color-coded alerts to human dispatchers. A red light suggests that a flight should be rerouted due to weather or safety issues. A green light flashes if a reroute would improve fuel efficiency. A purple light means a flight needs to avoid restricted airspace.
  • Alaska Airlines signed a multi-year agreement with Airspace Intelligence. Terms of the deal were not disclosed.

Behind the news: AI is making inroads into several areas of air transport.

  • FedEx partnered with Reliable Robotics to build self-piloting Cessnas that carry cargo to remote areas.
  • California startup Merlin plans to build a fleet of autonomous small planes to deliver cargo and fight fires.
  • A number of drone delivery services are getting ready to take flight, pending permission from the U.S. Federal Aviation Administration.

Why it matters: Commercial air travel got walloped by the pandemic. Streamlining operations may be necessary to revive it, according to the U.S. Travel Association.

We’re thinking: Unlike cars and trucks, airplanes can’t easily go electric, so they’re stuck with fossil fuels for the foreseeable future. Cutting their carbon emissions will benefit everyone.



Is Ethical AI an Oxymoron?

Many people both outside and inside the tech industry believe that AI will serve mostly to boost profits and monitor people — without regard for negative consequences.

What’s new: A survey by Pew Research Center and Elon University asked 602 software developers, business leaders, policymakers, researchers, and activists: “By 2030, will most of the AI systems being used by organizations of all sorts employ ethical principles focused primarily on the public good?” Sixty-eight percent said no.

What they found: Respondents provided a brief written explanation of their thoughts. Some of the more interesting responses came from the pessimists:

  • Ethical principles need to be backed up by engineering, wrote Ben Shneiderman, computer scientist at the University of Maryland. For example, AI systems could come with data recorders, like the black boxes used in aviation, for forensic specialists to examine in the event of a mishap.
  • Because most applications are developed by private corporations, Gary Bolles of Singularity University argued, ensuring that AI benefits humankind will require restructuring the financial system to remove incentives that encourage companies to ignore ethical considerations.
  • The ethical outcomes of most systems will be too indirect to manage, according to futurist Jamais Cascio. For instance, stock-trading algorithms can’t be designed to mitigate the social impacts of buying shares in a certain company.
  • Many respondents pointed out the lack of consensus regarding the values to be upheld by ethical AI. For instance, should AI seek to maximize human agency or to mitigate human error?

Yes, but: Some respondents expressed a rosier view. Michael Wollowski, a professor of computer science at Rose-Hulman Institute of Technology, said, “Since the big tech companies (except for Facebook) by and large want to do good (well, their employees by and large want to work for companies that do good), they will develop their systems in a way that they abide by ethical codes. I very much doubt that the big tech companies are interested (or are able to find young guns [who are interested]) in maintaining an unethical version of their systems.”

Behind the news: Many efforts to establish ethical AI guidelines are underway. The U.S. military adopted its own code early last year, and the EU passed guidelines in 2019. The UN is considering rules as well. In the private sector, major companies including Microsoft and Google have implemented their own guidelines (although the latter’s reputation has been tarnished by the departure of several high-profile ethics researchers).

Why it matters: Those who fund, develop, and deploy AI also shape its role in society. Understanding their ideas can help ensure that this technology makes things better, not worse.

We’re thinking: Often we aren’t the ones who decide how the technology will be used, but we can decide what we will and won’t build. If you’re asked to work on a system that seems likely to have a negative social impact, please speak up and consider walking away.


A MESSAGE FROM DEEPLEARNING.AI

We’re thrilled to launch “Modeling Pipelines for Production Machine Learning,” Course 3 in the Machine Learning Engineering for Production (MLOps) Specialization on Coursera! Enroll now



No Cashier? No Problem

Amazon doubled down on technology that enables shoppers in brick-and-mortar stores to skip the checkout line.

What’s new: Amazon opened its first full-scale supermarket that monitors which items customers place in their cart and charges them automatically when they leave. It calls the system Just Walk Out.

How it works: At the 25,000-square-foot Amazon Fresh supermarket in Bellevue, Washington, overhead cameras equipped with computer vision identify items customers put in their carts. In addition, weight-detecting sensors log whenever customers remove items from or return them to store shelves. Back-end systems track the data to manage inventory.

  • Shoppers who have registered with Amazon can choose the automated checkout system as they enter the store by scanning a QR code, credit card, or hand.
  • If they use the same method to exit the store, the system will charge their account. (The store also has traditional checkout lanes for old-fashioned shoppers.)
  • Amazon has licensed its Just Walk Out technology to other retailers including Hudson Markets, OTG Cibo Express, and Delaware North.

Behind the news: Amazon previously deployed the technology in 26 convenience stores in the UK and U.S., most of which are much smaller than its new emporium.

  • At some stores, the company also uses Dash carts that charge customers automatically via sensors that monitor what goes in and out.
  • Rival companies AiFi, Grabango, and Standard Cognition license similar technology for checkout-free shopping.

Why it matters: The big-store rollout suggests that Amazon is confident that Just Walk Out will scale. The company’s addition of Dash carts at some locations had prompted speculation that the storewide surveillance system could only work in small markets with limited inventory, according to The Verge.

We’re thinking: This technology may help relieve the current shortage of retail workers. In the longer term, though, it's part of a trend toward automation that’s bound to impinge on jobs. Such developments make it all the more urgent that society at large offer training and reskilling to anyone who wants them.



Pattern for Efficient Learning

Getting high accuracy out of a classifier trained on a small number of examples is tricky. You might train the model on several large-scale datasets prior to few-shot training, but what if the few-shot dataset includes novel classes? A new method performs well even in that case.

What’s new: Eleni Triantafillou of Google and Vector Institute, along with colleagues at both organizations, designed Few-shot Learning with a Universal Template (FLUTE).

Key insight: Sharing most layers across several tasks, while training a small number of layers separately for each task, reduces the number of parameters that must be trained for a new task. Since fewer parameters need training, the network can achieve better performance with fewer training examples.
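
To illustrate the insight, here’s a sketch in PyTorch (our illustration, not the authors’ code; batch norm statistics are omitted for brevity). The convolutional weights are shared across all datasets, while each dataset gets its own small set of FiLM scale-and-shift parameters.

```python
# Illustrative FiLM-modulated conv block (not the authors' code).
# The conv weights are shared; gamma and beta are per-dataset.
import torch
import torch.nn as nn

class FiLMConvBlock(nn.Module):
    def __init__(self, in_ch, out_ch, num_datasets):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, 3, padding=1)  # shared across datasets
        self.gamma = nn.Parameter(torch.ones(num_datasets, out_ch))  # per-dataset scale
        self.beta = nn.Parameter(torch.zeros(num_datasets, out_ch))  # per-dataset shift

    def forward(self, x, dataset_id):
        h = self.conv(x)
        # Scale and shift the features using the chosen dataset's parameters.
        g = self.gamma[dataset_id].view(1, -1, 1, 1)
        b = self.beta[dataset_id].view(1, -1, 1, 1)
        return torch.relu(g * h + b)

block = FiLMConvBlock(3, 16, num_datasets=8)
print(block(torch.randn(2, 3, 32, 32), dataset_id=0).shape)  # (2, 16, 32, 32)
```

Adapting to a new task means training only the gamma and beta vectors, a tiny fraction of the network’s parameters.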

How it works: The authors trained a ResNet-18 to classify images in the eight training sets of Meta-Dataset: ImageNet, Omniglot, Aircraft, Birds, Flowers, Quickdraw, Fungi, and Textures. Then they fine-tuned the model on 500 examples and tested it separately on Traffic Signs, MSCOCO, MNIST, CIFAR-10, and CIFAR-100.

  • The authors trained the model’s convolutional layers on all training sets. Prior to training on each set, they swapped in new batch normalization layers. These were Feature-wise Linear Modulation (FiLM) layers, which scale and shift their output depending on the dataset the input belongs to. They also swapped in a fresh softmax layer.
  • Prior to fine-tuning on each test set, the authors initialized the FiLM layers as follows: They trained a set encoder to find the training dataset most similar to the test set. A so-called blender network weighted the FiLM layer parameter values according to the set encoder’s output. Then it combined the weighted parameters in all first layers, all second layers, and so on.
  • The authors fine-tuned the FiLM layers to minimize a nearest-centroid classifier loss (sketched after this list): Using up to 100 labeled examples per class (capped at 500 examples total), they computed a centroid for each class, the average of the network’s outputs over all examples in that class. Then, using individual examples, they trained the FiLM layers to minimize the distance between each example’s output and the centroid of its class.
  • The model classified test examples by picking the class whose centroid was most similar to the example’s output.
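
The nearest-centroid step might look like the following sketch (a simplified reading of the procedure above; the random tensors and the `centroids`/`predict` helpers are our stand-ins for the FiLM-adapted network’s outputs).

```python
# Sketch of nearest-centroid classification; `emb` stands in for the
# FiLM-adapted network's outputs on labeled examples.
import torch

def centroids(embeddings, labels, num_classes):
    # One centroid per class: the mean output over that class's examples.
    return torch.stack([embeddings[labels == c].mean(dim=0)
                        for c in range(num_classes)])

def predict(query_embeddings, class_centroids):
    # Pick the class whose centroid is closest to each example's output.
    return torch.cdist(query_embeddings, class_centroids).argmin(dim=1)

emb = torch.randn(50, 64)            # toy stand-in for network outputs
labels = torch.randint(0, 5, (50,))
cents = centroids(emb, labels, num_classes=5)
print(predict(torch.randn(8, 64), cents))
```

During fine-tuning, the FiLM parameters are updated so each example’s output moves closer to its class centroid.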

Results: Averaged across the five test sets, FLUTE’s 69.9 percent accuracy exceeded that of other few-shot methods trained on the same datasets. The closest competitor, SimpleCNAPs, achieved 66.8 percent accuracy.

Why it matters: The combination of shared and swappable layers constitutes a template that can be used to build new classifiers when relatively few examples are available.

We’re thinking: We will con-template the possibility of using this approach for tasks beyond image classification.


A MESSAGE FROM FOURTHBRAIN

FourthBrain’s next Machine Learning Engineer class is starting soon. The program is 16 weeks, live, online, and instructor-led with personalized attention to give you all the tools to help land a job as an MLE. Join the live info session on July 7th featuring a Q&A with students in the program.


Subscribe to The Batch

Stay updated with weekly AI News and Insights delivered to your inbox