Dear friends,

It can take 6 to 24 months to bring a machine learning project from concept to deployment, but a specialized development platform can make things go much faster.

My team at Landing AI has been working on a platform called LandingLens for efficiently building computer vision models. In the process, I’ve learned important lessons about how such platforms can help accelerate the machine learning project lifecycle:

  • Data collection: Ambiguity in labels (what is the “correct” value of y?) plagues many projects. If the labels are inconsistently defined, it’s impossible to achieve high test-set accuracy. But it’s difficult to find these inconsistencies manually and to convince stakeholders (often subject-matter experts) to resolve them. An MLOps platform can surface these problems and encourage consistency (a simple version of such a check is sketched after this list).
  • Model training: The ability to write code to train a model in TensorFlow or PyTorch is a valuable skill. But even for skilled engineers, it’s faster to use a no-code platform that, via mouse clicks, manages data augmentation, links the data and model, allocates GPU training resources, tracks data and model versions, and provides visualizations and metrics for error analysis.
  • Production deployment: Many teams can execute a successful proof of concept and achieve high test-set accuracy. But securing budgets and approval for deployment often requires a small demo that helps others see a project’s value. A platform can make it easy to implement a demo that runs not just in a Jupyter notebook but in a lightweight deployment environment such as a mobile app or simple edge device.
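To make the data-collection point concrete, here is a minimal, hypothetical sketch of the kind of label-consistency check such a platform might automate. The column names and example data are invented for illustration.

```python
# Hypothetical sketch: flag images whose annotators disagree, the kind
# of label-consistency check an MLOps platform might automate.
# Column names and example data are invented for illustration.
import pandas as pd

labels = pd.DataFrame({
    "image_id":  ["img_01", "img_01", "img_02", "img_02", "img_03"],
    "annotator": ["alice",  "bob",    "alice",  "bob",    "alice"],
    "label":     ["scratch", "dent",  "dent",   "dent",   "ok"],
})

# Images with more than one distinct label are candidates for review
# under a tightened labeling guideline.
disagreements = (
    labels.groupby("image_id")["label"]
    .nunique()
    .loc[lambda n: n > 1]
)
print(disagreements)  # img_01 has 2 distinct labels -> relabel it
```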

It used to take me months to deploy a model. With a no-code platform, I can train a RetinaNet demo, carry out error analysis, use a data-centric approach to clean up inconsistent data, retrain, and deploy to an edge device — all in 60 minutes. I get a thrill every time I go through the machine learning project lifecycle so quickly.

Platforms like this can help a variety of AI projects across all industries. LandingLens works well for visual inspection in areas as diverse as automotive, semiconductor, and materials, and I’m hoping to make it more widely available. Its sweet spot is computer vision problems (detection or segmentation) with 30 to 10,000 images. If you have a business problem in computer vision that falls in this sweet spot, I’d like to hear from you. Please get in touch by filling out this form.

Keep learning!

Andrew

News

Animation of SourceAI working

Robocoders

Language models are starting to take on programming work.

What’s new: SourceAI uses GPT-3 to translate plain-English requests into computer code in 40 programming languages. The French startup is one of several companies that use AI to ease coding, according to Wired.

How it works: Companies have trained language models to anticipate programmers’ needs.

  • SourceAI, currently in beta testing, enables users to describe the function they want, then select a programming language (a hypothetical sketch of this flow appears after this list). Between 80 and 90 percent of code generated by the beta version works as intended, founder Furkan Bektes told The Batch. He plans to charge $0.04 to $0.10 per piece of code.
  • GPT-3 also powers Debuild, which builds web-application elements such as buttons and text input fields based on plain-English descriptions.
  • Belgian startup Tabnine has a GPT-2-powered tool that automatically suggests follow-on lines of code as programmers type.
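For the curious, here is a minimal, hypothetical sketch of plain-English-to-code generation using OpenAI’s 2021-era GPT-3 Completion API. The prompt format, engine choice, and parameters are our own assumptions, not SourceAI’s actual implementation.

```python
# Hypothetical sketch of plain-English-to-code generation with GPT-3.
# The prompt format and engine name are illustrative assumptions.
import openai

openai.api_key = "YOUR_API_KEY"

def generate_code(description: str, language: str = "Python") -> str:
    prompt = (
        f"# Language: {language}\n"
        f"# Task: {description}\n"
        f"# Code:\n"
    )
    response = openai.Completion.create(
        engine="davinci",     # GPT-3 family engine (assumed)
        prompt=prompt,
        max_tokens=150,
        temperature=0.2,      # low temperature favors conventional code
        stop=["# Task:"],     # stop before the model invents a new task
    )
    return response.choices[0].text.strip()

print(generate_code("return the n-th Fibonacci number"))
```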

Behind the news: Other companies are also using machine learning to increase coders’ productivity and sniff out bugs.

  • Facebook’s Aroma lets developers search code databases for snippets similar to whatever they’re working on.
  • Intel’s Machine Inferred Code Similarity is a similar tool that compares pieces of code to determine their function.
  • DeepMind published a model that rewrites human-generated code to make it run more efficiently.

Why it matters: In the hands of a skilled programmer, such tools can save time, freeing up brainpower for more complex tasks. In the hands of a newbie, they make it possible to create applications with little experience and, with diligent attention, to gain skills more quickly.

We’re thinking: No AI system should replace a sacred rite of passage for neophyte coders: print("Hello World!").


Series of videos showing AI-powered surveillance inside a bank

Banking on Computer Vision

AI-powered surveillance is becoming a staple in U.S. banks.

What’s new: Several banks are using cameras equipped with computer vision to bolster security and boost employee productivity, according to Reuters.

What’s up: The companies have a variety of aims and approaches.

  • JPMorgan Chase is testing systems from vendors including AnyVision and Vintra at several Chase branches in Ohio. The systems collect data on customer and employee behavior to improve staff scheduling and interior layouts, the company said.
  • City National Bank of Florida plans to use face recognition at 31 branches to identify employees, customers, and eventually suspects on government watch lists.
  • An unnamed bank in the southern U.S. uses such systems to alert employees to issues such as suspicious loiterers and open safes.

Behind the news: The latest moves build on earlier attempts by financial institutions to take advantage of image recognition technology.

  • Before settling on a private vendor, JPMorgan Chase put together its own system drawing on technology from Amazon Web Services, Google, and IBM.
  • Bank of America bought AI-powered surveillance cameras in the early 2010s to catch people loitering in ATM kiosks.
  • Wells Fargo in 2007 used CrimeDex, a crime-prevention network that reportedly offered “facial recognition technology and the ability to search videos such as ATM surveillance records” and listed “14,000 suspects,” to identify a thief who had taken $400,000 from automated teller machines.

Why it matters: If banks can get regulators and consumers to accept AI-assisted surveillance in branch offices, it will add momentum to wider adoption of the technology.

We’re thinking: Many of these use cases seem more like surveillance than security. Without sufficient sensitivity to public concerns, such efforts are likely to inspire backlash. Organizations that aim to take advantage of this technology: Tread cautiously.


A MESSAGE FROM DEEPLEARNING.AI


Building a career in deep learning? Training models is important, but you also need to know production engineering. That’s why we’re launching Machine Learning Engineering for Production (MLOps) on May 12, 2021. Sign up to learn about upcoming launches.


Animation showing a metaphorical transition from AI to a green environment

Greener Machine Learning

A new study suggests tactics for machine learning engineers to cut their carbon emissions.

What’s new: Led by David Patterson, researchers at Google and UC Berkeley found that AI developers can shrink a model’s carbon footprint a thousand-fold by streamlining architecture, upgrading hardware, and using efficient data centers.

What they did: The authors examined the total energy used and carbon emitted by five NLP models: GPT-3, GShard, Meena, Switch Transformer, and T5. They reported separate figures for training and inference. Generally, they found that inference consumes more energy than training.

  • The authors point to several model-design strategies that trim energy use. Transfer learning, for instance, eliminates the need to train new models from scratch. Shrinking networks through techniques such as pruning and distillation can increase energy efficiency by a factor of 3 to 7 (a minimal pruning example appears after this list).
  • Hardware makes a difference, too. Chips designed specifically for machine learning are both faster and more efficient than GPUs. For instance, a Google TPU v2 ran a transformer 4.3 times faster and used 1.3 times less energy than an Nvidia P100.
  • Cloud computing centers with servers optimized for machine learning are twice as efficient as traditional enterprise data centers. Data centers using renewable energy sources are greener, and centers built near their energy source bring further savings, as transmitting energy over long distances is relatively expensive and inefficient.
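As one concrete instance of the pruning tactic mentioned above, here is a minimal sketch using PyTorch’s built-in pruning utilities. The model architecture and 30 percent sparsity level are illustrative choices, not the paper’s.

```python
# Minimal sketch of magnitude pruning, one of the energy-saving tactics
# the authors cite. Model and sparsity level are illustrative.
import torch.nn as nn
import torch.nn.utils.prune as prune

model = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 10))

# Zero out the 30 percent of weights with the smallest magnitude
# in each linear layer, then make the sparsity permanent.
for module in model.modules():
    if isinstance(module, nn.Linear):
        prune.l1_unstructured(module, name="weight", amount=0.3)
        prune.remove(module, "weight")

linear = [m for m in model.modules() if isinstance(m, nn.Linear)]
zeros = sum((m.weight == 0).sum().item() for m in linear)
total = sum(m.weight.numel() for m in linear)
print(f"{zeros / total:.0%} of weights pruned")
```

Note that zeroed weights cut energy use only if the deployment stack exploits sparsity; distillation, by contrast, produces a smaller dense model that runs faster on any hardware.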

Behind the news: The authors joined the Allen Institute and others in calling for greener AI. To this end, MLCommons, the organization behind the MLPerf benchmark, recently introduced new tools to measure a model’s energy consumption alongside traditional performance metrics.

Why it matters: Training and deploying a large model can emit five times more carbon dioxide than a car does over its entire lifetime. As AI becomes more widespread, energy efficiency becomes ever more important.

We’re thinking: There are bigger levers for reducing carbon emissions, such as transitioning the world away from coal power. Still, as a leading-edge industry, AI has an important role in building a green future.


Minecraft video capture

3D Object Factory

In the open-ended video game Minecraft, players extract blocks of virtual materials from a 3D environment to assemble objects of their own design, from trees to cathedrals. Researchers trained neural networks to generate these structures.

What’s new: Shyam Sudhakaran and researchers at University of Copenhagen, University of York, and Shanghai University used a neural cellular automaton algorithm to construct 3D objects. The work demonstrates the potential for such algorithms, which typically are limited to two dimensions, to generate structures in three.

Key insight: A cellular automaton generates complex patterns on a 2D grid by changing each cell’s state iteratively based on simple rules that depend on the states of its neighbors. A neural cellular automaton updates cells depending on the output of a neural network and the states of neighboring cells. Using 3D convolutions enables a neural cellular automaton to generate patterns in 3D.

How it works: The authors trained several 3D convolutional neural networks to reproduce structures found on the community website Planet Minecraft. Each structure required its own model. The structures comprised 50 block types, mostly corresponding to materials (stone, glass, metals, and so on), including piston blocks that push or pull adjacent blocks to produce animated objects. The system spawned block types directly rather than mining them from the virtual ground.

  • The authors initialized a single block in a 3D grid.
  • The network updated each cell in the grid depending on whether a neighboring cell was activated. The updates ran for a set number of steps, growing the structure at each step (a minimal sketch of this loop appears after this list).
  • The loss function encouraged the generated structure to match the original in block type and placement.
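Here is a minimal sketch of this training loop in PyTorch. The channel layout (channel 0 as an “alive” signal, the next eight as block-type logits), network sizes, and alive-masking rule are illustrative assumptions, not the paper’s exact configuration.

```python
# Minimal 3D neural cellular automaton sketch. Channel layout, network
# sizes, and the alive-masking rule are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

CHANNELS, BLOCK_TYPES = 16, 8

class NCA3D(nn.Module):
    def __init__(self):
        super().__init__()
        # Perceive each cell's 3x3x3 neighborhood, then compute its update.
        self.update = nn.Sequential(
            nn.Conv3d(CHANNELS, 64, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv3d(64, CHANNELS, kernel_size=1),
        )

    def forward(self, grid, steps=32):
        for _ in range(steps):
            # Only cells adjacent to a living cell may change, so the
            # structure grows outward from the seed.
            alive = (F.max_pool3d(grid[:, :1], 3, stride=1, padding=1) > 0.1).float()
            grid = grid + self.update(grid) * alive
        return grid

model = NCA3D()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

target = torch.randint(0, BLOCK_TYPES, (1, 8, 8, 8))  # stand-in structure
seed = torch.zeros(1, CHANNELS, 8, 8, 8)
seed[:, :, 4, 4, 4] = 1.0                    # the single initial block

out = model(seed)
logits = out[:, 1:1 + BLOCK_TYPES]           # per-cell block-type logits
loss = F.cross_entropy(logits, target)       # match type and placement
loss.backward()
opt.step()
```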

Results: The authors reported few quantitative results. However, the trained models grew static structures like castles, temples, and apartments that appear to be accurate inside and out. One model learned to grow an animated caterpillar.

Why it matters: Cellular automata may have certain benefits. For instance, if part of the resulting structure is destroyed, the automaton can use what’s left to regenerate the missing part. This approach can produce resilient digital 3D structures with no human intervention after the first step.

We’re thinking: Machine learning engineers looking for an excuse to play Minecraft need look no further!


A MESSAGE FROM FOURTHBRAIN

FourthBrain is launching its MLOps and Systems bootcamp! Our live instructors will prepare you with MLOps tools, skills, and best practices for deploying, evaluating, monitoring, and operating production ML systems. Live info session on May 19, 2021. Register now.

Share

Subscribe to The Batch

Stay updated with weekly AI News and Insights delivered to your inbox.