Agentic Design Patterns Part 2: Reflection
Technical Insights

Last week, I described four design patterns for AI agentic workflows that I believe will drive significant progress this year: Reflection, Tool use, Planning, and Multi-agent collaboration.
How Agents Can Improve LLM Performance: Four AI agent strategies that improve GPT-4 and GPT-3.5 performance

I think AI agent workflows will drive massive AI progress this year — perhaps even more than the next generation of foundation models. This is an important trend, and I urge everyone who works in AI to pay attention to it.
Life in Low Data Gravity: With generative AI, data is bound less tightly to the cloud provider where it’s stored. This has big implications for developers, CIOs, and cloud platforms.

I’ve noticed a trend in how generative AI applications are built that might affect both big companies and developers: The gravity of data is decreasing.
The Dawning Age of Agents: LLM-based agents that act autonomously are making rapid progress. Here's what we have to look forward to.

Progress on LLM-based agents that can autonomously plan out and execute sequences of actions has been rapid, and I continue to see month-over-month improvements.
The Python Package Problem: Python packages can give your software superpowers, but managing them is a barrier to AI development.

I think the complexity of Python package management holds back AI application development more than is widely appreciated. AI faces multiple bottlenecks — we need more GPUs, better algorithms, and cleaner data in large quantities.
How to Think About the Privacy of Cloud-Based AI: How private is your data on cloud-based AI platforms? Here's a framework for evaluating risks.

The rise of cloud-hosted AI software has brought much discussion about the privacy implications of using it. But I find that users, including both consumers and developers building on such software...
Outstanding Research Without Massive Compute: Researchers at Stanford and Chan Zuckerberg Biohub Network dramatically simplified a key algorithm for training large language models.

It is only rarely that, after reading a research paper, I feel like giving the authors a standing ovation. But I felt that way after finishing Direct Preference Optimization (DPO) by...
Making Large Vision Models Work for Business: Large language models can learn what they need to know from the internet, but large vision models need training on proprietary data.

Large language models, or LLMs, have transformed how we process text. Large vision models, or LVMs, are starting to change how we process images as well. But there is an important difference between LLMs and LVMs.
An Expanding Universe of Large Language Models: From ChatGPT to the open source GPT4All, the bounty of large language models means opportunities for users and developers alike.

One year since the launch of ChatGPT on November 30, 2022, it’s amazing how many large language models are available. A year ago, ChatGPT was pretty much the only game in town for consumers (using a web user interface) who wanted to use a large language model (LLM)...
Everyone Can Benefit From Generative AI Skills: Announcing “Generative AI For Everyone,” a new course that requires no background in coding or AI.

I’ve always believed in democratizing access to the latest advances in artificial intelligence. As a step in this direction, we just launched “Generative AI for Everyone” on Coursera.
Why AI Will Move to Edge Devices: AI will continue to run in data centers, but technology and economics are pushing it to the edge as well.

I wrote earlier about how my team at AI Fund saw that GPT-3 set a new direction for building language applications, two years before ChatGPT was released. I’ll go out on a limb to make another prediction...
Coding Skill is More Valuable Than Ever: Don't let the ease of prompting large language models discourage you from learning to code.

Andrej Karpathy, one of the Heroes of Deep Learning who currently works at OpenAI, quipped, “The hottest programming language is English.” While I appreciate the sentiment...
Better Relationships Through AI: Doctors say there’s an epidemic of loneliness. Here's how large language models can help.

Improvements in chatbots have opened a market for bots integrated with dating apps. I’m concerned that AI romantic partners create fake relationships that displace, rather than strengthen, meaningful human relationships.
Which AI Applications Should You Build? Here's How to Decide: AI isn't ideal for everything. How can you decide which use cases to build? Think about tasks, not jobs.

While AI is a general-purpose technology that’s useful for many things, it isn’t good for every task under the sun. How can we decide which concrete use cases to build? If you’re helping a business figure out where to apply AI, I’ve found the following recipe useful as a brainstorming aid…
The Unlikely Roots of Large Language Models: U.S. military funding helped build the foundation for ChatGPT and other innovations in natural language processing.

I’d like to share a part of the origin story of large language models that isn’t widely known. A lot of early work in natural language processing (NLP) was funded by U.S. military intelligence agencies that needed machine translation and speech recognition capabilities.