Bravo to AI Companies That Agreed to Voluntary Commitments! Now Let's See Action

The commitment by major AI companies to develop watermarks to identify AI-generated output is a test of the voluntary approach to regulation.

Dear friends,

Last week, the White House announced voluntary commitments by seven AI companies, as you can read here. Most of the points were sufficiently vague that it seems easy for the White House and the companies to declare success without doing much that they don’t already do. But the commitment to develop mechanisms to ensure that users know when content is AI-generated, such as watermarks, struck me as concrete and actionable. While most of the voluntary commitments are not measurable, this one is. It offers an opportunity, in the near future, to test whether the White House’s presently soft approach to regulation is effective.

I was pleasantly surprised that watermarking was on the list. It’s beneficial to society, but it can be costly for an individual company to implement (in the form of lost users).

As I wrote in an earlier letter, watermarking is technically feasible, and I think society would be better off if we knew what content was and wasn’t AI-generated. However, many companies won’t want it. For example, a company that uses a large language model to create marketing content may not want the output to be watermarked, because then readers would know that it was generated by AI. Also, search engines might rank generated content lower than human-written content. Thus, the government’s push to have major generative AI companies watermark their output is a good move. It reduces the competitive pressure to avoid watermarking.
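To make the feasibility claim concrete: one well-studied approach from the research literature (not necessarily what any of these companies will ship) is a statistical watermark that partitions the vocabulary into a pseudorandom "green list" at each step, keyed on the previous token, and biases generation toward green tokens. A detector who knows the keying scheme can then count green tokens; unwatermarked text hits roughly half, while watermarked text scores far higher. The toy sketch below uses strings as tokens and always samples from the green list, purely to illustrate the statistics; all names here are illustrative.

```python
import hashlib
import random


def green_list(prev_token: str, vocab: list[str], fraction: float = 0.5) -> set[str]:
    """Deterministically pick a 'green' subset of the vocabulary, keyed on the
    previous token. Anyone who knows the hashing scheme can reproduce it,
    which is what makes detection possible without access to the model."""
    seed = int(hashlib.sha256(prev_token.encode()).hexdigest(), 16)
    rng = random.Random(seed)
    return set(rng.sample(vocab, int(len(vocab) * fraction)))


def generate_watermarked(start: str, vocab: list[str], length: int, seed: int = 0) -> list[str]:
    """Toy 'generator' that always samples from the green list. A real language
    model would merely bias its logits toward green tokens, trading a small
    amount of output quality for a detectable statistical signal."""
    rng = random.Random(seed)
    tokens = [start]
    for _ in range(length):
        tokens.append(rng.choice(sorted(green_list(tokens[-1], vocab))))
    return tokens


def green_fraction(tokens: list[str], vocab: list[str]) -> float:
    """Detector: fraction of tokens that fall in their predecessor's green list.
    Unwatermarked text lands near the green-list fraction (0.5 here);
    watermarked text scores well above it."""
    pairs = list(zip(tokens, tokens[1:]))
    hits = sum(1 for prev, tok in pairs if tok in green_list(prev, vocab))
    return hits / max(len(pairs), 1)
```

Detection needs only the text and the keying scheme, not the model, which is why the main threat is paraphrasing or post-editing that scrubs the signal rather than reverse-engineering the watermark itself.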

All the companies that agreed to the White House’s voluntary commitments employ highly skilled engineers and are highly capable of shipping products, so they should be able to keep this promise. When we look back after three or six months, it will be interesting to see which ones:

  • Implemented a robust watermarking system
  • Implemented a weak watermarking system that’s easy to circumvent by, say, paying a fee for watermark-free output
  • Didn’t implement a system to identify AI-generated content

To be fair, I think it would be very difficult to enforce watermarking in open source systems, since users can easily modify the software to turn it off. But I would love to see watermarking implemented in proprietary systems. The companies involved are staffed by honorable people who want to do right by society. I hope they will take the announced commitments seriously and implement them faithfully.

I would love to get your thoughts on this as well. How can we collectively hold the U.S. government and AI companies to these commitments? Please let me know on social media!

Keep learning,

Andrew

P.S. A new short course, developed by DeepLearning.AI and Hugging Face, is available! In “Building Generative AI Applications with Gradio,” instructor Apolinário Passo shows you how to quickly create fun demos of your machine learning applications. Prompting large language models makes building applications faster than ever, but how can you demo your work, either to get feedback or let others experience what you’ve built? This course shows you how to do it by writing only Python code.
