Making AI Fair, Accountable, and Reliable

Andrew Ng speaking at the National Intergovernmental Audit Forum about auditing AI systems

Dear friends,

I spoke last week at the National Intergovernmental Audit Forum, a meeting attended by U.S. federal, state, and local government auditors. (Apparently some of the organizers had taken AI for Everyone.) Many attendees wanted to know how AI systems can be rolled out in a responsible and accountable way.

Consider the banking industry. Many regional banks are under tremendous competitive pressure. How well they assess risk directly affects their bottom line, so they turn to credit scoring systems from AI vendors. But if they don’t have the technical expertise to evaluate such models, a hasty rollout can lead to unintended consequences like unfairly charging higher interest rates on loans to minority groups.

For AI systems to enjoy smooth rollouts, we need to (a) make sure our systems perform well and pose minimal risk of unintended consequences and (b) build trust with customers, users, regulators, and the general public that these systems work as intended. These are hard problems. They require not just solving technical issues but also aligning technology with society’s values and expectations.

An important part of the solution is transparency. The open source software movement has taught us that transparency makes software better. And if making source code publicly available means that someone finds an embarrassing security bug, so be it! At least it gets fixed.

With the rise of AI, we should similarly welcome third-party assistance, such as allowing independent parties to perform audits according to a well-established procedure. That way, we can identify problems and fix them quickly and efficiently.

After my presentation, the moderator asked me how auditors can avoid getting into adversarial relationships with AI vendors. The answer is to build collaborative relationships instead. By collaborating, we can help make sure the criteria used to judge our systems are reasonable and well specified. For instance, which protected groups do we need to make sure our systems aren’t biased against? We can also better avoid “gotcha” situations in which our systems are assessed according to arbitrary, after-the-fact criteria.
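To make the idea of a well-specified criterion concrete, here is a minimal sketch of one check an auditor and a vendor might agree on in advance: the gap in loan approval rates across a protected attribute. This is not from the letter; the column names, groups, and the 0.05 threshold are hypothetical, chosen only for illustration.

```python
# A minimal sketch of one agreed-upon audit criterion: the largest gap in
# approval rates across groups defined by a protected attribute.
# Column names ("protected_group", "approved") and the 0.05 threshold are
# hypothetical placeholders, not a standard or a real system's schema.

def approval_rate_gap(records, group_key="protected_group", decision_key="approved"):
    """Return (largest pairwise gap in approval rates, per-group rates)."""
    totals, approvals = {}, {}
    for r in records:
        g = r[group_key]
        totals[g] = totals.get(g, 0) + 1
        approvals[g] = approvals.get(g, 0) + (1 if r[decision_key] else 0)
    rates = {g: approvals[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values()), rates

# Example: auditor and vendor agree up front that the gap must stay below 0.05.
records = [
    {"protected_group": "A", "approved": True},
    {"protected_group": "A", "approved": True},
    {"protected_group": "B", "approved": True},
    {"protected_group": "B", "approved": False},
]
gap, rates = approval_rate_gap(records)
print(rates, "gap:", round(gap, 3), "pass:", gap < 0.05)
```

The point is not this particular metric but that the criterion, the groups it covers, and the passing threshold are all written down before the audit, so neither side is judged by after-the-fact standards.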

The AI community has a lot of work to do to ensure that our systems are fair, accountable, and reliable. For example, Credo AI (disclosure: a portfolio company of AI Fund, a sister organization to deeplearning.ai) is building tools that help audit and govern AI systems. Efforts like this can make a difference in designing and deploying AI systems that benefit all people.

Keep learning!

Andrew
