Why AI Projects Fail, Part 5: Change Management

Dear friends,

My last two letters explored robustness and small data as common reasons why AI projects fail. In this final letter of the series, I’d like to discuss change management.

Change management isn’t an issue specific to AI, but given the technology’s disruptive nature, we must pay attention to it if we want our projects to succeed. An AI system that, say, helps doctors triage patients in an emergency room affects many stakeholders, from doctors to intake nurses to insurance underwriters. To keep projects on track, people must be brought on board and systems must be adjusted.

I recently saw a union block even small-scale experiments because of fear that AI would automate jobs away. This was unfortunate, because the AI system being contemplated would have made employees more valuable without reducing employment. A change management process could have made the stakeholders comfortable with experimenting and helped them understand why it was worthwhile rather than threatening.

Many engineers underestimate the human side of change management. Some tips:

  • Budget enough time. Change management requires asking lots of questions, assessing how various roles will change, and explaining to many people what the AI will do.
  • Identify all stakeholders. Either communicate with them directly or find ways to have colleagues talk to them. Many organizations make decisions by consensus, and it is important to minimize the odds of any stakeholder blocking or slowing down implementation. We also need to build trust among stakeholders that the AI will work.
  • Provide reassurance. Where possible, explain to people how their work may change and how the new system will benefit them.
  • Explain what’s happening and why. There is still significant fear, uncertainty, and doubt (FUD) about AI. I have seen that providing a basic education — along the lines of the AI for Everyone curriculum — eases these conversations. Other tactics such as explainability, visualization, rigorous testing, and auditing also help build trust in an AI system and convince our customers (and ourselves!) that it really works.
  • Right-size the first project. If it is not possible to start with a complex deployment that affects a lot of people, consider starting with a smaller pilot that affects fewer stakeholders and is thus easier to get buy-in for. (The AI Transformation Playbook includes helpful perspective on this.)

As we have seen with self-driving cars, building an AI system often involves solving a systems problem. That requires reorienting not only stakeholder roles and organizational structures, but also many things around the AI, like setting expectations with other drivers, pedestrians, and first responders, and updating procedures around road maintenance and construction. Addressing the systems problem will increase the odds of your project succeeding.

If you understand the problems of robustness, small data, and change management, and if you can spot these problems in advance and pre-empt them, you’ll be well ahead of the curve in building a successful AI project.

Building AI projects is hard. Let’s keep pushing and share what we learn with each other, so we can keep moving the field forward!

Keep learning!

Andrew

Read part 1 of this series now.

Read part 2 of this series now.

Read part 3 of this series now.

Read part 4 of this series now.
