This post was authored by Artificial Intelligence Team member Sean Griffin and is also being shared on our Data Privacy + Cybersecurity Insider blog. If you’re interested in getting updates on developments affecting data privacy and security, we invite you to subscribe to the blog.

Artificial Intelligence (AI) can offer manufacturers and other companies much-needed assistance during the current workforce shortage. It can help workers answer questions from customers and colleagues, fill skill gaps, and even get new employees up to speed faster. However, using AI comes with challenges and risks that companies must recognize and address.

For example, AI can produce a compelling and utterly wrong statement, a phenomenon called “hallucination.” If your car’s GPS has ever led you to the wrong location, you have experienced something similar. Sometimes this happens because the AI was given bad information, but even an AI supplied with good information can hallucinate, to your company’s detriment. And your employees cannot produce good work with bad information any more than an apple tree can produce pears.

Also, many real-world situations can confuse AI. AI can only recognize patterns it has seen before; when it encounters something new, it can react unpredictably. For example, a sticker placed on a stop sign can flummox an AI system into confidently misidentifying the image. That kind of misidentification can cause real problems for organizations that employ facial or image recognition technology.

These problems can be managed, however. Through AI governance, companies can mitigate these risks and use AI safely, productively, and effectively.

To start, AI can only supplement human thought, not replace it, so appropriate AI usage requires humans to monitor what the AI is doing. Your company should no more run AI without human oversight than you would follow your GPS’s instructions into a lake. Without appropriate monitoring, your AI can easily start hallucinating and promulgating incorrect information across your organization, or perpetuating biases that your company is legally obligated to avoid.

This monitoring will have to take place in the context of written policies and procedures. Just as you would teach your teenager to drive before letting them behind the wheel, you should have written policies in place that inform your employees of the safest, most effective uses of AI. These procedures will need buy-in from your organization’s relevant stakeholders and review by legal counsel knowledgeable about AI. Your organization will also have to leverage its culture to ensure that key personnel know about the plan and can implement it properly.

Your company will also need an AI incident response plan. We tell teenagers what to do if they have an accident, and the same proactive strategy applies to AI. An incident response plan prepares your company to address problems before they arise rather than forcing you to scramble in real time to cobble together a suboptimal solution to a foreseeable problem. Should litigation or a government enforcement proceeding follow an AI incident, a written incident response plan can offer welcome guidance and protection.

Like a car, AI can make you more productive and get you where you’re going faster. But also like a car, AI can land you in a wreck if you’re not careful. Your company can enjoy AI’s benefits and manage its risks with thoughtful AI governance.