10 things you should include in your AI policy
Plan for all possibilities, including the worst
Things happen. No matter how good and comprehensive an AI policy is, there will be violations, and there will be problems. A company chatbot will say something embarrassing or make a promise the company can’t keep because the right guardrails weren’t activated.
“You hear some interesting and fun examples of where AI has gone wrong,” says Priest. “But it’s a very minor part of the conversation, because there are reasonable ways to manage those risks. And if there’s any volume of those risks manifesting, you activate countermeasures at the architectural layer, at the policy layer, and at the training layer.”
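To make that concrete, here is a minimal sketch of what a countermeasure at the architectural layer might look like: a guardrail that screens a chatbot's reply before it reaches a customer. The blocked phrases, fallback message, and logging are illustrative assumptions, not a description of any particular product.

```python
import re

# Hypothetical guardrail at the architectural layer: screen chatbot output
# before it reaches a customer. Patterns and fallback text are placeholders.
BLOCKED_PATTERNS = [
    re.compile(r"\bwe guarantee\b", re.IGNORECASE),  # unauthorized promises
    re.compile(r"\bfull refund\b", re.IGNORECASE),   # commitments policy may not allow
]

def screen_reply(reply: str) -> str:
    """Return the reply if it passes the guardrail, else a safe fallback."""
    for pattern in BLOCKED_PATTERNS:
        if pattern.search(reply):
            # Log the hit for the policy team and hand off to a human.
            print(f"guardrail triggered: {pattern.pattern!r}")
            return "Let me connect you with a member of our team who can help."
    return reply

print(screen_reply("We guarantee next-day delivery on all orders."))
```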
And just as a company needs technical measures in place for when AI goes off track, an AI policy also needs to include incident response for when a problem grows beyond what those measures can contain, and a management response for cases in which employees, customers, or business partners deliberately or accidentally violate the policy.
For example, employees in a particular department might routinely forget to review documents before they are sent to customers, or a business unit might set up a shadow AI system that ignores data privacy or security requirements.
“Who do you call?” asks Schellman’s Desai.
There needs to be a process, and training, to ensure that the right people are in place to handle violations and have the authority to set things right. And if an entire AI process goes wrong, there needs to be a way to shut the system off without damaging the company.
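One common way to build that shutoff is a central kill switch that every AI feature checks before running, so a process can be disabled without a redeploy. The sketch below is an assumption-laden illustration: the flag file, the "support_chatbot" feature name, and the fallback message are all hypothetical. Note that it fails closed and degrades gracefully rather than erroring out in front of a customer.

```python
import json
from pathlib import Path

# Hypothetical kill switch: a central flag file, e.g. {"support_chatbot": false},
# that each AI feature consults before running. Path and names are illustrative.
FLAGS_FILE = Path("ai_feature_flags.json")

def is_enabled(feature: str) -> bool:
    """Fail closed: if the flag file is missing or unreadable, disable the feature."""
    try:
        flags = json.loads(FLAGS_FILE.read_text())
        return bool(flags.get(feature, False))
    except (OSError, json.JSONDecodeError):
        return False

def run_model(question: str) -> str:
    return f"(model answer to: {question})"  # stand-in for the real model call

def answer_customer(question: str) -> str:
    if not is_enabled("support_chatbot"):
        # Graceful degradation: route to a human rather than failing outright.
        return "Our team will get back to you shortly."
    return run_model(question)

print(answer_customer("Where is my order?"))
```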
Plan for change
AI technology moves quickly, which means much of what goes into a company’s AI policy needs to be reviewed and updated regularly.
“If you design a policy that doesn’t have an ending date, you’re hurting yourself,” says Rayid Ghani, a professor at Carnegie Mellon University. In practice, that might mean certain provisions are reviewed every year, or even every quarter, to make sure they’re still relevant.
“When you design the policy, you have to flag the things that are likely to change and require updates,” he says. The changes could be a result of technological progress, or new business needs, or new regulations.
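One lightweight way to flag those provisions is to record a review date alongside each one and check the list automatically. The provisions and cadences in this sketch are illustrative assumptions, not a recommended schedule.

```python
from datetime import date

# Hypothetical policy register: each provision carries a scheduled review date,
# so nothing outlives its shelf life. Entries and dates are illustrative only.
PROVISIONS = {
    "approved model list": date(2025, 3, 31),   # quarterly: the tech moves fast
    "data privacy rules": date(2025, 12, 31),   # yearly: tied to regulation
    "incident response": date(2025, 12, 31),
}

def overdue(today: date) -> list[str]:
    """Return provisions whose scheduled review date has passed."""
    return [name for name, review_by in PROVISIONS.items() if review_by < today]

for name in overdue(date.today()):
    print(f"policy provision due for review: {name}")
```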
At the end of the day, an AI policy should spur innovation and development, not hinder it, says Sinclair Schuller, principal at EY. “Whoever is at the top — the CEO or the CSO — should say, ‘we’re going to institute an AI policy to enable you to adopt AI, not to prevent you from adopting AI’,” he says.