How to Approach Artificial Intelligence from a Compliance Perspective

I recently sat down with NAVEX Global to discuss how Compliance should approach the use of artificial intelligence (AI). As businesses begin to grapple with its use, Compliance needs to get in the conversation now. Here’s the blog they created from our conversation.

AI is challenging. Many of the stories you’ve read include both the good (optimizing business operations, incorporating predictive analytics, etc.) and the bad (large language model (LLM) “hallucinations”, plagiarism concerns, misinformation, bias, etc.). Between the future AI regulatory landscape and the powerful ways AI can enhance business operations, it will continue to be a topic of conversation for business leaders – and compliance professionals should be at the center of those discussions.

Most risk and compliance leaders realize the potential for AI to enhance the business – but with great technology capabilities comes the need for proper governance.

There is no “one size fits all” approach to many things in business, and the same is certainly true for AI. So how should organizations approach the journey to ethical AI, given their use cases, business needs, and other factors?


The first thing to do from a compliance perspective is to perform a risk assessment. We start by asking if AI is currently being used in the business, and if so, how? Is there documentation? Are there current rules about its use?

Next, we ask whether there are planned AI projects. We then ask if there is documentation or any rules/policies that will apply.

Lastly, we evaluate what needs to be done to ensure the use of AI complies, or will comply, with:

➡️ The law

➡️ The company’s values

➡️ Ethical business practices.

When it comes to AI, the law is changing all the time. The proposed EU Artificial Intelligence Act will create new regulatory obligations. The U.S. recently held congressional hearings on the subject and is considering how to implement effective regulations. Compliance must pay close attention to these new laws so that it can advise appropriately about planned activities. The EU GDPR already applies to some AI-related activities, like the automatic review of resumes to find the best candidates – there is a lot to follow.

The bottom line is that it’s critical for compliance to know the planned business activities so it can respond quickly and appropriately. Just because an activity is legal doesn’t mean that it fits the company’s values or ethical business practices. AI activities need to be reviewed using those criteria as well.

How can businesses effectively establish guardrails for their AI programs that balance the excitement for AI’s possibilities with responsible use of the technology?


Any guardrail put around AI needs to include reference to the three issues named above – compliance with:

➡️ The law

➡️ The company’s values

➡️ Ethical business practices.

Guardrails should never be too prescriptive, because business practices and activities in this space change constantly. People trying to use AI need to be trained to consider these three focal points, and the focal points should be written into a policy document or advisory note that is readily available on an intranet site or other repository.

Do organizations have an ethical obligation to help their employees understand the planned impacts of AI in the organization? If so, what’s the best way to approach this communication/conversation?


Absolutely – any organization using or considering using AI has an ethical obligation to help employees understand the planned impacts of AI in the organization.

They also have an obligation to train employees on the pitfalls of AI use and the red flags to look out for, based on the business activities occurring or being considered. Training can be done in person, via eLearning, or through webinars.

Communication can happen in myriad ways. The important thing is reinforcement. People learn through the reinforcement of key ideas, and it is critical for the company to ensure everyone understands their ethical obligations with respect to AI usage.

The original blog post can be found here.


Kristy Grant-Hart

Kristy Grant-Hart is the founder and CEO of Spark Compliance.
She's a renowned expert at transforming compliance departments into in-demand business assets.