
AI Governance: From Buzzwords to Best Practices


AI will most likely win the buzzword award for 2023. ChatGPT and Google Bard have opened the eyes of millions to the potential benefits of AI. AI gives organizations opportunities to dramatically increase efficiency and cut costs; unfortunately, it also introduces new risks to those same organizations.


In March 2023, over 30,000 individuals, including well-known technology leaders, signed an open letter asking AI labs to pause work on systems more capable than GPT-4 for at least six months. In the letter, they called for policymakers and AI developers to work together to accelerate the development of strong AI governance, including oversight and tracking of high-risk AI systems, watermarking technologies to help distinguish real content from synthetic, robust auditing systems, and risk management tailored to AI-specific risks.


While generative AI has caused quite a stir, regulations around AI have been in the works for some time. The European Union (EU), per usual, arrived first on the scene with its sweeping AI Act; penalties under that law could reach 30 million euros or 6% of global annual turnover for non-compliance. Financial regulators in the US and UK have also declared that AI models need the same level of attention and rigor as any other model undergoing model risk management. In addition, the White House has released its Blueprint for an AI Bill of Rights, intended to help policymakers draft effective AI regulations and hinting that more regulation is coming to the AI space.


Why AI Governance is Needed


In short, the purpose of AI governance is to avoid and mitigate harm by building trustworthy AI. Organizations serious about AI governance should consider taking a “do no harm” oath regarding AI. When AI is used to make decisions that affect humans, harm may befall your customers, employees, community, or society. AI governance needs to address potential impacts and harms to these groups across the entire AI lifecycle.


Trustworthy AI has different definitions depending on who you ask, but most share the same general premise. The EU AI Act describes trustworthy AI as “legally compliant, technically robust, and ethically sound.” The National Institute of Standards and Technology (NIST) outlines characteristics of trustworthy AI in its AI Risk Management Framework (AI RMF): valid and reliable; safe; secure and resilient; accountable and transparent; explainable and interpretable; privacy-enhanced; and fair, with harmful bias managed.


While we’re speaking of NIST, Archer customers should check out the Archer NIST AI Risk Management Framework app-pack on the Archer Exchange. It lets you use the NIST AI RMF to assess your AI implementations, determine your current posture through a comprehensive risk assessment, and design and implement effective risk mitigation strategies to close the gaps between your current and target implementations.


The idea is that building and using trustworthy AI reduces harm. That’s what we are striving for when instituting AI Governance.



How to Govern AI at Your Organization


If you have been in risk management for a while, you can guess what general steps are required. At a high level, a general framework of AI governance would include identification and documentation of your AI systems, risk analysis and evaluation, implementation and testing of controls, and ongoing monitoring. Let’s break these down.


#1 Identification

To start managing AI systems, you have to know which AI systems you are using. NIST and the EU AI Act both provide useful definitions of AI; broadly, any system using machine learning, logic-based, knowledge-based, or statistical approaches is considered AI.


That covers a lot. And that is much more than just ChatGPT.


When you document your AI systems, it’s critical you collect and document specific information. Important details include:

  • Context – the intended purpose, benefits, norms and expectations, people involved, settings in which it’s deployed, goals, instructions on use, etc.

  • Development details – methods and steps used to develop the AI system, key design choices, system architecture, data requirements, validation and testing information, etc.

  • Monitoring information – the incident management process, key performance indicators, review cycles, etc.

  • Risks and impacts – identified risks, how risks are managed, potential impacts to consumers, employees, society, communities, organizations, etc.

  • Change management – historical log of changes to the AI system


For more information, review the “Map” categories in the NIST AI RMF, as well as the EU AI Act sections on technical documentation and the summary data sheet.
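For teams that want this inventory in a structured form rather than scattered documents, the sketch below shows one way to model an AI system record in Python, covering the context, development, monitoring, risk, and change-management details listed above. The class names, field names, and the example entry are illustrative assumptions, not terms from NIST or the EU AI Act.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class ChangeEntry:
    changed_on: date        # when the change was made
    description: str        # what changed and why
    approved_by: str        # who approved the change

@dataclass
class AISystemRecord:
    name: str
    intended_purpose: str            # context: goals, benefits, instructions on use
    deployment_setting: str          # where it is deployed and who is involved
    development_details: str         # methods, key design choices, architecture, data
    validation_summary: str          # validation and testing information
    monitoring_plan: str             # KPIs, review cycles, incident management process
    identified_risks: list[str] = field(default_factory=list)
    potential_impacts: list[str] = field(default_factory=list)
    change_log: list[ChangeEntry] = field(default_factory=list)

# Hypothetical example entry
resume_screener = AISystemRecord(
    name="Resume screening model",
    intended_purpose="Rank applicants for recruiter review",
    deployment_setting="Internal HR hiring workflow",
    development_details="Gradient-boosted classifier trained on historical hiring data",
    validation_summary="Holdout accuracy plus disparate-impact testing",
    monitoring_plan="Monthly drift report; incidents routed to the model risk team",
    identified_risks=["Bias against protected groups", "Data drift"],
    potential_impacts=["Qualified applicants screened out unfairly"],
)
```

Even a simple record like this makes the later steps (risk assessment, control mapping, monitoring) much easier, because every system arrives with the same baseline documentation.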


#2 Risk Assessment

The purpose of assessing the risk of your AI systems is to understand the potential harm they could cause and to determine the level of controls you should apply.


Typical information system risk assessments prioritize systems based on the classification of the data housed and processed within them, as well as the system’s functional importance to the organization. The same thought process applies to AI systems, but organizations should also consider how the system is used.


The EU AI Act, for example, outright bans certain uses of AI and AI systems that cause specific harms. Systems that exploit vulnerable groups or violate rights are prohibited from the market, as is using AI to socially score individuals or perform real-time biometric identification in public spaces.


High-risk AI systems include systems that assist with education, such as determining which students are admitted to a school or placed in certain programs. Any system used for hiring or firing decisions is considered high risk, as are systems that determine who gets access to essential services, such as credit scoring.


AI systems that don’t make predictions or decisions are generally less risky.


For more information, review the NIST AI RMF “Measure” categories, the EU AI Act on risk levels, the NIST Risk Management Framework, or regulations on Model Risk.
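To make the prioritization concrete, here is a minimal sketch of a usage-based tiering helper, loosely modeled on the EU AI Act’s categories of prohibited, high, limited, and minimal risk. The keyword lists and tier names are simplified assumptions for illustration, not the Act’s legal definitions.

```python
# Keyword lists and tier names are simplified assumptions, not legal text.
PROHIBITED_USES = {"social scoring", "real-time public biometric identification",
                   "exploitation of vulnerable groups"}
HIGH_RISK_USES = {"admissions", "hiring", "firing", "credit scoring",
                  "essential services"}

def risk_tier(use_case: str, decides_about_people: bool) -> str:
    """Return a coarse risk tier for an AI system based on its described use."""
    use = use_case.lower()
    if any(u in use for u in PROHIBITED_USES):
        return "prohibited"   # cannot be placed on the market
    if any(u in use for u in HIGH_RISK_USES):
        return "high"         # strongest controls and conformity assessment expected
    # Systems that don't make predictions or decisions about people
    # generally land in a lower tier.
    return "limited" if decides_about_people else "minimal"

print(risk_tier("Hiring recommendation engine", decides_about_people=True))  # high
print(risk_tier("Internal document search", decides_about_people=False))     # minimal
```

In practice the tiering logic would be a governance decision made by people, not a keyword match; the point is simply that usage, not just data sensitivity, should drive the tier.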


#3 Implement and Assess Controls

Put strong controls in place at every stage of the AI lifecycle: design, development, evaluation and testing, deployment, operation, and eventual retirement.


Generally, controls should be put in place to respond to and manage identified risks during your risk assessments. The objective is to maximize the benefits of AI, while minimizing the negative impacts. Examples of controls include, but are not limited to:

  • Drafting policies that cover AI values and governance

  • Conducting ethical assessments

  • Keeping up-to-date technical documentation

  • Enforcing data governance

  • Continuously identifying and managing risks and impacts

  • Conducting model reviews, validation, and performance monitoring

  • Creating clear deployment strategies

  • Implementing strong change management

  • Setting clear decommission strategies for AI systems


NIST recommends implementing and testing these types of controls based on the risk level of your AI systems. Under the EU AI Act, high-risk AI systems must undergo a conformity assessment to demonstrate that they meet the Act’s requirements; the assessment covers the topics above and more. Without a conformity assessment, you cannot deploy a high-risk AI system in the EU market. It’s expected that the US will have similar requirements in future legislation.
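One lightweight way to operationalize “controls based on risk level” is to maintain a mapping from risk tier to required controls and check for gaps before deployment. The sketch below is an illustrative assumption; the tier names and per-tier control sets mirror the examples above but are not drawn from NIST or the EU AI Act.

```python
# Illustrative mapping from risk tier to required controls; the control
# names mirror the examples above, but the per-tier requirements are
# assumptions, not a regulatory checklist.
REQUIRED_CONTROLS = {
    "high": {"ai policy", "ethical assessment", "technical documentation",
             "data governance", "model validation", "change management",
             "decommission plan", "conformity assessment"},
    "limited": {"ai policy", "technical documentation", "data governance"},
    "minimal": {"ai policy"},
}

def control_gaps(risk_tier: str, implemented: set[str]) -> set[str]:
    """Return the required controls not yet implemented for a given tier."""
    return REQUIRED_CONTROLS.get(risk_tier, set()) - implemented

gaps = control_gaps("high", {"ai policy", "data governance"})
print(sorted(gaps))  # controls still needed before deployment
```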


#4 Ongoing Monitoring

Once risk analysis, evaluation, and control selection have been completed, organizations should continuously monitor their AI systems in production. Ongoing monitoring includes activities such as control reassessment, regular reviews, incident tracking and management, and ongoing risk identification.


Organizations should be proactive in reporting incidents to the proper stakeholders, as regulators are placing greater emphasis on incident disclosure requirements. It’s better to be ahead of the curve in this space than behind it.


Organizations should track their own incidents and manage them effectively. When logging and reporting incidents, track details such as the incident summary, reporter, source system, dates of occurrence, impacts, and affected stakeholders. In many cases these incidents will need to be shared both internally and externally, so organizations should plan their communication strategy now.
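As a starting point, the incident fields above can be captured in a simple structured record so every report is consistent and ready to share. The sketch below is illustrative; the class and field names, and the example values, are assumptions rather than a prescribed schema.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class AIIncident:
    summary: str                        # what happened
    reporter: str                       # who reported it
    source_system: str                  # which AI system was involved
    occurred_on: date                   # date of occurrence
    impacts: str                        # consequences of the incident
    affected_stakeholders: list[str] = field(default_factory=list)
    reported_externally: bool = False   # disclosure status

# Hypothetical example entry
incident = AIIncident(
    summary="Credit model produced systematically lower scores for one region",
    reporter="Model risk analyst",
    source_system="Consumer credit scoring model",
    occurred_on=date(2023, 9, 14),
    impacts="Affected applications flagged for manual re-review",
    affected_stakeholders=["Applicants", "Underwriting team", "Compliance"],
)
```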


Conclusion


Risk managers can leverage frameworks already in place to help govern AI, but they will need to adapt them to the unique challenges AI presents. By identifying AI systems, prioritizing them based on risk, applying controls, and monitoring them in production, organizations can build and use more trustworthy AI and avoid negative impacts and harm.


Teams working to manage AI risk will also need to be agile in a rapidly developing regulatory space. For example, the current version of the NIST AI RMF, most model-related regulations, and even the EU AI Act were written to mitigate risks from traditional AI, not generative AI (GAI). GAI presents its own unique challenges and risks. While these regulations and frameworks overlap substantially, organizations that don’t adapt them to these new AI technologies expose themselves to significant risk. Risk teams need to look ahead at what is coming and start now to institute proper AI governance.

