We have all seen the movies: robots wreaking havoc and seizing the reins of civilization. As a fan of science fiction, I have read many tales of artificial intelligence manifesting in some form or fashion. It is interesting that most of those tales portray the technology in binary terms. One side is benevolent, bringing progress and prosperity to humankind; the other is a malevolent force that threatens society’s very existence.
With all of the talk of ChatGPT, Google’s Bard, Microsoft’s Bing AI and others, the question of how artificial intelligence (AI) will affect the world has jumped to the forefront. Reminiscent of when Deep Blue beat Kasparov in chess, this discussion is another reminder that technology doesn’t walk through the information age – it leapfrogs. Yet the recent AI advancements are just the latest evolution of the machine learning and modeling techniques that have been transforming industries from healthcare to finance for years. With AI’s headline power, however, come real risks in its development and deployment that risk management professionals must have squarely on their radar.
The good news is that the ball is rolling on defining approaches. NIST launched the Trustworthy and Responsible AI Resource Center on March 30th. This new effort will facilitate implementation of, and international alignment with, the NIST AI Risk Management Framework released in January of this year. Another source is the US Department of Energy’s AI Risk Management Playbook (AIRMP). These are just the tip of the iceberg of emerging guidance: the risks in AI and machine learning have been covered by a growing body of academic research and will remain a source of investigation as new models and techniques emerge.
From a governance, risk and compliance (GRC)/Integrated Risk Management (IRM) perspective, AI risk has several basic touchpoints. For example:
Policies and Standards: Policies and standards are the bedrock of governance and compliance. Corporate policies must cover the use of any type of AI in order to establish control requirements.
Security Controls: AI systems can be vulnerable to cyber-attacks, just like any other computer system. A compromised AI system could expose sensitive data to theft, manipulation, or even destruction. Malicious actors could also use AI to launch attacks, such as creating deepfakes to spread disinformation or manipulating financial markets.
Compliance and Risk Assessments: More than likely, your assessment processes already cover many bases, from regulatory requirements to internal control compliance, and the list of topics to consider keeps growing. The frameworks referenced above are excellent starting points for incorporating simple questions that identify potential uses of machine learning and AI, so you can get ahead of the game.
Data Governance: Part of IRM is understanding how data flows through the organization. Data governance may have started with privacy efforts, but it increasingly needs to account for all types of data as well as how that data is used. It should now also cover any use of machine learning or AI so that those efforts are monitored for risks such as bias – a simple sketch of such a check follows this list.
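To make that bias-monitoring point concrete, here is a minimal sketch, in Python, of the kind of periodic check a data governance team might run against a model’s decisions. The disparate impact ratio and the 80% "four-fifths rule" threshold are common conventions in fairness reviews, but everything else here is a hypothetical assumption: the function name, the group labels, and the sample decisions are purely illustrative, not part of any framework cited above.

# A minimal, illustrative bias check over model outcomes.
# The data and the 0.8 threshold (the "four-fifths rule")
# are assumptions for demonstration, not a mandated standard.
from collections import defaultdict

def disparate_impact_ratio(outcomes):
    """outcomes: list of (group, approved) pairs, where approved is a bool.

    Returns (ratio, rates): the ratio of the lowest group approval rate
    to the highest, plus the per-group rates themselves.
    """
    totals = defaultdict(int)
    approvals = defaultdict(int)
    for group, approved in outcomes:
        totals[group] += 1
        if approved:
            approvals[group] += 1
    rates = {g: approvals[g] / totals[g] for g in totals}
    return min(rates.values()) / max(rates.values()), rates

# Hypothetical model decisions, tagged by a protected attribute.
decisions = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

ratio, rates = disparate_impact_ratio(decisions)
print(f"approval rates: {rates}")
print(f"disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:
    print("flag for review: outcomes differ materially across groups")

A check like this does not prove a model is fair or unfair; it simply gives risk and data governance teams a measurable signal that routes a model to human review, which is exactly the kind of monitoring control the touchpoints above are meant to establish.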
As organizations contemplate how AI can further business objectives, risk, compliance and security teams must prepare for the inevitable. On the one hand, this isn’t anything new – these functions have faced technology advancements before. On the other hand, AI is a completely new animal. The funny thing about AI is that you can actually ask it what its risk is. I can’t think of any other risk that can answer the question “What risk do you pose?”
As ChatGPT told me:
While AI has the potential to revolutionize industries and improve people's lives, there are also risks associated with its development and deployment. These risks include bias, security, unemployment, autonomy, and lack of accountability. As AI continues to develop, it is important that we are aware of these risks and take steps to mitigate them. This includes developing AI systems that are transparent, accountable, and ethical, and ensuring that humans remain in control of AI systems.
I couldn’t have said it better myself.
For more information, read IDC’s report on the modern needs of risk management.