Enabling AI adoption at scale through enterprise risk management framework – Part 1

According to BCG research, 84% of executives view responsible AI as a top management responsibility, yet only 25% of them have programs that fully address it. Responsible AI can be achieved through effective governance, and with the rapid adoption of generative AI, this governance has become a business imperative, not just an IT concern. By implementing systematic governance approaches at the enterprise level, organizations can balance innovation with control, effectively managing the risks while harnessing the transformative potential of generative AI.

While generative AI technologies offer compelling capabilities, they also introduce new types of risks that need business oversight and management. Financial institutions face real challenges—AI-driven financial analysis tools could make investment recommendations based on biased data, leading to significant losses, while generative AI-powered customer service systems might inadvertently expose confidential customer information. The unprecedented scale and speed at which generative AI operates makes robust business controls essential. However, with the right governance approach and strategic oversight, these risks are manageable.

Part 1 of this two-part blog post guides business leaders, Chief Risk Officers (CROs), and Chief Internal Auditors (CIAs) through three critical questions:

  • What specific or unique risks does generative AI introduce, and how can they be managed?
  • How should your enterprise risk management framework (ERMF) evolve to support generative AI adoption?
  • How can you build sustainable generative AI governance in an ever-changing world—what should be on your checklist?

To address these questions, organizations can use established frameworks and standards, including:

  • The AWS Cloud Adoption Framework for AI (CAF-AI)
  • ISO/IEC 42001, the international standard for AI management systems
  • The NIST AI Risk Management Framework

These frameworks provide valuable guidance for organizations looking to implement responsible and governed AI practices.

Role of GRC leaders, CROs, and CIAs

Governance, risk, and control (GRC) functions, led by business leaders, CROs, and CIAs, are well positioned to advance generative AI innovation in financial services institutions. These functions have successfully managed complex risks in banks for years, and their existing expertise, proven approaches, and established risk frameworks provide a strong foundation for guiding generative AI adoption. They collaborate across the three lines of defense: business leaders making implementation decisions and managing the associated risks (first line), risk and compliance functions providing frameworks and oversight (second line), and internal audit providing independent assurance (third line).

If generative AI risks, both perceived and real, are managed through enterprise-wide governance practices rather than isolated, project-by-project approaches, organizations can realize the advantages of generative AI over the long term. This requires integration with the ERMF: some practices fit into existing structures, while others need deliberate adjustments to the ERMF itself to address generative AI's unique characteristics.

New frontiers in generative AI risk management

The traditional risk landscape at the enterprise level is based on a paradigm in which risks are predicted from past exposures. Preventive controls help stop unwanted events from happening, detective controls discover when problems slip through the preventive controls, and corrective controls remediate them.

Much of this paradigm is still valid in the world of generative AI. For example, access to generative AI applications needs to be managed carefully to avoid unauthorized use. All three of the preceding control types apply: preventive controls to block unauthorized use, detective controls to identify potential breaches, and corrective controls to remedy unauthorized access when it is detected.
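
As an illustration of a detective control in this space, the following is a minimal sketch that uses AWS CloudTrail to surface recent Amazon Bedrock InvokeModel calls and reconcile them against a list of approved IAM principals. The approved-principal list and alerting logic are hypothetical placeholders, and the sketch assumes these invocations appear in your CloudTrail event history.

```python
import boto3
from datetime import datetime, timedelta, timezone

# Hypothetical allow-list of IAM principals approved to invoke models.
APPROVED_PRINCIPALS = {"GenAIAppRole", "DataScienceRole"}

cloudtrail = boto3.client("cloudtrail")

# Look up Bedrock model invocations recorded in the last 24 hours.
# Assumes InvokeModel calls are captured in CloudTrail event history.
events = cloudtrail.lookup_events(
    LookupAttributes=[
        {"AttributeKey": "EventName", "AttributeValue": "InvokeModel"}
    ],
    StartTime=datetime.now(timezone.utc) - timedelta(hours=24),
)

for event in events["Events"]:
    principal = event.get("Username", "unknown")
    if principal not in APPROVED_PRINCIPALS:
        # In practice, raise an alert (for example, through Amazon SNS)
        # and trigger the corrective workflow.
        print(f"Unapproved invocation by {principal} at {event['EventTime']}")
```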

However, additional focus and attention are required in the following areas when implementing generative AI solutions:

  • Non-deterministic outputs – The non-deterministic nature of generative AI outputs poses a specific challenge. While the probabilistic behavior of these systems is often useful, inaccurate output from a black box model can have serious business implications, and organizations need to take conscious action to address this risk. Organizations can address it with Amazon Bedrock Guardrails Automated Reasoning checks, which use mathematically sound verification to help prevent factual errors and hallucinations (see the output-validation sketch after this list).
  • Deepfake threat – Generative AI's ability to create authentic-looking images and documents elevates traditional fraud to an entirely new level, producing eerily realistic content with unprecedented ease (hence the term deepfake). This poses significant challenges for organizations in verifying document authenticity, particularly in processes like Know Your Customer (KYC).
  • Layered opacity – Enterprises must also address risks from multi-layered AI systems, where each layer generates content and makes decisions based on potentially unexplainable models, hampering traceability. For example, generative AI outputs from a third-party system may serve as inputs to internal AI systems, creating a chain of interdependent decisions. This lack of transparency in critical decisions affecting organizational performance and customer treatment can have profound implications for enterprise trustworthiness, brand reputation, and regulatory compliance.
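
To make the output-validation idea concrete, the following is a minimal sketch that checks a generated answer with the Amazon Bedrock ApplyGuardrail API before it reaches a user. The guardrail identifier and version are hypothetical placeholders, and the guardrail itself is assumed to have been configured separately (for example, with Automated Reasoning or contextual grounding policies).

```python
import boto3

bedrock_runtime = boto3.client("bedrock-runtime")

def validate_output(model_answer: str) -> str:
    """Run a model answer through a preconfigured guardrail before use."""
    response = bedrock_runtime.apply_guardrail(
        guardrailIdentifier="gr-example123",  # hypothetical guardrail ID
        guardrailVersion="1",
        source="OUTPUT",  # validate model output rather than user input
        content=[{"text": {"text": model_answer}}],
    )
    if response["action"] == "GUARDRAIL_INTERVENED":
        # Return the guardrail's safe replacement text instead of the raw answer.
        return response["outputs"][0]["text"]
    return model_answer
```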

The following table outlines key generative AI risk areas and their potential business impacts; in Part 2, we explain how organizations can address these risks through their ERMF. Effectively managing these risks through enterprise-wide governance not only protects the organization but also forms the foundation for responsible AI adoption.

For a comprehensive foundation in responsible AI implementation, see the AWS Responsible Use of AI Guide, which aligns with the governance principles that we discuss throughout this article.

| Risk area | Description | Potential risk impact |
| --- | --- | --- |
| Fairness | Are the underlying data and algorithms fair and unbiased? Are the outputs leading to fair outcomes for different groups of stakeholders? | Discrimination lawsuits; loss of trust; business loss because of exclusion of segments |
| Explainability | Can stakeholders understand the black box behavior and evaluate system outputs? | Legal liabilities and regulatory sanctions due to inability to explain decisions; incorrect business decisions |
| Privacy and security | Are the systems aligned with privacy regulations and security requirements? | Fines arising from data breaches; loss of trust; damage because of security incidents |
| Safety | Are there controls to help prevent harmful system output and misuse? | Harmful content generation; customer harm; reputational damage |
| Controllability | Are there mechanisms to monitor and steer AI system behavior, including detection of model and data drift? | Undetected degradation of service; business disruption because of unreliable decisions; customer harm; inefficiencies arising from remediation |
| Veracity and robustness | Can the system maintain correct outputs even with unexpected or adversarial inputs? | Incorrect business decisions; system failures under stress; loss of operational reliability |
| Governance | Are there documented accountabilities across the AI supply chain, including model providers and deployers? Are users adequately trained to use the systems? | Confusion in crisis management; personal liability for executives; regulatory censure for governance failures; system misuse by untrained staff |
| Transparency | Can stakeholders make informed choices about their engagement with the AI system? | Loss of customer trust; regulatory non-compliance; stakeholder dissatisfaction |
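
The controllability row calls out detection of model and data drift. As a minimal, framework-agnostic sketch, the following assumes you log a numeric quality statistic per response (for example, a confidence score) and compares a recent window against a baseline with a two-sample Kolmogorov-Smirnov test; the threshold and sample values are illustrative.

```python
from scipy import stats

def detect_drift(baseline_scores: list[float],
                 recent_scores: list[float],
                 p_threshold: float = 0.01) -> bool:
    """Flag drift when recent output scores no longer match the baseline.

    The p-value threshold is illustrative and should be tuned to your
    tolerance for false alarms.
    """
    _, p_value = stats.ks_2samp(baseline_scores, recent_scores)
    return p_value < p_threshold

# Hypothetical example: baseline confidence scores vs. a recent window.
baseline = [0.91, 0.88, 0.93, 0.90, 0.87, 0.92, 0.89, 0.94]
recent = [0.71, 0.65, 0.80, 0.69, 0.74, 0.68, 0.77, 0.70]
if detect_drift(baseline, recent):
    print("Possible model or data drift detected; route for review.")
```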

Remitly’s implementation of Amazon Bedrock Guardrails to protect customer personally identifiable information (PII) data and reduce hallucinations demonstrates how financial institutions can effectively manage privacy and veracity risks in generative AI applications, addressing several of the risk areas outlined above.
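
As a hedged illustration of such privacy controls, the following sketch uses the Amazon Bedrock CreateGuardrail API to create a guardrail that anonymizes common PII types in model responses. The guardrail name, messages, and entity list are illustrative placeholders, not a complete PII policy.

```python
import boto3

bedrock = boto3.client("bedrock")  # control-plane client, not bedrock-runtime

# Illustrative guardrail that masks common PII types in model responses.
response = bedrock.create_guardrail(
    name="pii-protection-example",  # hypothetical name
    blockedInputMessaging="Sorry, this request cannot be processed.",
    blockedOutputsMessaging="Sorry, this response was withheld by policy.",
    sensitiveInformationPolicyConfig={
        "piiEntitiesConfig": [
            {"type": "EMAIL", "action": "ANONYMIZE"},
            {"type": "PHONE", "action": "ANONYMIZE"},
            {"type": "NAME", "action": "ANONYMIZE"},
        ]
    },
)
print(response["guardrailId"], response["version"])
```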

Conclusion

In this post, we introduced the critical importance of responsible AI governance for enterprises adopting generative AI at scale. We explored the unique risks that generative AI presents, including non-deterministic outputs, deepfake threats, and layered opacity. We outlined key risk areas such as fairness, explainability, privacy and security, safety, controllability, veracity and robustness, governance, and transparency. These risks underscore the need for a robust enterprise risk management framework tailored to the challenges of generative AI.

We emphasized the crucial role of GRC leaders, CROs, and CIAs in advancing generative AI innovation while managing associated risks. By using established frameworks like the AWS Cloud Adoption Framework for AI, ISO/IEC 42001, and the NIST AI Risk Management Framework, organizations can implement responsible and governed AI practices.

In Part 2 of this series, we explore how organizations can adapt their enterprise risk management framework to address these risks effectively, including specific considerations for cloud and generative AI implementation. We’ll provide detailed guidance on making your ERMF generative AI-ready and outline practical steps for sustainable risk management.


Milind Dabhole

Milind is a Principal Customer Solutions Manager focusing on enterprise innovation and risk governance. Before joining AWS, he spent over two decades in financial services, holding senior roles across first, second, and third lines of defense at global financial institutions. At AWS, he advises C-suite executives on cloud and AI transformation strategies that balance innovation with robust controls.

Stephen James Martin

Steve is the Head of Financial Services Compliance and Security for EMEA and APAC. Steve joined AWS after working for over 20 years in financial services in senior leadership roles with responsibility across Asia, the Middle East, and Europe. At AWS, he supports customers as they use the scale, security, and agility of AWS to transform the industry.
