Written by: Dylan Sekuterski, Matt MacDonald
Artificial intelligence (AI) is transforming the way businesses operate, creating unprecedented opportunities for innovation and growth. However, the rapid advancement of AI has raised concerns about the ethical and responsible use of this powerful technology. To address these concerns, the National Institute of Standards and Technology (NIST) has released the AI Risk Management Framework (RMF) version 1.0. This guidance offers organizations that design, develop, deploy, or use AI systems a resource to help manage AI risks and promote the trustworthy and responsible development and use of AI systems.
The AI RMF 1.0 guidance is intended to provide a comprehensive approach for managing risks associated with the deployment and use of AI systems. It aims to assist businesses in integrating risk management practices into their AI initiatives and to ensure that AI systems are developed and used responsibly, ethically, and with accountability. The AI RMF 1.0 guidance emphasizes the importance of governance and leadership commitment in managing AI risks. It highlights the need for businesses to establish clear roles, responsibilities, and accountabilities for AI governance at different levels within the organization. It underscores the significance of risk assessment and management throughout the AI system lifecycle, from development to deployment and beyond. Additionally, it emphasizes the need for businesses to identify, assess, and mitigate risks associated with data quality, bias, transparency, explainability, and security in AI systems.
The AI RMF 1.0 organizes its approach to evaluating and assessing risks associated with AI implementation into four core functions: Govern, Map, Measure, and Manage.
Govern
The Govern function outlines processes and structures to identify and manage the risks that AI systems can pose, aligns AI risk management with organizational principles and priorities, and addresses legal and other ongoing requirements. Govern is a cross-cutting function that should be integrated into the other three functions, and attention to governance is required throughout an AI system’s lifespan. Strong governance can drive internal practices and norms, and documentation can enhance transparency and accountability. Executing the Govern function should result in a risk-focused culture.
Map
The Map function establishes the context to frame risks related to an AI system. The complexity and interdependencies of the AI lifecycle make it difficult to reliably anticipate the impacts of AI systems, so anticipating, assessing, and addressing potential sources of negative risk is crucial. The Map function gathers contextual knowledge and enables prevention of negative risks, informing decisions about processes such as model management as well as the initial decision about whether an AI solution is appropriate or needed at all. The outcomes of the Map function are the basis for the Measure and Manage functions. After completing the Map function, Framework users should have sufficient contextual knowledge about AI system impacts to inform an initial go/no-go decision about whether to design, develop, or deploy an AI system. It is important for Framework users to continue applying the Map function as context, capabilities, risks, benefits, and potential impacts evolve over time.
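As a rough illustration of how Map-stage findings might feed that go/no-go decision, the sketch below records contextual information in a simple structure and gates the decision on whether any blocking concerns remain unresolved. The AI RMF does not prescribe any particular format or tooling; the field names, example system, and decision rule here are assumptions made for the example.

```python
# Illustrative only: a minimal, hypothetical record of Map-stage context.
# The AI RMF does not prescribe any data structure or tooling; these field
# names and the decision rule are assumptions chosen for this sketch.
from dataclasses import dataclass, field

@dataclass
class MapContext:
    system_name: str
    intended_purpose: str
    deployment_setting: str
    affected_groups: list[str]
    identified_negative_risks: list[str]
    anticipated_benefits: list[str]
    unresolved_blockers: list[str] = field(default_factory=list)

    def initial_go_decision(self) -> bool:
        """Simple gate: proceed to design/development only if no
        unresolved blocking concerns remain after context mapping."""
        return not self.unresolved_blockers

# Hypothetical example system used purely for illustration.
context = MapContext(
    system_name="resume-screening-assistant",
    intended_purpose="Rank applications for recruiter review",
    deployment_setting="Internal HR workflow with human review",
    affected_groups=["job applicants", "recruiters"],
    identified_negative_risks=["historical hiring bias in training data"],
    anticipated_benefits=["faster first-pass screening"],
    unresolved_blockers=["no representative evaluation dataset yet"],
)
print("Proceed?", context.initial_go_decision())  # False until blockers are resolved
```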
Measure
The Measure function uses various tools and methodologies to analyze, assess, benchmark, and monitor AI risks and related impacts. It includes testing AI systems before deployment and regularly while in operation, and tracking metrics for trustworthy characteristics, social impact, and human-AI configurations. Processes for independent review can improve the effectiveness of testing and mitigate internal biases and conflicts of interest. The outcomes of the Measure function are used in the Manage function to support risk monitoring and response efforts. It is important to continue applying the Measure function as knowledge, methodologies, risks, and impacts evolve over time.
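To make the idea of tracking metrics concrete, the sketch below computes one possible measurement, the gap in positive-prediction rates across demographic groups, and compares it to an organization-defined tolerance. Both the choice of metric and the threshold are assumptions for this example; the AI RMF does not mandate specific metrics, tools, or thresholds.

```python
# Illustrative only: one of many possible Measure-stage checks. The metric
# (selection-rate gap across groups) and the threshold are assumptions for
# this sketch, not requirements of the AI RMF.
from collections import defaultdict

def selection_rate_gap(predictions, groups):
    """Return the largest difference in positive-prediction rates between groups."""
    totals, positives = defaultdict(int), defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += int(pred == 1)
    rates = [positives[g] / totals[g] for g in totals]
    return max(rates) - min(rates)

# Toy monitoring run: model outputs and the demographic group of each case.
preds  = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

gap = selection_rate_gap(preds, groups)
THRESHOLD = 0.2  # assumed, organization-specific tolerance
if gap > THRESHOLD:
    print(f"Selection-rate gap {gap:.2f} exceeds tolerance; escalate per Manage plans")
else:
    print(f"Selection-rate gap {gap:.2f} within tolerance")
```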
Manage
The Manage function involves allocating risk resources to the risks that have been mapped and measured, as defined by the Govern function, and creating plans for responding to and recovering from incidents or events related to AI systems. Expert consultation and input from relevant AI actors are used to decrease the likelihood of negative impacts. Documentation practices established in Govern and utilized in Map and Measure increase transparency and accountability, and processes for assessing emergent risks are put in place. After completing the Manage function, plans for prioritizing and monitoring risks will be in place, and Framework users will have enhanced capacity to manage risks and allocate resources. It is important for Framework users to continue to apply the Manage function to deployed AI systems as methods, contexts, and risks evolve over time.
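One way to picture those prioritization and monitoring plans is a simple risk register, sketched below. The scoring scales, field names, and response categories are assumptions chosen for illustration rather than anything specified by the AI RMF.

```python
# Illustrative only: a hypothetical risk register for prioritizing and
# monitoring AI risks after Map and Measure. Scoring scales, field names,
# and response categories are assumptions, not part of the AI RMF itself.
from dataclasses import dataclass

@dataclass
class AIRisk:
    description: str
    likelihood: int      # 1 (rare) .. 5 (almost certain), assumed scale
    impact: int          # 1 (negligible) .. 5 (severe), assumed scale
    response: str        # e.g., "mitigate", "transfer", "accept", "avoid"
    owner: str

    @property
    def priority(self) -> int:
        return self.likelihood * self.impact

register = [
    AIRisk("Biased outcomes for protected groups", 4, 5, "mitigate", "ML lead"),
    AIRisk("Model drift degrades accuracy", 3, 3, "mitigate", "MLOps"),
    AIRisk("Vendor API deprecation", 2, 2, "accept", "Engineering"),
]

# Review highest-priority risks first; revisit as contexts and methods evolve.
for risk in sorted(register, key=lambda r: r.priority, reverse=True):
    print(f"[{risk.priority:>2}] {risk.description} -> {risk.response} ({risk.owner})")
```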
For startups or other businesses using AI-driven technology, here are some key takeaways to consider when deciding whether to use the AI RMF 1.0 guidance for their risk management program:
- Compliance with regulatory requirements: If your business operates in a regulated industry or is subject to specific AI-related regulations, adopting the AI RMF may be necessary to ensure compliance with regulatory requirements. The Framework may provide guidance on best practices for managing AI risks, which can help your business demonstrate compliance with relevant regulations and avoid potential penalties or legal issues.
- Ethical and responsible AI practices: The AI RMF emphasizes the importance of ethical and responsible AI practices, including addressing issues such as data quality, bias, transparency, explainability, and security. By adopting the Framework, your business can proactively integrate these principles into your AI initiatives, which can contribute to building trust with customers, investors, and other stakeholders.
- Risk mitigation and reputation management: The AI RMF provides a comprehensive approach for identifying, assessing, and mitigating risks associated with AI systems. By incorporating the Framework into your risk management program, your business can proactively manage and mitigate potential risks related to AI deployment, which can help protect your reputation and minimize potential negative impacts on your brand.
- Competitive advantage: Embracing responsible and accountable AI practices can position your business as a leader in the industry and provide a competitive advantage. By adopting the AI RMF and adhering to its guidance, your business can differentiate itself by demonstrating a commitment to responsible AI governance. This can be attractive to customers, investors, and other stakeholders who prioritize ethical and responsible technology use.
- Organizational culture and long-term sustainability: Incorporating the AI RMF into your risk management program can help foster a culture of responsible AI within your organization. By establishing clear roles, responsibilities, and accountabilities for AI governance, your business can embed ethical and responsible AI practices into your organizational culture, contributing to long-term sustainability and success.
The AI Risk Management Framework version 1.0 also emphasizes the importance of human oversight and accountability in AI decision-making processes. It encourages businesses to ensure that human involvement is appropriately integrated into AI systems to avoid undue reliance on AI and to maintain human accountability for decisions made using AI outputs. It is important to note that the decision to adopt the AI RMF should be based on a thorough assessment of your business’s unique needs, industry regulations, and risk management requirements. Consulting with legal, compliance, and AI experts can provide valuable insight and guidance to determine the most appropriate approach for your business.