As AI becomes more prevalent in business, it’s crucial to have a solid risk management plan in place. OCD Tech summarizes some components of the recently published NIST AI Risk Management Framework.
In January 2023, the National Institute of Standards and Technology (NIST) released the first version of its AI Risk Management Framework (AI RMF), a guidance document to help organizations manage the risks posed by artificial intelligence systems.
Although compliance with the AI RMF is voluntary, the framework marks an important milestone for companies and other organizations seeking direction on managing AI risks at a time when regulatory and legislative scrutiny of AI is bound to increase.
Relevance and framework components
The AI RMF offers organizations a practical and timely tool for addressing the increasingly ubiquitous role of AI across society, industries, and organizational activity. As AI technology evolves, becomes more sophisticated, and is further integrated into organizational processes and systems, its impact will only grow.
Developing the capability to identify, assess, and manage risks to operations, business activities, and objectives helps organizations operate efficiently, productively, and competitively. An integrated approach to enterprise risk management ensures that relevant AI risks are identified and managed systematically and consistently, making organizations both more sustainable and more resilient.
The AI RMF adopts fundamental principles of risk management within the context of AI and identifies four “core” functions, with specific actions and outcomes further described for each:
Govern. A risk management culture must be cultivated across the lifecycle of AI systems, including appropriate structures, policies, and processes. Risk management must be a priority for senior leadership, who set the tone for organizational culture, and for management, who align the technical aspects of AI risk management with organizational policies.
Map. This function establishes the context for framing risks related to an AI system. Organizations are encouraged to: categorize their AI systems; establish goals, costs, and benefits against benchmarks; map risks and benefits for all components of the AI system; and examine impacts to individuals, groups, communities, organizations, and society.
Measure. Using quantitative, qualitative, or hybrid risk assessment methods, organizations should analyze AI systems for trustworthy characteristics, social impact, and human-AI configurations.
Manage. Identified risks must be managed, with higher-risk AI systems prioritized. Risk monitoring should continue over time, since new and unforeseen contexts, risks, needs, or expectations will emerge.
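To make the four functions concrete, consider how an organization might track its AI systems internally. The sketch below is purely illustrative: the class names, fields, and the likelihood-times-impact scoring scale are assumptions for this example, not anything prescribed by the AI RMF.

```python
from dataclasses import dataclass, field

# Illustrative sketch only: the AI RMF does not prescribe a data model
# or scoring scheme. The 1-5 likelihood/impact scale is an assumption.

@dataclass
class AIRisk:
    description: str
    likelihood: int  # assumed scale: 1 (rare) .. 5 (almost certain)
    impact: int      # assumed scale: 1 (negligible) .. 5 (severe)

    @property
    def score(self) -> int:
        # A simple quantitative method: risk = likelihood x impact
        return self.likelihood * self.impact

@dataclass
class AISystem:
    name: str
    category: str                       # Map: categorize the system
    risks: list = field(default_factory=list)

    def top_risk(self) -> int:
        # Measure: take the highest individual risk score
        return max((r.score for r in self.risks), default=0)

def prioritize(systems):
    # Manage: address higher-risk systems first
    return sorted(systems, key=lambda s: s.top_risk(), reverse=True)

chatbot = AISystem("support-chatbot", "customer-facing NLP",
                   [AIRisk("harmful or biased responses", 3, 4)])
scoring = AISystem("loan-scoring", "automated decision-making",
                   [AIRisk("disparate impact on applicants", 4, 5),
                    AIRisk("model drift over time", 3, 3)])

for system in prioritize([chatbot, scoring]):
    print(f"{system.name}: top risk score {system.top_risk()}")
```

Even a toy register like this makes the prioritization step auditable, which supports the Govern function's call for documented structures and processes.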
The comprehensive, holistic approach of the NIST AI Risk Management Framework can help organizations consider AI and its associated risks and identify the tools and methods by which those risks can be better managed. For entities already familiar with NIST's cybersecurity and privacy frameworks and similar processes, the structure of the AI RMF will feel familiar and be relatively easy to adopt and integrate with existing practices. Even organizations uncertain about how AI is relevant to their operations can benefit from reading the AI RMF and its accompanying tools, such as the Playbook and crosswalks.
Source: https://www.nist.gov/itl/ai-risk-management-framework