Navigating AI Trustworthiness: A Guide to Implementing NIST’s Risk Management Framework

With an increasing number of companies turning to artificial intelligence (AI) to bolster competitiveness and profitability, the National Institute of Standards and Technology's (NIST) AI Risk Management Framework can serve as an instrumental tool for understanding, managing, and mitigating AI risk. Although developed by a federal agency, the framework is being utilized by a wide range of public and private sector organizations.

NIST's framework encourages organizations to view AI risks from several angles: risk measurement, risk tolerance, risk prioritization, and the incorporation of AI risks into wider organizational risk management. The framework urges firms to track all AI risks as part of enterprise risk management. For instance, it notes that risk tolerance and prioritization will differ substantially between a retailer and a healthcare provider, given the significant differences in their operating environments and stakeholders.
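To make the tolerance-and-prioritization idea concrete, the sketch below models a minimal AI risk register. The risk names, scoring scale, and tolerance thresholds are invented for illustration; they are not values prescribed by NIST, which leaves measurement and tolerance decisions to each organization.

```python
from dataclasses import dataclass

@dataclass
class AIRisk:
    """One entry in a hypothetical AI risk register."""
    name: str
    likelihood: int  # 1 (rare) .. 5 (almost certain) -- illustrative scale
    impact: int      # 1 (negligible) .. 5 (severe)   -- illustrative scale

    @property
    def score(self) -> int:
        # Simple likelihood-times-impact score, a common heuristic
        return self.likelihood * self.impact

def prioritize(risks: list[AIRisk], tolerance: int) -> list[AIRisk]:
    """Return risks exceeding the organization's tolerance, highest score first."""
    return sorted((r for r in risks if r.score > tolerance),
                  key=lambda r: r.score, reverse=True)

register = [
    AIRisk("biased loan-approval model", likelihood=3, impact=5),
    AIRisk("chatbot quotes outdated pricing", likelihood=4, impact=2),
    AIRisk("misdiagnosis by triage model", likelihood=2, impact=5),
]

# A retailer might tolerate more AI risk than a healthcare provider,
# so the same register yields different priority lists.
retailer_priorities = prioritize(register, tolerance=9)
hospital_priorities = prioritize(register, tolerance=4)
print([r.name for r in retailer_priorities])
print([r.name for r in hospital_priorities])
```

The same register produces a shorter priority list under the retailer's higher tolerance, which is the framework's point: the risks do not change, but how an organization ranks and responds to them does.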

Emphasizing AI trustworthiness, the framework highlights vital attributes such as validity, reliability, safety, security, resilience, privacy enhancement, fairness, accountability, and transparency. It helps companies understand how these attributes intersect and calibrate their AI systems to strike a balance among them.

The framework's core revolves around four functions: govern, map, measure, and manage. Governance, the cornerstone of this approach, requires firms to establish guiding principles, policies, and procedures, and to identify, support, and hold accountable the people responsible for managing AI risk.

An important consideration under governance is where responsibility for AI risk management should be housed within a firm. As with privacy roles, it may reside in the legal department, fall under the purview of the chief information officer, or constitute a standalone role. The framework encourages companies to base this decision on a thorough assessment of the business's long-term requirements and the role's technical demands.

Given the risks and opportunities inherent in AI, the NIST AI Risk Management Framework offers a robust starting point. It helps organizations frame the right questions about AI trustworthiness and risk mitigation within the unique context of their use case and industry.

This balanced view can prove invaluable both for corporations planning to harness AI and for those already deep into AI-integrated operations. No executive wants to explain to the board why the management team relied on output from a third-party AI tool that was never appropriately vetted at purchase, which makes the NIST AI Risk Management Framework a sensible prerequisite for future AI endeavors.

About the Authors:

Justin Daniels is a shareholder at Baker Donelson and provides corporate advice to growth-oriented and middle-market domestic and international technology businesses.

Amy Chipperson is general counsel to Axtria, Inc., a global software and data analytics provider to the life sciences industry.