As the use and application of artificial intelligence (AI) systems increases, addressing the trust aspect of these systems is key to widespread adoption.
Examples of such trust concerns include the effects of data or algorithmic bias, the protection of data privacy, and a lack of transparency and accountability.
A new international standard is being developed by IEC and ISO, which will provide guidelines on managing risk faced by organizations during the development and application of AI techniques and systems. It will assist organizations in integrating risk management for AI into significant activities and functions, as well as describe processes for the effective implementation and integration of AI risk management.
The guidelines can be customized to any organization and its context.
Risk management in the AI context
New technologies bring new challenges, where the unknown is greater than the known. Risk management can help deal with uncertainty in areas where no recognized measures of quality have been established.
“For a specific AI product or AI service, a risk management process ensures that ‘by design’ throughout the product or service lifecycle, stakeholders with their vulnerable assets and values are identified, potential threats and pitfalls are understood, associated risks with their consequences (or impact) are assessed, and conscious risk treatment decisions based on the organization’s objectives and its risk tolerance are made”, said Wael William Diab, Chair of ISO/IEC JTC 1/SC 42, the IEC and ISO joint technical committee for artificial intelligence.
In the case of AI systems, risk management would address:
- Engineering pitfalls and the typical threats and risks to AI systems, together with mitigation techniques and methods, by allowing risks to (and from the use of) AI systems to be identified, classified and treated.
- The establishment of trust in AI systems through transparency, verifiability, explainability and controllability, by using a well-understood and documented risk management process.
- An AI system's robustness, resiliency, reliability, accuracy, safety, security, privacy, etc., by providing transparency in the treatment of risks to the identified stakeholders.
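The identify–assess–treat cycle described above can be sketched as a toy risk register. This is an illustrative assumption only: the class names, example risks and the simple likelihood-times-impact scoring below are not terminology or methodology defined by the standard.

```python
from dataclasses import dataclass

# Illustrative only: a toy risk register for an AI system.
# Names and scoring are hypothetical, not defined by the standard.

@dataclass
class Risk:
    asset: str        # vulnerable asset or value affected
    threat: str       # potential threat or engineering pitfall
    likelihood: int   # 1 (rare) .. 5 (almost certain)
    impact: int       # 1 (negligible) .. 5 (severe)

    @property
    def level(self) -> int:
        # A common simple scoring: likelihood x impact
        return self.likelihood * self.impact

def treatment(risk: Risk, tolerance: int) -> str:
    """Make a conscious treatment decision against the organization's risk tolerance."""
    return "accept" if risk.level <= tolerance else "mitigate"

# Identify risks, assess them, then decide treatment, highest level first
register = [
    Risk("training data", "sampling bias skews predictions", 4, 4),
    Risk("user PII", "leakage via model inversion", 2, 5),
    Risk("model accuracy", "data drift after deployment", 3, 2),
]

for r in sorted(register, key=lambda r: r.level, reverse=True):
    print(f"{r.threat}: level {r.level} -> {treatment(r, tolerance=8)}")
```

In a real process the register would be maintained "by design" across the product lifecycle, as the quote above describes, with the tolerance threshold set by the organization's objectives.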
Who stands to benefit?
| Beneficiary | Benefits | Examples of organizations/businesses |
| --- | --- | --- |
| Industry and commerce (large) | Increased trust in the use of AI-based solutions. Providers can increase market acceptance of such solutions by documenting proper risk management during the design, development and deployment of their products; customers can use the standard to implement processes appropriate to the risks of using AI-based technologies. | Providers offering trained machine learning models; providers offering AI-based services; any industry using AI technologies (fintech, automotive, health, aerospace, etc.) |
| Industry and commerce (SMEs) | In addition to the benefits listed above, new market opportunities arise for SMEs providing tools and services related to risk management. | Providers of evaluation services for AI systems (quality, resilience, robustness, etc.); providers of AI component evaluation tools |
| Government | Effective risk management enables AI-based solutions for citizen services to be deployed more efficiently. | Administrations tasked with urban traffic control with an optimized environmental profile, urban planning, rural climate monitoring/analysis, or disease control |
| Consumers | Increased trust in AI-based solutions through greater accountability and the prevention of leakage of personally identifiable information (PII). | Voice-controlled assistants; recommenders and shopping assistants; smart homes |
| Academic & research bodies | Wider application of AI-based solutions depends on the availability of effective risk controls, which creates research incentives for risk management topics and for AI in general. | |
| NGOs | Increased trust in dependability and freedom from bias broadens the applicability of AI for a variety of NGOs. | NGOs tasked with analysis of migration routes, environmental monitoring, or disease control |
More about the standard
The new standard builds on the principles and guidelines described in ISO 31000 (Risk management – Guidelines), which help organizations with their risk analysis and assessments. It also provides guidance on managing the risks that arise when AI is applied to existing processes in an organization, or when an organization provides an AI system for use by others. The project is envisioned to provide a platform and framework for trustworthiness standardization.
The newly approved work item proposal for the standard is being developed by ISO/IEC JTC 1/SC 42, as part of its comprehensive programme of work that is looking at standardization of the entire AI ecosystem.