Today, people increasingly work alongside non-human entities that incorporate diverse artificial intelligence (AI) technologies.
For example, in healthcare, robotic arms carry out surgery and other procedures, while doctors draw on insights mined from big data by machine learning algorithms to help diagnose diseases.
In manufacturing plants, robots and people work side by side, each carrying out specific activities.
On the ground and in the air, the use of automated systems is growing: cars offer advanced driver assistance and other features, while planes deploy modern autopilot and safety systems. Both rely on algorithms that process data gathered from the many sensors around the vehicle or aircraft in order to ensure safe, efficient journeys.
Challenges and the need for standards
In these and many more cases, we are putting our lives in the “hands” of the technology, trusting it with our personal safety and well-being. This is why it is imperative that nothing goes wrong.
A key barrier to the adoption of artificial intelligence is concern about the trustworthiness of AI systems. The standardization work being carried out by IEC and ISO not only identifies and frames these emerging issues; it also provides technical approaches to mitigating the concerns and links them to non-technical requirements such as ethical and societal challenges.
Video interview with experts working on standards for trustworthiness
In the following video interview, Wael Diab is joined by two of his colleagues: Mikael Hjalmarson, Editor of the ISO/IEC JTC 1/SC 42 ethical and societal concerns project (ISO/IEC TR 24368), and David Filip, Convenor of ISO/IEC JTC 1/SC 42 Working Group 3, Trustworthiness. Together they discuss their work on the trustworthiness of systems that use AI technologies.
“This revolutionary approach of looking at the full AI ecosystem will enable wide-scale adoption of AI and the promise it has as a ubiquitous technology enabling the digital transformation”, says Wael Diab, who leads the group carrying out this work.
What is trustworthiness and how can we ensure it?
The work has already identified certain characteristics of trustworthiness, such as accountability, bias, controllability, explainability, privacy, robustness, resilience, safety and security. However, it always comes back to transparency, or the transparent verifiability of AI systems. In other words, is there someone who can assess the system for vulnerabilities or unintended consequences? For a system to be trustworthy, people need to be able to understand the algorithm’s internal workings.
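To make that idea concrete, the short sketch below shows one form such transparency can take: an inherently interpretable model whose learned weights an assessor can read directly. It is only an illustration; the dataset, the library (scikit-learn) and the practice of auditing coefficients are assumptions of this example, not requirements set out in the standards.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Train a deliberately simple, interpretable classifier on a public dataset.
data = load_breast_cancer()
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
model.fit(data.data, data.target)

# Because the model is linear, each coefficient can be read as the influence
# of one input feature, so a human assessor can inspect the "internal
# workings" directly. Print the five most influential features.
coefs = model.named_steps["logisticregression"].coef_[0]
ranked = sorted(zip(data.feature_names, coefs), key=lambda p: -abs(p[1]))
for name, weight in ranked[:5]:
    print(f"{name:25s} {weight:+.3f}")
```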
Achieving trustworthiness will require humans to be part of the process: to vet and control the underlying AI algorithms and to verify that the associated training data do not introduce unfair or otherwise unwanted bias.
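As a minimal sketch of what such vetting might involve, the example below compares positive-label rates across groups in a toy training set and applies the common “80% rule” as a trigger for human review. The data, the sensitive attribute and the threshold are all illustrative assumptions, not part of any IEC or ISO specification.

```python
from collections import defaultdict

# Hypothetical (sensitive_group, label) pairs standing in for a real
# training set; a reviewer would load actual data here instead.
records = [("A", 1), ("A", 1), ("A", 0), ("A", 1),
           ("B", 0), ("B", 0), ("B", 1), ("B", 0)]

totals, positives = defaultdict(int), defaultdict(int)
for group, label in records:
    totals[group] += 1
    positives[group] += label

# Positive-label rate per group: large gaps can signal unwanted bias
# in how the data were collected or labelled.
rates = {g: positives[g] / totals[g] for g in totals}
print("positive-label rate per group:", rates)

# A common rule of thumb (the "80% rule") flags the dataset when one
# group's rate falls below 0.8x another's -- a prompt for human review,
# not a verdict of unfairness.
ratio = min(rates.values()) / max(rates.values())
print(f"disparate impact ratio: {ratio:.2f}",
      "-> human review recommended" if ratio < 0.8 else "-> within threshold")
```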