We live in an increasingly digitalized world, with a growing number of intelligent and connected products and services. From smart homes and connected healthcare devices to cars that are becoming increasingly autonomous, we must put our trust in the technology all around us.
Artificial intelligence is accelerating the digital transformation across a wide variety of use cases and applications that promise to change our daily lives for the better. Nonetheless, it further brings into focus the need to address trustworthiness as well as ethical and societal concerns.
Digital Trust Forum releases initial strategy paper during Bosch ConnectedWorld 2020
The Digital Trust Forum (DTF) acknowledges the need for well-defined responsibilities and governance as a foundation for trust in AI and IoT, collectively known as AIoT. A key question it addresses is how to enable trust in AIoT-enabled systems by defining quality parameters, fulfilling regulatory and other requirements, defining, maintaining and observing policies, and monitoring compliance.
Michael Bolle, Bosch Group CDO/CTO, commented: “Digital Trust is a key enabler for the next step in the evolution of the AIoT. The Digital Trust Forum is paving the way for making intelligent IoT systems trustworthy. The forum is piloting concrete use cases to show how abstract and high-level regulations can be mapped to digital policies for automatic processing, creating a pragmatic way for trust enforcement.”
The DTF released its initial strategy paper during the Bosch ConnectedWorld conference in Berlin on 20 February. The paper offers a pragmatic roadmap for aligning regulatory perspectives and industry views, along with a strategy built on the agile and dynamic ways in which AI and IoT solutions are often created.
The role of IEC and ISO International Standards
IEC and ISO are developing international standards on AI through a joint committee (ISO/IEC JTC 1/SC 42) which is considering the entire AI ecosystem.
SC 42 develops horizontal AI standards, including a suite of trustworthiness projects: an overview of the topic, work on specific aspects of AI such as unintended bias and the robustness of neural networks, and an AI risk management framework that builds on the generic ISO 31000 risk management standard to address AI trustworthiness issues. The SC 42 trustworthiness standards complement a rich set of hundreds of published standards on information and cyber security, privacy and trustworthiness developed by the joint IEC and ISO committees under the JTC 1 programme.
Additionally, SC 42 considers societal concerns and ethical aspects of AI across its entire work programme, for example through use cases and application guidance, as well as through specific projects that map such requirements to the trustworthiness technical work.
The SC 42 portfolio of deliverables covers foundational aspects, such as a framework for AI systems using machine learning, a standards lifecycle and terminology, that enable diverse stakeholders to engage and interact. These complement horizontal deliverables covering data aspects, trustworthiness, computational methods, governance implications of AI, and use cases and applications of AI.
SC 42 is also studying new areas, including a management systems standard (MSS), which, if approved to proceed, would enable auditability and certification, and AI systems engineering, which looks at addressing issues in the real-world development and deployment of trustworthy AI systems.
“Our work enables strategies such as that of the DTF through an ecosystem approach that considers all aspects of the technology and its context of use, while concurrently addressing trustworthiness, ethical and societal concerns in lockstep with the technical development,” said Wael William Diab, Chair of SC 42. “SC 42’s broad portfolio of deliverables bridges the gap between contextual requirements, such as application domain, regulatory, policy, ethical, societal and business requirements, and technical requirements and horizontal standards solutions. This will accelerate the broad adoption of AI applications that are trustworthy.”
The Digital Trust Forum is a global, independent initiative. It aims to create an ecosystem of leading industry organizations to help shape a framework for enabling trust in the digital world of intelligent and connected products, supported by IoT and AI-based systems.
Contributors to the paper include:
A Di Felice (Digital Europe), A Mitrakas (ENISA), A Nannara (TIOTA), Bassam Zarkout (IIC), C Bonefeld-Dahl (Digital Europe), C Neppel (IEEE), D Boswarthick (ETSI), F Ozog (Linaro), H Banthien (Platform Industrie 4.0), I Plöger (BDI), J Morrish (IIC), L Romero (ETSI), M Bell (Digital Europe), M Emele (Bosch), M Jochem (Bosch), M Milinkovich (Eclipse Foundation), R Riemenschneider (European Commission) and W Diab (ISO/IEC JTC 1/SC 42)