Robots come in different shapes and sizes and they are used increasingly in everyday situations.
Some people work alongside mechanical arms in manufacturing plants, while growing numbers of patients undergo surgical procedures performed with robotic assistance.
Robots can also operate in dangerous environments, such as disaster areas, that humans either cannot reach or cannot enter safely.
Automated lawn mowers and vacuum cleaners move quietly around homes and gardens, while a number of companies are working to develop human-like, friendly companion robots that could eventually help care for the elderly in their homes.
There are chatbots and virtual assistants, which some of us probably already can’t live without, and of course the race is on to achieve fully automated vehicles, on the ground and in the air.
The benefits of using robots are significant. They can cut costs and streamline processes, and they don’t require breaks or feeding, just power and recharging. In certain situations they can work 24/7, and they can take on boring and dangerous tasks, freeing up people for more interesting and challenging work.
But what if that mechanical arm malfunctions and accidentally injures a colleague, or a robot causes harm during surgery? What if something goes wrong with a self-driving car or one of the automated systems in a plane?
It’s all about trust
For any of us to feel at ease with robots, we need to know that we can trust them, and for that their inner workings need to be transparent. Transparency means that if something goes wrong, we can pinpoint the problem and find a solution so that it doesn’t happen again.
How standards can help
Standards underpin many of the systems that make modern life work, from railways to web technologies such as HTML5, the specification that makes it possible for browsers to display web pages consistently. As innovative technologies become part of many systems and devices, manufacturers will need to ensure their products and services work reliably and safely.
“We are at an important time for writing the standards for innovative artificial intelligence (AI) technologies that in five or ten years will be taken for granted by all of us… This is why we need to get it right now and make sure we consider as many angles as possible, including ethical and societal concerns”, says Dr David Filip, who is leading the standardization work on the trustworthiness of AI.
Find out more in the e-tech interview.