Artificial intelligence (AI) is a collection of technologies that, until recently, focused on large-scale problems too hard or too complex to solve with conventional computing.
This is no longer the case because machine learning (ML) technology in particular has widened the applicability of AI.
From robotically assisted surgery, virtual nursing assistants, dosage error reduction and connected devices to image analysis and clinical trials, AI technologies already play many different roles in the delivery of healthcare treatments, surgeries and services. These include improving diagnostics and helping doctors make better decisions for patients.
Health insurance is a critical part of the industry and is also making use of AI. For example, some software platforms use machine learning to identify and reduce inefficiencies in the claims management process, such as fraudulent or inaccurate billing, or waste through under-utilization of services. Others help patients choose tailored insurance coverage to reduce healthcare costs and assist employers looking for group coverage options.
In the industrial manufacturing sector, AI is driving higher efficiencies by enabling robots to work alongside humans. Examples from the financial ecosystem include applications for credit decisions, risk management, personalized banking, cyber security and fraud detection.
Wikipedia defines ML as “the scientific study of algorithms and statistical models that computer systems use to progressively improve their performance on a specific task … without being explicitly programmed to perform the task.” ML works in a very different way to conventional computing.
Traditional programming boils down to creating a set of instructions that tell a computer how to perform a specific task. The computer adheres rigidly to the instructions and there is no room for flexibility when it comes to interpreting data. In contrast, the algorithms used for ML are not rule-based, and the input is raw data, including images, sounds or text.
To illustrate this point, let’s suppose that you wanted to develop a program to identify pictures of cats. In traditional programming you would have to tell the computer that cats have four legs, two eyes and are covered in fur. The problem is that in many pictures you won’t be able to see all of the legs or both eyes, or the cat might be hairless. There might be other animals in the pictures that the program could mistake for cats.
In supervised learning, one of the three main categories of ML, you would instead provide the program with as many pictures as possible of different kinds of cats in various positions, each labelled as a cat. The system would learn by itself how to identify the various feline features without the need for explicit guidelines.
In addition to supervised learning, the other main categories of ML are unsupervised learning and reinforcement learning. Unsupervised learning uses unlabelled, uncategorised data without prior training. In reinforcement learning the system interacts with its environment and learns via the positive or negative feedback it receives for its actions.
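The contrast between rule-based programming and supervised learning can be sketched in a few lines of code. In this illustrative example, each "picture" is reduced to a hypothetical pair of numeric features (these feature names and values are invented for the sketch, not drawn from any real system), and a simple nearest-neighbour rule learns the cat/not-cat distinction from labelled examples rather than from hand-written rules:

```python
# A minimal sketch of supervised learning: a 1-nearest-neighbour classifier.
# Real image classifiers are far more sophisticated; here each "image" is
# reduced to two hypothetical numeric features purely for illustration.

def distance(a, b):
    # Squared Euclidean distance between two feature vectors.
    return sum((x - y) ** 2 for x, y in zip(a, b))

def predict(training_data, sample):
    # Give the sample the label of its closest labelled training example.
    nearest = min(training_data, key=lambda item: distance(item[0], sample))
    return nearest[1]

# Labelled examples supplied by a human: (features, label) pairs.
# No rule such as "cats have four legs" is ever written down.
training_data = [
    ((0.9, 0.8), "cat"),
    ((0.8, 0.9), "cat"),
    ((0.1, 0.2), "not cat"),
    ((0.2, 0.1), "not cat"),
]

print(predict(training_data, (0.85, 0.75)))  # -> cat
print(predict(training_data, (0.15, 0.15)))  # -> not cat
```

The key point the sketch illustrates is that the decision boundary comes from the labelled data itself: adding more and better examples improves the classifier without anyone editing the program's logic.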
Related content and resources
Download the free IEC White Paper: Artificial intelligence across industries
Find out about the joint committee set up by IEC and ISO to tackle the challenges of standardizing AI