Nov 1, 2023
More than a stochastic parrot
Jack Vos, Onesurance, in VVP 4, 2023
The rise of Artificial Intelligence (AI) has made its way into the insurance sector on the back of the ChatGPT hype, and the debate about its effects is anything but black and white. The question is how to strike a balance between the positive and negative sides of AI's emergence, and no sector needs that balance as critically as the insurance industry.
To begin with, the concept of artificial intelligence is certainly not new. The term itself was first used during the 'Dartmouth Workshop' in 1956, where scientists gathered to discuss how machines could exhibit intelligent behavior. Since that time, we can distinguish three key phases in the development of AI within the insurance landscape:
• Rule-based systems (1960-1990). In the early years of AI, these systems relied solely on manually entered rules to make simple decisions based on specific input, for example accepting an application against predefined acceptance criteria. These systems were still far too limited to make complex decisions.
• Statistical modeling and data analysis (1990-2010). As computers became faster and analysis software became more intelligent, Machine Learning models could be used to discover patterns and trends in large amounts of insurance data. This was particularly helpful in risk assessment and fraud detection.
• Machine Learning and Predictive Analytics (2010-present). With the rise of more advanced Machine Learning techniques, such as neural networks and deep learning, we have been able to perform even more complex analyses. This includes predicting customer behavior, setting rates based on individual characteristics, and detecting fraudulent activities with higher precision. AI is also used to enhance customer service with chatbots, virtual assistants, and automated interactions.
And now, since November 2022, there is ChatGPT, in whose maker OpenAI Microsoft wants to invest ten billion dollars. ChatGPT is a Large Language Model (LLM) that continually predicts the next word in a text and thereby mimics human language. Especially among seasoned data experts who have worked for years on refining algorithms and understanding big data, the recent AI hype caused by ChatGPT has sparked a mix of excitement and concern. Two tweets by key figures in the field of AI perfectly capture the contrast between the positive and negative aspects.
The Positive Side
Tweet 1 is from the well-known venture capitalist Marc Andreessen, in January 2023: “We are just entering an AI-powered golden age of writing, art, music, software, and science. It’s going to be glorious. World historical.”
The excitement around AI is certainly justified for the insurance sector too, as it has opened up possibilities that once seemed unthinkable. AI promises efficiency and accuracy on a new level: it can analyze volumes of data and generate insights for risk assessment, claims handling, and customer service that no human assessor could match. The benefits of AI are simply too enticing to ignore, which is why more and more decision-makers are putting AI on the agenda. It enables insurance companies to gain competitive advantage, enhance the customer experience, and reduce costs at the same time. In short: a unique opportunity to take the lead in a traditional industry that is subject to change.
The Negative Side
The contrast is stark with tweet 2: “ChatGPT is incredibly limited, but good enough at some things to create a misleading impression of greatness. It’s a mistake to be relying on it for anything important right now. (…)”
The author of this tweet is Sam Altman, co-founder and CEO of OpenAI, the maker of ChatGPT, himself. Have you ever heard a CEO say of his own product that, for now, it is mainly good at creating a misleading impression of greatness? The concern is justified, because persuasiveness is precisely what allows erroneous information to do its devastating work.
And the persuasiveness of ChatGPT should not be underestimated in this regard. Anyone professionally engaged in truth-finding should be alert, because with the right questions, extremely credible and convincing nonsense can emerge.
According to Professor Terrence Sejnowski, author of The Deep Learning Revolution, language models also reflect the intelligence and diversity of their interviewer. Sejnowski, for example, asked GPT-3: “What is the world record for walking across the English Channel?”, to which GPT-3 replied: “The world record for walking across the English Channel is 18 hours and 33 minutes.” The truth, that one cannot walk across the English Channel, was easily bent by GPT-3 to fit Sejnowski’s question. The coherence of GPT-3’s answer depends entirely on the coherence of the question it receives. Suddenly GPT-3 can walk on water, all because the interviewer used the verb ‘walk’ instead of ‘swim’.
‘AI is like a parrot that repeats words without knowing their meaning’
There is a striking analogy that illustrates the drawbacks of AI: the stochastic parrot. Stochastic means 'random or based on chance', referring to processes whose outcomes are not entirely predictable. AI, especially Generative AI such as ChatGPT, essentially acts as a repeating mechanism without real understanding. Just as a parrot can repeat words without knowing their meaning, AI can reproduce (text) patterns without grasping the underlying logic, as the sketch below illustrates. This is worrisome, especially for decisions with significant consequences, such as risk acceptance or assessing insurance claims. If AI is deployed to make these decisions without a thorough understanding of the context and without human intervention, the so-called human in the loop, unintended errors can occur. This inherent unpredictability can lead to customer dissatisfaction and ethical issues, and can ultimately undermine trust in the insurance industry. Add to that the complexity of AI implementation and concerns about privacy and security, and you understand why some decision-makers at insurance companies hesitate to invest fully in AI.
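To make the parrot analogy concrete, here is a minimal sketch, in Python, of the simplest possible 'stochastic parrot': a bigram model that learns which word tends to follow which in a tiny made-up corpus, then generates text by repeatedly sampling a next word. This is, in miniature, the same next-word prediction principle that LLMs scale up. Everything in it, including the example sentences, is illustrative.

```python
import random
from collections import defaultdict

# A made-up miniature "training corpus" (illustrative only).
corpus = (
    "the claim was approved . the claim was rejected . "
    "the policy covers fire damage . the policy covers water damage ."
).split()

# Learn the bigram pattern: which words followed which, and how often.
follows = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    follows[current_word].append(next_word)

def parrot(start_word: str, length: int = 8) -> str:
    """Generate text by repeatedly sampling a word that followed the
    current word in the corpus: pattern reproduction, no understanding."""
    words = [start_word]
    for _ in range(length):
        candidates = follows.get(words[-1])
        if not candidates:
            break  # dead end: the corpus never showed a continuation
        words.append(random.choice(candidates))
    return " ".join(words)

print(parrot("the"))  # e.g. "the claim was rejected . the policy covers fire"
```

The output can look perfectly fluent while the model has no concept of claims or policies at all; it merely replays observed word transitions at random, which is exactly the 'stochastic' in stochastic parrot.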
Important Role of Specific AI
The heated debate about AI thus seems to focus mainly on Generative AI, of which ChatGPT is the best-known example. Generative AI refers to a subset of artificial intelligence that uses algorithms to generate new, original, and creative output. The distinction between Generative AI and Specific AI is becoming increasingly relevant in the insurance sector. Specific AI focuses on solving specific problems or performing specific tasks. Its main application in the insurance industry has for years been Predictive Analytics: based on historical data, it reliably makes accurate, mathematically supported predictions, for example of risk, combined-ratio development, claim size, or the most effective customer service action. Reliability and accuracy are exactly the qualities the insurance industry demands.
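To give an idea of what such Specific AI looks like in practice, here is a minimal sketch of a predictive model using Python and scikit-learn. The data, feature names (age, vehicle value, prior claims), and labels are entirely fabricated for illustration; this is not an actual insurer's model.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Fabricated historical policy data (hypothetical features):
# columns = [policyholder age, vehicle value in kEUR, prior claims]
X = np.array([
    [23, 35, 2], [45, 20, 0], [31, 50, 1], [52, 15, 0],
    [19, 40, 3], [60, 25, 0], [38, 30, 1], [27, 45, 2],
])
# 1 = filed a claim within a year, 0 = did not (fabricated labels)
y = np.array([1, 0, 1, 0, 1, 0, 0, 1])

# A simple, transparent model: every prediction is a weighted sum
# of the input features, so it can be checked and explained.
model = LogisticRegression().fit(X, y)

new_policy = np.array([[30, 38, 1]])
print("claim probability:", model.predict_proba(new_policy)[0, 1])
print("feature weights:", model.coef_[0])  # inspectable, unlike an LLM
```

Unlike a generative model, such a Specific AI model answers one narrow question, here the probability of a claim, and its reasoning can be read directly from the learned weights.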
Specific AI is therefore not a 'hype'; it has been applied with growing success over the past ten years, especially by the Data Masters in the market. Data Masters are companies that make optimal use of data as a resource, among other things through data science and AI.
In addition to reliability and accuracy, a major advantage of Specific AI systems is that they allow developers to control and adjust the transparency and fairness of the algorithms precisely. This way, applications can be designed and calibrated to meet the ethical data standards set by the Association of Insurers. For the time being, even according to the CEO of OpenAI himself, this is very challenging for Generative AI applications.
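One way developers can exercise that control, sketched below with fabricated numbers, is to test a model's decisions against an explicit fairness criterion before deployment. The 'four-fifths rule' threshold used here is a common heuristic from the fairness literature, offered as an assumption, not a requirement drawn from the Association of Insurers' frameworks.

```python
# Fabricated acceptance decisions for two groups of applicants.
decisions = {
    "group_a": [1, 1, 0, 1, 1, 0, 1, 1],  # 1 = accepted, 0 = rejected
    "group_b": [1, 0, 0, 1, 0, 1, 0, 1],
}

acceptance = {group: sum(d) / len(d) for group, d in decisions.items()}

# Disparate-impact ratio: acceptance rate of the least-favoured group
# relative to the most-favoured one. Below ~0.8 (the "four-fifths
# rule" heuristic), the model warrants investigation before deployment.
ratio = min(acceptance.values()) / max(acceptance.values())

for group, rate in acceptance.items():
    print(f"{group}: acceptance rate {rate:.0%}")
print(f"disparate-impact ratio: {ratio:.2f}",
      "-> review before deployment" if ratio < 0.8 else "-> within threshold")
```

Because the whole pipeline is explicit, such a check can be calibrated, logged, and audited, which is precisely what is hard to do with a generative model whose output changes with every prompt.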
Balance Is Crucial for the Insurance Sector
Ultimately, this comes down to the question: can we trust AI, or not (yet)? A well-known definition of trust is: “the belief in a good reputation and honesty”. The Association of Insurers has drawn up ethical data frameworks to ensure that data-driven insurance applications are fair and respectful. Adhering to these frameworks should ensure that the use of AI does not lead to discrimination, for example. No one wants a second 'benefits affair' (the Dutch toeslagenaffaire) that could seriously damage insurers' good name.
This brings an additional challenge, especially since AI is capable of discovering complex patterns that are invisible to the human eye. The ability to explain these patterns and uphold ethical standards is essential for maintaining trust in the sector.
Jack Vos: ‘Specific AI is not a hype.’
Here lies an important task and responsibility for the experienced data experts and AI strategists in our industry. With relevant knowledge and experience in AI and a thorough understanding of the insurance context, they can maintain the balance between technological innovation and ethical considerations. We should not only look at what AI can do for us but also at what human experts can contribute to a sustainable and balanced future. In AI, it’s not just about innovation and greater efficiency, but also about preserving the human factor and the trust that is so crucial in our industry. Finding this balance is a challenge, but also an obligation that we must take seriously.
The Key to Success
According to a recent study by Capgemini among 204 insurers worldwide, only eighteen percent of insurers can call themselves Data Masters, while more than seventy percent still belong to the Data Laggards. The differences are staggering: revenue per FTE at a Data Master is 175 percent higher, and Data Masters are 63 percent more profitable than Data Laggards. In more than 95 percent of cases, Data Masters' initiatives around data science and AI lead to a higher NPS, an improved combined ratio, and an increase in premium income.
Get Started with the KOAT Checklist
KOAT stands for Kwaliteit Onbemenste Advies- en Transactietoepassingen (Quality of Unmanned Advice and Transaction Applications). 'Unmanned applications' remains a nice term for smart technology capable of taking over tasks, not replacing people. The increasing use of such automated applications in the financial sector, combined with new (European) regulations, makes quality control of unmanned applications ever more important. SIVI has developed a platform, Onbemenste Toepassingen, that includes a knowledge base and a checklist as a tool for all parties that develop and use unmanned applications. The sector shows its commitment to this platform through broad representation in the Advisory Committee on the Quality of Unmanned Applications. See www.sivi.org.
Source: the original article appeared in VVP 4, 2023 and can be read online.