
Feb 20, 2024

AI in Consulting Practice #1: Getting Started with AI

Dennie van den Biggelaar, Onesurance, in the 'Know Your Field!' column in VVP 1-2024

We begin this first edition of AI in Consulting Practice at the beginning: what is AI and how do you get started? AI is a machine or piece of software that performs tasks that traditionally require human intelligence. Machine learning (ML) is a specific subfield of AI in which a machine or piece of software learns on its own from its historical predictions or actions.


The most famous and talked-about example of ML software is ChatGPT, which is specifically designed to generate meaningful text for the user. However, there are countless other problems where machine learning can help us. Yet there is not always a ready-made solution like ChatGPT that you can use right away.

To build such a useful AI solution, you need to bring the right competencies together at the right time. It's the job of an AI strategist to determine, alongside a multidisciplinary team of business experts, ML engineers, data engineers, and data scientists, what you want to predict, how (accurately) this should happen, which techniques to use, and finally, how to operationalize and safeguard everything to actually achieve the desired results.

Predicting Policy Lapses

As an office, you want to ensure that the right clients get the right attention from your advisors at the right time, so that policy lapses are kept to a minimum. Ideally, you know which clients have a high chance of canceling. But how do you convey this to the team?

It often happens that a client cancels a single policy. In most cases this is simply a change, and you don't want to contaminate your ML model with it. Suppose a client cancels all policies within the main liability branch but does not yet cancel the rest. Is this a client who is at risk of leaving? And what if they also cancel everything within the main fire-insurance branch but still hold legal assistance and life insurance? Are there also policies transferred internally? How high is the cancellation rate really? These are all things you want to establish before putting a team of ML engineers to work.
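To make this concrete, here is a minimal sketch of what such a definition could look like in code. The branch names, the "whole main branch is emptied" rule, and the handling of internally transferred policies are assumptions for the illustration, not rules from practice.

```python
# Minimal sketch: deciding which cancellations count as 'churn' for the ML model.
# The branch names, the "whole main branch is emptied" rule, and the exclusion of
# internally transferred policies are illustrative assumptions.

MAIN_BRANCHES = {"liability", "fire"}  # assumed main branches for this example

def is_churn_signal(cancelled, active, internally_transferred):
    """Return True if a client's cancellations look like real churn.

    cancelled / active: sets of (branch, policy_id) tuples
    internally_transferred: set of policy_ids moved to another in-house product
    """
    # Ignore policies that were merely transferred internally.
    real_cancellations = {(b, p) for (b, p) in cancelled if p not in internally_transferred}

    # A single cancelled policy is usually just a change, not churn.
    if len(real_cancellations) <= 1:
        return False

    # Count it as a churn signal when an entire main branch has been emptied.
    for branch in MAIN_BRANCHES:
        cancelled_in_branch = any(b == branch for b, _ in real_cancellations)
        still_active_in_branch = any(b == branch for b, _ in active)
        if cancelled_in_branch and not still_active_in_branch:
            return True
    return False

# Example: all liability policies cancelled, fire policies still active -> churn signal.
print(is_churn_signal(
    cancelled={("liability", "P1"), ("liability", "P2")},
    active={("fire", "P3")},
    internally_transferred=set(),
))  # True
```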

Moreover, you need to consider your forecast horizon: how far ahead do you want to predict? Do you want to know which clients are going to cancel in the coming month or the next three, six, or twelve months? This might seem like a detail, but under the hood, this means you'll train a completely different ML model.
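The sketch below illustrates why the horizon matters: the same client history produces different labels for a three-month and a twelve-month horizon, and therefore a different training set. The table, column names, and dates are hypothetical.

```python
import pandas as pd

# Minimal sketch: the same client data yields different labels for different
# forecast horizons. Column names and the example dates are hypothetical.
clients = pd.DataFrame({
    "client_id": [1, 2, 3],
    "churn_date": [pd.Timestamp("2024-04-10"), pd.Timestamp("2024-11-01"), pd.NaT],
})

snapshot_date = pd.Timestamp("2024-03-01")

def churn_label(df, horizon_months):
    """1 if the client churns within `horizon_months` after the snapshot date, else 0."""
    horizon_end = snapshot_date + pd.DateOffset(months=horizon_months)
    # Clients without a churn date (NaT) never fall inside the window.
    return df["churn_date"].between(snapshot_date, horizon_end).astype(int)

clients["label_3m"] = churn_label(clients, 3)    # only client 1 is positive
clients["label_12m"] = churn_label(clients, 12)  # clients 1 and 2 are positive
print(clients)
```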

'Too little attention is the main reason clients cancel'

Finding Patterns

Once you have clearly defined what you want to predict, it's time to see if your data is sufficiently Accurate, Available, and Consistent (the 'data ABC'). The main reason clients cancel usually boils down to them receiving too little attention. The question, of course, is by whom, when, and why there is 'too little attention'. You don’t have this information in your data warehouse, so you need to construct it yourself through feature engineering. Which features have a significant effect on the chance of cancellation? This is an analytical and creative process where knowledge and experience from insurance experts and data scientists come together.
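As an illustration of such constructed features, the sketch below derives simple 'attention' proxies such as the number of days since the last contact. All table and column names are hypothetical assumptions for the example.

```python
import pandas as pd

# Minimal sketch of feature engineering: constructing proxies for 'attention'
# that are not directly available in the data warehouse. All table and column
# names (contacts, policies, contact_date, ...) are hypothetical.
snapshot_date = pd.Timestamp("2024-03-01")

contacts = pd.DataFrame({
    "client_id":    [1, 1, 2],
    "contact_date": pd.to_datetime(["2023-11-05", "2024-01-20", "2022-06-30"]),
})
policies = pd.DataFrame({
    "client_id": [1, 1, 2, 3],
    "premium":   [350.0, 120.0, 800.0, 95.0],
})

last_contact = contacts.groupby("client_id")["contact_date"].max()
features = policies.groupby("client_id").agg(
    n_policies=("premium", "size"),
    total_premium=("premium", "sum"),
)
features["days_since_contact"] = (snapshot_date - last_contact).dt.days
features["days_since_contact"] = features["days_since_contact"].fillna(9999)  # never contacted
print(features)
```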

Once a solid first table of features has been constructed, you can finally begin with machine learning. Experience shows that predicting cancellations is best modeled with classification or survival analysis. There are hundreds of different ML techniques that are theoretically suitable for this. In your choice, it's important to consider: to what extent does the algorithm need to be explainable, how complex can the patterns be, and how much data meeting the ABC criteria is available?
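By way of illustration, the sketch below trains two classifiers that sit at different points on that trade-off: a simple, explainable model and a more flexible one. The use of scikit-learn and synthetic data is an assumption for the example; in practice the choice of technique follows from the considerations above.

```python
# Minimal sketch: two candidate classifiers with different trade-offs between
# explainability and pattern complexity. scikit-learn and the synthetic data
# are illustrative assumptions.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression      # simple, explainable
from sklearn.ensemble import GradientBoostingClassifier  # captures more complex patterns

# Synthetic, imbalanced stand-in for a real policy/feature table.
X, y = make_classification(n_samples=2000, n_features=10, weights=[0.9], random_state=0)

explainable_model = LogisticRegression(max_iter=1000).fit(X, y)
flexible_model = GradientBoostingClassifier(random_state=0).fit(X, y)
```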

'Based on precision, recall, and AUC scores, the best ML model is determined'

Validating Patterns

Once the 'machine' is set to work to find patterns that can be used to make predictions, there always comes an exciting moment: how accurate are the different models? The ML engineer has an extensive toolbox for this. First, they keep part of the data separate to test and validate the trained model. This guarantees the robustness of the patterns found and prevents a model from giving inaccurate predictions in the 'real world'. Afterwards, they look at the false positives and false negatives, and at what these cost.
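A minimal sketch of that validation step, again assuming scikit-learn and synthetic data: part of the data is held out, and the false positives and false negatives are counted on that holdout set.

```python
# Minimal sketch of validation: keep part of the data separate, then count the
# false positives and false negatives on that holdout set. Data and model choice
# are illustrative assumptions.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=10, weights=[0.9], random_state=0)

# Hold out 25% of the data; the model never sees it during training.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=0
)

model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

tn, fp, fn, tp = confusion_matrix(y_test, model.predict(X_test)).ravel()
print(f"false positives: {fp}  (advisor calls a client unnecessarily)")
print(f"false negatives: {fn}  (a likely cancellation is missed)")
```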

A false prediction that someone will cancel next month (a false positive) isn't that bad: the advisor calls the client, concludes there's nothing wrong, and has only lost fifteen minutes of their time. If the algorithm incorrectly predicts that someone will remain loyal (a false negative), it's much more costly: you lose a client.

Based on precision, recall, and AUC scores, the best ML model is determined. Additionally, it's possible to adjust algorithms to be stricter or less strict, to better fit the intended business process. This is called parameter tuning, and an experienced ML engineer knows how to do this responsibly.
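A minimal sketch of how such scores are compared and how a model can be made stricter or less strict; here that is done by shifting the decision threshold on the predicted probability, one common way to achieve this in practice. scikit-learn and the synthetic data are again assumptions for the illustration.

```python
# Minimal sketch: comparing models on precision, recall and AUC, and making the
# prediction stricter or less strict by shifting the decision threshold.
# Data and model are illustrative assumptions.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import precision_score, recall_score, roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=10, weights=[0.9], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=0
)

model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)
proba = model.predict_proba(X_test)[:, 1]  # predicted cancellation probability

print("AUC:", round(roc_auc_score(y_test, proba), 3))
for threshold in (0.3, 0.5, 0.7):  # lower threshold = less strict = more clients flagged
    flagged = (proba >= threshold).astype(int)
    print(threshold,
          "precision:", round(precision_score(y_test, flagged, zero_division=0), 2),
          "recall:", round(recall_score(y_test, flagged, zero_division=0), 2))
```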

How to Make It Usable?

Next, you integrate the algorithm into the operational processes: how can the data move back and forth safely and efficiently, and how can the advisor easily use the prediction? That is the work of data and software engineers. Finally, you want the advisor to provide feedback on the quality of the predictions, allowing the algorithm to learn from the user. In this way, the algorithm becomes smarter and more effective the more it is used.
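A minimal sketch of such a feedback loop: the advisor's verdict on each prediction is stored and later reused as labels for the next training round. All names here (FeedbackStore, record, ...) are hypothetical, not an existing API.

```python
# Minimal sketch of the feedback loop: the advisor's verdict on each prediction
# is stored and later used as fresh training labels. All names are hypothetical.
from dataclasses import dataclass, field

@dataclass
class FeedbackStore:
    records: list = field(default_factory=list)

    def record(self, client_id, predicted_churn, advisor_verdict):
        """Store the prediction next to what the advisor actually found."""
        self.records.append({
            "client_id": client_id,
            "predicted_churn": predicted_churn,
            "advisor_verdict": advisor_verdict,  # e.g. "correct" or "false_alarm"
        })

    def as_training_labels(self):
        """Turn advisor feedback into labels for the next training round."""
        return [
            (r["client_id"], 1 if r["advisor_verdict"] == "correct" else 0)
            for r in self.records
        ]

store = FeedbackStore()
store.record(client_id=42, predicted_churn=True, advisor_verdict="false_alarm")
print(store.as_training_labels())  # [(42, 0)]
```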

That's the real 'AI' component, but more on that in the next edition!


The original article appeared in the VVP and can be read online here.

©2024 Onesurance B.V.
