Announcement: Specialized AI fund CuriosityVC becomes a strategic investor at Onesurance


Jan 12, 2022

AI in the consulting practice: how to operationalize AI in your business processes?

At the request of VVP, the platform for financial service providers, our CTO and AI strategist Dennie van den Biggelaar explains how to apply AI and machine learning to 'advice in practice'. Across several editions, the following topics will be highlighted:

  • Getting started with AI and ML

  • Operationalizing in business processes

  • Integrating into existing IT landscape

  • Measuring = learning: KPIs for ML

  • Ethics, regulations, and society

  • AI and ML: a glimpse into the near future

In this second edition, we answer the question: how do you operationalize a trained algorithm?

Challenges in operationalizing AI in business processes

Imagine: together with your data science team, you've designed a promising AI algorithm to predict cancellations so that advisors can act on them proactively. This process was discussed in part 1 of this series, 'AI in practice'. The potential is there, but you soon discover that putting it into practice means clearing several complex hurdles. What are these hurdles, and how can you overcome them?

Measurable results fail to appear

A clearly formulated goal precisely identifies what the AI algorithm needs to achieve and is aligned with business objectives. The scope, on the other hand, guides the project by defining the relevant data sources, budget, timelines, and expected results. What are the steps to achieve this?

A data science project is generally an investment where:

  1. It is unclear what it can deliver for you

  2. It is uncertain if your team can achieve it

Therefore, make the project as small and manageable as possible, without losing its value and impact if it succeeds. Try to achieve results quickly to prove you are on the right track.

If you do not achieve those results, evaluate with the team and make adjustments. Did you achieve them? First goal reached! Turn it into a compelling story, present it to your business stakeholders, and discuss with them how to scale it within your organization.

Question marks about consistent data quality

A common stumbling block is the quality and consistent delivery of up-to-date data. Inconsistencies and missing values can jeopardize the accuracy of the AI model. The solution? A thorough exploration of which data is always Accurate, Available, and Consistent (the data ABC).

If essential data does not meet this standard, apply extensive data cleaning: handle missing values, extreme outliers, and incorrectly entered data. Afterwards, secure these cleaning steps structurally in a data transformation pipeline and an associated process, so that this data becomes part of the foundation for a reliable operational model.
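As a sketch, such a cleaning step might look like the following, assuming a hypothetical policy table with `premium` and `customer_age` columns (the names and thresholds are illustrative, not a prescribed schema):

```python
import pandas as pd

def clean_policies(df: pd.DataFrame) -> pd.DataFrame:
    """One repeatable cleaning step for a hypothetical policy dataset."""
    out = df.copy()
    # Missing values: fill the premium with the column median.
    out["premium"] = out["premium"].fillna(out["premium"].median())
    # Extreme outliers: cap at the 1st/99th percentile (winsorizing).
    lo, hi = out["premium"].quantile([0.01, 0.99])
    out["premium"] = out["premium"].clip(lo, hi)
    # Incorrectly entered data: drop rows with an impossible age.
    return out[out["customer_age"].between(18, 110)].reset_index(drop=True)

raw = pd.DataFrame({
    "premium": [120.0, None, 95.0, 10_000.0, 110.0],
    "customer_age": [34, 41, -1, 52, 28],
})
cleaned = clean_policies(raw)
```

Securing these steps in one function, and later in a scheduled pipeline, means every retraining run sees identically cleaned data.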

There is insufficient trust in the AI model

Insufficient understanding of and trust in ML models is a barrier to acceptance by non-technical users. Without enough attention to this, distrust and resistance can arise. One solution is to select transparent models with good explainability, plus smart methods that turn complexity into an understandable concept. Visualization and clear (process) documentation increase trust, so the objection of a "black box" fades into the background.
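As an illustration of such a transparent model, consider a plain logistic regression: its coefficients can be read directly as the direction and strength of each feature's influence. The feature names and synthetic data below are assumptions made for the sketch:

```python
import numpy as np

rng = np.random.default_rng(0)
features = ["years_customer", "num_claims", "contact_freq"]  # hypothetical
X = rng.normal(size=(200, 3))
# Synthetic target: cancellations driven mainly by num_claims.
y = (X[:, 1] + 0.1 * rng.normal(size=200) > 0).astype(float)

# Plain logistic regression, fitted with gradient descent on the log-loss.
w = np.zeros(3)
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-X @ w))   # predicted cancellation probability
    w -= 0.1 * X.T @ (p - y) / len(y)  # gradient step

for name, coef in zip(features, w):
    print(f"{name}: {coef:+.2f}")
```

Here the largest coefficient should land on `num_claims`, matching how the synthetic target was built; being able to verify that is exactly the kind of check a black-box model does not allow.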

And as with any change, it's also important to carefully take your colleagues through this process. Give them enough time to ask questions and get used to this new technology and its possibilities. Realize that their questions and feedback are essential input for you to make the intended application successful in practice.

Objections regarding security, privacy, and ethics

It goes without saying that the security and privacy of (customer) data are prerequisites to even start. Fortunately, a lot of new legislation has been introduced over the past five years, and organizations are increasingly applying it structurally in practice.

Trust is not only an issue of legislation and technology. Also on an ethical front, you can expect objections from various angles:

  • Are we sure the algorithm is fair?

  • And what does that actually mean?

  • Are certain groups worse off in a situation with the algorithm?

  • Do we find that ethically responsible?

  • How do I prevent my algorithm from discriminating?

Fortunately, the Association of Insurers has established a number of guidelines that you can embed in your algorithm and approach. Want to make sure you don't overlook anything? Appoint one person responsible for this.

The feedback loop is missing

Listening to user experiences and leveraging this feedback creates a dynamic iterative cycle, allowing the model to evolve in line with business requirements. A structured feedback mechanism is crucial for the self-learning capability of the AI model. How you set this up correctly varies per AI application.

In the specific case of 'preventing cancellations', for example, have advisors record what they did with the prediction: called the customer, visited, or did nothing. This makes it measurable over time what the effect is on cancellations.
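A minimal sketch of such a feedback log, with invented example records, could aggregate outcomes per advisor action so the effect on cancellations becomes measurable:

```python
from collections import defaultdict

# Hypothetical log entries: (predicted_risk, advisor_action, cancelled_within_90_days)
feedback = [
    (0.91, "called", False),
    (0.88, "visited", False),
    (0.85, "none", True),
    (0.79, "called", True),
    (0.77, "none", True),
]

totals = defaultdict(lambda: [0, 0])  # action -> [cancellations, records]
for _, action, cancelled in feedback:
    totals[action][0] += int(cancelled)
    totals[action][1] += 1

cancellation_rate = {a: c / n for a, (c, n) in totals.items()}
print(cancellation_rate)
```

Over time, the same log doubles as fresh training data, closing the iterative loop between prediction, action, and outcome.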

There is insufficient monitoring

The motto should be: "keep the algorithm on a leash". What you don't want are 'hallucinations' or unexpected performance degradation, for example in the event of a trend break. This means a careful monitoring and alerting system must be in place to track model performance. A sustainable application also requires detailed documentation of the parameters and data used, so the model remains transparent and reproducible.
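A basic version of such an alert, here a hypothetical check of the rolling model score against the baseline measured at deployment, might look like:

```python
def performance_alert(recent_scores, baseline, tolerance=0.05):
    """Return True when the rolling mean score drops more than `tolerance`
    below the deployment baseline -- a simple trend-break signal."""
    rolling = sum(recent_scores) / len(recent_scores)
    return rolling < baseline - tolerance

baseline_auc = 0.82                       # assumed score at go-live
healthy = [0.81, 0.83, 0.80, 0.82]        # normal weekly scores
degraded = [0.74, 0.72, 0.75, 0.71]       # scores after a trend break

alert_healthy = performance_alert(healthy, baseline_auc)    # no alert
alert_degraded = performance_alert(degraded, baseline_auc)  # alert fires
```

In production this check would run on a schedule and notify the team, while the documented parameters make it possible to reproduce why the alert fired.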

The model turns out to be non-scalable

An algorithm should ‘by design’ be part of a system with scalability in mind. This typically requires secure cloud solutions and scalable infrastructure such as MLOps technology (the ML variant of DevOps). Take growth forecasts into account and ensure a sufficiently flexible system that adapts to evolving business requirements. Making the right choices for integration with the IT landscape is essential (e.g., real-time or batch processing). But more on this in the next edition.

Last-but-not-least: insufficient involvement

According to Professor of Innovation Henk Volberda, the success of innovation is only 25% technical in nature and 75% dependent on human adoption. Successful adoption starts with CEO sponsorship: change flows from the top down. Leadership must ensure sufficient training, communication, and support when deploying an AI model. Invest enough time and energy to make this new technology part of your organization, from strategy to operations. Because that's where the real return on investment lies: the successful collaboration between human experts and AI technology.

“It’s easy to create a self-learning algorithm. What’s challenging is to create a self-learning organization.” – Satya Nadella, CEO of Microsoft

In short: devising, building, and validating a robust algorithm is just phase one of successfully putting AI into practice. In the next edition, we will delve into how you can integrate it with existing IT systems and workflows.


The original article was published in the VVP magazine.

©2024 Onesurance B.V.
