Feb 11, 2025
Are we already trusting AI?
The Beursbengel has been around for almost ninety years and publishes ten issues a year. Issue 937 featured its first column about AI, written by our co-founder Jack Vos, under the title "Do We Already Trust AI?"
Not surprisingly, trust is one of the most important core values in the insurance industry. The question isn't whether we should use AI, but how we deploy it in a way that earns trust. Read more in the article below.
You may have noticed that AI is also making significant strides in the insurance sector. Innovative firms see the technology as a way to work more efficiently and intelligently and to stay one step ahead of the competition. Yet the question remains: how do customers view these developments? Recent research by DNB and AFM paints a clear yet uncomfortable picture. Only a quarter of respondents view the use of AI by financial institutions positively, while 22 percent are negative about it. The majority are neutral or have no opinion, partly because few people know exactly how their bank, insurer, or pension fund uses AI.
“With Responsible AI, we can create real value with AI in a sustainable way.”
— Jack Vos, founder of Onesurance.ai
These figures highlight not only the gap between technology and consumers but also the challenge facing the sector. Trust is the core of insurance. Customers need to be able to rely on decisions being fair, transparent, and explainable, whether they are made by people or by technology. That requires careful choices in how AI is applied.
Generative AI (gen AI), known from ChatGPT, has received most of the attention since 2022. The possibilities seem endless: writing texts, generating creative output, and much more. In insurance, however, gen AI often proves unsuitable. This form of AI is a black box even to experts: unexplainable and variable in its results, which makes it unreliable for critical processes such as claims handling or acceptance. Moreover, the large datasets it requires bring privacy risks that may conflict with the GDPR.
Predictive AI, on the other hand, offers a more reliable alternative. This technology is accurate, consistent, and, crucially, far more explainable. With an audit trail, you can demonstrate which factors influenced a decision. That provides transparency and scalability. Predictive AI is by no means a newcomer; the technology has already proven itself in the financial sector.
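To make that idea concrete, here is a minimal sketch in Python of what such an audit trail can look like. The feature names (claim_amount, policy_age_years, prior_claims) are hypothetical and the data is synthetic; this only illustrates the principle that a predictive model's per-factor contributions to each decision can be logged and reviewed, not any insurer's actual system.

```python
# Illustrative sketch only: a simple predictive model with an audit trail.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
features = ["claim_amount", "policy_age_years", "prior_claims"]  # hypothetical

# Synthetic stand-in for historical claims data.
X = rng.normal(size=(500, 3))
y = (X @ np.array([1.5, -0.8, 2.0]) + rng.normal(size=500) > 0).astype(int)

model = LogisticRegression().fit(X, y)

def decide_with_audit(x: np.ndarray) -> tuple[bool, dict]:
    """Score one case and record which factors drove the decision."""
    proba = float(model.predict_proba(x.reshape(1, -1))[0, 1])
    # Each factor's contribution to the log-odds: coefficient * feature value.
    contributions = {f: round(float(c * v), 3)
                     for f, (c, v) in zip(features, zip(model.coef_[0], x))}
    audit_record = {
        "probability_flagged": round(proba, 3),
        "factor_contributions": contributions,  # demonstrable to a reviewer
        "intercept": round(float(model.intercept_[0]), 3),
    }
    return proba > 0.5, audit_record

flagged, record = decide_with_audit(rng.normal(size=3))
print(flagged, record)
```

Because each contribution is a simple coefficient-times-value term, a reviewer can see exactly why a case was flagged, which is precisely what a black-box generative model cannot offer.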
Yet we don't have to write off gen AI. It is already valuable for supporting tasks, such as drafting customer communications. For critical processes, however, human oversight remains essential: AI can make a proposal, but a human has the final say, especially when it comes to rejecting applications or claims. This procedure, known as human-in-the-loop, ensures that ethical principles such as fairness and accountability are upheld.
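As a sketch of how that final say can be enforced in software, the hypothetical workflow below routes every rejection, and every low-confidence case, to a human reviewer. It illustrates the gating pattern only, with made-up thresholds, not any real claims system.

```python
# Illustrative sketch only: a human-in-the-loop gate for claim decisions.
from dataclasses import dataclass

@dataclass
class Recommendation:
    decision: str      # "approve" or "reject"
    confidence: float  # the model's own confidence, 0..1
    rationale: str     # explanation shown to the human reviewer

def handle_claim(rec: Recommendation) -> str:
    # Clear-cut approvals may proceed automatically...
    if rec.decision == "approve" and rec.confidence >= 0.95:
        return "auto-approved"
    # ...but every rejection, and every uncertain case, goes to a person,
    # who sees the AI's rationale and keeps the final say.
    print(f"Review needed: AI proposes '{rec.decision}' because {rec.rationale}.")
    verdict = input("Final decision (approve/reject): ").strip().lower()
    return f"decided by human: {verdict}"

# Example: the model proposes a rejection; a person must confirm or override it.
print(handle_claim(Recommendation("reject", 0.88, "three prior claims this year")))
```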
Supervision also plays a crucial role. The DNB research shows that 62 percent of consumers view AI more positively if it is strictly supervised. AFM and DNB are therefore working on new methods to assess AI effectively. One thing is certain: the technology must meet the industry's ethical standards, and good procedures, with proactive monitoring of the solution, must be put in place for that purpose.
The question is not whether we should use AI, but how we should employ AI in a way that earns trust. With Responsible AI (RAI)—a combination of the right technology, human control, and ethics—we can create real value with AI in a sustainable way. As Warren Buffett once said: “It takes 20 years to build a reputation and five minutes to ruin it.” No matter how smart AI becomes, it's up to us to use it carefully and responsibly.
Source: AFM