Five Key Areas for AI Disruption in Insurance

Feb 27, 2024

With the advent of ChatGPT, Bard/Gemini, and Copilot, generative AI and Large Language Models (LLMs) have been thrust into the spotlight.


AI is set to disrupt every industry, especially those built largely on administrative support, legal, business, and financial operations, such as insurance and financial services organisations.


“It is not the strongest of the species that survives, nor the most intelligent, but the one most responsive to change.” – often attributed to Charles Darwin.


The quote is often summarised in modern culture as “adapt or die”: harsh, but true. The first companies that not only integrate AI into their operations but leverage it to produce game-changing results will accelerate far ahead of their competitors.


The big consultancies have been throwing around predictions for AI in the insurance industry for the last few years. Interestingly, all of the publicly available figures are pre-ChatGPT, which, as we know, was a pivotal moment for AI in the workplace:


  • 60% of insurers believe AI could cut operational costs by at least 15%. (Source: Accenture in 2020)
  • AI could lead to cost savings of $26.7 billion in the insurance industry. (Source: Capgemini in 2020)
  • AI could contribute $1.1 trillion to the global insurance industry by 2035. (Source: Infosys in 2020)
  • The use of AI-driven chatbots could save the insurance industry $1.3 billion annually by 2023. (Source: Juniper Research in 2019)
  • By 2024, more than 60% of auto insurance carriers will rely on AI for automated claims processing. (Source: Gartner in 2020)


With AI technology changing so rapidly, how much have these figures shifted? Where should you focus to get the best bang for your buck, balancing your appetite for risk against your budget?



Five Key Areas

We’ve listed our five key areas to consider when deciding where to invest your time. These areas aren’t anything new, and we’re not claiming to be revolutionary: insurers have been looking to improve and automate as many processes as possible for some time, and AI is now another technology choice for doing so.


Customer Service Chatbots

Unlike humans, AI can be available 24/7. With round-the-clock availability, customers can resolve simpler queries through a conversation with your trained chatbot. This can improve customer experience through reduced waiting times and more readily available information.


Chatbots can also support internal customer-service staff, giving them a faster way to find the information relevant to a customer query, along with potential solutions. A minimal sketch of a customer-facing assistant follows.
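As a rough illustration, the sketch below shows one way a simple insurance support assistant could be wired up with an off-the-shelf LLM. It assumes the OpenAI Python SDK and an API key in the environment; the model name and system prompt are illustrative placeholders, not recommendations.

```python
# Minimal customer-service chatbot sketch using the OpenAI Python SDK.
# Assumes OPENAI_API_KEY is set in the environment; the model name and
# system prompt below are illustrative, not recommendations.
from openai import OpenAI

client = OpenAI()

SYSTEM_PROMPT = (
    "You are a support assistant for an insurance company. "
    "Answer simple policy questions. If the query involves a claim decision "
    "or changes to personal data, hand off to a human agent."
)

def answer_query(customer_message: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # hypothetical choice; swap in your preferred model
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": customer_message},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(answer_query("Does my home policy cover accidental glass breakage?"))
```

In practice you would ground the assistant in your own policy documents and add guardrails, but even a thin wrapper like this is enough to start measuring deflection rates and response quality.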


Fraud Detection

Identify patterns, detect anomalies, and flag suspicious behaviour. Flagged actions, outlining the areas highlighted as a risk, can be sent to your team to take the appropriate next steps. A toy example of this kind of anomaly detection follows.
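As a toy illustration, the sketch below uses scikit-learn's IsolationForest to flag outlying claims in synthetic data. The features (claim amount, days since policy start, prior claims) and thresholds are made up for demonstration; a production model would draw on far richer signals.

```python
# Toy anomaly-detection sketch with scikit-learn's IsolationForest.
# All data below is synthetic and the feature set is illustrative only.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Synthetic "normal" claims: [claim amount, days since policy start, prior claims]
normal = rng.normal(loc=[2_000, 400, 1], scale=[800, 150, 1], size=(500, 3))
# A couple of hand-crafted outliers to flag.
outliers = np.array([[45_000, 5, 9], [30_000, 2, 7]])
claims = np.vstack([normal, outliers])

model = IsolationForest(contamination=0.01, random_state=0)
model.fit(claims)

flags = model.predict(claims)        # -1 = anomalous, 1 = normal
suspicious = claims[flags == -1]
print(f"{len(suspicious)} claims flagged for manual review")
```

The point is not the specific algorithm but the workflow: score every claim, surface the outliers, and route them to your investigation team rather than trying to automate the final decision.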


Customised Products

Gain an advantage over your competitors by letting customers pay only for what they need. Premiums can be assessed against richer data sources, balancing the risks related to each customer. Repetitive underwriting workflows can be automated, reducing the pressure that more customised products place on underwriters' workloads. A simplified pricing sketch follows.
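A deliberately simplified sketch of usage-based pricing is shown below; the factor names, weights, and discounts are entirely hypothetical and exist only to show how enriched data signals could feed a premium calculation.

```python
# Illustrative usage-based premium sketch. Factor names and weights are
# made up for demonstration; real rating models are far more sophisticated.
def customised_premium(base_rate: float,
                       telematics_score: float,   # 0 (risky) .. 1 (safe)
                       prior_claims: int,
                       location_risk: float) -> float:  # 1.0 = average area
    safe_driving_discount = 0.8 + 0.2 * (1 - telematics_score)  # up to 20% off
    claims_loading = 1.0 + 0.15 * prior_claims
    return round(base_rate * safe_driving_discount * claims_loading * location_risk, 2)

# Example: a safe driver with no prior claims in a slightly higher-risk area.
print(customised_premium(base_rate=900.0, telematics_score=0.9,
                         prior_claims=0, location_risk=1.1))
```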


Claims Processing

Analyse and review vast amounts of data from various sources, including unstructured data like images, notes, and other content. This capability allows for a more nuanced understanding of claims, leading to more accurate processing.


Forecast likely outcomes, set rules for where processing can be automated and where it requires human intervention, and present the information and forecasts to a reviewer to settle the claim. A hypothetical triage sketch follows.
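As an illustration of the "automate where possible, escalate where needed" pattern, here is a hypothetical triage sketch. The field names, thresholds, and routing labels are assumptions for demonstration only, not a recommended claims policy.

```python
# Hypothetical claims-triage sketch: route each claim to automated settlement
# or a human reviewer based on explicit rules combined with a model score.
from dataclasses import dataclass

@dataclass
class Claim:
    claim_id: str
    amount: float
    fraud_score: float        # e.g. output of a fraud/anomaly model, 0-1
    has_injury: bool
    documents_complete: bool

AUTO_SETTLE_LIMIT = 5_000     # assumed business threshold
FRAUD_THRESHOLD = 0.7         # assumed model cut-off

def triage(claim: Claim) -> str:
    if not claim.documents_complete:
        return "request-documents"
    if claim.has_injury or claim.fraud_score >= FRAUD_THRESHOLD:
        return "manual-review"
    if claim.amount <= AUTO_SETTLE_LIMIT:
        return "auto-settle"
    return "manual-review"

if __name__ == "__main__":
    print(triage(Claim("C-1001", 1_800, 0.05, False, True)))   # auto-settle
    print(triage(Claim("C-1002", 12_000, 0.10, False, True)))  # manual-review
```

Keeping the rules explicit like this makes it easy for the business to audit and adjust where the automation boundary sits as confidence in the models grows.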


Application Modernisation

The impact of AI on developer productivity has been extraordinary. The legacy code at the core of many key applications represents a significant challenge to the evolution of enterprise systems.


It’s almost impossible to find developers with the skills and domain knowledge required to bridge and replace legacy systems. AI gives modernisation teams the tools they need to deeply understand old code bases and quickly translate them into modern languages and architectures.
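For example, an LLM can be asked to explain a legacy fragment and propose a modern equivalent. The sketch below assumes the OpenAI Python SDK and an API key in the environment; the COBOL snippet, prompt, and model name are all illustrative.

```python
# Sketch of using an LLM to explain a legacy code fragment during a
# modernisation exercise. The COBOL snippet and prompt are illustrative.
from openai import OpenAI

client = OpenAI()

LEGACY_SNIPPET = """
       IF POLICY-STATUS = 'A' AND CLAIM-AMT > 5000
           PERFORM ESCALATE-CLAIM
       ELSE
           PERFORM AUTO-PAY.
"""

prompt = (
    "Explain what this COBOL fragment does in plain English, then suggest "
    "an equivalent function in a modern language, noting any assumptions:\n"
    + LEGACY_SNIPPET
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model choice
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```

Outputs like this still need review by someone who understands the business rules, but they dramatically shorten the time it takes a modernisation team to build that understanding in the first place.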



Where to Start

Last year, IAG reported saving 150,000 work hours by deploying “bots”, a combination of software automation and AI. At roughly 1,750 working hours per person per year, 150,000 hours equates to about 85 full-time staff for an entire year; that’s a large portion of their workforce freed up to focus on more important tasks.


What piece of the pie do you want to carve out in this new world? Can you, as an organisation, risk being disrupted? What is your risk-versus-reward tolerance? How much are you willing to spend, and do you have the capacity and capability?


We have worked with clients to identify opportunities, structure business cases, and facilitate hackathons in support of this technology. Combined with rapid prototyping, this puts customers in a position to start realising benefits immediately and learn how it all hangs together.


Leveraging existing LLMs like ChatGPT is a low-touch, low-risk, and low-cost option that lets you start small and learn from the experience. We’ve worked alongside our customers on an LLM-powered review aggregator available directly to their customers, as well as creating AI assistants that work alongside their staff.


Building your own custom models will allow you to do more and gives you greater control over your data, but it comes with a higher price tag. Is your data even AI-ready?


Are you still on legacy, outdated systems that you first need to take the time to modernise? Are you riddled with technical debt? Can AI help you clean up and modernise those systems?


Whatever you do, don’t wait to get started; you don’t want to be the last to adopt.


