AI regulation and the impact on the insurance industry
James Clark and Chris Halliday look at the EU AI Act – arguably the world's first comprehensive law specifically designed to focus on the regulation of AI systems – and its impact on the insurance industry.
The regulatory challenge around AI is as broad as its potential applications. The current approach to AI regulation in the UK, as outlined in the government's white paper “A Pro-Innovation Approach to AI Regulation”, is to rely to the greatest extent possible on existing regulatory frameworks. Particularly relevant for the insurance industry is financial services regulation and the role to be played by the Financial Conduct Authority (FCA) and Prudential Regulation Authority (PRA).
The FCA and PRA will use their existing powers to monitor and enforce against the misuse of AI, applying existing principles: for example, consumer protection principles and the concept of treating customers fairly, and considering how these might be affected if an insurer relies on an AI system and predictive models to make pricing decisions.
Embracing the future
Many businesses are already focused on how best to integrate AI into their operations and apply it to their existing business model, recognising the need to embrace rather than resist change.
In the insurance industry, not all of this is new. For many years, insurers have used algorithms and machine learning principles for risk assessment and pricing purposes. However, recent developments in the field of AI make these technologies increasingly powerful, coupled with the explosion in specific forms of AI – such as generative AI – that are more novel for insurance businesses. Consequently, a key challenge for the industry is working out how to adapt and upgrade existing practices and processes to take account of advancements in existing technology, whilst also embracing certain 'net new' innovations.
A risk-based approach
The EU AI Act is one of the world's first comprehensive horizontal laws that is designed specifically to focus on the regulation of AI systems, and the first to command widespread global attention. It is an EU law, but importantly also one that has extraterritorial effect. UK businesses selling into the EU or even businesses using AI systems where the outputs of those systems then have an impact on individuals in the EU are potentially caught by the law.
Crucially, the Act is a risk-based law that does not try to regulate all AI systems and which distinguishes between different tiers of risk in relation to the systems it does regulate. The most important category under the Act is that of high-risk AI systems, where the vast majority of obligations lie. The insurance industry is specifically flagged as an area of concern for high-risk AI systems, with the Act explicitly identifying AI systems used in health and life insurance for pricing, underwriting and claims decisioning, due to the potential significant impact on individuals' livelihoods.
As a result, insurers will need to get to grips with the new law, first working out in which areas they are caught as a provider or deployer, before planning and building the required compliance framework to address their relevant obligations.
Navigating a complex landscape
In order for the insurance industry and other sectors to make the most of AI, businesses will first need to map out where they are already using AI systems. The starting point is often to identify where within the organisation they are using AI systems that are higher risk, because of the sensitivity of the data they are processing or the criticality of the decisions for which they are being used. More granular risk assessment tools will then be needed for specific use cases proposed by the business, including a deeper dive on the associated legal risks, combined with controls on how to build the AI system in a compliant way, which can then be audited and monitored in practice.
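The inventory-and-triage step described above can be sketched in code. The following is a minimal illustrative sketch only: the tier names and triage rules are assumptions loosely modelled on the Act's tiered approach, not the Act's legal definitions, and any real assessment would need proper legal input.

```python
# Illustrative sketch of an AI use-case inventory and risk triage.
# The tiers and rules here are hypothetical assumptions for
# illustration, not the EU AI Act's legal tests.
from dataclasses import dataclass


@dataclass
class AIUseCase:
    name: str
    processes_sensitive_data: bool     # e.g. health data
    affects_individual_outcomes: bool  # e.g. pricing, underwriting, claims


def triage(use_case: AIUseCase) -> str:
    """Assign an indicative risk tier to drive the depth of later review."""
    if use_case.processes_sensitive_data and use_case.affects_individual_outcomes:
        return "high"     # deeper legal assessment, controls and audit
    if use_case.affects_individual_outcomes:
        return "limited"  # lighter-touch review
    return "minimal"


# A health-insurance pricing model would be flagged for the deepest review.
pricing_model = AIUseCase("health pricing model", True, True)
print(triage(pricing_model))  # high
```

In practice the output of such a triage would simply route each use case to the appropriate level of legal and governance scrutiny, rather than substitute for it.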
The challenge is that use cases are emerging and demand is increasing rapidly. Governance, business and AI teams therefore need to work side by side to embed these processes within use case development, which can often save significant work later.
The EU AI Act is a complex piece of legislation, and constructing the frameworks needed to comply with it will be a significant challenge. Success will rely not only on the quality of data and models used and on good governance, but also on adopting an approach to the new law that is proportionate and builds confidence, enabling businesses to make the most of future AI opportunities.
Chris Halliday is global proposition leader for personal lines pricing, product, claims and underwriting at WTW. James Clark is partner in the data protection, privacy and cyber security group at DLA Piper, and a specialist in the regulation of AI.