How is AI reshaping the commercial insurance landscape?
Lockton Re, in collaboration with Lockton International and Armilla AI, said the rapid adoption of artificial intelligence (AI) across industries is radically changing the landscape of business risk and necessitates a new AI risk classification.
Oliver Breaux, one of the report’s authors and head of the Center of Excellence in Cybersecurity at Lockton Re, said: “No economic sector is immune to the potential impact of artificial intelligence. As an industry, we have to prepare for how these rapidly evolving risks are covered in commercial insurance, and understand emerging claims patterns.”
Cyber risks
Artificial intelligence is being used to enhance and accelerate cyber attacks, such as sophisticated cyber fraud and deepfakes. Some cyber insurers have indicated that they cover AI risks when the underlying cause is a traditional cyber incident, such as a data leak or an attack on AI infrastructure.
New types of insurance are also emerging to cover the operational risks of AI, such as unauthorized access to large language model (LLM) environments, including reimbursement of model redevelopment costs after an incident.
AI and professional liability (E&O) errors
Traditional E&O policies were designed for predictable software failures such as bugs, service interruptions, or breaches of contract, while the probabilistic nature of AI makes risks harder to predict and creates new claim scenarios.
Specific endorsements have emerged to cover model errors, such as erroneous decisions or data bias, but they are often narrow in scope and do not cover every eventuality.
Directors and officers (D&O) liability and operational risks
As AI is integrated into corporate strategies and operations, directors’ legal exposure increases, especially with regard to governance and the disclosure of AI risks such as bias or reliance on a single provider.
Conventional policies also do not guarantee that AI errors are covered, and exclusions may still apply for intentional acts or misleading statements about AI capabilities.
Employment practices liability (EPL) and bias
The use of AI in recruitment has increased the risk of bias and discrimination, especially when models are trained on biased data, while most insurance policies have not yet explicitly addressed coverage of AI outputs or whether they count as acts of insured parties.
Affirmative Coverage
A new category of insurance is emerging to address gaps in traditional commercial policies, aiming to cover liability for errors arising from artificial intelligence models, including cases that involve no cyber attack or malicious actor.
This type of insurance is based on evaluating each model individually by industry, output context, version, and use case, allowing for customized pricing and a clearer demonstration of coverage intent.
Systemic Risk
The report touched on the possibility of systemic risk arising from companies’ reliance on shared infrastructure and foundational AI models.
“The challenge for the insurance industry is not whether AI will create systemic risk, but when, and can underwriting practices keep pace with this change,” said Baiju Devani, one of the report’s authors and chief technology officer at Armilla AI.
The report explained that traditional systemic-risk controls are less effective with artificial intelligence, as a defect in a widely used model can cause simultaneous losses across multiple companies, regardless of sector or geography.