Sixfold Resources

How to Secure AI Compliance in Insurance

Sixfold's CEO and founder, Alex Schmelkin, along with special guests, discusses developments in AI regulation for the U.S. insurance industry and addresses common compliance concerns.

5 min read
Ana Clara Ribeiro

With the rise of AI solutions in the Insurance market, questions around AI regulations and compliance are increasingly at the forefront. Key questions such as “What happens when we use data in the context of AI?” and “What are the key focus areas in the new regulations?” are top of mind for both consumers and industry leaders.

To address these topics, Sixfold’s founder and CEO, Alex Schmelkin, hosted the webinar How to Secure Your AI Compliance Team’s Approval. Joined by industry experts Jason D. Lapham, Deputy Commissioner for P&C Insurance for the State of Colorado, and Matt Kelly, Data Strategy & Security Counsel at Debevoise & Plimpton, the discussion provided essential insights into navigating AI regulations and compliance.

Here are the key insights from the session:

AI Regulation Developments: Colorado Leads the Way in the U.S.

“There’s a requirement in almost any regulatory regime to protect consumer data. But now, what happens when we start using that data in AI? Are things different?” — Alex Schmelkin

Both nationally and globally, AI regulations are being implemented. In the U.S., Colorado became the first state to pass a law and implement regulations related to AI in the insurance sector. Jason Lapham explained that the key components of this legislation revolve around two major requirements:

  1. Governance and Risk Management Frameworks: Companies must establish robust frameworks to manage the risks associated with AI and predictive models.
  2. Quantitative Testing: Businesses must test their AI models to ensure that outcomes generated from non-traditional data sources (e.g., external consumer data) do not lead to unfairly discriminatory results. The legislation also mandates a stakeholder process prior to adopting rules.
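To make the idea of quantitative testing concrete, here is a minimal sketch of one widely used fairness measure, the adverse impact ratio, which compares favorable-outcome rates across groups. This illustrates the general technique only; it is not Colorado’s prescribed methodology (the Division’s testing framework is still in development, as discussed below), and the 0.8 cutoff shown is the traditional four-fifths-rule heuristic, not a regulatory threshold.

```python
# Minimal sketch of a quantitative fairness test: the adverse impact ratio,
# i.e., each group's favorable-outcome rate relative to the most-favored
# group. Illustrative only; not Colorado's prescribed methodology.
from collections import defaultdict

def adverse_impact_ratios(outcomes: list[tuple[str, bool]]) -> dict[str, float]:
    """outcomes: (group_label, model_approved) pairs."""
    approved, total = defaultdict(int), defaultdict(int)
    for group, ok in outcomes:
        total[group] += 1
        approved[group] += ok
    rates = {g: approved[g] / total[g] for g in total}
    best = max(rates.values())
    return {g: rate / best for g, rate in rates.items()}

# Hypothetical model decisions: group A approved 80%, group B approved 55%.
decisions = [("A", True)] * 80 + [("A", False)] * 20 \
          + [("B", True)] * 55 + [("B", False)] * 45
print(adverse_impact_ratios(decisions))
# {'A': 1.0, 'B': 0.6875} -> B falls below the 0.8 four-fifths heuristic
```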

Initially, the focus was on life insurance, as it played a critical role in shaping the legislative process. The first regulation implementing Colorado’s Bill 169, adopted in late 2023, addressed governance and risk management. This regulation applies to life insurers across all practices, and the Colorado Division of Insurance received the first reports this year from companies using predictive models and external consumer data sources.

So, what’s the next move for this first-mover state on AI regulation? The Colorado Division of Insurance is developing a framework for quantitative testing to help insurers assess whether their models produce unfairly discriminatory outcomes. Insurers are expected to take corrective action if their models do lead to such outcomes.

Compliance Approach: Develop Governance Programs

“When we’re discussing with clients, we say focus on the operational risk side, and it will get you largely where you need to be for most regulations out there.” — Matt Kelly

With AI regulations differing across U.S. states and globally, companies face real compliance challenges. Matt Kelly described how his team at Debevoise & Plimpton navigates these challenges by building a framework that prioritizes managing operational risk related to technology. Their approach involves asking questions such as:

  • What AI is being used?
  • What risks are associated with its use?
  • How is the company governing or mitigating those risks?

By focusing on these questions, companies can develop strong governance programs that align with most regulatory frameworks. Kelly advises clients to center their efforts on addressing operational risks, which takes them a long way toward compliance.

The Four Pillars of AI Compliance 

Across different AI regulatory regimes, four common themes emerge:

  1. Transparency and Accountability: Companies must understand and clearly explain their AI processes. Transparency is a universal requirement.
  2. Ethical and Fair Usage: Organizations must ensure their AI models do not introduce bias and must be able to demonstrate fairness.
  3. Consumer Protection: In all regulatory contexts, protecting consumer data is essential. With AI, this extends to ensuring models do not misuse consumer information.
  4. Governance Structure: Insurance companies are responsible for ensuring that they—and any third-party model providers—comply with AI regulations. While third-party providers play a role, carriers are ultimately accountable.

Matt Kelly emphasizes that insurers can navigate these four themes successfully by establishing the right frameworks and governance structures. 

Protection vs. Innovation: Striking the Right Balance 

“We tend not to look at innovation as a risk. We see it as aligned with protecting consumers when managed correctly.” — Matt Kelly

Balancing consumer protection with innovation is crucial for insurers. When done correctly, these goals align. Matt noted that the focus should be on leveraging technology to improve services without compromising consumer rights.

One major concern in insurance is unfair discrimination, particularly in how companies categorize risks using AI and consumer data. Regulators ask whether these categorizations are justified based on coverage or risk pool considerations, or whether they are unfairly based on unrelated characteristics. Aligning these concerns with technological innovation can lead to more accurate and fair coverage decisions while ensuring compliance with regulatory standards.

Want to learn more? 

Watch the full webinar recording and download Sixfold’s Responsible AI framework to learn more about our approach to safe AI usage.


6 Common Myths About AI, Insurance, and Compliance

I run into the same misconceptions about AI and insurance again and again. Let me try to put some of these common myths to bed once and for all.

5 min read
Alex Schmelkin

These days, my professional life is dedicated to one focused part of the global business landscape: the untamed frontier where cutting-edge AI meets insurance. 

I have conversations with insurers around the world about where it’s all going and how AI will work under new global regulations. And one thing never ceases to amaze me: how often I end up addressing the same misconceptions. 

Some confusion is understandable (if not inevitable) considering the speed with which these technologies are evolving, the hype from those suddenly wanting a piece of the action, and some fear-mongering from an old guard seeking to maintain the status quo. So, I thought I’d take a moment to clear the air and address six all-too-common myths about AI in insurance.

Myth 1: You’re not allowed to use AI in insurance

Yes, there’s a patchwork of emerging AI regulations—and, yes, in many cases they zero in specifically on insurance—but they do not ban its use. From my perspective, they do just the opposite: they set ground rules, which frees carriers to invest in innovation without fear that they’re developing in the wrong direction and will be forced into a hard pivot down the line.

Sixfold has actually gained customers (a lot of them) since the major AI regulations in Europe and elsewhere were announced. So, let’s put this all-too-prevalent misconception to bed once and for all: there are no rules prohibiting you from implementing AI in your insurance processes.

Myth 2: AI solutions can’t secure customer data

As stated above, there are no blanket prohibitions on using customer data in AI systems. There are, however, strict rules dictating how data—particularly PII and PHI—must be managed and secured. These guidelines aren’t anything radically new to developers with experience in highly regulated industries.

Security-first data processes have been the norm since long before LLMs went mainstream. These protocols protect crucial personal data in applications that individuals and businesses use every day without issue (digital patient portals, browser-based personal banking, and market trading apps, just to name a few). These same measures can be seamlessly extended into AI-based solutions.

Myth 3: “My proprietary data will train other companies’ models”

No carrier would ever allow its proprietary data to train models used by competitors. Fortunately, implementing an LLM-powered solution does not mean giving up control of your data—at least with the right approach. 

A responsible AI vendor helps their clients build AI solutions trained on their unique data for their exclusive use, as opposed to a generic insurance-focused LLM to be used by all comers. This also means allowing companies to maintain full control over their submissions within their environment so that when, for example, a case is deleted, all associated artifacts and data are removed across all databases.
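To make that deletion guarantee concrete, here is a minimal sketch of how case-scoped cleanup can be enforced at the storage layer. It uses SQLite’s cascading foreign keys purely for illustration; the schema and table names are hypothetical, not Sixfold’s actual design.

```python
# Minimal sketch: every artifact references its parent case, so deleting
# the case removes all associated data (hypothetical schema, not Sixfold's).
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite requires opting in to FK enforcement

conn.executescript("""
CREATE TABLE cases (
    case_id      INTEGER PRIMARY KEY,
    submitted_at TEXT NOT NULL
);
CREATE TABLE artifacts (
    artifact_id INTEGER PRIMARY KEY,
    case_id     INTEGER NOT NULL REFERENCES cases(case_id) ON DELETE CASCADE,
    kind        TEXT NOT NULL,  -- e.g. extracted text, model summary
    payload     BLOB
);
""")

conn.execute("INSERT INTO cases VALUES (1, '2024-05-01')")
conn.execute("INSERT INTO artifacts VALUES (10, 1, 'summary', x'00')")

# Deleting the case removes every associated artifact in the same statement.
conn.execute("DELETE FROM cases WHERE case_id = 1")
print(conn.execute("SELECT COUNT(*) FROM artifacts").fetchone()[0])  # 0
```

The same pattern, keying every stored artifact to its parent case, extends naturally to document stores and vector indexes.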

At Sixfold, we train our base models on public and synthetic (AKA, “not customer”) data. We then copy these base models into dedicated environments for our customers, and all subsequent training and tuning happens in those dedicated environments. Customer guidelines and data never leave the dedicated environment and never make it back to the base models.
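Here is a conceptual sketch of that one-way flow; the names are illustrative, not Sixfold’s actual architecture. Each tenant receives an independent copy of the base model, and tenant-specific tuning never propagates back.

```python
# Conceptual sketch of per-tenant model isolation (illustrative names only).
import copy
from dataclasses import dataclass, field

@dataclass
class Model:
    trained_on: list[str] = field(default_factory=list)

    def tune(self, dataset: str) -> None:
        self.trained_on.append(dataset)

# The base model sees only public and synthetic data.
base = Model(trained_on=["public-corpus", "synthetic-claims"])

# Each tenant environment gets its own deep copy of the base model.
tenant_a = copy.deepcopy(base)
tenant_b = copy.deepcopy(base)

tenant_a.tune("carrier-a-guidelines")  # stays inside tenant A's environment

assert "carrier-a-guidelines" not in base.trained_on      # base never sees it
assert "carrier-a-guidelines" not in tenant_b.trained_on  # neither does tenant B
```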

Let’s kill this one: Yes, you can use AI and still maintain control of your data.

Myth 4: There’s no way to prevent LLM hallucinations

We’ve all seen the surreal AI-generated images lurching up from the depths of the uncanny valley—hands with too many fingers, physiology-defying facial expressions, body parts & objects melded together seemingly at random. Surely, we can’t use that technology for consequential areas like insurance. But I’m here to tell you that with the proper precautions and infrastructure, the impact of hallucinations can be greatly minimized, if not eliminated.

Mitigation combines a range of tactics, such as using models to auto-review generated content, incorporating user feedback to identify and correct hallucinations, and conducting manual reviews that compare sample outputs against ground-truth sets.
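As a sketch of what that last tactic can look like in code, here is a naive lexical grounding check that flags generated claims too weakly supported by a source document, routing them to human review. Production pipelines would typically use an entailment model or a second LLM as the reviewer; all names below are illustrative.

```python
# Naive grounding check: flag claims whose substantive terms are mostly
# absent from the source document (illustrative sketch only).
def support_ratio(claim: str, source: str) -> float:
    """Fraction of a claim's substantive terms that appear in the source."""
    source = source.lower()
    terms = [w.strip(".,").lower() for w in claim.split() if len(w) > 3]
    if not terms:
        return 1.0  # nothing substantive to verify
    return sum(t in source for t in terms) / len(terms)

def flag_for_review(claims: list[str], source: str, threshold: float = 0.8) -> list[str]:
    """Return claims too weakly grounded in the source to auto-approve."""
    return [c for c in claims if support_ratio(c, source) < threshold]

source = "The insured operates a bakery in Denver with 12 employees."
claims = ["The insured operates a bakery",
          "The insured has 40 employees in Boston"]
print(flag_for_review(claims, source))  # ['The insured has 40 employees in Boston']
```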

Myth 5: AIs run autonomously without human oversight

Even if you never watched The Terminator, The Matrix, 2001: A Space Odyssey, or any other movie about human-usurping tech, it’d be reasonable to have some reservations about scaled automation. There’s a lot of fearful talk out there about humans ceding control in important areas to un-feeling machines. However, that’s not where we’re at, nor is it how I see these technologies developing. 

Let’s break this one down.

AI is a fantastic and transformative technology, but even I—the number one cheerleader for AI-powered insurance—agree we shouldn’t leave technology alone to make consequential decisions like who gets approved for insurance and at what price. But even if I didn’t feel this way, insurtechs are obliged to comply with new regulations (e.g., the EU AI Act and rules from the California Department of Insurance) that tilt toward avoiding fully automated underwriting and require, at the very least, that human overseers can audit and review decisions.

When it comes to your customers’ experience, AI opens the door to more human engagement, not less. In my view, AI will free underwriters from banal, repetitive data work (which machines handle better anyway) so that they can apply uniquely human skills in specialized or complex use cases they previously wouldn’t have had the bandwidth to address.

Myth 6: Regulations are still being written, so it’s better to wait for them to settle

I hear this one a lot. I understand why people arrive at this view. My take? You can’t afford to sit on the sidelines!

To be sure, multiple sets of AI regulations are taking root at different governmental levels, which adds complexity. But here’s a little secret from someone paying very close attention to emerging AI rulesets: there’s very little daylight between them. 

Here’s the thing: regulators worldwide attend the same conferences, engage with the same stakeholders, and read the same studies & whitepapers. And they’re all watching what the others are doing. As a result, we’re arriving at a global consensus focused on three main areas: data security, transparency, and auditability.

The global AI regulatory landscape is, like any global regulatory landscape, complex; but I’m here to tell you it’s neither as uneven nor as unmanageable as you may fear.

Furthermore, if an additional major change were introduced, it wouldn’t suddenly take effect. That’s by design. Think of all the websites and digital applications that launched—and indeed, thrived—in the six-year window between when GDPR was first proposed in 2012 and when it became enforceable in 2018. Think of everything that would have been lost if they had waited until GDPR was firmly established before moving forward.

My entire career has been spent in fast-moving cutting-edge technologies. And I can tell you from experience that it’s far better to deploy & iterate than to wait for regulatory Godot to arrive. Jump in and get started!

There are more myths to bust! Watch our compliance webinar

The coming regulations are not as onerous or as unmanageable as you might fear—particularly when you work with the right partners. I hope I’ve helped clear up some misconceptions as you move forward on your AI journey.

Want to learn more about AI, insurance, and compliance? Watch the replay of our compliance webinar featuring a discussion between me; Jason D. Lapham, Deputy Commissioner for P&C Insurance at the Colorado Division of Insurance; and Matt Kelly, a key member of Debevoise & Plimpton’s Artificial Intelligence Group. We discuss the global regulatory landscape and how AI models should be evaluated regarding compliance, data usage, and privacy.

This article was originally posted on LinkedIn


AI in Insurance Is Officially “High Risk” in the EU. Now What?

The new EU AI Act defines AI in insurance as “high risk.” Here’s what that means and how to remain compliant in Europe and around the world.

5 min read
Alex Schmelkin

The European Parliament passed the EU Artificial Intelligence Act in March 2024, a sweeping regulatory framework scheduled to go into effect by mid-2026.

The Act categorizes AI systems into four risk tiers—Unacceptable, High, Limited, and Minimal—based on the sensitivity of the data the systems handle and the criticality of the use case.

It specifically carves out guidelines for AI in insurance, placing “AI systems intended to be used for risk assessment and pricing in [...] life and health insurance” in the “High-risk” tier, which means they must continually satisfy specific conditions around security, transparency, auditability, and human oversight. 

The Act’s passage is reflective of an emerging acknowledgment that AI must be paired with rules guiding its impact and development—and it's far from just an EU thing. Last week, the UK and the US signed a first-of-its-kind bilateral agreement to develop “robust” methods for evaluating the safety of AI tools and the systems that underpin them. 

I fully expect to see additional frameworks following the EU, UK, and US’s lead, particularly within vital sectors such as life insurance. Safety, governance, and transparency are no longer lofty, optional aspirations for AI providers; they are inherent—and increasingly enforceable—facets of the emerging business landscape.

Please be skeptical of your tech vendors

When a carrier integrates a vendor into their tech stack, they’re outsourcing a certain amount of risk management to that vendor. That’s no small responsibility and one we at Sixfold take very seriously. 

We’ve taken on the continuous work of keeping our technology compliant with evolving rules and expectations, so you don’t have to. That message, I’ve found, doesn’t always land immediately. Tech leaders have an inherent “filter” for vendor claims that is appropriate and understandable (I too have years of experience overseeing sprawling enterprise tech stacks and attempting to separate marketing from “the meat”). We expect—indeed, we want—customers to question our claims and check our work. As my co-founder and COO Jane Tran put it during a panel discussion at ITI EU 2024:

“As a carrier, you should be skeptical towards new technology solutions. Our work as a vendor is to make you confident that we have thought about all the risks for you already.” 

Today, that confidence-building extends to assuring customers and partners that our platform complies with emerging AI rules around the world—including ones that are still being written.

Balancing AI underwriting and transparency 

When we launched last year, there was lots of buzz about the potential of AI, along with lots of talk about its potential downside. We didn’t need to hire pricey consultants to know that AI regulations would be coming soon. 

Early on, we actively engaged with US regulators to understand their thinking and offer our insights as AI experts. From these conversations, we learned that the chief concerns were the scaling-out of bias and the impact of AI hallucinations on consequential decisions.

[Photo: Sixfold CEO Alex Schmelkin (right) joins a panel discussion about AI in underwriting at the National Association of Insurance Commissioners (NAIC)’s national meeting in Seattle, WA.]

With these concerns in mind, we proactively designed our platform with baked-in transparency to mitigate the influence of human bias, while also installing mechanisms to eliminate hallucinations and elevate privacy. Each Sixfold customer operates within an isolated, single-tenant environment, and end-user data is never persisted in the LLM-powered Gen AI layer, so information remains protected and secure. We were implementing enterprise AI guardrails before it was cool.

I’ve often found customers and prospects are surprised when I share with them how prepared our platform is for the evolving patchwork of global AI regulations. I’m not sure what their conversations with other companies are like, but I sense the relief when they learn how Sixfold was built from the get-go to comply with the new way of things–even before they were a thing.

The regulatory landscape for AI in insurance is developing quickly, both in the US and globally. Join a discussion with industry experts and learn how to safely and compliantly integrate your next solution. Register for our upcoming webinar here.