Published on: November 15, 2023

Navigating Unwritten Regulations: How We Did It

5 min read

AI is the defining technology of this decade. After years of unfulfilled promises from Hollywood and comic books, the science-fiction AI future has finally become business reality.

We can already see AI following the familiar path that past disruptive technologies have traced through the marketplace:

  • Stage one: early adopters embrace it before the general public even knows it exists;
  • Stage two: cutting-edge startups tap it to overcome long-standing business challenges; and then
  • Stage three: regulators draft rules to guide its usage and mitigate negative impacts.

There should be no doubt that AI-powered insurtech has accelerated through the first two stages in near-record time and is now entering stage three.

AI underwriting solutions, meet the rule-makers

The Colorado Department of Regulatory Agencies recently adopted regulations on AI applications and governance in life insurance. To be clear, Colorado isn’t an outlier; it’s a pioneer. Other states are following suit and crafting their own AI regulations, and federal-level AI rules are beginning to take shape as well.

The early days of the regulatory phase can be tricky for businesses. Insurers are eager to adopt advanced AI into their underwriting tech stacks, but wary of investing in platforms knowing that future rules may undercut those investments.

We at Sixfold are keenly aware of this tension: the ambition to innovate, paired with the trepidation of going too far down the wrong path. That’s why we designed our platform in anticipation of these emerging rules.

We’ve met with state-level regulators on numerous occasions over the past year to understand their concerns and thought processes. These engagements have been invaluable for all parties: regulators’ input played a major role in guiding our platform’s development, while our technical insights informed the formation of the emerging rules.

Sixfold CEO Alex Schmelkin (right) joined a panel discussion about AI in underwriting at the National Association of Insurance Commissioners (NAIC)’s Summer 2023 national meeting in Seattle, WA.

To simplify a very complex topic: regulators are concerned about bias in algorithms. There’s a tacit understanding that humans have inherent biases, which may be reflected in algorithms and applied at scale.

Most regulators we’ve engaged with agree that these very legitimate concerns about bias aren’t a reason to prohibit or even severely restrain AI, which brings enormous benefits: accelerated underwriting cycles, reduced overhead, and increased objectivity, all of which ultimately benefit consumers. However, for AI to work for everyone, it must be paired with transparency, traceability, and privacy. This is a message we at Sixfold have taken to heart.

In AI, it’s all about transparency

The past decade saw a plethora of algorithmic underwriting solutions with varying degrees of capability. Too often, these tools are “black boxes” that leave underwriters, brokers, and carriers unable to explain how decisions were reached. Opaque decision-making no longer meets the expectations of today’s consumers, or of regulators. That’s why we designed Sixfold with transparency at its core.

Customers accept automation as part of the modern digital landscape, but that acceptance comes with expectations. Our platform automatically surfaces the relevant data points behind each recommendation and presents them to underwriters as AI-generated plain-language summaries, while carefully controlling for “hallucinations.” It provides full traceability of all inputs, as well as a full lineage of changes to the UW model, so carriers can explain why results changed over time. These baked-in layers of transparency allow carriers, and the regulators overseeing them, to identify and mitigate incidental biases seeping into UW models.
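To make that idea concrete, here is a minimal sketch of what a traceability record could look like. The names and fields below are illustrative only, not Sixfold’s production schema: the point is simply that every recommendation can be tied back to its source evidence and the model version that produced it.

```python
# Hypothetical sketch (not Sixfold's actual data model): linking each AI
# recommendation to the source data points it relied on and the UW model
# version that produced it.
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass(frozen=True)
class SourceDataPoint:
    document_id: str   # where the fact came from (e.g., a submission PDF)
    excerpt: str       # the exact passage surfaced to the underwriter
    field_name: str    # which underwriting factor it supports


@dataclass(frozen=True)
class RecommendationRecord:
    case_id: str
    model_version: str      # ties the output to a specific UW model release
    summary: str            # plain-language summary shown to the underwriter
    evidence: list[SourceDataPoint] = field(default_factory=list)
    created_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))


# With records like this, a carrier can answer "why did the result change?"
# by comparing model_version and evidence across two points in time.
record = RecommendationRecord(
    case_id="case-123",
    model_version="uw-model-2023-11-01",
    summary="Applicant operates a low-hazard office; no prior liability claims found.",
    evidence=[SourceDataPoint("submission.pdf", "No claims reported 2018-2023.", "loss_history")],
)
print(record.model_version, len(record.evidence))
```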

Beyond prioritizing transparency, we’ve designed a platform that elevates data security and privacy. All Sixfold customers operate within isolated, single-tenant environments, and end-user data is never persisted in the LLM-powered Gen AI layer, so information remains protected and secure.
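As a rough illustration of that “never persisted” principle (again, a hypothetical sketch rather than our actual implementation), a Gen AI call can be wrapped so that direct identifiers are redacted on the way in and no copy of the end-user data is retained on the way out:

```python
# Hypothetical illustration: a stateless wrapper that redacts direct
# identifiers before text reaches a Gen AI model and keeps nothing afterward.
import re
from typing import Callable

REDACTIONS = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),          # US Social Security numbers
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[EMAIL]"),  # email addresses
]


def summarize_without_persisting(text: str, call_model: Callable[[str], str]) -> str:
    """Redact identifiers, call the model, and return only the summary.

    Nothing is written to disk or retained by this function; the raw text
    lives only in the caller's scope.
    """
    redacted = text
    for pattern, placeholder in REDACTIONS:
        redacted = pattern.sub(placeholder, redacted)
    return call_model(redacted)


# Example with a stand-in model function:
print(summarize_without_persisting(
    "Applicant jane@example.com, SSN 123-45-6789, runs a bakery.",
    call_model=lambda prompt: f"Summary of: {prompt}",
))
```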

Even with platform features built in anticipation of external regulations, we understand that some internal compliance teams are cautious about integrating gen AI, still a relatively new technology, into their tech stacks. To help your internal stakeholders get there, Sixfold can be implemented with robust internal auditability and the appropriate level of human-in-the-loop oversight, so that every team is comfortable on the new technological frontier.
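For teams that want to see what that human-in-the-loop dial might look like, here is a simplified sketch, with made-up names, of a routing policy that decides when a recommendation must wait for an underwriter’s sign-off and records the outcome for internal audit:

```python
# Hypothetical sketch of a human-in-the-loop policy: decide whether an AI
# recommendation can proceed automatically or must be reviewed by an
# underwriter, and log the routing decision for internal audit.
from dataclasses import dataclass


@dataclass
class ReviewPolicy:
    require_human_for_all: bool = True   # strictest setting for cautious compliance teams
    confidence_floor: float = 0.90       # below this, always escalate to a human


def route_recommendation(confidence: float, policy: ReviewPolicy) -> str:
    if policy.require_human_for_all or confidence < policy.confidence_floor:
        return "human_review"
    return "auto_proceed"


audit_log = []
for conf in (0.95, 0.70):
    decision = route_recommendation(conf, ReviewPolicy(require_human_for_all=False))
    audit_log.append({"confidence": conf, "routing": decision})
print(audit_log)  # [{'confidence': 0.95, 'routing': 'auto_proceed'}, {'confidence': 0.7, 'routing': 'human_review'}]
```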

Want to learn more about how Sixfold works? Get in touch.

Sixfold emphasizes the importance of collaborating with regulators to create technology that benefits everyone.

We at Sixfold believe regulators play a vital role in the marketplace by setting ground rules that protect consumers. As we see it, it’s not the technologist’s place to oppose or confront regulators; it’s to work together to ensure that technology works for everyone. 

Subscribe for More

Loving this read? Subscribe to the Sixfold blog for the latest insights delivered straight to your inbox.

Alex Schmelkin
Co-founder & CEO, Sixfold