Leading the AI way, Responsibly

Discover our security standards and compliance commitments.

We care about your data


Never Stored Or Used For Learning Purposes

Your data is never persisted at the LLM level, and never used for future reference or learning by the model.


Isolated And Separated From Others

Each customer's data and resources are secluded in a single-tenant environment, separate from other users.


Locked Up Tight For Ultimate Privacy

Strong encryption protocols are employed to safeguard all customer data.

We are certified secure

Reports may be provided upon request

SOC 2 Type II
HIPAA Compliant

We have been compliant since day one

Full Data Lineage Provided

Sixfold is engineered to offer complete visibility into the origins of data used by our system, detailing its processing journey and utilization within our model.

Transparent Model Training

We meticulously document the development of our AI model, including its training methodologies, the algorithms employed, and the settings adjusted throughout its training phase.

Traceable And Explainable Sources

Our platform ensures all conclusions can be verified, incorporating an additional layer in the review process to confirm that summaries are reasonable, relevant, and compliant with guidelines.

Designed To Meet Regulations

In response to emerging regulations that demand that AI systems be both transparent and explainable, our product's foundational design embodies these principles.

Human Oversight Guaranteed

Our team actively monitors the platform's output, ready to investigate and rectify any anomalies, ensuring our commitment to reliability and accuracy.

We are different than off-the-shelf AI

Sixfold stands out from general-purpose AI solutions by ensuring your data and information remain secure and confidential within your organization.

Discover all the ways we differ on safety measures:

Product Comparison

| Capability | Sixfold | Off-the-shelf AI tools |
| --- | --- | --- |
| Customers have dedicated single-tenant environments | Yes | No |
| Customers maintain ownership and control over their data | Yes | No |
| Data trains client-specific models, not generic LLMs | Yes | No |
| Designed to meet industry-specific privacy laws and regulations | Yes | No |
| In-depth transparency into the algorithms used | Yes | No |
| Custom data-retention rules to match your data governance | Yes | No |

We develop relevant narratives

These days, my professional life is dedicated to one focused part of the global business landscape: the untamed frontier where cutting-edge AI meets insurance. 

I have conversations with insurers around the world about where it’s all going and how AI will work under new global regulations. And one thing never ceases to amaze me: how often I end up addressing the same misconceptions. 

Some confusion is understandable (if not inevitable) considering the speed with which these technologies are evolving, the hype from those suddenly wanting a piece of the action, and some fear-mongering from an old guard seeking to maintain the status quo. So, I thought I’d take a moment to clear the air and address six all-too-common myths about AI in insurance.

Myth 1: You’re not allowed to use AI in insurance

Yes, there’s a patchwork of emerging AI regulations—and, yes, in many cases they do zero in specifically on insurance—but they do not ban its use. From my perspective, they do just the opposite: They set ground rules, which frees carriers to invest in innovation without fear they are developing in the wrong direction and will be forced into a hard pivot down the line.

Sixfold has actually seen substantial customer growth since the major AI regulations in Europe and elsewhere were announced. So, let’s put this all-too-prevalent misconception to bed once and for all: There are no rules prohibiting you from implementing AI in your insurance processes.

Myth 2: AI solutions can’t secure customer data

As stated above, there are no blanket prohibitions on using customer data in AI systems. There are, however, strict rules dictating how data—particularly PII and PHI—must be managed and secured. These guidelines aren’t anything radically new to developers with experience in highly regulated industries.

Security-first data processes have been the norm since long before LLMs went mainstream. These protocols protect crucial personal data in applications that individuals and businesses use every day without issue (digital patient portals, browser-based personal banking, and market trading apps, just to name a few). These same measures can be seamlessly extended into AI-based solutions.

Myth 3: “My proprietary data will train other companies’ models”

No carrier would ever allow its proprietary data to train models used by competitors. Fortunately, implementing an LLM-powered solution does not mean giving up control of your data—at least with the right approach. 

A responsible AI vendor helps their clients build AI solutions trained on their unique data for their exclusive use, as opposed to a generic insurance-focused LLM to be used by all comers. This also means allowing companies to maintain full control over their submissions within their environment so that when, for example, a case is deleted, all associated artifacts and data are removed across all databases.

At Sixfold, we train our base models on public and synthetic (AKA, “not customer”) data. We then copy these base models into dedicated environments for our customers and all subsequent training and tuning happens in the dedicated environments. Customer guidelines and data never leave the dedicated environment and never make it back to the base models.
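Sixfold hasn’t published its implementation, but the isolation model described above can be sketched in simplified form. Everything in this snippet (the names, the dictionary-as-model) is hypothetical and purely illustrative:

```python
from dataclasses import dataclass, field

# Base model trained only on public and synthetic data (never customer data).
BASE_MODEL = {"weights": "trained on public + synthetic data only"}

@dataclass
class TenantEnvironment:
    """A single-tenant environment holding a private copy of the base model."""
    tenant_id: str
    model: dict = field(default_factory=dict)
    training_data: list = field(default_factory=list)

def provision_tenant(tenant_id: str) -> TenantEnvironment:
    # Each customer receives an isolated copy of the base model.
    return TenantEnvironment(tenant_id=tenant_id, model=dict(BASE_MODEL))

def fine_tune(env: TenantEnvironment, customer_data: list) -> None:
    # All subsequent training and tuning happens inside the tenant's own
    # environment; nothing flows back to BASE_MODEL.
    env.training_data.extend(customer_data)
    env.model["weights"] = f"base + tuned on {len(env.training_data)} tenant records"

env = provision_tenant("carrier-a")
fine_tune(env, ["underwriting-guidelines.pdf"])
# The shared base model is untouched by tenant-level training:
assert BASE_MODEL["weights"] == "trained on public + synthetic data only"
```

The key design point is the copy at provisioning time: tenant training mutates only the tenant’s copy, so customer guidelines and data can never reach the shared base model.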

Let’s kill this one: Yes, you can use AI and still maintain control of your data.

Myth 4: There’s no way to prevent LLM hallucinations

We’ve all seen the surreal AI-generated images lurching up from the depths of the uncanny valley—hands with too many fingers, physiology-defying facial expressions, body parts & objects melded together seemingly at random. Surely, we can’t use that technology for consequential areas like insurance. But I’m here to tell you that with the proper precautions and infrastructure, the impact of hallucinations can be greatly minimized, if not eliminated.

Mitigation is achieved through a variety of tactics, such as using models to auto-review generated content, incorporating user feedback to identify and correct hallucinations, and conducting manual reviews that compare sample outputs against ground-truth sets to ensure quality.
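As a toy illustration of that last tactic, comparing sample outputs against a vetted ground-truth set, here is a simplified sketch. A real pipeline would use semantic comparison rather than keyword overlap, and all names and thresholds here are hypothetical:

```python
def review_against_ground_truth(outputs, ground_truth, min_overlap=0.8):
    """Flag generated summaries whose expected key facts are missing.

    outputs: {case_id: generated summary text}
    ground_truth: {case_id: set of expected keywords (lowercase)}
    Returns the case ids that should be routed to manual review.
    """
    flagged = []
    for case_id, summary in outputs.items():
        expected = ground_truth.get(case_id, set())
        found = expected & set(summary.lower().split())
        coverage = len(found) / len(expected) if expected else 1.0
        if coverage < min_overlap:
            flagged.append(case_id)  # possible hallucination: send to a human
    return flagged

outputs = {"case-1": "Applicant operates two bakeries in Ohio",
           "case-2": "Applicant operates an asbestos abatement firm"}
ground_truth = {"case-1": {"bakeries", "ohio"},
                "case-2": {"roofing", "texas"}}
print(review_against_ground_truth(outputs, ground_truth))  # prints ['case-2']
```

The point of the sketch is the workflow, not the metric: generated output is checked against an independent reference, and anything that diverges is escalated rather than shipped.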

Myth 5: AIs run autonomously without human oversight

Even if you never watched The Terminator, The Matrix, 2001: A Space Odyssey, or any other movie about human-usurping tech, it’d be reasonable to have some reservations about scaled automation. There’s a lot of fearful talk out there about humans ceding control in important areas to un-feeling machines. However, that’s not where we’re at, nor is it how I see these technologies developing. 

Let’s break this one down.

AI is a fantastic and transformative technology, but even I—the number one cheerleader for AI-powered insurance—agree we shouldn’t leave technology alone to make consequential decisions like who gets approved for insurance and at what price. But even if I didn’t feel this way, insurtechs are obliged to comply with new regulations (e.g., the EU AI Act and rules from the California Department of Insurance) that tilt toward avoiding fully automated underwriting and require, at the very least, that human overseers can audit and review decisions.

When it comes to your customers’ experience, AI opens the door to more human engagement, not less. In my view, AI will free underwriters from banal, repetitive data work (which machines handle better anyway) so that they can apply uniquely human skills in specialized or complex use cases they previously wouldn’t have had the bandwidth to address.

Myth 6: Regulations are still being written, it’s better to wait for them to settle

I hear this one a lot. I understand why people arrive at this view. My take? You can’t afford to sit on the sidelines!

To be sure, multiple sets of AI regulations are taking root at different governmental levels, which adds complexity. But here’s a little secret from someone paying very close attention to emerging AI rulesets: there’s very little daylight between them. 

Here’s the thing: regulators worldwide attend the same conferences, engage with the same stakeholders, and read the same studies & whitepapers. And they are all watching what the others are doing. As a result, we’re arriving at a global consensus focused on three main areas: data security, transparency, and auditability.

The global AI regulatory landscape is, like any global regulatory landscape, complex; but I’m here to tell you it’s not nearly as uneven or unmanageable as you may fear.

Furthermore, if an additional major change were to be introduced, it wouldn't suddenly take effect. That’s by design. Think of all the websites and digital applications that launched—and indeed, thrived—in the six-year window between when GDPR was introduced in 2012 to when it became enforceable in 2018. Think of everything that would have been lost if they had waited until GDPR was firmly established before moving forward.

My entire career has been spent in fast-moving cutting-edge technologies. And I can tell you from experience that it’s far better to deploy & iterate than to wait for regulatory Godot to arrive. Jump in and get started!

There are more myths to bust! Watch our compliance webinar

The coming regulations are not as onerous or as unmanageable as you might fear—particularly when you work with the right partners. I hope I’ve helped clear up some misconceptions as you move forward on your AI journey.

Watch it here!

Want to learn more about AI in insurance and compliance? Watch the replay of our compliance webinar featuring a discussion between myself; Jason D. Lapham, Deputy Commissioner for P&C Insurance at the Colorado Division of Insurance; and Matt Kelly, a key member of Debevoise & Plimpton’s Artificial Intelligence Group. We discuss the global regulatory landscape and how AI models should be evaluated regarding compliance, data usage, and privacy.

This article was originally posted on LinkedIn

The European Parliament passed the EU Artificial Intelligence Act in March, a sweeping regulatory framework scheduled to go into effect by mid-2026.

The Act categorizes AI systems into four risk tiers—Unacceptable, High, Limited, and Minimal—based on the sensitivity of the data the systems handle and the criticality of the use case.

It specifically carves out guidelines for AI in insurance, placing “AI systems intended to be used for risk assessment and pricing in [...] life and health insurance” in the “High-risk” tier, which means they must continually satisfy specific conditions around security, transparency, auditability, and human oversight. 

The Act’s passage is reflective of an emerging acknowledgment that AI must be paired with rules guiding its impact and development—and it's far from just an EU thing. Last week, the UK and the US signed a first-of-its-kind bilateral agreement to develop “robust” methods for evaluating the safety of AI tools and the systems that underpin them. 

I fully expect to see additional frameworks following the EU, UK, and US’s lead, particularly within vital sectors such as life insurance. Safety, governance, and transparency are no longer lofty, optional aspirations for AI providers; they are inherent—and increasingly enforceable—facets of the emerging business landscape.

Please be skeptical of your tech vendors

When a carrier integrates a vendor into their tech stack, they’re outsourcing a certain amount of risk management to that vendor. That’s no small responsibility and one we at Sixfold take very seriously. 

We’ve taken on the continuous work of keeping our technology compliant with evolving rules and expectations, so you don’t have to. That message, I’ve found, doesn’t always land immediately. Tech leaders have an inherent “filter” for vendor claims that is appropriate and understandable (I too have years of experience overseeing sprawling enterprise tech stacks and attempting to separate marketing from “the meat”). We expect—indeed, we want—customers to question our claims and check our work. As my co-founder and COO Jane Tran put it during a panel discussion at ITI EU 2024:

“As a carrier, you should be skeptical towards new technology solutions. Our work as a vendor is to make you confident that we have thought about all the risks for you already.” 

Today, confidence-building extends to assuring customers and partners that our platform complies with emerging AI rules around the world—including ones that are still being written.

Balancing AI underwriting and transparency 

When we launched last year, there was lots of buzz about the potential of AI, along with lots of talk about its potential downside. We didn’t need to hire pricey consultants to know that AI regulations would be coming soon. 

Early on, we actively engaged with US regulators to understand their thinking and offer our insights as AI experts. From these conversations, we learned that the chief concerns were the scaling out of bias and the impact of AI hallucinations on consequential decisions.

Sixfold CEO Alex Schmelkin (right) joined a panel discussion about AI in underwriting at the National Association of Insurance Commissioners (NAIC)’s national meeting in Seattle, WA.

With these concerns in mind, we proactively designed our platform with baked-in transparency to mitigate the influence of human bias, while also installing mechanisms to eliminate hallucinations and elevate privacy. Each Sixfold customer operates within an isolated, single-tenant environment, and end-user data is never persisted in the LLM-powered Gen AI layer so information remains protected and secure. We were implementing enterprise AI guardrails before it was cool.

I’ve often found customers and prospects are surprised when I share with them how prepared our platform is for the evolving patchwork of global AI regulations. I’m not sure what their conversations with other companies are like, but I sense the relief when they learn how Sixfold was built from the get-go to comply with the new way of things–even before they were a thing.

The regulatory landscape for AI in insurance is developing quickly, both in the US and globally. Join a discussion with industry experts and learn how to safely and compliantly integrate your next solution. Register for our upcoming webinar here >

It’s impossible in 2024 to be an insurance carrier and not also be an AI company. In this most data-focused of sectors, the winners will be the organizations making the best use of emerging AI tech to amplify capacity and improve accuracy.

This is a challenge and opportunity that Sixfold is uniquely suited to address thanks to our decades of collective industry and technological experience. We know insurers’ needs—intimately—and understand precisely how AI can meet them.

In previous posts, I’ve described how Sixfold uses state-of-the-art AI to ingest data from disparate sources, surface relevant information, and generate plain-language summarizations. Our platform, in effect, provides every underwriter with a virtual team of researchers and analysts who know exactly what’s needed to render a decision. But getting there is the rub. Training AI models (these “virtual teams”) to understand what information is relevant for specific product lines is no small task, but it’s where Sixfold excels.

To use AI is human, to create your own unique AI model is divine

Underwriting guidelines aren’t typically encapsulated in a single machine-readable document. They’re more likely to exist in an unordered web of internal documents and reflected in historic underwriting decisions. Distilling a diffuse cultural understanding into an AI model can take months using a traditional approach, but with Sixfold, it can be accomplished—and accomplished well—in as little as a few days.

Sixfold’s proprietary AI captures carriers’ unique risk appetite by ingesting a wide variety of inputs (be it a multi-hundred-page PDF of guidelines, a loose assortment of spreadsheets, or even past underwriting decisions) and translating it into an AI model that knows what information aligns with a positive risk signal, a negative one, or a disqualifying factor. 

With this virtual wisdom model in place, the platform can identify and ingest relevant data from submitted documents, supplement with information from public and third-party data sources, and generate semantic summaries of factors supporting its conclusions—all adhering to the carriers’ unique underwriting approach.
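To illustrate the idea of mapping surfaced facts to positive, negative, or disqualifying risk signals, here is a deliberately simplified sketch. The phrases, rules, and function names are hypothetical, not Sixfold’s actual model, which is learned from ingested guidelines rather than hand-written rules:

```python
from enum import Enum

class RiskSignal(Enum):
    POSITIVE = "positive"
    NEGATIVE = "negative"
    DISQUALIFYING = "disqualifying"

# Hypothetical appetite rules distilled from a carrier's guidelines.
APPETITE_RULES = {
    "sprinkler system installed": RiskSignal.POSITIVE,
    "prior liability claims": RiskSignal.NEGATIVE,
    "asbestos abatement": RiskSignal.DISQUALIFYING,
}

def classify_findings(findings):
    """Map facts surfaced from submissions to the carrier's risk signals."""
    signals = []
    for finding in findings:
        for phrase, signal in APPETITE_RULES.items():
            if phrase in finding.lower():
                signals.append((finding, signal))
    return signals

findings = ["Sprinkler system installed in 2021",
            "Assistance with obtaining asbestos abatement permits"]
for finding, signal in classify_findings(findings):
    print(signal.value, "-", finding)
```

In practice the "rules" would be a trained model’s learned associations, but the output shape is the same: each surfaced fact arrives in the underwriter-facing dashboard already tagged with its signal and its source.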

Frees human underwriters to do uniquely human tasks

It can take years for a human underwriter to master underwriting guidelines and rules, but that doesn’t mean human underwriters are no longer needed–quite the opposite. By offloading the administrative bulk to AI, underwriters can use their increased capacity to prioritize cases that align with their unique risk appetite. 


Consider a P&C carrier that prefers not to underwrite businesses that work with asbestos. When an application comes in, Sixfold’s platform processes all broker-submitted documents and supplements them with relevant data ingested from public and third-party data sources. If Sixfold were to then surface information about “assistance with obtaining asbestos abatement permits” from the applicant’s company website, it would automatically mark the finding as a negative risk signal (with clear sourcing and a semantic explanation) in the underwriter-facing case dashboard. With Sixfold, underwriters can rapidly discern the applications that are incompatible with their underwriting criteria and quickly focus on cases aligned with their risk appetite.

Sixfold rapidly identifies disqualifying factors and frees underwriters to focus on applications aligned with their criteria.

Automating these previously resource-intensive data processing workflows allows carriers to obliterate traditional limits on underwriting capacity. The question for the industry has rapidly moved from “is automation possible?” to “how quickly can we get it implemented?” Sixfold’s purpose-built platform empowers customers to leapfrog competitors relying on traditional approaches to training AI underwriting models. We get you there faster–this is our superpower.

Users can peek behind the AI and review the positive and negative risk signals the system has been trained to look for.


We see responsibility as an absolute must

What is Responsible AI?

Responsible AI refers to the practice of designing, developing, and deploying artificial intelligence systems in a manner that is ethical, transparent, accountable, and fair. A rule we live by!

What are the benefits of responsible AI in underwriting?

Sixfold is an AI solution purpose-built for insurance underwriting. The platform streamlines the underwriting process, enabling underwriters to focus on decision-making rather than manual tasks. As a result, Sixfold improves the accuracy and transparency of decisions while simultaneously increasing underwriting capacity.

How can AI systems be designed to promote responsible underwriting practices?

Responsible AI is integral to Sixfold's philosophy. Our platform promotes ethical underwriting by:

  1. Preventing bias: It strictly avoids discrimination based on age, gender, ethnicity, or other non-relevant factors.
  2. Ensuring transparency: The system keeps detailed records of its decision-making for audits and reviews, promoting accountability.
  3. Informing users: It clearly explains data usage and the rationale behind insurance decisions, enhancing user trust.
  4. Upholding standards: The AI adheres to all privacy, legal, and ethical guidelines, reflecting Sixfold's commitment to integrity and fairness in underwriting.

What is an example of responsible AI?

Sixfold embodies an AI solution crafted with a deep commitment to responsibility. This dedication translates into a focus on fairness, accountability, transparency, and ethical practices throughout our AI technology.