Leading the AI Way, Responsibly

Discover our security standards and compliance commitments.

We care about your data

Never Stored Or Used For Learning Purposes

Your data is never persisted at the LLM level, and never used for future reference or learning by the model.

Isolated And Separated From Others

Each customer's data and resources are secluded in a single-tenant environment, separate from other users.

Locked Up Tight For Ultimate Privacy

Strong encryption protocols are employed to safeguard all customer data.

We are certified secure

Reports may be provided upon request

SOC 2 Type II
HIPAA Compliant

We have been compliant since day one

Full Data Lineage Provided

Sixfold is engineered to offer complete visibility into the origins of the data our system uses, how that data is processed, and how it is applied within our model.

Transparent Model Training

We meticulously document the development of our AI model, including its training methodologies, the algorithms employed, and the settings adjusted throughout its training phase.

Traceable And Explainable Sources

Our platform ensures all conclusions can be verified, incorporating an additional layer in the review process to confirm that summaries are reasonable, relevant, and compliant with guidelines.

Designed To Meet Regulations

In response to emerging regulations that demand that AI systems be both transparent and explainable, our product's foundational design embodies these principles.

Human Oversight Guaranteed

Our team actively monitors the platform's output, ready to investigate and rectify any anomalies, ensuring our commitment to reliability and accuracy.

We are different from off-the-shelf AI

Sixfold stands out from general-purpose AI solutions by ensuring your data and information remain secure and confidential within your organization.

Discover all the ways we differ on safety measures:

Product Comparison

Sixfold provides all of the following; off-the-shelf AI tools do not:

  • Customers have dedicated single-tenant environments
  • Customers maintain ownership and control over their data
  • Data trains client-specific models, not generic LLMs
  • Designed to meet industry-specific privacy laws and regulations
  • In-depth transparency into the algorithms used
  • Custom data-retention rules to match your data governance

We develop relevant narratives

Last month, the European Parliament passed the EU Artificial Intelligence Act, a sweeping regulatory framework scheduled to go into effect in 2025.

The Act categorizes AI systems into four risk tiers—Unacceptable, High, Limited, and Minimal—based on the sensitivity of the data the systems handle and the criticality of the use case.

It specifically carves out guidelines for AI in insurance, placing “AI systems intended to be used for risk assessment and pricing in [...] life and health insurance” in the “High-risk” tier, which means they must continually satisfy specific conditions around security, transparency, auditability, and human oversight. 

The Act’s passage is reflective of an emerging acknowledgment that AI must be paired with rules guiding its impact and development—and it's far from just an EU thing. Last week, the UK and the US signed a first-of-its-kind bilateral agreement to develop “robust” methods for evaluating the safety of AI tools and the systems that underpin them. 

I fully expect to see additional frameworks following the EU, UK, and US’s lead, particularly within vital sectors such as life insurance. Safety, governance, and transparency are no longer lofty, optional aspirations for AI providers; they are inherent—and increasingly enforceable—facets of the emerging business landscape.

Please be skeptical of your tech vendors

When a carrier integrates a vendor into their tech stack, they’re outsourcing a certain amount of risk management to that vendor. That’s no small responsibility and one we at Sixfold take very seriously. 

We’ve taken on the continuous work of keeping our technology compliant with evolving rules and expectations, so you don’t have to. That message, I’ve found, doesn’t always land immediately. Tech leaders have an inherent “filter” for vendor claims that is appropriate and understandable (I too have years of experience overseeing sprawling enterprise tech stacks and attempting to separate marketing from “the meat”). We expect—indeed, we want—customers to question our claims and check our work. As my co-founder and COO Jane Tran put it during a panel discussion at ITI EU 2024:

“As a carrier, you should be skeptical towards new technology solutions. Our work as a vendor is to make you confident that we have thought about all the risks for you already.” 

Today, confidence-building has extended to assuring customers and partners that our platform complies with emerging AI rules around the world—including ones that are still being written.

Balancing AI underwriting and transparency 

When we launched last year, there was plenty of buzz about AI’s potential, along with plenty of talk about its downsides. We didn’t need to hire pricey consultants to know that AI regulations would be coming soon.

Early on, we actively engaged with US regulators to understand their thinking and offer our insights to them as AI experts. From these conversations, we learned that the chief issue was the scaling out of bias and the impact of AI hallucinations on consequential decisions.

Sixfold CEO Alex Schmelkin (right) joined a panel discussion about AI in underwriting at the National Association of Insurance Commissioners (NAIC)’s national meeting in Seattle, WA.

With these concerns in mind, we proactively designed our platform with baked-in transparency to mitigate the influence of human bias, while also installing mechanisms to eliminate hallucinations and elevate privacy. Each Sixfold customer operates within an isolated, single-tenant environment, and end-user data is never persisted in the LLM-powered Gen AI layer so information remains protected and secure. We were implementing enterprise AI guardrails before it was cool.

I’ve often found customers and prospects are surprised when I share with them how prepared our platform is for the evolving patchwork of global AI regulations. I’m not sure what their conversations with other companies are like, but I sense the relief when they learn how Sixfold was built from the get-go to comply with the new way of things–even before they were a thing.

It’s impossible in 2024 to be an insurance carrier and not also be an AI company. In this most data-focused of sectors, the winners will be the organizations making the best use of emerging AI tech to amplify capacity and improve accuracy.

This is a challenge and opportunity that Sixfold is uniquely suited to address thanks to our decades of collective industry and technological experience. We know insurers’ needs—intimately—and understand precisely how AI can meet them.

In previous posts, I’ve described how Sixfold uses state-of-the-art AI to ingest data from disparate sources, surface relevant information, and generate plain-language summarizations. Our platform, in effect, provides every underwriter with a virtual team of researchers and analysts who know exactly what’s needed to render a decision. But getting there is the rub. Training AI models (these “virtual teams”) to understand what information is relevant for specific product lines is no small task, but it’s where Sixfold excels.

To use AI is human, to create your own unique AI model is divine

Underwriting guidelines aren’t typically encapsulated in a single machine-readable document. They’re more likely to exist in an unordered web of internal documents and reflected in historic underwriting decisions. Distilling a diffuse cultural understanding into an AI model can take months using a traditional approach, but with Sixfold, it can be accomplished—and accomplished well—in as little as a few days.

Sixfold’s proprietary AI captures carriers’ unique risk appetite by ingesting a wide variety of inputs (be it a multi-hundred-page PDF of guidelines, a loose assortment of spreadsheets, or even past underwriting decisions) and translating them into an AI model that knows what information aligns with a positive risk signal, a negative one, or a disqualifying factor.

With this virtual wisdom model in place, the platform can identify and ingest relevant data from submitted documents, supplement with information from public and third-party data sources, and generate semantic summaries of factors supporting its conclusions—all adhering to the carriers’ unique underwriting approach.
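
To make this concrete, here is a minimal sketch, in Python, of how such a risk-signal model might be represented. Everything in it is a hypothetical illustration rather than Sixfold's actual implementation: the names (RiskRule, Finding, classify_findings) are invented, and the matches callback stands in for the semantic matching a real platform would perform.

from dataclasses import dataclass
from enum import Enum
from typing import Callable

class Signal(Enum):
    POSITIVE = "positive"              # aligns with the carrier's risk appetite
    NEGATIVE = "negative"              # weighs against the application
    DISQUALIFYING = "disqualifying"    # rules the application out entirely

@dataclass
class RiskRule:
    # One learned rule distilled from guidelines or past underwriting decisions.
    description: str                   # e.g. "business performs asbestos work"
    signal: Signal

@dataclass
class Finding:
    # A piece of evidence surfaced from a submission or an external source.
    text: str                          # the extracted passage
    source: str                        # where it came from, for traceability
    signal: Signal

def classify_findings(passages: list[tuple[str, str]],
                      rules: list[RiskRule],
                      matches: Callable[[str, str], bool]) -> list[Finding]:
    # Tag each extracted passage with the signal of every rule it matches.
    # `matches` stands in for the semantic-matching step (embeddings, an LLM
    # call, etc.) that a real platform would perform.
    findings = []
    for text, source in passages:
        for rule in rules:
            if matches(text, rule.description):
                findings.append(Finding(text, source, rule.signal))
    return findings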

Frees human underwriters to do uniquely human tasks

It can take years for a human underwriter to master underwriting guidelines and rules, but that doesn’t mean human underwriters are no longer needed–quite the opposite. By offloading the administrative bulk to AI, underwriters can use their increased capacity to prioritize cases that align with their unique risk appetite.

Consider a P&C carrier that prefers not to underwrite businesses that work with asbestos. When an application comes in, Sixfold’s platform processes all broker-submitted documents and supplements them with relevant data ingested from public and third-party data sources. If Sixfold were to then surface information about “assistance with obtaining asbestos abatement permits” from the applicant’s company website, it would automatically mark the finding as a negative risk signal (with clear sourcing and semantic explanation) in the underwriter-facing case dashboard. With Sixfold, underwriters can rapidly discern applications that are incompatible with their underwriting criteria and focus on cases aligned with their risk appetite.
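
Continuing the hypothetical sketch above, the asbestos scenario might flow through it like this (the keyword matcher here is a deliberately crude stand-in for real semantic matching):

rules = [RiskRule("business performs or assists with asbestos work",
                  Signal.NEGATIVE)]
passages = [("Assistance with obtaining asbestos abatement permits",
             "applicant company website")]

findings = classify_findings(passages, rules,
                             matches=lambda text, desc: "asbestos" in text.lower())

for f in findings:
    # Surfaced to the underwriter-facing dashboard with clear sourcing:
    print(f"{f.signal.value}: {f.text} (source: {f.source})")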

Sixfold rapidly identifies disqualifying factors and frees underwriters to focus on applications aligned with their criteria.

Automating these previously resource-intensive data processing workflows allows carriers to obliterate traditional limits on underwriting capacity. The question for the industry has rapidly moved from “is automation possible?” to “how quickly can we get it implemented?” Sixfold’s purpose-built platform empowers customers to leapfrog competitors relying on traditional approaches to training AI underwriting models. We get you there faster–this is our superpower.

Users can peek behind the AI and review the positive and negative risk signals the system has been trained to look for.


AI is the defining technology of this decade. After years of unfulfilled promises from Hollywood and comic books, the science fiction AI future we’ve long been promised has finally become business reality. 

We can already see AI following the familiar path that past disruptive technologies took through the marketplace.

  • Stage one: it’s embraced by early adopters before the general public even knows it exists;
  • Stage two: cutting-edge startups tap these technologies to overcome long-standing business challenges; and then
  • Stage three: regulators draft rules to guide its usage and mitigate negative impacts.

There should be no doubt that AI-powered insurtech has accelerated through the first two stages in near record time and is now entering stage three.

AI underwriting solutions, meet the rule-makers

The Colorado Department of Regulatory Agencies recently adopted regulations on AI applications and governance in life insurance. To be clear, Colorado isn’t an outlier, it’s a pioneer. Other states are following suit and crafting their own AI regulations, with federal-level AI rules beginning to take shape as well.

The early days of the regulatory phase can be tricky for businesses. Insurers are excited to adopt advanced AI into their underwriting tech stack, but wary of investing in platforms knowing that future rules may impact those investments. 

We at Sixfold are very cognizant of this dichotomy: the ambition to innovate ahead, combined with the trepidation of going too far down the wrong path. That’s why we designed our platform in anticipation of these emerging rules.

We’ve met with state-level regulators on numerous occasions over the past year to understand their concerns and thought processes. These engagements have been invaluable for all parties as their input played a major role in guiding our platform’s development, while our technical insights influenced the formation of these emerging rules.

Sixfold CEO Alex Schmelkin (right) joined a panel discussion about AI in underwriting at the National Association of Insurance Commissioners (NAIC)’s Summer 2023 national meeting in Seattle, WA.

To simplify a very complex topic: regulators are concerned with bias in algorithms. There’s a tacit understanding that humans have inherent biases, which may be reflected in algorithms and applied at scale.

Most regulators we’ve engaged with agree that these very legitimate concerns about bias aren’t a reason to prohibit or even severely restrain AI, which brings enormous positives like accelerated underwriting cycles, reduced overhead, and increased objectivity–all of which ultimately benefit consumers. However, for AI to work for everyone, it must be partnered with transparency, traceability, and privacy. This is a message we at Sixfold have taken to heart.

In AI, it’s all about transparency

The past decade saw a plethora of algorithmic underwriting solutions with varying degrees of capabilities. Too often, these tools are “black boxes” that leave underwriters, brokers, and carriers unable to explain how decisions were arrived at. Opaque decision-making no longer meets the expectations of today’s consumers—or of regulators. That’s why we designed Sixfold with transparency at its core.

Customers accept automation as part of the modern digital landscape, but that acceptance comes with expectations. Our platform automatically surfaces relevant data points impacting its recommendations and presents them to underwriters via AI-generated plain-language summarizations, while carefully controlling for “hallucinations.” It provides full traceability of all inputs, as well as a full lineage of changes to the UW model, so carriers can explain why results diverged over time. These baked-in layers of transparency allow carriers–and the regulators overseeing them–to identify and mitigate incidental biases seeping into UW models.
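
As a rough illustration of what that kind of traceability could look like as data, consider the sketch below. The record shapes (ModelChange, DecisionTrace) are hypothetical and not Sixfold's actual schema.

from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ModelChange:
    # One entry in the underwriting model's change lineage.
    version: str
    description: str          # e.g. "added negative signal for asbestos work"
    timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

@dataclass
class DecisionTrace:
    # Ties a recommendation to the exact inputs and model version behind it.
    case_id: str
    model_version: str
    input_sources: list[str]       # every document and data source consulted
    surfaced_findings: list[str]   # the signals shown to the underwriter
    summary: str                   # the plain-language explanation

# With a trail of DecisionTrace records plus the ModelChange lineage, a carrier
# can explain why two similar cases decided months apart diverged: diff the
# model versions and compare the inputs each decision actually used.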

Beyond prioritizing transparency, we’ve designed a platform that elevates data security and privacy. All Sixfold customers operate within isolated, single-tenant environments, and end-user data is never persisted in the LLM-powered Gen AI layer so information remains protected and secure.

Even with platform features built in anticipation of external regulations, we understand that some internal compliance teams are cautious about integrating gen AI, a relatively new concept, into their tech stack. To help your internal stakeholders get there, Sixfold can be implemented with robust internal auditability and appropriate levels of human-in-the-loop review to ensure that every team is comfortable on the new technological frontier.

Want to learn more about how Sixfold works? Get in touch.

Sixfold emphasizes the importance of collaborating with regulators to create technology that benefits everyone.

We at Sixfold believe regulators play a vital role in the marketplace by setting ground rules that protect consumers. As we see it, it’s not the technologist’s place to oppose or confront regulators; it’s to work together to ensure that technology works for everyone. 

We see responsibility as an absolute must

What is Responsible AI?

Responsible AI refers to the practice of designing, developing, and deploying artificial intelligence systems in a manner that is ethical, transparent, accountable, and fair. A rule we live by!

What are the benefits of responsible AI in underwriting?

Sixfold is an AI solution purpose-built for insurance underwriting. The platform streamlines the underwriting process, enabling underwriters to focus on decision-making rather than manual tasks. As a result, Sixfold improves the accuracy and transparency of decisions while simultaneously increasing underwriting capacity.

How can AI systems be designed to promote responsible underwriting practices?

Responsible AI is integral to Sixfold's philosophy. Our platform ensures ethical underwriting by:

  1. Preventing bias: It strictly avoids discrimination based on age, gender, ethnicity, or other non-relevant factors.
  2. Ensuring transparency: The system keeps detailed records of its decision-making for audits and reviews, promoting accountability.
  3. Informing users: It clearly explains data usage and the rationale behind insurance decisions, enhancing user trust.
  4. Upholding standards: The AI adheres to all privacy, legal, and ethical guidelines, reflecting Sixfold's commitment to integrity and fairness in underwriting.

What is an example of responsible AI?

Sixfold embodies an AI solution crafted with a deep commitment to responsibility. This dedication translates into a focus on fairness, accountability, transparency, and ethical practices throughout our AI technology.