
AI in Insurance Is Officially “High Risk” in the EU. Now What?

The new EU AI Act defines AI in insurance as “high risk.” Here’s what that means and how to remain compliant in Europe and around the world.


New York City, January 9, 2024

Sixfold, the Generative AI exclusively built for insurance underwriters, today announced that the company has joined Guidewire’s Insurtech Vanguards program, an initiative led by property and casualty (P&C) cloud platform provider, Guidewire (NYSE: GWRE), to help insurers learn about the newest insurtechs and how to best leverage them.

Jane Tran, Co-founder & COO at Sixfold, expressed, “Guidewire stands as the industry's foremost policy vault, embodying the definitive source of truth. Collaborating with Guidewire empowers us to advance our enterprise-grade generative AI solutions tailored specifically for underwriters.”

Insurtech Vanguards is a community of select startups and technology providers that are bringing novel solutions to the P&C industry. As part of the program, Guidewire provides strategic guidance to and advocates for the participating insurtechs, while connecting them with Guidewire’s P&C customers. 

Sixfold seamlessly handles the ingestion, routing, classification, and summarization of submissions, and provides trustworthy, data-driven policy recommendations to underwriters in a user-friendly format.

About Sixfold

Sixfold brings the power of generative AI to the underwriting process. The platform significantly reduces manual workload for underwriters and amplifies confidence in every underwriting decision with improved accuracy, transparency, and capacity.

We recently conducted a Q&A with Drew Das, AI Engineer at Sixfold. Our discussion covered various topics, including the nuances of developing generative AI tools for marketing versus the insurance sector, advice for those aspiring to enter the field, and tools he enjoys using.

Let's start with your background. How did you get into the world of AI?

I've always been in tech. I started in web development during high school, launching a business creating WordPress sites. I studied at UC Davis, where I also worked in the college IT department. I've spent nearly 10 years working in Silicon Valley, primarily in web development.

My first introduction to conversational interfaces was in 2017 with a company focused on chatbots. This early experience involved developing chatbots for job applications, targeting sectors like truck driving and foreign workers. We aimed to simplify the application process for those uncomfortable with traditional job sites.

I've also worked in cybersecurity, food delivery, and at Jasper, a generative AI startup. At Jasper, I led content and was involved in launching Jasper Chat, a chatbot product developed in under four days. This product became a primary feature for users. I also worked on an AI-based text editor, a first-of-its-kind product that helped introduce many people to generative AI.

Could you describe in your own words what an AI engineer does? Specifically, what are your daily tasks and responsibilities?

The role of an AI engineer is still evolving, as AI is a relatively new field. Previously, we had machine learning engineers and software engineers with distinct functions. Machine learning engineers focused on training computer systems for making predictions using statistical methods. Software engineers, on the other hand, worked on translating business logic into application code, often using APIs provided by the machine learning team.

An AI engineer's role is broader than that of a machine learning engineer. It involves working with various AI tools, like ML models, vector databases, and advanced techniques. An essential skill for AI engineers is prompt engineering and understanding how these systems integrate. The primary objective is to combine these systems to create software that operates on top of data, rather than just converting business logic into code. This involves building a layer above data designed to emulate human behavior.

For example, in text matching, the goal is to accelerate tasks typically done by humans, such as researching and compiling data to make predictions. AI engineers strive to create systems that can perform these tasks as efficiently as humans.

Could you give some insights into what's on a typical to-do list for you?

Currently, my main focus is on improving text matching accuracy in our system. My task is to implement Retrieval-Augmented Generation (RAG) techniques, which include hybrid search, re-ranking, and new methods of data embedding. These techniques aim to improve our text matching accuracy. This task involves a lot of experimentation, implementing different systems, and optimizing them for better performance.
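The techniques Drew mentions can be sketched in miniature. The toy example below blends a lexical score with a vector-similarity score and keeps the top results, a stand-in for hybrid search with re-ranking. The corpus, the three-dimensional "embeddings," and the weighting are all invented for illustration; none of this reflects Sixfold's actual implementation.

```python
from math import sqrt

def keyword_score(query, doc):
    """Fraction of query terms that appear in the document (toy lexical score)."""
    terms = query.lower().split()
    return sum(t in doc.lower() for t in terms) / len(terms)

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (sqrt(sum(x * x for x in a)) * sqrt(sum(x * x for x in b)))

def hybrid_search(query, query_vec, docs, alpha=0.5, top_k=2):
    """Blend lexical and vector scores, then keep the top-k docs (re-ranking step)."""
    scored = []
    for text, vec in docs:
        score = alpha * keyword_score(query, text) + (1 - alpha) * cosine(query_vec, vec)
        scored.append((score, text))
    scored.sort(reverse=True)
    return [text for _, text in scored[:top_k]]

# Toy corpus with made-up 3-dimensional "embeddings".
docs = [
    ("policy covers flood damage", [0.9, 0.1, 0.0]),
    ("claims process for auto accidents", [0.1, 0.8, 0.1]),
    ("flood exclusions in coastal regions", [0.8, 0.2, 0.1]),
]
results = hybrid_search("flood coverage", [0.85, 0.1, 0.05], docs)
```

In production, the lexical half would typically be BM25 and the vectors would come from a learned embedding model, but the blending-then-ranking shape is the same.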

How would you explain, in simple terms, the difference between a purpose-built AI tool and a generic AI tool?

General AI tools, like GPT-3, are versatile. They can adapt to various tasks, such as classification or auto-completion. The interface is straightforward: input text, get text out. However, when you want the AI to use a specific language or guide a user in a certain way, prompt engineering becomes essential. People customize these general systems with specific instructions, but this can be cumbersome and has its limitations.
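The "customize a general system with specific instructions" pattern can be shown with a small sketch. The template function and the classification task below are hypothetical, chosen only to illustrate how instructions plus a few examples steer a general-purpose model:

```python
def build_prompt(task_instructions, examples, user_input):
    """Assemble a prompt that steers a general-purpose model.

    task_instructions: how the model should behave (tone, format, guardrails).
    examples: (input, output) pairs demonstrating the desired behavior (few-shot).
    user_input: the actual text to process.
    """
    lines = [task_instructions, ""]
    for sample_in, sample_out in examples:
        lines.append(f"Input: {sample_in}")
        lines.append(f"Output: {sample_out}")
        lines.append("")
    lines.append(f"Input: {user_input}")
    lines.append("Output:")
    return "\n".join(lines)

prompt = build_prompt(
    "Classify each insurance submission as AUTO, PROPERTY, or LIFE. Reply with one word.",
    [("Collision claim for a 2019 sedan", "AUTO"),
     ("Term policy application, age 34, non-smoker", "LIFE")],
    "Warehouse fire damage assessment",
)
```

This is exactly the "cumbersome" part Drew describes: every behavior you want has to be spelled out in the prompt, which is why fine-tuning and retrieval become attractive beyond a certain point.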

Purpose-built AI like Sixfold comes into play when you start fine-tuning the systems. This involves feeding them examples of data to achieve a desired tone, style, or structure. Additionally, building a knowledge retrieval system on your proprietary data can make your AI unique, providing access to information that other systems don't have. Customizing AI systems is challenging. It's one thing to create a demo or a cool AI video, but running these systems in a production environment, especially in enterprise-grade products, requires them to respond quickly, like within two seconds. So, a lot of work goes into customizing an AI system for practical, real-world applications.

What’s your take on transparency in AI, particularly with large language models? 

I believe transparency is crucial in AI, both within and outside organizations. Particularly, I'm referring to how AI tools are built and used. Although we're not yet at a point where AI systems are making all decisions autonomously, the trend is moving towards AI taking over more complex tasks. Take self-driving cars as an example: in the future, human-driven cars might be considered less trustworthy or even costlier to insure compared to AI-driven ones, potentially limiting human driving to specific scenarios like racetracks.

In such cases, it becomes vital for the AI systems controlling these cars to be transparent. We need to understand their training data, be aware of their limitations, and identify potential risks, especially since these systems significantly impact society. This transparency is essential because AI systems are not deterministic; their output heavily depends on the quality of the input data. Understanding what an AI system has been trained on gives us a clearer picture of its capabilities and limitations, which is essential as these systems become more integral to our daily lives. That's how I view transparency in AI.

What excites you the most about Sixfold?

At my last role, my focus was on solving marketing problems, specifically automating marketing systems and content generation. Marketing is inherently subjective, which makes it challenging to capture the right flavor of content. One dilemma in this field is the risk of producing generic or misleading content, which can be detrimental to society. 

The challenges in my current role are more complex. It's about understanding and utilizing the reasoning capabilities of the model, which involves deducing insights from a given set of data. This differs from just altering the tone of content for marketing purposes. 

For example, in marketing, success is often measured by a feedback loop, like how much generated content is retained in a final document, indicating user preference. However, in my current role, the metrics are different. We have a known 'ground truth' or a specific outcome we aim to achieve, and the goal is to develop a system that consistently aligns with this known outcome. This requires a higher level of accuracy and a different approach compared to marketing, where the outcomes are more subjective and based on individual perception.

How do you stay updated with the advancements in AI and large language models? Are there any newsletters or blogs you follow?

It's one of the biggest challenges in this field! You might spend months developing something innovative, only to find it's made obsolete by a new development the following week.

It can be frustrating, but it's also exhilarating. You learn to not get too attached to your work and treat it as part of a learning journey. The field is dynamic; each week brings something new that could either render your current work obsolete or introduce an exciting new method.

However, it's crucial for companies to remain disciplined and not get constantly sidetracked by every new innovation. Deciding whether to try out new technologies and determining their integration importance requires careful consideration.

As for staying informed, I don't stick to specific newsletters or blogs. I prefer a more hands-on approach, as I'm not from an academic background. I find YouTube content especially useful for seeing how new things are implemented. It's more about application for me – I need to see something in action to understand it. So, I explore various sources like Hacker News and YouTube, or anything relevant I come across on a particular topic.

Do you share your own work or insights publicly?

Actually, I don't engage much in open source work or sharing my projects publicly. After work, I prefer to disconnect and focus on other interests, like learning guitar. It's about maintaining a healthy balance. I'm fortunate that my work aligns closely with my interests. Over the past three years, I've had the opportunity to explore new techniques and projects that I've been curious about, right in my professional environment. For instance, this week I'm working on an advanced retrieval system, something I've always wanted to try.

Considering young engineers or students interested in entering this field, do you have any advice or recommendations for them?

My main advice is that simply following tutorials and reading books isn't enough to truly learn. The key is to build something. You need to apply what you've learned, either in a professional setting or through a personal project. In technology, and especially in AI, hands-on experience is essential. The possibilities with AI are vast. Building on top of existing AI systems is surprisingly accessible.

For instance, I'm currently working on an AI-based pet project. It's essentially a photo translator using a Raspberry Pi device, which has a computer, display, and camera. The idea is that you take a picture of something, and the system uses GPT for vision to describe what it sees. Then, using that description, it generates a DALLE 3 image related to the object and displays it on the screen. I call it the 'Unreal Camera.' For example, if you take a picture of a dog, it creates an artistic interpretation of that dog. It essentially presents a graphic version of whatever you photograph.
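Drew's actual build isn't shown here, but the two-step flow he describes (vision model describes the photo, image model reinterprets it) can be sketched with the model calls injected as functions, so the pipeline itself runs offline. The stand-in functions and prompt wording are assumptions for illustration:

```python
def unreal_camera(photo_bytes, describe, generate):
    """Two-step pipeline: describe the photo, then render an artistic reinterpretation.

    describe: callable(photo_bytes) -> str, e.g. a GPT-4-with-vision call.
    generate: callable(prompt) -> image, e.g. a DALL-E 3 call.
    Both are injected so the pipeline stays testable without network access.
    """
    caption = describe(photo_bytes)
    prompt = f"An artistic, surreal interpretation of: {caption}"
    return generate(prompt)

# Offline stand-ins for the real model calls.
fake_describe = lambda _photo: "a golden retriever on a beach"
fake_generate = lambda prompt: f"<image rendered from: {prompt}>"

art = unreal_camera(b"...jpeg bytes...", fake_describe, fake_generate)
```

On the real device, the injected callables would wrap API calls and the result would be pushed to the Raspberry Pi's display.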

This kind of project would have been impossible to undertake alone a few years ago; it would have required a whole team and several years. Now, thanks to the power of AI, I was able to build it in just two days. So, my recommendation for anyone entering this field is to start building something practical and useful. That's the best way to learn and understand the potential of AI. 

Do you have a favorite AI tool at the moment?

I primarily use ChatGPT and a fascinating app called Perplexity, which is great for research. Perplexity is unique in how it can visit different websites and compile data. Another tool I frequently use is GitHub Copilot for coding. It's incredibly helpful, as it assists in writing code. These tools, especially Copilot, have been instrumental in my work. 

Thanks for your time, Drew! We’ll let you get back to your AI tasks now.

If you’d like an opportunity to work at Sixfold, check out our vacancies.

Insurance underwriting isn’t for the weak. It’s a dizzyingly complex undertaking that requires connecting data points across disparate sources to support consequential decisions—all while meeting modern expectations for speed, accuracy, and compliance. 

The role has grown exponentially more challenging as technology has become more ubiquitous, stretching our information-rich digital trails ever longer.

Over the past two decades, various vendors have developed Intelligent Document Processing (IDP) tools to manage all this information by automating the extraction, ingestion, and structuring of data at scale. These tools have been widely adopted by carriers, but fall short of today's mounting data challenges; in fact, they're exacerbating them.

McKinsey estimates that underwriters spend 30-to-40% of their time on rote administrative tasks “such as rekeying data or manually executing analysis.” These were the types of tasks that IDPs were supposed to automate and make more efficient—but that’s not what’s happening. In a recent Accenture survey, 64% of underwriters reported that today’s tech either makes no difference or increases their workload. 

Automated data extraction was, until recently, the only way to tame the information deluge. New technologies have paved the way for a better, more seamless approach. Emerging LLM-powered AI represents a new paradigm that eliminates extraction chokepoints, reduces the burden on overtaxed underwriters, and accelerates decisioning.

Generative AI in insurance changes everything

Traditional IDPs were designed to exhaustively extract every piece of data–no matter how irrelevant or repetitive—so that it can be structured into a centralized database and passed along to overloaded human underwriters to query and scrutinize. The more complex and document-laden a process (e.g., loss run reports with intricate hierarchical ordering of nested sets), the more onerous the inefficiencies and the more work tossed onto underwriters' plates.

Insurance solutions touting the “most efficient” or “fastest” data extraction are about as meaningful in 2023 as boasting the “highest print-quality” fax machine. Comprehensive extraction is a relic of a fading technological paradigm. The industry is rightly turning to next-gen AI technologies to free underwriters from repetitive data work (which is better handled by machines anyway) so they can focus on building value and closing deals. 

Sixfold uses state-of-the-art LLMs to synthesize information across multiple sources and generate summaries in plain language for underwriter review. No processing power is misspent on redundant extraction; underwriters’ valuable time is no longer wasted sorting through virtual buckets of well-structured (but context-free) data. 

When processing a life insurance application, traditional IDPs will, for example, extract each mention of the applicant having diabetes, even if it appears across dozens of documents. Unlike AI-powered platforms, IDPs are incapable of discerning meaning from data—underwriters are still required to connect the dots. Sixfold skips the needless chronicling of data points and independently generates clear summations of relevant throughlines (e.g., “The applicant was diagnosed with type 2 diabetes 12 years ago and it’s being properly managed with insulin and diet”), thus freeing underwriters to forgo the data work and render decisions faster.

Sixfold brings the power of advanced AI to insurance underwriting

In effect, Sixfold provides underwriters with a virtual army of researchers, data processors, and writers who know precisely what information is needed to render decisions quickly (and just as importantly, what isn’t). 

It’s already having a huge impact. With Sixfold, companies are accelerating submission-to-quote cycles by as much as 43%, clearing backlogged queues, and massively increasing GWP per underwriter. 

Even better? It’s far easier to get up and running with Sixfold than a traditional IDP. These older systems required huge investments in time and resources to train their ML models on an organization’s unique needs. Sixfold, on the other hand, can be easily—and quickly—configured to match the appetite and needs of specific carriers and programs. It’s more-or-less ready to go out-of-the-box (or out of the virtual SaaS box).

AI is reshaping insurance before our eyes

The marketplace is littered with the remnants of corporate behemoths that misread the technological tea leaves—and in today’s world, giants fall fast. Consider how, in just one decade, Yahoo slid from the world’s most popular website to near-irrelevance. Or how Kodak only took eight years to complete its journey from top-five global brand to ejection from the Dow Jones. Or how, in a mere six years, Blockbuster leaped from its 9,000-plus-location peak into bankruptcy. 

The downfall of huge corporations highlights the consequences of misjudging technological trends in today's marketplace.

The takeaway: Past performance will not save you. New technological paradigms can seemingly come out of nowhere to reward leaders who had an eye on the future—and expose those who didn’t.

I’m confident that this year will be remembered as an inflection point for generative AI. The way insurance is handled moving forward will be a radical departure from the past. There’s now a clear industry-wide divide between those pursuing iteration and those seeking transformation. Which side do you want to be on?

We recently sat down for a quick Q&A with Stewart Hu, AI scientist at Sixfold. Our conversation ranged from his career journey to how he stays current in the field, as well as the tasks on his daily agenda.

Let’s get this started! In your own words, what does your job as an AI scientist involve?

AI scientists engage in a lot of practical work. Despite our 'scientist' title, our roles often overlap with those of developers or research engineers. In fact, over 50% of our tasks are typical software engineering activities. We develop software grounded in foundational models, employing a range of techniques, not just AI.

Previously, AI encompassed anything linked to machine learning, but now it's more commonly associated with large language models like GPT. Our role includes integrating these models into software applications, utilizing models such as GPT-4, and even fine-tuning our custom models. Additionally, we apply both traditional machine learning and deep learning methods. This involves creating classifiers with techniques predating neural networks, like gradient boosting machines or random forests. At our core, we are software engineers crafting machine learning algorithms to address real-world challenges.
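The pre-neural-network techniques Stewart mentions can be illustrated with a toy bagged ensemble of one-split decision stumps, in the spirit of a random forest. The risk dataset, features, and labels below are invented for illustration; this is not one of Sixfold's models.

```python
import random
from collections import Counter

def train_stump(X, y):
    """Fit a one-split decision stump: pick the (feature, threshold) with fewest errors."""
    best = None
    for f in range(len(X[0])):
        for t in sorted({row[f] for row in X}):
            preds = [1 if row[f] > t else 0 for row in X]
            err = sum(p != label for p, label in zip(preds, y))
            if best is None or err < best[0]:
                best = (err, f, t)
    _, f, t = best
    return lambda row: 1 if row[f] > t else 0

def random_forest(X, y, n_trees=15, seed=0):
    """Bagging over stumps: each stump trains on a bootstrap resample, then votes."""
    rng = random.Random(seed)
    stumps = []
    for _ in range(n_trees):
        idx = [rng.randrange(len(X)) for _ in X]  # bootstrap sample with replacement
        stumps.append(train_stump([X[i] for i in idx], [y[i] for i in idx]))
    def predict(row):
        votes = Counter(s(row) for s in stumps)
        return votes.most_common(1)[0][0]  # majority vote across stumps
    return predict

# Toy risk data: [age, prior_claims] -> 1 = high risk.
X = [[25, 0], [30, 1], [45, 4], [50, 5], [35, 0], [60, 6]]
y = [0, 0, 1, 1, 0, 1]
predict = random_forest(X, y)
```

A production system would reach for a library implementation (e.g., scikit-learn's random forests or gradient boosting machines) with deeper trees and feature subsampling, but the bootstrap-and-vote structure is the core idea.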

How did you get into the world of Generative AI? 

My fascination with AI really took off with GPT-3's emergence. But it was the debut of the stable diffusion model in August 2022 that truly captivated me. This revelation prompted me to pivot my career towards a tech startup specializing in deep learning and AI.

In the early stages of my career, I worked as a software engineer. This was followed by a ten-year journey in data science, beginning with statistical learning and gradually evolving into machine learning, deep learning, and finally AI. Essentially, I devoted my first decade to hardcore software development, and the next decade explored the realms of data science and machine learning.

Could you give some insights into what's on a typical to-do list for you?

My work is basically divided into three key areas.

Firstly, there's data management: sourcing appropriate data, organizing it properly, and conducting thorough analyses. A major chunk of our time is dedicated to dealing with data: acquiring, scrutinizing, and delving into it.

Secondly, I engage in software development, where my goal is to craft software that's not only reusable but also adaptable to growing complexities. This involves strategic software design to ensure it can be easily scaled up.

The third area is AI, particularly focusing on retrieval-augmented generation (RAG). This entails extracting pertinent details from extensive document collections to accurately contextualize models like GPT-4. My day-to-day involves juggling these three components.
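The retrieve-then-contextualize step Stewart describes can be sketched as follows. The word-overlap retrieval here is a deliberately naive stand-in for real embedding search, and the corpus and prompt wording are invented for illustration:

```python
def retrieve(query, corpus, top_k=2):
    """Naive relevance ranking: passages sharing the most words with the query win."""
    q = set(query.lower().split())
    ranked = sorted(corpus, key=lambda p: len(q & set(p.lower().split())), reverse=True)
    return ranked[:top_k]

def build_context_prompt(question, corpus):
    """Stuff the retrieved passages into the prompt so the model answers from them."""
    passages = retrieve(question, corpus)
    context = "\n".join(f"- {p}" for p in passages)
    return (
        "Answer using only the context below.\n"
        f"Context:\n{context}\n"
        f"Question: {question}\nAnswer:"
    )

corpus = [
    "The applicant reported two prior claims in 2021.",
    "Premium payments are due quarterly.",
    "Prior claims history includes a water damage claim.",
]
prompt = build_context_prompt("How many prior claims does the applicant have?", corpus)
```

The assembled prompt would then be sent to a model like GPT-4; grounding the answer in retrieved passages is what keeps the response tied to the source documents rather than the model's general knowledge.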

How would you distinguish a purpose-built AI tool from a generic one?

AI often gets hyped up with flashy demos requiring little coding. However, Sixfold is a purpose-built gen AI tool: our focus is on crafting solutions that address real-world business problems, not just making eye-catching demos. We use AI to make underwriters' work faster, more accurate, and more enjoyable. By taking over repetitive tasks, AI allows underwriters to focus on the more engaging aspects of their job.

Our platform is built with a strong emphasis on accountability, not just on interpretability or explainability. This means our solutions cite sources when making recommendations and provide actual source documents for our classifications. It's a practical, business-centric approach that boosts confidence in underwriting decisions.

What excited you the most about joining Sixfold?

Two things particularly drew me to Sixfold. First, the experienced team leading the company. The founders have a proven track record of creating substantial business value, blending tech knowledge with sharp business insight. Second, on a personal level, my wife has been in the insurance industry for over ten years, and I've always found it fascinating. Joining Sixfold presented a chance to dive deeper into this sector. 

It was the combination of the seasoned leadership and the company's expertise in insurance and underwriting that ultimately convinced me to become part of the team.

How do you stay engaged with the AI community? 

My go-to resource is X (formerly known as Twitter), where I've created a list named ‘AI Signals.' This list features over 100 experts deeply engaged in the field, tackling everything from fine-tuning models to enhancing the speed of large language model inference. While some of these individuals may not be widely known, their insights are incredibly valuable. 

Previously, I would follow arXiv for academic papers, GitHub for trending repositories, and Papers with Code to find research papers with their corresponding code. However, X has become my most essential tool. I regularly check updates from my list there to keep up-to-date with the latest developments.

That sounds like a great list! Can we share it with the readers?

Of course, happy to share it. Here you go:

How can people best follow your work?

I haven't been active on my blog lately, but I do maintain a GitHub repository named 'LLM Notes.' It serves as a practical guide for data scientists and machine learning practitioners. This repository is a compilation of the knowledge and insights I've gathered throughout my career. A few months back, I uploaded a wealth of information there, including lessons learned, common pitfalls, and personal experiences. It's a good resource for anyone interested in the field. 

Thanks for your time, Stewart! We’ll let you get back to your to-do list now.  

If you’d like an opportunity to work at Sixfold, check out our vacancies.

AI is the defining technology of this decade. After years of unfulfilled promises from Hollywood and comic books, the science fiction AI future we’ve long been promised has finally become business reality. 

We can already see AI following a familiar path through the marketplace similar to past disruptive technologies.

  • Stage one: it’s embraced by early adopters before the general public even knows it exists;
  • Stage two: cutting-edge startups tap these technologies to overcome long-standing business challenges; and then
  • Stage three: regulators draft rules to guide its usage and mitigate negative impacts.

There should be no doubt that AI-powered insurtech has accelerated through the first two stages in near record time and is now entering stage three.

AI underwriting solutions, meet the rule-makers

The Colorado Department of Regulatory Agencies recently adopted regulations on AI applications and governance in life insurance. To be clear, Colorado isn’t an outlier, it’s a pioneer. Other states are following suit and crafting their own AI regulations, with federal-level AI rules beginning to take shape as well.

The early days of the regulatory phase can be tricky for businesses. Insurers are excited to adopt advanced AI into their underwriting tech stack, but wary of investing in platforms knowing that future rules may impact those investments. 

We at Sixfold are very cognizant of this dichotomy: The ambition to innovate ahead, combined with the trepidation of going too far down the wrong path. That’s why we designed our platform in anticipation of these emerging rules. 

We’ve met with state-level regulators on numerous occasions over the past year to understand their concerns and thought processes. These engagements have been invaluable for all parties as their input played a major role in guiding our platform’s development, while our technical insights influenced the formation of these emerging rules.

Sixfold CEO Alex Schmelkin (right) joined a panel discussion about AI in underwriting at the National Association of Insurance Commissioners (NAIC)’s Summer 2023 national meeting in Seattle, WA.

To simplify a very complex motion: regulators are concerned with bias in algorithms. There’s a tacit understanding that humans have inherent biases, which may be reflected in algorithms and applied at scale.

Most regulators we’ve engaged with agree that these very legitimate concerns about bias aren’t a reason to prohibit or even severely restrain AI, which brings enormous positives like accelerated underwriting cycles, reduced overhead, and increased objectivity–all of which ultimately benefit consumers. However, for AI to work for everyone, it must be partnered with transparency, traceability, and privacy. This is a message we at Sixfold have taken to heart.

In AI, it’s all about transparency

The past decade saw a plethora of algorithmic underwriting solutions with varying degrees of capabilities. Too often, these tools are “black boxes” that leave underwriters, brokers, and carriers unable to explain how decisions were arrived at. Opaque decision-making no longer meets the expectations of today’s consumers—or of regulators. That’s why we designed Sixfold with transparency at its core.

Customers accept automation as part of the modern digital landscape, but that acceptance comes with expectations. Our platform automatically surfaces relevant data points impacting its recommendations and presents them to underwriters via AI-generated plain-language summarizations, while carefully controlling for “hallucinations.” It provides full traceability of all inputs, as well as a full lineage of changes to the UW model, so carriers can explain why results diverged over time. These baked-in layers of transparency allow carriers–and the regulators overseeing them–to identify and mitigate incidental biases seeping into UW models.
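One concrete way to think about "full traceability of all inputs" is a recommendation record that carries its evidence trail. The structure below is a hypothetical sketch, not Sixfold's actual schema; the document names and fields are invented:

```python
from dataclasses import dataclass, field

@dataclass
class Finding:
    """One surfaced data point, tied back to the document it came from."""
    statement: str
    source_doc: str
    page: int

@dataclass
class Recommendation:
    """An underwriting recommendation that carries its full evidence trail."""
    summary: str
    findings: list = field(default_factory=list)

    def trace(self):
        """Return every (statement, source) pair backing this recommendation."""
        return [(f.statement, f"{f.source_doc} p.{f.page}") for f in self.findings]

rec = Recommendation(
    summary="Applicant's type 2 diabetes is well managed.",
    findings=[
        Finding("Type 2 diabetes diagnosed 2012", "medical_history.pdf", 3),
        Finding("Managed with insulin and diet", "physician_notes.pdf", 1),
    ],
)
```

Because every summary statement links back to a source document and page, a carrier (or a regulator) can audit why a recommendation was made rather than taking the model's word for it.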

Beyond prioritizing transparency, we've designed a platform that elevates data security and privacy. All Sixfold customers operate within isolated, single-tenant environments, and end-user data is never persisted in the LLM-powered Gen AI layer, so information remains protected and secure.

Even with platform features built in anticipation of external regulations, we understand that some internal compliance teams are cautious about integrating gen AI, a relatively new concept, into their tech stack. To help your internal stakeholders get there, Sixfold can be implemented with robust internal auditability and appropriate levels of human-in-the-loop oversight to ensure that every team is comfortable on the new technological frontier.

Want to learn more about how Sixfold works? Get in touch.

Sixfold emphasizes the importance of collaborating with regulators to create technology that benefits everyone.

We at Sixfold believe regulators play a vital role in the marketplace by setting ground rules that protect consumers. As we see it, it’s not the technologist’s place to oppose or confront regulators; it’s to work together to ensure that technology works for everyone. 

The past decade saw more than its fair share of insurtech solutions promising to harness the power of “AI.” Many of these tools use hard-to-train algorithms powered by technologies that are years—if not decades—old. These legacy underwriting tools may inject some process efficiencies but don’t address the fact that insurers are struggling more than ever to expand capacity and grow Gross Written Premiums (GWPs) per underwriter. 

Underwriters face a lot of issues

A recent Accenture survey found that underwriters spend 40% of their time on administrative tasks—that’s a full two days of their work week. Inboxes are flooded with more submissions than ever, but by some estimates, underwriters are only able to respond to 10%. 

These aren't challenges that companies can simply spend their way around; they require a fundamentally new approach. At Sixfold, we believe ascendant technologies, like LLM-powered generative AI, will lead the way. By moving beyond legacy solutions, carriers can take on today's most pressing underwriting challenges–and the challenges on the horizon.

Ingesting and synthesizing data from disparate sources at scale

In the connected-everything world, insurers have access to more data than ever. This is a blessing and a challenge. On one hand, it guides decisioning and improves outcomes. On the other hand, there's so much data from so many disparate sources that it's impossible to process efficiently.

Underwriters often find themselves sorting through hundreds of pages of documents for a single application. This limits capacity and squeezes GWP per underwriter. The only way to overcome these chokepoints without massively expanding headcount (and dinging already precarious expense ratios) is by using sophisticated AI tools to automate complex business tasks at scale.  

With Sixfold’s state-of-the-art underwriting AI, insurers can seamlessly integrate structured and unstructured data from multiple disparate sources. The platform reflects each company's unique risk appetite, so it automatically surfaces relevant information to accelerate UW decisioning.

Say it in plain language

Sixfold uses LLM-powered generative AI (the same tech behind ChatGPT, Bard, etc.) to summarize findings to underwriters in plain language, not spreadsheets. 

The platform, in effect, gives every underwriter their own virtual research team to build detailed reports on every application. Sixfold even generates coverage recommendations based on the company’s UW format. Compare this to legacy AI tools, which merely repackage information into number-heavy spreadsheets and dashboards, inevitably requiring additional inspection and contextualization from underwriters.

Even better? Plain language summations expand the underwriting talent pool by de-emphasizing technical and computational skillsets that are better handled by machines anyway. This is a crucial break from legacy tools, as insurers are now forced to compete for limited underwriting talent against private-equity-backed firms, insurtechs, MGAs, and other nontraditional insurance companies. 

Opacity in insurance is no longer an option; AI transparency is the new norm

Sixfold was designed with transparency at its core because that’s what today’s customers expect and increasingly what regulators demand. The platform provides full sourcing and lineage of all underwriting decisions with clear semantic summaries, i.e. no more “black boxes.” 

The platform provides clear reasoning for its conclusions on why a case is qualified, or, as in the case above, disqualified.

Customers accept automation as part of the modern digital landscape, but that acceptance comes with expectations of transparency, particularly when there are unexpected outcomes. Legacy solutions make it difficult—if not impossible—for insurers to provide customers with the clarity they deserve. Disappointed customers and diminished brand reputation, however, aren’t the only negative outcomes the industry needs to be mindful of.

As scaled automation has become more ubiquitous, so have the calls for greater transparency. At all levels of government, there are movements to counter the influence of potential bias through increased transparency and accountability—particularly in crucial areas like insurance.

The marketplace has long since moved on from “because the algorithm said so,” and insurers must employ tools to reflect those changes.

Beyond legacy AI

We’re not the first to automate underwriting tasks using “AI,” but we’re the first to fundamentally reimagine the underwriting role using state-of-the-art LLM tech to generate business value. Customers are using our platform to accelerate submission-to-quote cycles by as much as 43% and massively increase their GWP per underwriter. 

The role of the underwriter is evolving, and the industry needs a new generation of tools to match. This is why we created Sixfold.
