Sixfold Content
Sixfold News
Just Launched: Beyond the Policy
I'm super excited to introduce Beyond the Policy, Sixfold's innovation hub designed exclusively for underwriters.
Stay informed, gain insights, and elevate your understanding of AI's role in the insurance industry with our comprehensive collection of articles, guides, and more.
Sixfold Partners with CyberCube
The partnership with CyberCube aligns with Sixfold's strategy of drawing on the best available data sources to streamline underwriting, keeping insurers ahead as cyber insurance premiums are projected to reach $20 billion by 2025.
In 2025, the cyber risk landscape is expected to become more complex with increasing threats driven by rising privacy violations, data breaches, the rise of AI, and external factors such as emerging regulations. According to Munich Re, the cyber insurance market has nearly tripled in size over the past five years, with global premiums projected to surpass $20 billion by 2025, up from nearly $15 billion in 2023, as reported by CyberSecurity Dive.
Reflecting the rapid market growth and emerging threats, Sixfold has seen increased demand from specialty insurers in the cyber sector and has successfully brought on several industry leaders as customers. "In the near future, cyber policies will become as essential as General Liability or Property & Casualty coverage. Given the world we live in, this shift is inevitable. Cyber policies are poised to become the most specific and highly customized policies available," said Jane Tran, Co-founder & COO at Sixfold.
"In the near future, cyber policies will become as essential as General Liability or Property & Casualty coverage. Given the world we live in, this shift is inevitable. Cyber policies are poised to become the most specific and highly customized policies available"
Empowering Underwriters to Quickly Adapt to New Cyber Risks
As cyber risks grow, the pressure on underwriters to assess risks accurately and expedite the case review process continues to increase. Sixfold's AI solution for cyber insurance addresses these challenges by securely ingesting each insurer's underwriting guidelines and aggregating all necessary business information to quickly provide recommendations that align with the carrier's risk appetite. This capability allows insurers to rapidly adjust their risk strategies in response to new cyber threats.
"With Sixfold, insurers can synchronize their underwriting guidelines across the board and adapt quickly. For example, when a new malware threat is identified, you can instantly incorporate it into your risk criteria through Sixfold. This ensures that the entire cyber team factors it into their assessments immediately, without needing to learn every detail of the threat or spend hours digging for the right information," said Alex Schmelkin, Founder & CEO of Sixfold.
Beyond speed, effective cyber underwriting demands deep expertise in IT systems, cybersecurity measures, and industry developments. This need for specialized expertise presents a significant talent challenge for insurers, especially with 50% of the underwriting workforce set to retire by 2028. Sixfold bridges the knowledge gap by instantly providing underwriters with the specialized knowledge they need for accurate risk assessments.
"Underwriters no longer need to be cyber experts; they can rely on Sixfold to spotlight the critical information needed for accurate underwriting decisions. Our platform simplifies the complex world of cyber risk and empowers underwriters to make more confident decisions, faster," said Jane Tran, Co-founder & COO at Sixfold.
Sixfold Partners with CyberCube to Enhance Cyber Risk Assessments
Sixfold has teamed up with CyberCube, the world's leading analytics provider for quantifying cyber risk. This integration of CyberCube's advanced cyber risk analytics with Sixfold's AI underwriting solution enables insurers to achieve faster and more accurate risk assessments. The partnership enhances underwriting efficiency, strengthens regulatory compliance, and offers highly tailored cyber insurance solutions, empowering insurers to stay ahead of the rapidly evolving cyber threat landscape. "The partnership between CyberCube's comprehensive cyber data and Sixfold's innovative risk assessment is setting a new standard for the future of underwriting, keeping insurers prepared for new challenges in determining accurate cyber policies," said Ross Wirth, Head of Partnership and Ecosystem at CyberCube.
To see how Sixfold speeds up the cyber underwriting process, join our upcoming live product demo.
How to Secure AI Compliance in Insurance
Sixfold's CEO and founder, Alex Schmelkin, along with special guests, discusses developments in AI regulation for the U.S. insurance industry and addresses common compliance concerns.
With the rise of AI solutions in the insurance market, questions around AI regulations and compliance are increasingly at the forefront. Key questions such as "What happens when we use data in the context of AI?" and "What are the key focus areas in the new regulations?" are top of mind for both consumers and industry leaders.
To address these topics, Sixfold's founder and CEO, Alex Schmelkin, hosted the webinar "How to Secure Your AI Compliance Team's Approval". Joined by industry experts Jason D. Lapham, Deputy Commissioner for P&C Insurance for the State of Colorado, and Matt Kelly, Data Strategy & Security Counsel at Debevoise & Plimpton, he led a discussion that provided essential insights into navigating AI regulations and compliance.
Here are the key insights from the session:
AI Regulation Developments: Colorado Leads the Way in the U.S.
"There's a requirement in almost any regulatory regime to protect consumer data. But now, what happens when we start using that data in AI? Are things different?" – Alex Schmelkin
Both nationally and globally, AI regulations are being implemented. In the U.S., Colorado became the first state to pass a law and implement regulations related to AI in the insurance sector. Jason Lapham explained that the key components of this legislation revolve around two major requirements:
- Governance and Risk Management Frameworks: Companies must establish robust frameworks to manage the risks associated with AI and predictive models.
- Quantitative Testing: Businesses must test their AI models to ensure that outcomes generated from non-traditional data sources (e.g., external consumer data) do not lead to unfairly discriminatory results. The legislation also mandates a stakeholder process prior to adopting rules.
Initially, the focus was on life insurance, as it played a critical role in shaping the legislative process. The first regulation, implementing Colorado's Bill 169, adopted in late 2023, addressed governance and risk management. This regulation applies to life insurers across all practices, and the regulatory agency received the first reports this year from companies using predictive models and external consumer data sources.
So, what's the next move for this first-mover state on AI regulation? The Colorado Division of Insurance is developing a framework for quantitative testing to help insurers assess whether their models produce unfairly discriminatory outcomes. Insurers are expected to take action if their models do lead to such outcomes.
Compliance Approach: Develop Governance Programs
"When we're discussing with clients, we say focus on the operational risk side, and it will get you largely where you need to be for most regulations out there." – Matt Kelly
With AI regulations differing across U.S. states and globally, companies face challenges. Matt Kelly described how his team at Debevoise & Plimpton navigates these challenges by building a framework that prioritizes managing operational risk related to technology. Their approach involves asking questions such as:
- What AI is being used?
- What risks are associated with its use?
- How is the company governing or mitigating those risks?
By focusing on these questions, companies can develop strong governance programs that align with most regulatory frameworks. Kelly advises clients to center their efforts on addressing operational risks, which takes them a long way toward compliance.
The Four Pillars of AI Compliance
Across different AI regulatory regimes, four common themes emerge:
- Transparency and Accountability: Companies must understand and clearly explain their AI processes. Transparency is a universal requirement.
- Ethical and Fair Usage: Organizations must ensure their AI models do not introduce bias and must be able to demonstrate fairness.
- Consumer Protection: In all regulatory contexts, protecting consumer data is essential. With AI, this extends to ensuring models do not misuse consumer information.
- Governance Structure: Insurance companies are responsible for ensuring that they, and any third-party model providers, comply with AI regulations. While third-party providers play a role, carriers are ultimately accountable.
Matt Kelly emphasizes that insurers can navigate these four themes successfully by establishing the right frameworks and governance structures.
Protection vs. Innovation: Striking the Right Balance
"We tend not to look at innovation as a risk. We see it as aligned with protecting consumers when managed correctly." – Matt Kelly
Balancing consumer protection with innovation is crucial for insurers. When done correctly, these goals align. Matt noted that the focus should be on leveraging technology to improve services without compromising consumer rights.
One major concern in insurance is unfair discrimination, particularly in how companies categorize risks using AI and consumer data. Regulators ask whether these categorizations are justified based on coverage or risk pool considerations, or whether they are unfairly based on unrelated characteristics. Aligning these concerns with technological innovation can lead to more accurate and fair coverage decisions while ensuring compliance with regulatory standards.
Want to learn more?
Watch the full webinar recording and download Sixfold's Responsible AI framework to see our approach to safe AI usage.
How to Choose an AI Vendor (Who Can Actually Deliver)
Sixfold's Head of AI explains how to pick the right team to build your AI insurance solution.
Companies of all sizes are actively exploring how emerging AI technologies can overcome longstanding business challenges. Inevitably, they run up against the reality that weathered AI pros like myself have long known: AI ain't easy. Rather than going it alone, many businesses choose to partner with firms that specialize in building solutions with LLMs. The good news? There are a growing number of AI vendors to pick from, with more popping up all the time. The bad? Discerning if a vendor can deliver what you need isn't always so straightforward.
It seems like everyone and their little cousin touts the ability to "wrap" custom applications around one of the big-name LLMs. If that's all they bring to the table, they might help you address simple use cases, but probably won't have the chops to build and manage complex solutions in heavily regulated industries like insurance. That's a whole different thing.
So, how can you tell if a prospective vendor can meet your business's needs? In this blog post, I'll explore some key areas along the AI value chain and propose some questions to ask so you can make an informed decision.
Input preparation
What you put into your AI system is what you get out of it. Make sure a prospective vendor prioritizes clean data, stored and handled in a secure, compliant manner.
You can think of data like a commodity that powered the previous century: oil. You don't just dig some oil out of the ground and pour it into your gas tank. (Or, I guess you could, but you wouldn't get far before your engine seized up.) Like oil, data requires multiple rounds of preparation before it can be used.
The value of the output your AI produces is directly related to the quality of the input. Before moving forward with any prospective vendor, ensure they have the means, and indeed the knowledge, to help you build compliant, secure data workflows from beginning to end.
Here are some points to consider to ensure a vendor is right for you.
Questions to consider:
- How will the data be collected?
Data must be carefully collected to protect privacy and prevent bias. Ensuring that data has been ethically obtained and correctly governed is a point of emphasis for regulators.
- How will the data be "cleaned"?
Data needs to be refined and structured in a way that an AI solution can use and interpret. Make sure a prospective vendor understands what types of data are appropriate for your use case and how to prepare it at scale.
- How will the data be transferred, stored, and secured?
When developing solutions for complex, highly regulated industries, proof of certification for things like SOC 2 and HIPAA is table stakes. Additionally, you'll want to verify that the vendor uses secure data transfer methods, such as encryption in transit and at rest, to prevent unauthorized access. Also, ensure they effectively track the status of the data over time via robust version control and data lineage systems (see the sketch after this list for what a lineage trail can look like).
Prompt development
LLMs work best when you make it difficult for them to make mistakes. An AI vendor should understand how to craft prompts to generate business value.
For an AI solution to generate value, it must surface useful information with as little human intermediation as possible. This is achieved by ensuring that every prompt to an LLM includes all guidance, data access, and guardrails necessary to generate a high-quality return. Things like the following (sketched in code after the list):
1. Industry-specific content to guide results
2. Phrasing that reflects informed insight into the domain
3. Precise instructions on the structure of the result being sought
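As a rough illustration, here is a hypothetical prompt builder that combines the three ingredients above. The guideline text, schema, and helper names are all invented for this sketch, not any real product's prompts.

```python
import json

# Invented guideline text and schema for this sketch.
UNDERWRITING_GUIDELINES = """\
Restaurants with on-premise fryers require documented hood suppression systems.
Decline risks with more than two fire-code violations in the past five years.
"""

RESPONSE_SCHEMA = {
    "appetite_match": "one of: in_appetite, refer, decline",
    "key_factors": "list of short strings citing the guideline applied",
}

def build_prompt(submission_summary: str) -> str:
    return (
        # 1. Industry-specific content to guide results
        f"You are assisting a commercial property underwriter.\n"
        f"Underwriting guidelines:\n{UNDERWRITING_GUIDELINES}\n"
        # 2. Phrasing that reflects informed insight into the domain
        "Assess the submission below strictly against these guidelines; "
        "do not infer facts that are not stated in the submission.\n"
        f"Submission: {submission_summary}\n"
        # 3. Precise instructions on the structure of the result being sought
        f"Respond with JSON matching this schema exactly:\n{json.dumps(RESPONSE_SCHEMA, indent=2)}"
    )

print(build_prompt("Family-owned diner, two fryers, suppression system certified 2023."))
```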
Your vendor will need to demonstrate they understand the capabilities and limitations of AI and can provide insights on how to structure LLM conversations to extract maximum value. Here are some points to review with a prospective partner to ensure they have the means, and better yet a history, of value-oriented prompt engineering.
Questions to consider:
- How do they build prompts, and what domain-specific knowledge do they have?
Technical acumen is one thing, but does the vendor understand the specific needs of your industry? It's one thing to ask an LLM to plan out a fun afternoon at the beach; it's another to have it understand whether, for example, family-owned restaurants align with a home insurer's risk appetite.
- What methods are used to select material included in the context window?
You should understand the vendor's criteria for selecting contextually relevant information and how they ensure this information is timely and accurate. Ask what processes they use to filter and prioritize the most pertinent data for inclusion in prompts.
- How often, and in what ways, are prompts updated over time? Are these changes tracked?
Learn about their schedule for reviewing and updating prompts to keep them aligned with the latest industry trends and data. Ensure they have a system for tracking changes to prompts, including version history and impact analysis, to maintain transparency and continuous improvement.
- What methods are used to evaluate the results of prompts, and to compare the results to prior versions when changes are made?
Ask about their evaluation metrics and benchmarks for assessing prompt performance, including accuracy, relevance, and consistency. Understand their process for A/B testing new prompt versions and how they compare the results to previous versions to ensure improvements.
Output control
Non-deterministic AI systems act in unpredictable ways. A quality vendor should know how to measure misaligned behaviors, as well as how to address them.
Ask an LLM the same question 10 times and you might get 10 different responses. The goal is to generate 10 accurate, useful answers. Achieving this requires putting as much care into reviewing the system's output as you do into preparing the input.
Continuous monitoring and tweaking are necessary to adapt your system to accommodate new data and evolving requirements. Here are some questions to explore when evaluating a vendor's approach to scaled output control.
Questions to consider:
- What evals will you run?
Inquire about their evaluation frameworks, including both automated and manual assessments, to ensure outputs meet quality standards. Learn about the specific metrics they use to evaluate outputs, such as precision, recall, and F1 score, as well as checks for hallucinations and biases (a toy example follows this list).
- What role will human experts play in this process?
Verify that human subject matter experts are involved in reviewing and validating AI outputs to ensure they are contextually appropriate and accurate. Ask about their process for incorporating expert feedback into continuous improvement cycles for the AI system.
- How often will you review overall results, and what metrics will you use to guide refinement and improvement?
Get a handle on their schedule for regular reviews and audits of AI outputs to ensure ongoing quality and relevance. Inquire about the key performance indicators (KPIs) and metrics they use to monitor and refine the AI system, such as user satisfaction scores, error rates, and feedback loops.
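Here is a toy version of the kind of eval a vendor might run, with a stubbed-out model client standing in for a real LLM call: repeat a question and measure answer stability, and spot-check answers against a reviewed ground-truth set. All names and data are illustrative.

```python
import random
from collections import Counter

def ask_model(question: str) -> str:
    # Stand-in for a real LLM call; replace with your vendor's client.
    return random.choice(["approve", "approve", "refer"])

def consistency_score(question: str, runs: int = 10) -> float:
    """Fraction of runs that agree with the most common answer (1.0 = fully stable)."""
    answers = [ask_model(question).strip().lower() for _ in range(runs)]
    return Counter(answers).most_common(1)[0][1] / runs

def ground_truth_accuracy(cases: list[tuple[str, str]]) -> float:
    """Share of questions where the model matches a reviewed ground-truth answer."""
    hits = sum(ask_model(q).strip().lower() == truth.lower() for q, truth in cases)
    return hits / len(cases)

print(f"consistency: {consistency_score('Is a bakery with two fryers in appetite?'):.0%}")
print(f"accuracy: {ground_truth_accuracy([('Sprinklered warehouse?', 'approve')]):.0%}")
```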
Transparency
Not only does visibility allow you to properly evaluate an AI's performance, it's increasingly required by regulators as a means to address system bias.
Transparency is crucial for every step, from data preparation to prompt development and output review. You cannot evaluate what you cannot see. To maintain the highest possible standards, every AI vendor should be prepared to provide a window into every step under their control.
Questions to consider:
- Can you provide clear documentation of your processes and methods?
Ensure that the vendor offers comprehensive and understandable documentation covering all aspects of their AI processes, from data collection to output generation. Ask for examples of their documentation to assess its clarity and completeness.
- Can you demonstrate every point at which you interact with an LLM, and provide a complete trail of what information was exchanged?
Verify that the vendor maintains detailed logs and records of interactions with the LLM, including data inputs, prompts, and outputs. Ensure they can provide audit trails that detail the flow of information through their systems (sketched below), which is crucial for regulatory compliance and troubleshooting.
- Will you provide routine reports on your evaluations and measurements of potential bias?
Inquire about their regular reporting practices, including how often they produce reports on AI performance, bias detection, and mitigation efforts. Ask to see examples of these reports to evaluate their thoroughness and transparency in addressing potential biases and other issues.
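As an illustration of the audit-trail idea, here is a minimal sketch that logs every model exchange as an append-only JSON record. The file name, fields, and function are invented for this example.

```python
import json
import uuid
from datetime import datetime, timezone

AUDIT_LOG_PATH = "llm_audit_log.jsonl"  # illustrative location

def log_llm_exchange(prompt: str, response: str, model: str, case_id: str) -> str:
    """Append one fully reconstructable record per model exchange."""
    entry = {
        "exchange_id": str(uuid.uuid4()),
        "case_id": case_id,    # ties the exchange to a specific submission
        "model": model,        # the exact model version used
        "prompt": prompt,      # the full prompt, including injected context
        "response": response,  # the raw output before any post-processing
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }
    with open(AUDIT_LOG_PATH, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")
    return entry["exchange_id"]
```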
At Sixfold, we've created a Responsible AI framework for prospects and customers to showcase our ongoing transparency work.
In Summary
AI has the potential to overcome challenges that have been holding businesses back for decades. If you haven't started your AI journey, now's the time to get started. Partnering with an AI vendor can help you identify use cases ripe for transformation and provide the skills to get you there.
I hope this checklist helps you identify which vendor has the right combination of technical know-how, industry expertise, and regulatory awareness to get your business where it needs to be.
To Build or Buy AI: A Guide for Insurers
Before building that LLM-powered underwriting solution in-house, answer these four questions.
I'm just going to say it: I don't care how accomplished your team is, they just won't be able to build a proprietary horizontal LLM to compete, feature-wise, with the GPTs, Geminis, and Claudes of the world.
Your team may, however, have it in them to build a vertical AI solution to execute specific high-level underwriting tasks. Their solution will probably incorporate one (or even several) of the aforementioned foundational models, complemented with additional components purpose-made for your specific use case.
If you haven't investigated advanced AI for your underwriting tech stack, you're already behind. The question for carriers has long since moved on from "should we implement?" to "what's the best way forward?" Some might think it preferable to build a proprietary AI solution using internal resources.
Many larger enterprises are certainly going to take on that substantial challenge. But is this strategy right for your organization? Here are four questions to consider before taking that leap:
1. Do you know what a quality AI-powered solution looks like?
You know how to measure the success of, say, a proprietary Java-powered microservice or web portal. But do you know what metrics to use for a non-deterministic AI solution? It's a whole new thing.
LLMs are flexible and amazing, but they're also unpredictable and can get things wrong (even when the end user did everything right). Developing non-deterministic systems requires an evolution in thinking about usefulness and quality control. It means getting acquainted with new concepts like "error tolerance."
If you've worked with traditional digital systems, you know that when a problem arises, it's almost always attributable to human error somewhere along the line. LLMs, on the other hand, can do weird stuff when they're working properly. Ask an LLM the same question 10 times in a row and you'll get 10 different answers. The key with these solutions isn't robotic repetition, it's making sure they provide 10 useful answers.
Not only must you anticipate some amount of unpredictability with LLMs, you have to build out an infrastructure to mitigate the impact of their errors. This could mean building in extra layers of validation to detect mistakes, or giving human users the ability to spot errors and feed corrections back to the system. In some cases, it might mean living with some amount of "spoilage," i.e., accepting bad results from time to time.
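To make "extra layers of validation" concrete, here is a hedged sketch of one such layer: deterministic checks applied to a model's extracted answer before it reaches an underwriter. The field names and rules are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class ExtractedAnswer:
    field: str
    value: str
    source_snippet: str  # text the model cites as support for the value

def validate(answer: ExtractedAnswer) -> list[str]:
    """Return a list of problems; an empty list means this layer passes."""
    problems = []
    # Grounding check: the value should literally appear in the cited source.
    if answer.value.lower() not in answer.source_snippet.lower():
        problems.append("value not found in cited source text")
    # Domain check: NAICS codes are 2-6 digits.
    if answer.field == "naics_code" and not (
        answer.value.isdigit() and 2 <= len(answer.value) <= 6
    ):
        problems.append("malformed NAICS code")
    return problems

result = validate(
    ExtractedAnswer("naics_code", "722511", "NAICS: 722511 Full-Service Restaurants")
)
print(result or "passed validation")
```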
This is new territory, I know. Are you ready for it? Almost as importantly, would you know how to communicate this new paradigm to the stakeholders who matter?
2. Are you prepared for a relentless pace of change?
Due to LLMs' inherent newness, few engineers or product managers have experience shepherding a vertical AI product to market. That means your team must learn to deal with both structured and unstructured data when engaging with LLMs. It means learning the latest prompt design strategies to ensure you're providing consistently accurate answers (and indeed, defining what "accuracy" even means in a non-deterministic system). And it means occasionally having to re-learn it all over again after the next great AI innovation drops. And a new AI innovation is always about to drop.
Developing cutting-edge vertical AI in 2024 is very different than it was in 2023, and I can promise you it will be different again in 2025. Technology moves fast, and at this moment of peak-buzz AI, you have to be prepared for changes to come at your team weekly, if not daily.
Last year, for example, we were a LangChain shop, as was pretty much everyone else attempting to address big challenges with LLMs. Fast-forward one year, and we, and many players in this space, concluded that LangChain just isn't for production and moved on to building scalable pipelines directly with generative AI primitives. That meant rebuilding some key features from scratch while adding resiliency and scale.
Determination is paramount in the face of rapid change. Are you prepared to hard-pivot a project you've been pushing along for months because the ecosystem has irrevocably changed with a new model release, new technique, or newly proposed regulation? Are you prepared to explain the necessity of these sea changes to your team and stakeholders?
3. Are you up on today's AI regulations? How about tomorrow's?
There's a lot of talk in the public discourse about the potential negative impacts of scaled automation. As a result, regulatory bodies at all levels of government have drafted rules for how AI can be implemented, many of which single out consequential sectors such as insurance.
Technological acumen is crucial, but it can all be rendered meaningless if your solution doesn't comply with regulatory requirements. Do you have the infrastructure in place to keep on top of this evolving patchwork of global regulations?
To navigate these choppy waters, you need a team in place to make sure you're complying with today's rules and prepared for tomorrow's.
What's better? Getting your team into the conversation with the rule-makers, helping inform the rule sets as they take shape.
4. Can you compete for AI talent?
You have an amazing dev team. They're driven and passionate, and great colleagues too. I'm sure they could launch a top-notch mini-site in just a few weeks. But have they designed an LLM-powered AI solution before?
If not, you'll need to find yourself some AI experts.
That means competing for talent in a limited pool of AI engineers (Reuters reports a 50% skills gap in AI roles) and paying top dollar for it to keep pace with MAMAA-caliber compensation packages.
This pool becomes even smaller when looking for talent experienced with building systems for highly regulated industries in general, let alone insurance in particular.
Did you answer "no" to any question above?
I don't know where you'll land when it comes to building your vertical AI solution. If the go-it-alone path seems treacherous, you can always partner with a team that's been leading the way in emerging LLM-powered AI for insurance.
I'm not a salesman, I'm a techie, but I can tell you we do great work, and our team would love to talk through what you have in mind.
This blog post was originally published on LinkedIn.
New AI Features to Raise the Bar on Underwriting Efficiency
We've enhanced our commercial underwriting platform with a suite of AI-powered features that streamline end-to-end underwriting across all lines of business.
With our latest product updates, we've extended our commercial underwriting product with a suite of AI-powered features to facilitate end-to-end underwriting across all lines of business, scaling from transactional underwriting to complex, three-dimensional risks.
Sixfold's number one superpower is to easily, and quickly, ingest carriers' unique underwriting guidelines and automatically surface the submissions that match their risk appetite. The platform empowers carriers to streamline the underwriting process by:
- Analyzing publicly available information and ingesting data from multiple disparate sources in an instant for a comprehensive risk assessment
- Generating a comprehensive summarization of the business's operations and providing NAICS/SIC classification
- Surfacing positive and negative risk factors tuned to a carrier's unique appetite
- Answering complex questions across large sets of documents
- Prioritizing risks with an underwriter-facing dashboard for improved resource allocation
Over the past half-year, we've significantly matured our P&C underwriting platform with deep investments across accuracy, traceability, performance, and extensibility. Some of the key highlights include:
Improved Accuracy with Advanced Document Extraction
As underwriters ask complex questions across large sets of documents, we've invested in new models to improve extraction across the universe of insurance documents. With Sixfold's latest models, we've seen a 40% boost in accuracy when extracting data from the most illegible documents.
With our ongoing investments in extraction models tailored to the documents underwriters see daily, Sixfold ensures precise and reliable insights from large, challenging document sets, transforming how underwriters interact with data.
Appetite Match Scoring with Weighted Risk Scoring
To replicate underwriters' cognitive processes, Sixfold is introducing weighted risk signals that reflect the nuanced ways different underwriting factors affect where a risk sits within a carrier's appetite. Now, carriers can assign varying importance to different factors to prioritize risks more accurately based on alignment with their risk appetite.
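For intuition, here is a hypothetical illustration of weighted risk-signal scoring. The signals, weights, and scale are invented for this sketch and are not Sixfold's actual model.

```python
# Invented signals and weights; higher weight = more influence on the score.
RISK_SIGNAL_WEIGHTS = {
    "fire_code_violations": 0.5,  # heavily weighted negative factor
    "sprinkler_system": 0.2,      # positive factor, moderate weight
    "years_in_business": 0.3,
}

def appetite_match_score(signals: dict[str, float]) -> float:
    """Weighted average of normalized signals in [-1, 1]; higher = better fit."""
    total_weight = sum(RISK_SIGNAL_WEIGHTS.values())
    weighted = sum(RISK_SIGNAL_WEIGHTS[name] * value for name, value in signals.items())
    return weighted / total_weight

# A submission scored against this appetite: violations hurt, sprinklers and
# tenure help, and the weights decide how much each factor matters.
score = appetite_match_score({
    "fire_code_violations": -1.0,  # two recent violations
    "sprinkler_system": 1.0,       # fully sprinklered
    "years_in_business": 0.6,      # established business
})
print(f"appetite match: {score:+.2f}")
```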
Enhanced Transparency with Inline Citations
Not only can Sixfold answer complex questionnaires across lines of business, but now all answers are grounded in the original source material with citations to the most relevant sections for confident decision-making.
Bringing Risk Classification and Summarization Down to Seconds
With our continued commitment to investing in the Sixfold pipeline architecture to improve performance, availability, and resilience, we've brought the median case processing time down from 80 seconds to 31 seconds. With these enhancements, in less than a minute, Sixfold can research publicly available information to learn everything it can about a business and analyze the aggregated data for business summarization and NAICS/SIC classification.
Embed Sixfold across Underwriters' Existing Workflows with Our API
With the Sixfold API, carriers can seamlessly integrate Sixfold into existing workflows for enhanced productivity and unified risk management. From automated data gathering and ingestion to custom-tailored underwriting recommendations that can be embedded across existing workflows and systems, Sixfold cuts out the manual work and document handling overhead for 10x faster risk review processes.
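To show what embedding an underwriting API into an existing workflow might look like, here is a purely illustrative sketch. The base URL, endpoints, and fields below are invented for this example and are not Sixfold's actual API.

```python
import requests

BASE_URL = "https://api.example-underwriting-platform.com/v1"  # hypothetical
API_KEY = "YOUR_API_KEY"

def submit_case(business_name: str, documents: list[str]) -> dict:
    """Kick off automated data gathering and risk analysis for a submission."""
    resp = requests.post(
        f"{BASE_URL}/cases",
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={"business_name": business_name, "document_urls": documents},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()  # e.g. {"case_id": "...", "status": "processing"}

def fetch_recommendation(case_id: str) -> dict:
    """Pull the finished risk assessment back into your own system of record."""
    resp = requests.get(
        f"{BASE_URL}/cases/{case_id}/recommendation",
        headers={"Authorization": f"Bearer {API_KEY}"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()
```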
Mitigating Drift and Bias with Our Responsible AI Framework
Designed to help carriers navigate the rapidly evolving AI landscape confidently, Sixfold's Responsible AI framework insulates them from risk through enhanced auditability, data provenance, and traceability. By actively collaborating with regulatory bodies and legal counsel, Sixfold remains at the forefront of responsible AI innovation, safeguarding carriers with unparalleled diligence.
Coming Soon: Research Assistant
Stay tuned for future launch updates to hear about upcoming capabilities like Sixfold's Research Assistant, designed to find answers to complex research questions, all grounded in the original source material with citations.
Want to see our new capabilities in action?
Get in touch
6 Common Myths About AI, Insurance, and Compliance
I run into the same misconceptions about AI and insurance again and again. Let me try to put some of these common myths to bed once and for all.
These days, my professional life is dedicated to one focused part of the global business landscape: the untamed frontier where cutting-edge AI meets insurance.
I have conversations with insurers around the world about where it's all going and how AI will work under new global regulations. And one thing never ceases to amaze me: how often I end up addressing the same misconceptions.
Some confusion is understandable (if not inevitable) considering the speed with which these technologies are evolving, the hype from those suddenly wanting a piece of the action, and some fear-mongering from an old guard seeking to maintain the status quo. So, I thought I'd take a moment to clear the air and address six all-too-common myths about AI in insurance.
Myth 1: You're not allowed to use AI in insurance
Yes, there's a patchwork of emerging AI regulations (and, yes, in many cases they zero in specifically on insurance), but they do not ban its use. From my perspective, they do just the opposite: they set ground rules, which frees carriers to invest in innovation without fear that they are developing in the wrong direction and will be forced into a hard pivot down the line.
Sixfold has actually gained customers, a lot of them, since the major AI regulations in Europe and elsewhere were announced. So, let's put this all-too-prevalent misconception to bed once and for all: there are no rules prohibiting you from implementing AI in your insurance processes.
Myth 2: AI solutions can't secure customer data
As stated above, there are no blanket prohibitions on using customer data in AI systems. There are, however, strict rules dictating how data, particularly PII and PHI, must be managed and secured. These guidelines aren't anything radically new to developers with experience in highly regulated industries.
Security-first data processes have been the norm since long before LLMs went mainstream. These protocols protect crucial personal data in applications that individuals and businesses use every day without issue (digital patient portals, browser-based personal banking, and market trading apps, just to name a few). These same measures can be seamlessly extended into AI-based solutions.
Myth 3: "My proprietary data will train other companies' models"
No carrier would ever allow its proprietary data to train models used by competitors. Fortunately, implementing an LLM-powered solution does not mean giving up control of your data, at least with the right approach.
A responsible AI vendor helps their clients build AI solutions trained on their unique data for their exclusive use, as opposed to a generic insurance-focused LLM to be used by all comers. This also means allowing companies to maintain full control over their submissions within their environment so that when, for example, a case is deleted, all associated artifacts and data are removed across all databases.
At Sixfold, we train our base models on public and synthetic (AKA "not customer") data. We then copy these base models into dedicated environments for our customers, and all subsequent training and tuning happens in those dedicated environments. Customer guidelines and data never leave the dedicated environment and never make it back to the base models.
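Conceptually, the isolation pattern looks something like the sketch below. The class and method names are invented; this is not Sixfold's implementation, just the shape of the guarantee.

```python
# Conceptual sketch of per-tenant model isolation; all names are illustrative.
class BaseModel:
    def __init__(self, training_corpus: str):
        self.training_corpus = training_corpus  # public + synthetic only

    def clone(self) -> "BaseModel":
        return BaseModel(self.training_corpus)

class TenantEnvironment:
    """Dedicated, isolated environment for one carrier."""
    def __init__(self, carrier: str, base: BaseModel):
        self.carrier = carrier
        self.model = base.clone()  # a copy of the base model, never shared
        self.private_data: list[str] = []

    def fine_tune(self, guidelines: str) -> None:
        # Tuning happens only on this tenant's copy; the base model and
        # other tenants never see this data.
        self.private_data.append(guidelines)

base = BaseModel(training_corpus="public + synthetic data")
acme = TenantEnvironment("Acme Insurance", base)
acme.fine_tune("Acme's proprietary underwriting guidelines")
```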
Let's kill this one: yes, you can use AI and still maintain control of your data.
Myth 4: There's no way to prevent LLM hallucinations
We've all seen the surreal AI-generated images lurching up from the depths of the uncanny valley: hands with too many fingers, physiology-defying facial expressions, body parts and objects melded together seemingly at random. Surely, we can't use that technology for consequential areas like insurance. But I'm here to tell you that with the proper precautions and infrastructure, the impact of hallucinations can be greatly minimized, if not eliminated.
Mitigation is achieved using a variety of tactics, such as using models to auto-review generated content, incorporating user feedback to identify and correct hallucinations, and conducting manual reviews that compare sample outputs against ground-truth sets.
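Here is a hedged sketch of two of those tactics, an automated self-review pass and a ground-truth spot check, with a stubbed model call standing in for a real LLM client. The prompt and names are invented for illustration.

```python
def ask_model(prompt: str) -> str:
    return "supported"  # stub; replace with your actual LLM client

def auto_review(question: str, draft_answer: str, source_text: str) -> bool:
    """Second model pass: is the draft answer supported by the source document?"""
    verdict = ask_model(
        f"Source document:\n{source_text}\n\n"
        f"Question: {question}\nProposed answer: {draft_answer}\n"
        "Reply with exactly one word: 'supported' or 'unsupported'."
    )
    return verdict.strip().lower() == "supported"

def hallucination_rate(samples: list[tuple[str, str, str]]) -> float:
    """Share of sampled (question, answer, source) triples flagged as unsupported."""
    flagged = sum(not auto_review(q, a, src) for q, a, src in samples)
    return flagged / len(samples)

print(hallucination_rate([("Year founded?", "1998", "Founded in 1998 in Ohio.")]))
```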
Myth 5: AIs run autonomously without human oversight
Even if you never watched The Terminator, The Matrix, 2001: A Space Odyssey, or any other movie about human-usurping tech, it'd be reasonable to have some reservations about scaled automation. There's a lot of fearful talk out there about humans ceding control in important areas to unfeeling machines. However, that's not where we're at, nor is it how I see these technologies developing.
Let's break this one down.
AI is a fantastic and transformative technology, but even I, the number one cheerleader for AI-powered insurance, agree we shouldn't leave technology alone to make consequential decisions like who gets approved for insurance and at what price. But even if I didn't feel this way, insurtechs are obliged to comply with new regulations (e.g., the EU AI Act and rules from the California Department of Insurance) that tilt toward avoiding fully automated underwriting and require, at the very least, that human overseers can audit and review decisions.
When it comes to your customers' experience, AI opens the door to more human engagement, not less. In my view, AI will free underwriters from banal, repetitive data work (which machines handle better anyway) so that they can apply uniquely human skills in specialized or complex use cases they previously wouldn't have had the bandwidth to address.
Myth 6: Regulations are still being written, so it's better to wait for them to settle
I hear this one a lot. I understand why people arrive at this view. My take? You can't afford to sit on the sidelines!
To be sure, multiple sets of AI regulations are taking root at different governmental levels, which adds complexity. But here's a little secret from someone paying very close attention to emerging AI rulesets: there's very little daylight between them.
Here's the thing: regulators worldwide attend the same conferences, engage with the same stakeholders, and read the same studies and whitepapers. And they're all watching what the others are doing. As a result, we're arriving at a global consensus focused on three main areas: data security, transparency, and auditability.
The global AI regulatory landscape is, like any global regulatory landscape, complex; but I'm here to tell you it's not nearly as uneven, or anywhere close to as unmanageable, as you may fear.
Furthermore, if an additional major change were introduced, it wouldn't suddenly take effect. That's by design. Think of all the websites and digital applications that launched, and indeed thrived, in the six-year window between when GDPR was introduced in 2012 and when it became enforceable in 2018. Think of everything that would have been lost if they had waited until GDPR was firmly established before moving forward.
My entire career has been spent in fast-moving, cutting-edge technologies. And I can tell you from experience that it's far better to deploy and iterate than to wait for regulatory Godot to arrive. Jump in and get started!
There are more myths to bust! Watch our compliance webinar
The coming regulations are not as onerous or as unmanageable as you might fear, particularly when you work with the right partners. I hope I've helped clear up some misconceptions as you move forward on your AI journey.
Want to learn more about AI, insurance, and compliance? Watch the replay of our compliance webinar featuring a discussion between myself; Jason D. Lapham, Deputy Commissioner for P&C Insurance at the Colorado Division of Insurance; and Matt Kelly, a key member of Debevoise & Plimpton's Artificial Intelligence Group. We discuss the global regulatory landscape and how AI models should be evaluated regarding compliance, data usage, and privacy.