How to Choose an AI Vendor (Who Can Actually Deliver)

Sixfold’s Head of AI explains how to pick the right team to build your AI insurance solution.

With the rise of AI solutions in the insurance market, questions around AI regulations and compliance are increasingly at the forefront. Key questions such as “What happens when we use data in the context of AI?” and “What are the key focus areas in the new regulations?” are top of mind for both consumers and industry leaders.

To address these topics, Sixfold’s founder and CEO, Alex Schmelkin, hosted the webinar How to Secure Your AI Compliance Team’s Approval. Joined by industry experts Jason D. Lapham, Deputy Commissioner for P&C Insurance for the State of Colorado, and Matt Kelly, Data Strategy & Security Counsel at Debevoise & Plimpton, the discussion provided essential insights into navigating AI regulations and compliance.

Here are the key insights from the session:

AI Regulation Developments: Colorado Leads the Way in the U.S.

“There’s a requirement in almost any regulatory regime to protect consumer data. But now, what happens when we start using that data in AI? Are things different?” — Alex Schmelkin

Both nationally and globally, AI regulations are being implemented. In the U.S., Colorado became the first state to pass a law and implement regulations related to AI in the insurance sector. Jason Lapham explained that the key components of this legislation revolve around two major requirements:

  1. Governance and Risk Management Frameworks: Companies must establish robust frameworks to manage the risks associated with AI and predictive models.
  2. Quantitative Testing: Businesses must test their AI models to ensure that outcomes generated from non-traditional data sources (e.g., external consumer data) do not lead to unfairly discriminatory results. The legislation also mandates a stakeholder process prior to adopting rules.
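
To make “quantitative testing” concrete, here is a minimal sketch of one common fairness check: an adverse-impact ratio across consumer groups. The metric and the 0.8 threshold are illustrative assumptions borrowed from fair-lending practice, not a methodology the Colorado rules prescribe.

```python
# Minimal sketch of a quantitative fairness test. The adverse-impact
# ratio and the 0.8 threshold are illustrative assumptions from
# fair-lending practice, not the methodology Colorado prescribes.
from collections import defaultdict

def approval_rates(decisions: list[bool], groups: list[str]) -> dict[str, float]:
    """Approval rate for each consumer group."""
    approved: dict[str, int] = defaultdict(int)
    totals: dict[str, int] = defaultdict(int)
    for decision, group in zip(decisions, groups):
        totals[group] += 1
        approved[group] += int(decision)
    return {g: approved[g] / totals[g] for g in totals}

def adverse_impact_ratio(decisions: list[bool], groups: list[str]) -> float:
    """Lowest group approval rate divided by the highest. Values well
    below 1.0 suggest potentially unfair discriminatory outcomes."""
    rates = approval_rates(decisions, groups)
    return min(rates.values()) / max(rates.values())

# Hypothetical model outputs: an approval decision plus group membership.
decisions = [True, True, False, True, False, True, True, False]
groups = ["A", "A", "A", "B", "B", "B", "A", "B"]

if adverse_impact_ratio(decisions, groups) < 0.8:
    print("Disparity detected: review model inputs and outcomes.")
```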

Initially, the focus was on life insurance, which played a critical role in shaping the legislative process. The first regulation implementing Colorado’s Bill 169, adopted in late 2023, addressed governance and risk management. It applies to life insurers across all practices, and the Division of Insurance received the first reports this year from companies using predictive models and external consumer data sources.

So, what’s the next move for the first-mover state on AI regulation? The Colorado Division of Insurance is developing a framework for quantitative testing to help insurers assess whether their models produce unfairly discriminatory outcomes, and insurers are expected to take action if their models do.

Compliance Approach: Develop Governance Programs

“When we’re discussing with clients, we say focus on the operational risk side, and it will get you largely where you need to be for most regulations out there.” — Matt Kelly

With AI regulations differing across U.S. states and globally, companies face challenges. Matt Kelly described how his team at Debevoise & Plimpton navigates these challenges by building a framework that prioritizes managing the operational risk related to technology. Their approach involves asking questions such as:

  • What AI is being used?
  • What risks are associated with its use?
  • How is the company governing or mitigating those risks?

By focusing on these questions, companies can develop strong governance programs that align with most regulatory frameworks. Kelly advises clients to center their efforts on addressing operational risks, which takes them a long way toward compliance.

The Four Pillars of AI Compliance 

Across different AI regulatory regimes, four common themes emerge:

  1. Transparency and Accountability: Companies must understand and clearly explain their AI processes. Transparency is a universal requirement.
  2. Ethical and Fair Usage: Organizations must ensure their AI models do not introduce bias and must be able to demonstrate fairness.
  3. Consumer Protection: In all regulatory contexts, protecting consumer data is essential. With AI, this extends to ensuring models do not misuse consumer information.
  4. Governance Structure: Insurance companies are responsible for ensuring that they—and any third-party model providers—comply with AI regulations. While third-party providers play a role, carriers are ultimately accountable.

Matt Kelly emphasizes that insurers can navigate these four themes successfully by establishing the right frameworks and governance structures. 

Protection vs. Innovation: Striking the Right Balance 

“We tend not to look at innovation as a risk. We see it as aligned with protecting consumers when managed correctly.” — Matt Kelly

Balancing consumer protection with innovation is crucial for insurers. When done correctly, these goals align. Matt noted that the focus should be on leveraging technology to improve services without compromising consumer rights.

One major concern in insurance is unfair discrimination, particularly in how companies categorize risks using AI and consumer data. Regulators ask whether these categorizations are justified based on coverage or risk pool considerations, or whether they are unfairly based on unrelated characteristics. Aligning these concerns with technological innovation can lead to more accurate and fair coverage decisions while ensuring compliance with regulatory standards.

Want to learn more? 

Watch the full webinar recording and download Sixfold’s Responsible AI framework to learn more about our approach to safe AI usage.

With our latest product updates, we’ve extended our commercial underwriting product with a suite of AI-powered features to facilitate end-to-end underwriting across all lines of business, scaling from transactional underwriting to complex, three-dimensional risks.

Sixfold’s number one superpower is quickly and easily ingesting carriers’ underwriting guidelines and automatically surfacing the submissions that match each carrier’s unique risk appetite. The platform empowers carriers to streamline the underwriting process by:

🔘 Analyzing publicly available information and ingesting data from multiple disparate sources in an instant for a comprehensive risk assessment

🔘 Generating a comprehensive summarization of the business’s operations and providing NAICS/SIC classification

🔘 Surfacing positive and negative risk factors tuned to a carrier’s unique appetite

🔘 Answering complex questions across large sets of documents

🔘 Prioritizing risks with an underwriter-facing dashboard for improved resource allocation

Over the past half-year, we’ve significantly matured our P&C underwriting platform with deep investments across accuracy, traceability, performance, and extensibility. Some of the key highlights include:

✔️ Improved Accuracy with Advanced Document Extraction  

As underwriters ask complex questions across large sets of documents, we’ve invested in new models to improve extraction across the universe of insurance documents. With Sixfold’s latest models, we’ve seen a 40% boost in accuracy when extracting data from even the most illegible documents.

With our ongoing investments in extraction models tailored to the documents underwriters see daily, Sixfold ensures precise and reliable insights from large, challenging document sets, transforming how underwriters interact with data.

✔️ Appetite Match Scoring with Weighted Risk Signals

To replicate underwriters’ cognitive processes, Sixfold is introducing weighted risk signals that reflect the nuanced ways different underwriting factors affect where a risk sits within a carrier’s appetite. Now, carriers can assign varying importance to different factors to prioritize risks more accurately based on alignment with their risk appetite.
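
Conceptually, appetite-match scoring with weighted signals reduces to a weighted average. Here is a minimal sketch; the factor names, weights, and signal values are hypothetical illustrations, not Sixfold’s production model.

```python
# Minimal sketch of weighted appetite-match scoring. The factor names,
# weights, and signal values are hypothetical, not Sixfold's model.

def appetite_score(signals: dict[str, float], weights: dict[str, float]) -> float:
    """Weighted average of risk signals scored in [0, 1], where 1.0
    means perfectly in-appetite. Weights express how much each factor
    matters to this particular carrier."""
    total = sum(weights.values())
    return sum(signals[f] * w for f, w in weights.items()) / total

# A carrier that cares most about class of business weights it highest.
weights = {"class_of_business": 0.5, "loss_history": 0.3, "years_in_operation": 0.2}
signals = {"class_of_business": 0.9, "loss_history": 0.4, "years_in_operation": 1.0}

print(f"Appetite match: {appetite_score(signals, weights):.0%}")  # Appetite match: 77%
```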

✔️ Enhanced Transparency with Inline Citations

Not only can Sixfold answer complex questionnaires across lines of business, but now all answers are grounded in the original source material with citations to the most relevant sections for confident decision-making.

✔️ Bringing Risk Classification and Summarization Down to Seconds

With our continued investment in the Sixfold pipeline architecture to improve performance, availability, and resilience, we’ve brought the median case processing time down from 80 seconds to 31 seconds. With these enhancements, in less than a minute, Sixfold can research publicly available information to learn everything we can about a business and analyze the aggregated data for business summarization and NAICS/SIC classification.

✔️ Embed Sixfold across Underwriters’ Existing Workflows with our API

With the Sixfold API, carriers can seamlessly integrate Sixfold into existing workflows for enhanced productivity and unified risk management. From automated data gathering and ingestion to custom-tailored underwriting recommendations that can be embedded across existing workflows and systems, Sixfold cuts out the manual work and document handling overhead for 10x faster risk review processes.

✔️ Mitigating drift and bias with our Responsible AI framework 

Designed to help carriers navigate the rapidly evolving AI landscape with confidence, Sixfold’s Responsible AI framework ensures carriers are well-insulated from risk through enhanced auditability, data provenance, and traceability. By actively collaborating with regulatory bodies and legal counsel, Sixfold remains at the forefront of responsible AI innovation, safeguarding carriers with unparalleled diligence.

👀 Coming Soon: Research Assistant

Stay tuned for future launch updates to hear about upcoming capabilities like Sixfold’s Research Assistant, designed to find answers to complex research questions, all of which are grounded in the original source material with citations.

Want to see our new capabilities in action?
Get in touch →

These days, my professional life is dedicated to one focused part of the global business landscape: the untamed frontier where cutting-edge AI meets insurance. 

I have conversations with insurers around the world about where it’s all going and how AI will work under new global regulations. And one thing never ceases to amaze me: how often I end up addressing the same misconceptions. 

Some confusion is understandable (if not inevitable) considering the speed with which these technologies are evolving, the hype from those suddenly wanting a piece of the action, and some fear-mongering from an old guard seeking to maintain the status quo. So, I thought I’d take a moment to clear the air and address six all-too-common myths about AI in insurance.

Myth 1: You’re not allowed to use AI in insurance

Yes, there’s a patchwork of emerging AI regulations—and, yes, in many cases they do zero in specifically on insurance—but they do not ban its use. From my perspective, they do just the opposite: They set ground rules, which frees carriers to invest in innovation without fear that they are developing in the wrong direction and will be forced into a hard pivot down the line.

Sixfold has actually gained customers (a lot of them) since the major AI regulations in Europe and elsewhere were announced. So, let’s put this all-too-prevalent misconception to bed once and for all. There are no rules prohibiting you from implementing AI in your insurance processes.

Myth 2: AI solutions can’t secure customer data

As stated above, there are no blanket prohibitions on using customer data in AI systems. There are, however, strict rules dictating how data—particularly PII and PHI—must be managed and secured. These guidelines aren’t anything radically new to developers with experience in highly regulated industries.

Security-first data processes have been the norm since long before LLMs went mainstream. These protocols protect crucial personal data in applications that individuals and businesses use every day without issue (digital patient portals, browser-based personal banking, and market trading apps, just to name a few). These same measures can be seamlessly extended into AI-based solutions.

Myth 3: “My proprietary data will train other companies’ models”

No carrier would ever allow its proprietary data to train models used by competitors. Fortunately, implementing an LLM-powered solution does not mean giving up control of your data—at least with the right approach. 

A responsible AI vendor helps their clients build AI solutions trained on their unique data for their exclusive use, as opposed to a generic insurance-focused LLM to be used by all comers. This also means allowing companies to maintain full control over their submissions within their environment so that when, for example, a case is deleted, all associated artifacts and data are removed across all databases.

At Sixfold, we train our base models on public and synthetic (AKA, “not customer”) data. We then copy these base models into dedicated environments for our customers and all subsequent training and tuning happens in the dedicated environments. Customer guidelines and data never leave the dedicated environment and never make it back to the base models.

Let’s kill this one: Yes, you can use AI and still maintain control of your data.

Myth 4: There’s no way to prevent LLM hallucinations

We’ve all seen the surreal AI-generated images lurching up from the depths of the uncanny valley—hands with too many fingers, physiology-defying facial expressions, body parts & objects melded together seemingly at random. Surely, we can’t use that technology for consequential areas like insurance. But I’m here to tell you that with the proper precautions and infrastructure, the impact of hallucinations can be greatly reduced, if not eliminated.

Mitigation is achieved through a myriad of tactics: using models to auto-review generated content, incorporating user feedback to identify and correct hallucinations, and conducting manual reviews that compare sample outputs against ground-truth sets.
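
As an illustration of that last tactic, a ground-truth harness can be as simple as replaying curated questions through the model and measuring agreement. This is a minimal sketch; `ask_llm` is a placeholder for whatever model client you actually use, and the QA pairs are hypothetical.

```python
# Sketch of a ground-truth review harness. `ask_llm` is a placeholder
# for your real model call; the QA pairs are hypothetical examples.

def ask_llm(question: str) -> str:
    raise NotImplementedError("wire up your model client here")

GROUND_TRUTH = {  # curated by domain experts from real documents
    "What year was the insured business founded?": "2004",
    "How many locations does the insured operate?": "3",
}

def passes_hallucination_check(threshold: float = 0.95) -> bool:
    """Replay known questions and measure agreement. Exact string match
    keeps the sketch simple; real harnesses score answers more loosely."""
    hits = sum(
        ask_llm(question).strip() == expected
        for question, expected in GROUND_TRUTH.items()
    )
    return hits / len(GROUND_TRUTH) >= threshold
```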

Myth 5: AIs run autonomously without human oversight

Even if you never watched The Terminator, The Matrix, 2001: A Space Odyssey, or any other movie about human-usurping tech, it’d be reasonable to have some reservations about scaled automation. There’s a lot of fearful talk out there about humans ceding control in important areas to un-feeling machines. However, that’s not where we’re at, nor is it how I see these technologies developing. 

Let’s break this one down.

AI is a fantastic and transformative technology, but even I—the number one cheerleader for AI-powered insurance—agree we shouldn’t leave technology alone to make consequential decisions like who gets approved for insurance and at what price. But even if I didn’t feel this way, insurtechs are obliged to comply with new regulations (e.g., the EU AI Act and rules from the California Department of Insurance) that tilt towards avoiding fully automated underwriting and require, at the very least, that human overseers can audit and review decisions.

When it comes to your customers’ experience, AI opens the door to more human engagement, not less. In my view, AI will free underwriters from banal, repetitive data work (which machines handle better anyway) so that they can apply uniquely human skills in specialized or complex use cases they previously wouldn’t have had the bandwidth to address.

Myth 6: Regulations are still being written, so it’s better to wait for them to settle

I hear this one a lot. I understand why people arrive at this view. My take? You can’t afford to sit on the sidelines!

To be sure, multiple sets of AI regulations are taking root at different governmental levels, which adds complexity. But here’s a little secret from someone paying very close attention to emerging AI rulesets: there’s very little daylight between them. 

Here’s the thing: regulators worldwide attend the same conferences, engage with the same stakeholders, and read the same studies & whitepapers. And they’re all watching what the others are doing. As a result, we’re arriving at a global consensus focused on three main areas: data security, transparency, and auditability.

The global AI regulatory landscape is, like any global regulatory landscape, complex; but I’m here to tell you it’s not nearly as uneven or even close to unmanageable as you may fear. 

Furthermore, if an additional major change were to be introduced, it wouldn’t suddenly take effect. That’s by design. Think of all the websites and digital applications that launched—and indeed, thrived—in the six-year window between GDPR’s introduction in 2012 and its becoming enforceable in 2018. Think of everything that would have been lost if they had waited until GDPR was firmly established before moving forward.

My entire career has been spent in fast-moving cutting-edge technologies. And I can tell you from experience that it’s far better to deploy & iterate than to wait for regulatory Godot to arrive. Jump in and get started!

There are more myths to bust! Watch our compliance webinar

The coming regulations are not as onerous or as unmanageable as you might fear—particularly when you work with the right partners. I hope I’ve helped overcome some misconceptions as you move forward on your AI journey.

Watch it here!

Want to learn more about AI insurance and compliance? Watch the replay of our compliance webinar featuring a discussion between myself; Jason D. Lapham, the Deputy Commissioner for P&C Insurance at the Colorado Division of Insurance; and Matt Kelly, a key member of Debevoise & Plimpton’s Artificial Intelligence Group. We discuss the global regulatory landscape and how AI models should be evaluated regarding compliance, data usage, and privacy.

This article was originally posted on LinkedIn

I’m just going to say it: I don’t care how accomplished your team is, they just won’t be able to build a proprietary horizontal LLM to compete, feature-wise, with the GPTs, Geminis, and Claudes of the world. 

Your team may, however, have it in them to build a vertical AI solution to execute specific high-level underwriting tasks. Their solution will probably incorporate one (or even several) of the aforementioned foundation models, complemented with additional components purpose-made for your specific use case.

If you haven’t investigated advanced AI for your underwriting tech stack, you’re already behind. The question for carriers has long since moved on from “should we implement?” to “what’s the best way forward?” Some might think it preferable to build a proprietary AI solution using internal resources.

Many larger enterprises are certainly going to take on that substantial challenge. But is this strategy right for your organization? Here are four questions to consider before taking that leap:

1. Do you know what a quality AI-powered solution looks like?

You know how to measure the success of, say, a proprietary Java-powered microservice or web portal. But do you know what metrics to use for a non-deterministic AI solution? It’s a whole new thing.

LLMs are flexible and amazing, but they’re also unpredictable and can get things wrong (even when the end user did everything right). Developing non-deterministic systems requires an evolution in thinking about usefulness and quality control. It means getting acquainted with new concepts like “error tolerance.” 

If you’ve worked with traditional digital systems, you know that when a problem arises, it’s almost always attributable to human error somewhere along the line. LLMs, on the other hand, can do weird stuff when they’re working properly. Ask an LLM the same question 10 times in a row and you’ll get 10 different answers. The key with these solutions isn’t robotic repetition, it’s making sure they provide 10 useful answers. 

Not only must you anticipate some amount of unpredictability with LLMs, you have to build out infrastructure to mitigate its impact. This could mean building in extra layers of validation to detect errors, or giving human users the ability to spot errors and give feedback to the system. In some cases, it might mean living with some amount of "spoilage," i.e., accepting bad results from time to time.
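
One deliberately simple example of such a validation layer: sample the same question several times and only auto-accept an answer when the samples agree, routing everything else to a human reviewer. A sketch, again with `ask_llm` as a placeholder for your actual model call:

```python
# Sketch of a self-consistency validation layer for non-deterministic
# output. `ask_llm` is a placeholder for your actual model call.
from collections import Counter

def ask_llm(question: str) -> str:
    raise NotImplementedError("wire up your model client here")

def validated_answer(question: str, samples: int = 5, min_agreement: int = 4):
    """Ask the same question several times; return the majority answer
    only if enough samples agree, else None so a human reviews the case."""
    answers = Counter(ask_llm(question) for _ in range(samples))
    answer, count = answers.most_common(1)[0]
    return answer if count >= min_agreement else None
```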

This is new territory, I know. Are you ready for it? Almost as importantly—would you know how to communicate this new paradigm to the stakeholders who matter?

2. Are you prepared for a relentless pace of change?

Due to LLMs’ inherent newness, few engineers or product managers have experience shepherding a vertical AI to market. That means your team must learn to deal with both structured and unstructured data when engaging with LLMs. It means learning the latest prompt design strategies to ensure you’re providing consistently accurate answers (and, indeed, defining what “accuracy” even means in a non-deterministic system). And it means occasionally having to re-learn it all over again after the next great AI innovation drops. And a new AI innovation is always about to drop.

Developing cutting-edge vertical AI in 2024 is very different than it was in 2023, and I can promise you it will be different again in 2025. Technology moves fast, and at this moment of peak-buzz AI, you have to be prepared for changes to come at your team weekly, if not daily.

Last year, for example, we were a LangChain shop, as was pretty much everyone else attempting to address big challenges with LLMs. Fast-forward one year and we—and many players in this space—concluded that LangChain just isn’t for production and moved on to building scalable pipelines directly with generative AI primitives. That meant rebuilding some key features from scratch while adding resiliency and scale.
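
For the curious, “building directly with generative AI primitives” just means plain functions around the model APIs, with orchestration, retries, and logging owned by your own code. Here is a minimal sketch of one pipeline stage using OpenAI’s Python client as an example; the prompt and model choice are illustrative, not our production pipeline.

```python
# Sketch of a pipeline stage built directly on a model API rather than
# a framework. Uses OpenAI's Python client as an example; the prompt
# and model choice are illustrative, not a production pipeline.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def summarize_for_underwriter(raw_text: str) -> str:
    """One explicit stage: summarize a business description. Being a
    plain function, it's trivial to retry, log, test, and scale."""
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system",
             "content": "Summarize this business description for an underwriter."},
            {"role": "user", "content": raw_text},
        ],
    )
    return response.choices[0].message.content
```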

Determination is paramount in the face of rapid change. Are you prepared to hard-pivot a project you’ve been pushing along for months because the ecosystem has irrevocably changed with a new model release, new technique, or newly proposed regulation? Are you prepared to explain the necessity of these sea changes to your team and stakeholders?

3. Are you up on today’s AI regulations? How about tomorrow’s?

There’s a lot of talk in the public discourse about the potential negative impacts of scaled automation. As a result, regulatory bodies at all levels of government have drafted rules for how AI can be implemented, many of which single out consequential sectors such as insurance.

Technological acumen is crucial, but it can all be rendered meaningless if your solution doesn’t comply with regulatory requirements. Do you have the infrastructure in place to keep on top of this evolving patchwork of global regulations?

To navigate these choppy waters, you need a team in place to make sure you’re complying with today’s rules, and prepared for tomorrow’s.

What’s better? Getting your team into the conversation with the rule-makers and helping inform the rule sets as they take shape.

4. Can you compete for AI talent?

You have an amazing dev team. They’re driven and passionate, and great colleagues too. I’m sure they could launch a top-notch mini-site in just a few weeks. But have they designed an LLM-powered AI solution before? 

If not, you’ll need to find yourself some AI experts.

That means competing for talent in a limited pool of AI engineers (Reuters reports a 50% skills gap in AI roles) and paying top dollar to keep pace with MAMAA-caliber compensation packages.

This pool becomes even smaller when looking for talent experienced with building systems for highly regulated industries in general, let alone insurance in particular.

Did you answer “no” to any question above?

I don’t know where you’ll land when it comes to building your vertical AI solution. If the go-it-alone path seems treacherous, you can always partner with a team that’s been leading the way in emerging LLM-powered AI for insurance.

I’m not a salesman, I’m a techie, but I can tell you we do great work and our team would love to talk through what you have in mind.

This blog post was originally posted on LinkedIn

Today’s LLM-based AI solutions boast powerful capabilities that just three years ago were only found in science fiction. Modern AIs, driven by advances in machine learning and computational methods inspired by the human brain, continuously gain new capabilities from the data they encounter, unlocking previously unattainable potential.

However, when it comes to operating within complex, highly regulated sectors like insurance, not any ol’ AI solution will do. In this post, I want to explore why carriers are turning to a new generation of vertical AIs purpose-built to address the industry’s unique needs and challenges.

Horizontal solutions only leverage the Internet’s surface

“Horizontal” LLM-based chatbots (e.g., OpenAI’s ChatGPT or Anthropic’s Claude) are competent at a wide range of tasks, but you’d never trust them to execute a consequential insurance underwriting workflow.

Well, I mean, you could. But your underwriters would still need to engage in dozens (or even hundreds) of rounds of prompts, follow-ups, and clarifications to surface the information they need—all of which would require close review and scrutiny for accuracy, compliance, and hallucinations. They’d need to invest time in sorting through pages of answers to find important facts, correlate & de-dupe information, build timelines, and draw relevant connections. After which, they’d have to relate all of these processed facts back to their risk appetite to evaluate the quality of the risk.

Horizontal, multi-use AI solutions deliver little—if any—operational efficiencies for a complex enterprise use case like underwriting. The industry needs something more from its AI.

How vertical solutions overcome the data dilemma

One key area where general-purpose LLM chatbots and wrappers crucially fall short is lack of access to specialized data. An LLM’s “knowledge” can only run as deep as the data it’s been trained on. With a horizontal solution, you’ve generally been limited by what’s publicly available online. To echo a common observation: these models offer “the average of the internet.” They might be perfectly helpful in, say, planning out a keto-friendly dinner for two, but much less so when it comes to assessing risk signals on insurance applications.

In order to be useful in underwriting, a generative AI solution must—as table stakes—have been trained on informative but isolated datasets such as loss histories. Even anonymized versions of these datasets aren’t available for AI training purposes (they can’t even be purchased).

To access this invaluable cloistered data, an AI vendor must cultivate relationships with specialized data gatekeepers and arrive at a precise alignment on use and security. It’d be impractical for a horizontal AI provider to address every possible enterprise niche. To pry these data doors open, you need highly specialized vendors with a singular industry focus.

Vertical solutions: a partnership of insurance nerds and tech geeks

Beyond special data access, a vertical AI solution is designed to address the highly specific needs of its sector. The complexity and regulations inherent to insurance underwriting require a team that is as well-versed in emerging tech as they are in long-standing carrier challenges.

A vertical AI solution likely incorporates a medley of intelligent tools under a single platform umbrella. A foundational LLM, for example, may be tapped for specific functions (e.g., summarization), but higher-level capabilities can only be achieved when the LLM is partnered with purpose-built functionality dedicated to specific tasks (e.g., external data APIs, vector stores, etc.). The solution’s precise structure must be guided by experts with an intimate understanding of today’s industry challenges—and an eye on those soon to emerge.
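
To make that composition pattern concrete, here is a minimal sketch in which a vector store supplies relevant passages, an external data API supplies structured facts, and the foundational LLM is tapped only for the final step. Every name here is a hypothetical stand-in, not Sixfold’s implementation.

```python
# Sketch of the vertical-AI composition pattern: purpose-built parts
# around a foundational LLM. All names are hypothetical stand-ins.
from dataclasses import dataclass

@dataclass
class Passage:
    source: str  # originating document, kept for citations
    text: str

def search_vector_store(query: str, top_k: int = 5) -> list[Passage]:
    """Placeholder: embed the query, return the nearest passages."""
    raise NotImplementedError

def fetch_external_data(business_name: str) -> dict:
    """Placeholder: structured facts from an external data API."""
    raise NotImplementedError

def ask_llm(prompt: str) -> str:
    """Placeholder: the foundational LLM, used only where it shines."""
    raise NotImplementedError

def assess_risk(business_name: str, question: str) -> str:
    """A higher-level capability built from the partnership of parts."""
    passages = search_vector_store(question)
    facts = fetch_external_data(business_name)
    context = "\n".join(f"[{p.source}] {p.text}" for p in passages)
    return ask_llm(f"Facts: {facts}\nSources:\n{context}\nQuestion: {question}")
```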

Keep your eye on the horizon

Horizontal AI solutions are amazing, but they fall short in core underwriting tasks due to their shallow expansiveness; lack of access to specialized data; and ultimately the fact that they’re just one building block that must be partnered with industry-specific capabilities to deliver value to carriers. 

This article was originally posted on LinkedIn

We had an opportunity to chat with Sixfold's Co-founder and COO Jane Tran about the company’s amazing first year and the vision for the years to come, as well as her career journey, giving back, and tips on running a fast-paced AI startup.

What was your first job and how did that influence your career?

My very first job ever was as a cashier at an Italian deli around the corner from my parents’ house.

I think everyone should do a service job because it teaches the importance of good customer service and how to treat people. I also learned how to be quick because it's New York. New Yorkers don’t like to wait for their Bacon, Egg, and Cheese.

Walk us through your career journey from the Italian deli to co-founder and COO at Sixfold.

I’ve had a good mix of enterprise and startup experience. I started my career at JP Morgan as part of their rotational analyst program. One of my last rotations was with the Turnaround & Process Improvement team for the Chief Information Officer—that’s where I fell in love with tech. I worked on really cool projects like improving the eDiscovery process and improving data governance. I had a blast. 

After that, I spent several years at Marsh and MetLife working with the CIOs on different strategy and planning projects before I decided to give startups a go. I was on the founding team at Unqork, where I was Head of Solutions before becoming COO.

When I decided to leave Unqork, I kept in contact with Alex [Schmelkin, founding team at Unqork, and co-founder of Sixfold]. When he came up with the idea for what would become Sixfold, he asked me to join him to get this idea off the ground.

What does the name Sixfold mean and who came up with it?

So, all kudos due to Alex’s daughter Nina Schmelkin for that! She was doing a project for school around patterns. A sixfold pattern is considered one of the most interesting, naturally occurring patterns—snowflakes are a sixfold pattern. And so, when it came to choosing a name for the company, we were thinking about AI and the role that patterns play, and “Sixfold” seemed like an ideal fit.

Sixfold just turned one year old. How would you describe year one?

A ton of fun! We’re building really tangible use cases using cutting-edge tech. This first year has reinforced the importance of anchoring your work in first principles. AI is obviously super hot and evolving at warp speed, but we can't ignore the things that support great software development and great user experiences. That meant getting that foundation and discipline in place while at the same time making room for extensive R&D and a ton of iterations. 

We learned a lot. We tweaked a bunch. And I think we found product market fit. Our early customers are already starting to see value, and I’m really excited to see where that grows this year.

Do you have any notable “wow” moments from the first year?

Yeah, when we delivered our first end-to-end underwriting pilots and heard the underwriters say “we were able to complete this task in a fifth of the time.” I particularly love hearing how much they trust the tool.

Hearing that first positive user feedback feels like a major achievement! And, obviously, the recent closing of our Series A funding round.

Who are your role models and how did they influence your career? 

My parents. My mom runs a small business in kitchen supplies with her siblings — it was one of those things where they just sort of fell into it. They knew they could offer a really good product and create a fit within the market. They understood what customers wanted and knew that they could manufacture it. So they just went for it. That takes a lot of bravery. On the flip side, my dad hung wallpaper for a living. He's retired now, but he had that hard work ethic and true care for his craft. He developed a reputation for excellence and really worked his way up.

I think the combination of entrepreneurship, work ethic, and quality very much influences who I am.

A common conversation for startups is balancing “the need for speed” with employee happiness. How do you build that balance into the company culture? 

Unfortunately, I don't have a magic formula. I would say that from the get-go, Sixfold’s three founders — Brian, Alex, and myself — anchored ourselves on our Mission and a handful of operating principles like putting the customer first and being direct while being kind. We try to surround ourselves with people who share those principles so that naturally becomes the culture of the company. 

Jane together with Alex Schmelkin (Co-founder & CEO) and Brian Moseley (Co-founder & CTO).

The three of us really care about who works for us and how we all work together. Sometimes we may not have the best balance, but we always strive to be better. As founders, we care a lot about our work, but we also care deeply about family, friends, and life outside work. We understand that people who work with us have the same need for a balanced life outside work.

Do you have any tips when it comes to hiring?

Instinct is a huge part of it. It’s also helpful to have a great HR team in place — kudos to Marie [Sixfold’s HR Business Partner] for doing a lot of the initial groundwork, so by the time a candidate gets to me, they fit a lot of our criteria for that role. From there, a lot of it just comes down to instinct. Ask yourself if they'll fit within the culture of this company.

Tell us about your mentoring work for different startups and organizations.

I'm on the board of directors and co-chair for an organization called Womankind. It helps survivors of gender-based violence in New York, with a focus on the AAPI community. They've been around for more than 40 years. They started with a single hotline for the NYC area, but have expanded to serve thousands of women every year across the US as one of the few true end-to-end organizations. So families get to stay together. They get legal help. They get job placement. Their kids have a safe place to be while they're figuring out, you know, the next steps. I'm really proud to be part of that organization. 

I've also mentored and advised a few other early-stage startups that I think are doing something new and interesting. This also helps me to understand what else is out there in the ecosystem. I’ve mostly been focused on B2B enterprise throughout my career, so I like learning about retail or other sectors.

I love being around people who are building interesting things. It’s super fun and it can be super informational too.

What are your top work tools that you feel like you couldn’t do without?

I don't do a lot of the things that “productivity hackers” do. I would say Apple Notes and Google Tasks are central to my workday. I keep extensive notes on my meetings — so that’s a lot of Apple Notes. And then, for the important things with a deadline, I'll set up a Google task or calendar reminder. And that's how I organize.

What are you excited about for Sixfold’s second year? 

This year will be about continuing to mature our product and getting a lot of new customer use cases live.

We're working with a lot of great people and a lot of great customers. I’m looking forward to showcasing what we can do within this market and at this fidelity — a year ago, I don't think anyone would have thought that we’d be able to do what we’re doing, so I can’t wait to see what the next year or two will bring!

Want to work with Jane and the rest of the Sixfold team? Check out our career page