Imagine a company deploying a cutting-edge AI model to power its image generation platform, only to discover that the technology inadvertently allows the creation of harmful, even illegal content. In a world where AI is rapidly becoming integral to business operations, these kinds of high-stakes risks are all too common.
Liz O’Sullivan, having spent over a decade navigating the challenges of AI deployment, saw firsthand how privacy, security, and ethical concerns were often overlooked in favor of speed and innovation. These experiences, along with her passion for AI safety, inspired her to co-found Vera—a platform designed to make scaling AI safe, repeatable, and transparent. With Vera’s AI Gateway, companies can integrate customizable guardrails that prevent harmful outcomes, all while allowing teams to innovate freely.
In this interview, Liz shares how her experiences and Vera’s solutions are helping companies avoid disaster while unlocking the full potential of AI—ensuring that ethical deployment is not just an afterthought, but a foundational part of AI’s future.
Vera just launched its AI Gateway platform. Can you walk us through the main features and how it addresses the current challenges companies face when scaling AI?
LOS: I’ve been working in AI companies since 2012, which has given me insight into the various challenges that prevent teams from moving beyond the prototype phase. Many people mistakenly believe that issues like privacy, security, and customer experience are minor concerns that can be easily addressed. However, when dealing with AI—a fundamentally probabilistic technology—these issues can consume your entire budget in terms of time and resources.
My co-founder and I set out to create a product that makes it simple to install the infrastructure for addressing these concerns in a repeatable, scalable way. The benefits of Vera are significant: you avoid lock-in to any particular model or vendor, whether that's OpenAI, Cohere, Anthropic, or something from the open-source market. You install your policy infrastructure once, and it carries over to any new project, vendor, model, or use case. Plus, our user-friendly drag-and-drop functionality lets you clearly define what your systems are allowed to do or say, whether for your teams or your customers, especially in products powered by generative AI.
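To make the gateway idea concrete, here is a minimal, hypothetical sketch of the pattern Liz describes: requests to any model provider pass through a single policy layer, so guardrails are defined once and reused across vendors. The class names, policy checks, and provider callable below are illustrative assumptions, not Vera's actual API.

```python
# Hypothetical sketch of the "gateway" pattern: one policy layer in front of
# interchangeable model providers. Names and checks are illustrative only,
# not Vera's actual API.

from dataclasses import dataclass, field
from typing import Callable, List


@dataclass
class Policy:
    """A single guardrail: a named check applied to prompts and outputs."""
    name: str
    check: Callable[[str], bool]  # returns True if the text violates the policy


@dataclass
class AIGateway:
    policies: List[Policy] = field(default_factory=list)

    def complete(self, provider: Callable[[str], str], prompt: str) -> str:
        # Screen the prompt before it ever reaches the model vendor.
        for policy in self.policies:
            if policy.check(prompt):
                return f"Request blocked by policy: {policy.name}"
        # The same gateway call works for any provider (a proprietary API,
        # an open-source model, etc.), so policies are defined only once.
        response = provider(prompt)
        # Screen the model's output on the way back out.
        for policy in self.policies:
            if policy.check(response):
                return f"Response withheld by policy: {policy.name}"
        return response


# Example usage with a stand-in "provider" (any callable mapping prompt -> text).
gateway = AIGateway(policies=[
    Policy("no_politics", lambda text: "election" in text.lower()),
])
print(gateway.complete(lambda p: f"echo: {p}", "Summarize this support ticket"))
```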
One of Vera’s goals is to provide customizable guardrails for AI deployment. Can you give us a real-world example of how this feature has helped a customer mitigate risks while still scaling their AI operations?
LOS: One hundred percent of our inbound leads come from clients or potential clients who have gone to market with either homegrown or out-of-the-box guardrails and have experienced serious issues. In one case, a client in the image generation industry faced problems involving non-consensual sexual material, including images of real people and even children. This is not only a significant societal issue but also illegal, creating a great deal of operational overhead for companies as they deal with the consequences on a daily basis.
With the Vera platform, we prevent the creation of such harmful imagery by detecting and stopping attempts before they can be generated.
AI safety is a big focus for Vera, and it’s something you’ve been passionate about throughout your career. What originally drew you to this focus, and can you share more about your background?
LOS: My background is a blend of traditional startup experience and non-traditional activism in this field. I’ve been working in AI companies on the business side for 12 years, giving me a front-row seat to the early days when things often didn’t work, and security, explainability, and customer experience were significant concerns.
During the deep learning revolution of 2016 and 2017, many new possibilities emerged. My team’s job was to make AI models function for real-world use cases, helping clients generate revenue and explore new product offerings. It became clear to me that the practical challenges of implementing models often outweigh the theoretical concerns that many people rightfully have.
For example, if a model has even a small amount of discriminatory bias and is released into the wild without proper checks, it can lead to a massive loss of trust. Users will notice if your platform denies same-sex couples the ability to upload images, a result of cultural biases that permeate the entire AI training process.
LOS: I fundamentally believe that AI is a force for good and can positively transform our society. It’s telling that these concerns have fallen by the wayside simply because they are difficult challenges, and we want a world where companies can deploy AI freely and responsibly, without getting into trouble or hurting people. To achieve this, we needed to build something scalable, repeatable, easy, and, dare I say, fun: something that lets developers run wild and do all the things they want to do, but without the associated risks and downsides.
Can you elaborate on your co-founder Justin Norman’s academic background and how his expertise influences your work at Vera?
LOS: Justin has been studying under Hany Farid at Berkeley, who is widely regarded as the world’s foremost expert in detecting and responding to synthetic media and deepfake technology. That expertise targets one of the many significant risks associated with AI, and one that is especially pressing in an election year.
Your platform helps companies implement AI faster while ensuring safety. What kind of industries or specific use cases do you see benefiting most from AI Gateway’s model routing and policy customization?
LOS: Certainly, image generation is one area with a host of severe risks that can genuinely harm real human beings today. There’s a lot of buzz in this space, and it’s great to see the community actively engaging with these models. Generally, anywhere you find a chat interface—like customer support, partial automation, document summarization on a public-facing product or profile, or code generation—can also benefit from our platform because these applications often introduce vulnerabilities.
This is especially true in regulated industries, but it isn’t limited to them. Ultimately, your products will struggle to succeed if they can’t engender trust among users. For instance, a single incorrect response to a political question can alienate an entire group of people, and Vera can stop that from happening.
ATP Fund was the first investor in Vera. How has their support influenced your ability to scale and bring your product to market?
LOS: I know many VCs say this, but when it comes to Kyle, it’s entirely true: he’s my first call whenever something is going well or when I face a unique challenge. He is always there to act as a sounding board. He’s the kind of investor you can share your entire self with, and he understands because he’s been there before. He knows what you need to hear at every stage of the company, from inception and incorporation, to investing guidance, all the way to product marketing and customer acquisition. Kyle just seems to know everybody, and everyone loves him—and if you’re lucky enough to be friends with him, it’s easy to see why.
You’ve been processing tens of thousands of AI model requests per month already. What impact has Vera’s platform had on your early customers in terms of content safety and compliance?
LOS: Through the Vera platform, our clients have significantly reduced the amount of offensive content going into or coming out of their generative AI-powered products. This includes indications of self-harm, non-consensual sexual material involving real people or children, profanity, aggressive speech, political content, and security vulnerabilities that would otherwise go unaddressed or unseen. With Vera, these issues don’t become problems, because we can detect such attempts before they succeed.
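As a rough illustration of what this kind of pre-screening might look like in practice, the checks described above can be thought of as a set of category detectors run before any content reaches the model or the end user. The category names below follow the interview, but the detector functions are simple placeholders for illustration, not Vera's actual detectors.

```python
# Illustrative sketch only: screening text against several risk categories
# before it reaches a generative model or a user. The detectors here are
# trivial placeholders standing in for real classifiers.

from typing import Callable, Dict, List

# Each risk category maps to a detector that flags violating text.
DETECTORS: Dict[str, Callable[[str], bool]] = {
    "self_harm": lambda text: "hurt myself" in text.lower(),
    "non_consensual_sexual_material": lambda text: False,  # placeholder for a real classifier
    "profanity": lambda text: any(w in text.lower() for w in ("damn", "hell")),
    "political_content": lambda text: "election" in text.lower(),
}


def screen(text: str) -> List[str]:
    """Return the risk categories the text triggers (empty list if clean)."""
    return [name for name, detect in DETECTORS.items() if detect(text)]


flagged = screen("Who should I vote for in the election?")
if flagged:
    print(f"Blocked before generation; categories: {', '.join(flagged)}")
else:
    print("Request passed all checks")
```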
Looking ahead, what excites you most about the future of Vera and the broader AI landscape?
LOS: I’m very excited by the proliferation of open-source models, which are fundamentally easier, from a safety perspective, to assess, validate, test, and probe for the weaknesses they might possess. There’s a real concern that so much of this industry is consolidated in just a couple of companies, and the push to decentralize that power is really exciting and better for everyone involved, whether that’s the public or the companies that seek to use these models.
> Learn more about Vera’s groundbreaking AI platform at askvera.io.