Healthcare AI startup OpenEvidence raises $75M in Sequoia-led round to scale AI for doctors, joins unicorn club

Healthcare AI startup OpenEvidence has secured $75 million in new funding to expand its chatbot designed for doctors. The Series A round, led by Sequoia, pushes the company’s valuation to $1 billion, placing it in unicorn territory.
Based in Cambridge, Massachusetts, OpenEvidence was founded by Daniel Nadler, who previously built Kensho Technologies—an AI-driven financial analytics company that Standard & Poor’s acquired for $700 million in 2018. The startup’s team includes AI scientists with PhDs from Harvard and MIT.
The fresh capital will support further development of medical-specific large language models (LLMs) and efforts to bring in top researchers focused on AI applications in medicine.
Founded in 2021, OpenEvidence is focused on organizing and expanding access to medical knowledge, offering an AI assistant that helps doctors make informed decisions at the point of care. The platform has already gained traction, with hundreds of thousands of verified physicians across more than 10,000 U.S. healthcare centers relying on it. Unlike general-purpose AI models, OpenEvidence is trained on specialized medical content, including material from the New England Journal of Medicine, through exclusive partnerships. The tool is available at no cost to verified doctors in the U.S.
Nadler describes OpenEvidence as an AI assistant built specifically for medical professionals. The company claims that a quarter of U.S. doctors already use the tool.
After selling Kensho, Nadler initially self-funded OpenEvidence in 2021 before raising a small friends-and-family round in 2023. This latest funding marks the startup’s first institutional backing, bringing total funds raised to over $100 million.
“As we approach our platform’s two-year anniversary, OpenEvidence is being used daily by hundreds of thousands of doctors—but we’re just getting started,” Nadler said. “This Series A funding from Sequoia will allow us to continue building the most trusted AI platform for medical professionals.”
The company plans to use part of the funding to expand content partnerships. Its collaboration with The New England Journal of Medicine already gives clinicians direct access to the journal’s materials through the platform.
AI’s Growing Role in Healthcare
While OpenEvidence may resemble ChatGPT in its interface, Nadler emphasizes its distinctiveness. “Trust matters in medicine. The fact that it’s built from the ground up for doctors and trained on The New England Journal of Medicine makes a black-and-white difference in accuracy,” he told CNBC.
The company licenses content from peer-reviewed medical journals and says its model was never trained on data scraped from the public internet. This controlled data approach helps minimize “hallucinations,” a known issue in AI models where responses can be inaccurate or misleading.
OpenEvidence operates on a freemium model, offering its chatbot at no cost while generating revenue through advertising. The product has gained popularity largely through word of mouth. “Doctors work in close quarters, especially in hospitals,” Nadler explained. “When one doctor pulls out their phone to look something up, others notice and ask about it.”
This organic growth was a key factor in Sequoia’s investment decision. Sequoia partner Pat Grady, who led the round, sees OpenEvidence’s adoption mirroring that of consumer internet products. “There aren’t many healthcare tools that spread the way consumer apps do, but this is one of them,” he told CNBC.
OpenEvidence’s funding comes amid a surge of investment in AI startups. Last year, AI accounted for a quarter of venture capital funding, according to CB Insights. Healthcare is seen as a particularly promising sector, with AI’s ability to process vast amounts of data showing potential in areas like drug discovery and medical imaging.
Despite the enthusiasm, AI in healthcare faces challenges. Concerns range from regulatory hurdles to the broader ethical debate over AI’s impact on society, including job displacement and long-term risks.
Nadler believes AI’s role in medicine will be overwhelmingly positive. He points to physician burnout and projections of a 100,000-doctor shortfall by the decade’s end as evidence that AI-driven tools can help. “People are asking whether AI will be good for humanity. I believe the answer is clear: it will be.”