Beyond excited to announce our partnership with Databricks, a leader in enterprise data intelligence, to bring Anthropic’s world-class models directly to more than 10,000 enterprises. In meetings with customers, we’ve heard a consistent need for enterprise data, AI tooling, and frontier models together in one platform. We’re excited to help Databricks deliver this, and to do so with a partner who values security and safety as much as we do. Starting today, Claude 3.7 Sonnet, our first hybrid reasoning model, is directly available in Databricks — part of our continued effort to bring safe and secure frontier models to customers.
Is Databricks hosting the models in their environment OR making calls to existing Azure or AWS services which host this model?
If Databricks is just federating those LLM calls out to an external third-party service, the data governance model is completely shot... Hope people are considering this.
Mike Davies oh well, now I want to hear more ;)
Massive move! Anthropic & Databricks feels like a signal to the market: frontier AI isn’t just about innovation, it’s about secure, enterprise-grade intelligence. This kind of integration is what teams like ours are excited about: hybrid reasoning models meeting real-world data workflows, all under the same roof. Curious to see how this partnership shapes the next phase of AI adoption in large-scale data ecosystems.
You are my favorites to win, and this is a brilliant strategic move. Let OpenAI go wide and shallow, and you all go deeper with values.
Have been using Claude to build solutions to real-world problems. 🕵️♂️🙏✅ https://detective.nz/news/05-03-2025/identifying-toxic-harmful-people/
Congratulations, Daniela Amodei! This partnership is a game-changer for enterprise data intelligence. Excited to see Claude models making a real impact in secure and scalable AI solutions. Wishing you and the team great success ahead!
Congratulations on this exciting partnership with Databricks! 🎉
This partnership between Anthropic and Databricks clearly aligns with the pressing enterprise demand for secure, responsible, and scalable AI deployment. One area I'd love to see you both address more explicitly is transparency—not just in safety governance, but in clearly communicating the practical boundaries of model capabilities, particularly as enterprises lean heavily into Claude 3.7 for business-critical use cases. Given your emphasis on frontier model safety, openly sharing how you handle scenarios like model drift, hallucinations, or unintended inference from proprietary datasets would set a new standard for industry transparency. Excited to see this partnership evolve—and hopefully, push enterprise AI accountability even further.