This week at 𝐀𝐈 𝐢𝐧 𝐅𝐢𝐧𝐚𝐧𝐜𝐢𝐚𝐥 𝐒𝐞𝐫𝐯𝐢𝐜𝐞𝐬 𝐃𝐞𝐞𝐩 𝐃𝐢𝐯𝐞 𝟐𝟎𝟐𝟓 in London, one message stood out loud and clear: 𝐧𝐨 𝐀𝐈 𝐚𝐝𝐨𝐩𝐭𝐢𝐨𝐧 𝐢𝐬 𝐩𝐨𝐬𝐬𝐢𝐛𝐥𝐞 𝐰𝐢𝐭𝐡𝐨𝐮𝐭 𝐀𝐈 𝐠𝐨𝐯𝐞𝐫𝐧𝐚𝐧𝐜𝐞. Across sessions, it was evident that financial institutions are moving fast to make governance the foundation of AI adoption:

● 𝐄𝐔 𝐀𝐈 𝐀𝐜𝐭 & 𝐑𝐞𝐠𝐮𝐥𝐚𝐭𝐨𝐫𝐲 𝐑𝐞𝐚𝐝𝐢𝐧𝐞𝐬𝐬: Massimo Buonomo, European Commission, reminded us that delays beyond Aug 2026 don’t excuse inaction. Governance takes time to implement; banks should start early.

● 𝐊𝐞𝐲 𝐆𝐨𝐯𝐞𝐫𝐧𝐚𝐧𝐜𝐞 𝐃𝐢𝐦𝐞𝐧𝐬𝐢𝐨𝐧𝐬: Adrian Cox, Deutsche Bank, highlighted risks like hallucinations, bias, IP & legal exposure, and the need for robust tech & data integration, including legacy systems and privacy safeguards.

● 𝐀𝐮𝐭𝐨𝐧𝐨𝐦𝐲 & 𝐒𝐞𝐜𝐮𝐫𝐢𝐭𝐲: Panels and talks from Matt Adams, Citi, and others reinforced that as AI systems become more autonomous, technical controls and evidence-based security measures are no longer optional - they are mission-critical.

● 𝐀𝐈 𝐢𝐧 𝐏𝐫𝐚𝐜𝐭𝐢𝐜𝐞: Our CEO, Petar Tsankov, shared how evidence-based governance turns compliance into a driver of trust and innovation, with practical steps to move from principles to proof.

The takeaway is clear: 𝐞𝐯𝐢𝐝𝐞𝐧𝐜𝐞-𝐛𝐚𝐬𝐞𝐝 𝐠𝐨𝐯𝐞𝐫𝐧𝐚𝐧𝐜𝐞 𝐢𝐬 𝐭𝐡𝐞 𝐤𝐞𝐲 𝐭𝐨 𝐬𝐜𝐚𝐥𝐚𝐛𝐥𝐞, 𝐬𝐚𝐟𝐞, 𝐚𝐧𝐝 𝐚𝐮𝐝𝐢𝐭𝐚𝐛𝐥𝐞 𝐀𝐈 𝐢𝐧 𝐟𝐢𝐧𝐚𝐧𝐜𝐢𝐚𝐥 𝐬𝐞𝐫𝐯𝐢𝐜𝐞𝐬.

💡 Explore with one of our experts how to apply it in your own AI strategy: https://lnkd.in/dm2ZHJG9

Arena International Events Group
LatticeFlow AI
Software Development
Stadtkreis 5 Industriequartier, Zurich · 11,681 followers
We empower enterprise companies to deliver trustworthy and performant AI models at scale.
About
LatticeFlow AI is a company founded by leading AI researchers from ETH Zurich with the mission to empower organizations to build and deploy trustworthy AI. To learn more about LatticeFlow AI, visit https://latticeflow.ai.
- Website: https://latticeflow.ai
- Industry: Software Development
- Company size: 11–50 employees
- Headquarters: Stadtkreis 5 Industriequartier, Zurich
- Type: Privately Held
- Founded: 2020
- Specialties: AI Governance
Locations
- Primary: Förrlibuckstrasse 70, Stadtkreis 5 Industriequartier, Zurich 8005, CH
- Boulevard "Tsarigradsko shose" 111P, 1784, Sofia, BG
Employees of LatticeFlow AI
-
Sherman Wood
Solution Architect │ AI Strategy & Risk Controls (EU AI Act, OWASP LLM Top 10) │ GenAI Systems (RAG, Evaluations)
-
Jean-Luc Chatelain
Managing Partner | AI CTO | Advisor | Author | Investor | Board Member | Digital Transformation and Analytics
-
Andreas Goeldi
Partner at b2venture | Ex founder and CTO
-
Dave Henry
Updates
-
Rok Šikonja, our Engineering Manager, is back in his home country today - taking the stage at the 27th Quality Day: 100% Quality – Robotics, Digitization & Artificial Intelligence, hosted by the Chamber of Commerce of Dolenjska and Bela Krajina. 🇸🇮

He shared lessons from implementing AI in quality control and how LatticeFlow AI helps teams bring true “quality at scale” into production, making AI work in the real world, not just in demos.

Gospodarska zbornica Dolenjske in Bele krajine
-
-
One of the biggest challenges in AI governance is translating high-level policies into the 𝘁𝗲𝗰𝗵𝗻𝗶𝗰𝗮𝗹 𝗲𝘃𝗶𝗱𝗲𝗻𝗰𝗲 required to build trust in real-world AI systems. That’s exactly what a global wealth management institution set out to solve - by developing deep technical controls that validate how their GenAI systems perform, behave, and can be audited in practice.

In our latest video case study, Angela Carpintieri and Petar Tsankov break down how to build an AI risk management approach that actually works day-to-day: how they connected governance to 𝘁𝗲𝗰𝗵𝗻𝗶𝗰𝗮𝗹 𝗲𝘃𝗮𝗹𝘂𝗮𝘁𝗶𝗼𝗻𝘀, how to 𝗼𝗽𝗲𝗿𝗮𝘁𝗶𝗼𝗻𝗮𝗹𝗶𝘇𝗲 𝗔𝗜 𝗼𝘃𝗲𝗿𝘀𝗶𝗴𝗵𝘁, and how to ensure the AI systems are 𝘁𝗿𝗮𝗻𝘀𝗽𝗮𝗿𝗲𝗻𝘁, 𝗿𝗲𝗹𝗶𝗮𝗯𝗹𝗲, 𝗮𝗻𝗱 𝗮𝗰𝗰𝗼𝘂𝗻𝘁𝗮𝗯𝗹𝗲 from lab to production.

It’s a strong example of how you can help your organization move from 𝗽𝗿𝗶𝗻𝗰𝗶𝗽𝗹𝗲𝘀 𝘁𝗼 𝗽𝗿𝗼𝗼𝗳, strengthening control and compliance without slowing innovation.

📹 𝗪𝗮𝘁𝗰𝗵 𝘁𝗵𝗲 𝗳𝘂𝗹𝗹 𝘀𝘂𝗰𝗰𝗲𝘀𝘀 𝘀𝘁𝗼𝗿𝘆: https://lnkd.in/daQd2xDx

#AIgovernance #LatticeFlowAI
-
LatticeFlow AI reposted this
Spend a bit of time with me, Dan Lucarini, and Matt Mullen this morning listening to us discuss the 2025 Deep Analysis Innovation Awards on the latest We Love Ugly Data podcast. The highlights:

-> I share why we've done these awards for the past 6 years and what we look for in the winners: Does it solve a problem? Does it apply ingenuity? Does it add value? Does it show flexibility?

-> Dan talks about @infrrd, one of our two winners. In part, he was won over by the number of patents awarded for their research in document AI, and their development team deserves recognition for that work.

-> Matt introduces our second winner, LatticeFlow AI. They focus on ensuring that an organization's AI applications meet both internal requirements (think GRC: governance, risk, and compliance) and external ones (regulations such as ISO standards and the EU AI Act), with a focus on both internal data quality and how well individual models are likely to work in conjunction with it.

Listen now. Link below.
-
LatticeFlow AI reposted this
🤖 Great energy at Center for Digital Trust (C4DT), EPFL's conference on #AI #Agents yesterday, exploring how coordinating and self-improving agents is bringing the next wave of AI capability scaling, beyond training scale and reasoning.

During our panel on #AI #Governance for Agents, together with Magda Barska (Senior Manager, Accenture), Michel Jaccard (Founder, id est avocats), Clarissa Valli Buttow (Senior Researcher, UNIL), and moderated by Katherine Loh (C4DT / EPFL), we focused on the practical side of accountability and control:

➡️ The growing gap between AI risk frameworks on paper and the technical controls needed to implement them
➡️ How to move from abstract principles to technical evaluations
➡️ The unique challenges emerging with autonomous agents, third-party AI, and shadow AI inside large organizations

Thanks David Viollier for the invite and Katherine for the great moderation!

LatticeFlow AI
-
-
📢 𝗪𝗲’𝗿𝗲 𝘁𝗮𝗸𝗶𝗻𝗴 𝗽𝗮𝗿𝘁 𝗶𝗻 𝘁𝗵𝗲 𝗔𝗜 𝗶𝗻 𝗙𝗶𝗻𝗮𝗻𝗰𝗶𝗮𝗹 𝗦𝗲𝗿𝘃𝗶𝗰𝗲𝘀 𝗗𝗲𝗲𝗽 𝗗𝗶𝘃𝗲 2025

Our CEO & Co-Founder, Petar Tsankov, will be taking the stage at the AI in Financial Services Deep Dive on 25 November in London. This event brings together leaders from across banking, insurance, and asset & wealth management to explore how AI can deliver measurable ROI while meeting fast-evolving regulatory expectations.

Petar’s session “𝗔𝗜 𝗚𝗼𝘃𝗲𝗿𝗻𝗮𝗻𝗰𝗲. 𝗗𝗼𝗻𝗲 𝗥𝗶𝗴𝗵𝘁” will focus on:
▸ How financial institutions can move from high-level AI principles to measurable, auditable governance
▸ Connecting governance policies with deep technical controls that validate performance, risk, and compliance
▸ How evidence-based governance transforms compliance into a driver of trust and innovation
▸ A real GenAI success case with a leading wealth manager, demonstrating rigorous model testing and monitoring in practice

📍 25 November 2025 - Hilton London Metropole
🔗 Event details: https://lnkd.in/dH9rZyn4

Arena International Events Group
-
-
📹 𝗦𝗲𝘁𝘁𝗶𝗻𝗴 𝗨𝗽 𝗔𝗜 𝗥𝗶𝘀𝗸 𝗠𝗮𝗻𝗮𝗴𝗲𝗺𝗲𝗻𝘁 𝗶𝗻 𝗙𝗶𝗻𝗮𝗻𝗰𝗶𝗮𝗹 𝗦𝗲𝗿𝘃𝗶𝗰𝗲𝘀: 𝗔 𝗦𝘂𝗰𝗰𝗲𝘀𝘀 𝗖𝗮𝘀𝗲

When a global wealth management institution set out to build trust in its AI systems, they knew policies weren’t enough; they needed 𝘁𝗲𝗰𝗵𝗻𝗶𝗰𝗮𝗹 𝗽𝗿𝗼𝗼𝗳. In this video, Angela Carpintieri and Petar Tsankov walk through how to build an AI risk management framework that truly works in practice, where deep technical controls transform principles into clear, actionable insights.

Watch the video to:
✅ Learn how to connect governance, risk, and compliance frameworks with 𝘁𝗲𝗰𝗵𝗻𝗶𝗰𝗮𝗹 𝗲𝘃𝗮𝗹𝘂𝗮𝘁𝗶𝗼𝗻𝘀
✅ See how to 𝗼𝗽𝗲𝗿𝗮𝘁𝗶𝗼𝗻𝗮𝗹𝗶𝘇𝗲 𝗔𝗜 𝗴𝗼𝘃𝗲𝗿𝗻𝗮𝗻𝗰𝗲 across teams and functions
✅ Understand how to 𝗯𝘂𝗶𝗹𝗱 𝘁𝗿𝗮𝗻𝘀𝗽𝗮𝗿𝗲𝗻𝘁, 𝗮𝘂𝗱𝗶𝘁𝗮𝗯𝗹𝗲 𝗔𝗜 𝘀𝘆𝘀𝘁𝗲𝗺𝘀 that drive trust and accountability

🎥 Watch the full success story here 👉 https://lnkd.in/daQd2xDx

It’s a blueprint for how financial institutions can 𝘀𝘁𝗿𝗲𝗻𝗴𝘁𝗵𝗲𝗻 𝗼𝘃𝗲𝗿𝘀𝗶𝗴𝗵𝘁 𝗮𝗻𝗱 𝘀𝘁𝗮𝘆 𝗰𝗼𝗺𝗽𝗹𝗶𝗮𝗻𝘁, without slowing innovation.
-
-
📢 This Wednesday, Nov 19th, our CEO and Co‑Founder, Petar Tsankov, will join some of Europe’s leading experts at the 𝗖4𝗗𝗧 𝗖𝗼𝗻𝗳𝗲𝗿𝗲𝗻𝗰𝗲: 𝗔𝗻𝘁𝗶𝗰𝗶𝗽𝗮𝘁𝗶𝗻𝗴 𝘁𝗵𝗲 𝗔𝗴𝗲𝗻𝘁𝗶𝗰 𝗘𝗿𝗮 – 𝗔𝘀𝘀𝗲𝘀𝘀𝗶𝗻𝗴 𝘁𝗵𝗲 𝗗𝗶𝘀𝗿𝘂𝗽𝘁𝗶𝗼𝗻𝘀 𝗯𝘆 𝗔𝗜 𝗔𝗴𝗲𝗻𝘁𝘀.

Petar’s panel, “𝗚𝗼𝘃𝗲𝗿𝗻𝗮𝗻𝗰𝗲, 𝗔𝗰𝗰𝗼𝘂𝗻𝘁𝗮𝗯𝗶𝗹𝗶𝘁𝘆, 𝗮𝗻𝗱 𝗥𝗲𝗴𝘂𝗹𝗮𝘁𝗶𝗼𝗻 𝗼𝗳 𝗔𝗜 𝗔𝗴𝗲𝗻𝘁𝘀,” will dive into big questions - from oversight and reporting of autonomous agents, to how we define responsibility and enforce accountability when things go wrong. Expect a dynamic exchange with thought leaders from Accenture, id est avocats, and the University of Lausanne - UNIL.

We're passionate about building ethical, resilient AI, and this panel promises lively discussion on frameworks that ensure agentic systems remain trustworthy as they grow more powerful. If you’re as curious about the future of AI governance as we are, join us at the Starling Hotel in Saint-Sulpice. Let’s shape the way forward together!

📝 Registration is FREE but required: https://lnkd.in/dvKeqjGH
🗓️ Date: Wednesday, November 19th, 2025 | 09:30 - 17:30 CET
📍 Venue: Starling Hotel Conference Center, 1025 Saint-Sulpice

Center for Digital Trust (C4DT), EPFL
#EPFL #C4DT #LatticeFlowAI
-
-
🎓 We’re thrilled to welcome Muyang Du, a master’s student from ETH Zürich, who has chosen to do her Master’s thesis with us. 🎉

“I'm excited to write my thesis at LatticeFlow AI because it’s the perfect place to bridge the gap between AI innovation and real-world trust,” shared Muyang.

We love collaborating with young talents who are passionate about building trustworthy and responsible AI, and we can’t wait to see where Muyang’s journey takes her.
-
-
🚀 𝗪𝗲 𝗷𝘂𝘀𝘁 𝗹𝗲𝘃𝗲𝗹𝗲𝗱 𝘂𝗽 𝘁𝗿𝘂𝘀𝘁 𝗶𝗻 𝗔𝗜. LatticeFlow AI is now 𝗦𝗢𝗖 𝟐 𝗧𝘆𝗽𝗲 1 𝗮𝘁𝘁𝗲𝘀𝘁𝗲𝗱 - proof that we don’t just talk compliance, we live it. Type 2 is on the way.

This isn’t just a badge. It’s 𝗿𝗲𝗮𝗹 𝗮𝘀𝘀𝘂𝗿𝗮𝗻𝗰𝗲 for our customers that our systems and processes meet the highest standards in 𝘀𝗲𝗰𝘂𝗿𝗶𝘁𝘆, 𝗿𝗲𝗹𝗶𝗮𝗯𝗶𝗹𝗶𝘁𝘆, 𝗮𝗻𝗱 𝗴𝗼𝘃𝗲𝗿𝗻𝗮𝗻𝗰𝗲 - the same standards we help them achieve in AI.

💡 Curious how we do it? Dive into the details here: https://lnkd.in/dcZUTumz

#TrustworthyAI #SOC2 #AIGovernance
-