A Manhattan federal judge has delivered a significant ruling on artificial intelligence and legal practice: can you claim legal privilege over AI-generated documents? It's a potentially major blind spot for organisations, and a major responsibility for in-house lawyers to explain to their non-legal colleagues.

On 10 February, U.S. District Judge Jed Rakoff ruled in USA v. Heppner that a criminal defendant could not claim attorney-client privilege over documents he had prepared himself using an AI service and then sent to his lawyers. Bradley Heppner, former chairman of GWG Holdings, faces fraud charges over an alleged $150 million scheme, with trial set for April. But the privilege ruling carries significance beyond any single case.

The reasoning rests on a principle that long predates artificial intelligence. Privilege protects confidential communications between lawyer and client made for the purpose of legal advice. It does not automatically attach to materials a client creates independently simply because those materials are later forwarded to counsel. What matters is how the document came to exist, not its destination.

What AI changes is the scale of the problem. Generative AI tools now allow any executive to produce polished case narratives, issue summaries, and chronologies that resemble legal work product, all without a lawyer's involvement. The natural instinct is to assume that once these materials are emailed to counsel, they enter the protected sphere. Judge Rakoff's ruling suggests otherwise: the court's focus is on what the document is and how it came to exist, not on the fact that it was subsequently routed to a lawyer. This matters because AI is rapidly becoming the default tool through which businesspeople process complex situations.
An executive facing a regulatory investigation who uses a chatbot to organise the facts and draft a summary for their lawyer may be creating discoverable material that sits entirely outside the privileged relationship.

Judge Rakoff also noted that the AI-generated materials could prove "problematic" if used at trial. Even where privilege is not the issue, AI-authored documents create genuine evidential difficulties: questions of authorship, accuracy, hearsay characterisation, and the optics of presenting AI-mediated narratives as though they were direct recollection.

If you want AI-assisted materials to have any chance of privilege protection, "client-produced and then forwarded to counsel" was always the weakest fact pattern. After this ruling, in the US at least, it may be no fact pattern at all.
AI in Legal Practice
-
🚨 It's 2025, but many lawyers are still making the SAME MISTAKES while using AI. Here's the latest case and what EVERY lawyer should know:

Last week, lawyers representing a family in a lawsuit against Walmart and Jetson Electric Bikes admitted to using AI after the judge said nearly ALL cases cited did not exist.

The judge wrote: "Plaintiffs cited nine total cases: (...) The problem with these cases is that none exist, except (...). The cases are not identifiable by their Westlaw cite, and the Court cannot locate the District of Wyoming cases by their case name in its local Electronic Court Filing System. Defendants aver through counsel that 'at least some of these mis-cited cases can be found on ChatGPT.' [ECF No. 150] (providing a picture of ChatGPT locating "Meyer v. City of Cheyenne" through the fake Westlaw identifier). Additionally, some of Plaintiffs' language used for explaining the "Legal Standard" is peculiar. (...)"

The lawyers then answered: "The cases cited in this Court's order to show cause were not legitimate. Our internal AI platform 'hallucinated' the cases in question while assisting our attorney in drafting the motion in limine. This matter comes with great embarrassment and has prompted discussion and action regarding the training, implementation, and future use of artificial intelligence within our firm. This serves as a cautionary tale for our firm and all firms, as we enter this new age of AI."

→ My comments:

1. Lawyers will always be FULLY RESPONSIBLE for the legal work they perform. "Our AI system hallucinated" will never be accepted as a legal excuse (it's the equivalent of a child saying "my dog ate my homework" at school). Lawyers should consider that when opting to use AI to perform any legal work (including reviewing, researching, drafting, etc.).

2. It's bad for any lawyer's or law firm's reputation to admit that they didn't review the legal work they were paid to do (and let the AI system do it instead). Law firms with an open and lenient AI policy are taking on high risk.

3. A reminder that ALL existing generative AI applications have some rate of hallucination, meaning their developers cannot promise that outputs will be 100% accurate or based on factual sources. Lawyers, on the other hand, are paid, among other things, to provide accurate legal advice grounded in evidence and factual knowledge. Any AI company targeting legal professionals should bear that in mind.

4. General-purpose AI systems like ChatGPT - without additional guardrails or fine-tuning that account for the peculiarities of legal work - are likely not suitable for legal professionals and should be avoided.

♻️ If you have lawyers in your network, share this with them.

👉 NEVER MISS my AI governance updates [especially if you are a lawyer!]: join 52,600+ readers who receive my weekly newsletter (subscribe below).

#AI #AIGovernance #Law #AIRegulation #Lawyers #AIPolicy #LegalWork
-
CFO to General Counsel last week: "I read that AI can review contracts now. Why do we still need three legal FTEs?"

GC's internal monologue: "Because AI can't negotiate with an angry customer at 9 AM, navigate a GDPR audit at 11 AM, and explain to the board why that 'simple contract' could expose us to €2M liability at 3 PM."

Welcome to 2025, where every General Counsel is expected to:
✅ Implement AI to "cut costs"
✅ Reduce legal headcount
✅ Still deliver faster contract turnarounds
✅ Maintain zero risk tolerance
✅ Be a strategic business partner

All by yesterday. Preferably with no budget.

Here's what leadership sees:
--> AI reviews 100 contracts in minutes!

Here's what they miss:
--> Who reviews the AI's output?
--> Who handles the 15 edge cases it can't process?
--> Who negotiates when the customer pushes back?
--> Who coordinates with Sales, Finance, and IT?
--> Who makes the final call on acceptable risk?

The pressure is real. CFOs read one article about "AI replacing lawyers" and suddenly expect the legal department to automate itself out of existence.

But here's the truth: AI is powerful for legal teams - when used right. The goal isn't to replace lawyers. It's to free them from the repetitive work that buries them:
→ Initial contract reviews and risk flagging
→ Answering the same compliance questions repeatedly
→ Tracking obligations and renewals
→ Generating routine agreements

That gives your team capacity for what actually matters: strategic negotiation, risk assessment, business partnership, and preventing the fires nobody sees.

Smart legal leaders aren't asking "How do I replace my team with AI?" They're asking "How do I use AI to make my team 10x more effective?"

How is your leadership team thinking about AI in legal right now?
-
3 Workflows I've Automated for in-house teams.
① Ask Legal
② Procurement
③ Contract Review (not just the review!)

1. Ask Legal [or any department for that matter 🤷🏼♀️]

You've heard me talk about legal teams and knowledge management. Long story short, your legal team is answering the same 20 questions over and over 😵💫

A simple way to save a CHUNK of time answering questions from the business (enabling them to go faster), ALL while keeping complete control and a human in the loop?

↪️ Set up an 'Ask Legal' bot in your comms platform.
↪️ Sync it with your knowledge base (e.g. GDrive/Notion/SharePoint).
↪️ Set up your custom instructions (want it to tag Bob on privacy questions only, specifically on a Tuesday? No problem).
↪️ Don't want the answer to go straight out to the business without review? Cool, turn on co-pilot mode.

The result? 60-80% fewer repetitive queries. Your team focuses on the high-value work that needs a human lawyer.

2. Procurement

Businesses have hundreds of tools, but when departments don't speak to each other you end up with duplicate tools and subscriptions 😭 💵 🚽. What if the business could find out in <1 minute whether a tool covering their needs already exists, before spending hard-secured department budget? Moreover, what if I told you they could kick off the internal procurement process from the comfort of your comms platform?

Team member: "Do we already have a tool for X?" in Slack/Teams
✅ Bot checks the knowledge base (policies, procurement tool).
✅ If a match is found, it shares the approved tool and the owner to contact.
✅ If not, the bot asks the user for more info and directs them to the next steps to kick off the procurement process from inside Slack/Teams.

Ensuring your users ACTUALLY follow the process, without adding friction. Did I just see your CFO cry tears of joy?

3. Third-Party Vendor Contract Review & Project Management

Getting AI to redline a contract (as a first pass) is a huge win, but there are still other pieces of the process missing, like:
🤷🏼♀️ The business figuring out IF legal review is even needed (according to company policy).
📨 The business actually submitting the contract to legal.
😩 Managing review capacity within the legal team.
🖥️ Getting the legal team to log and update the PM tool.

The list never ends. Automate those handoffs too, and legal reviews only what actually needs their eyes, turnaround times improve, and the business stops pinging the team for "update pls?" in Slack : )

TLDR: Most legal teams are drowning in admin work that could be automated. I've built all of these using simple processes and tools (that I've found most businesses already have). You also know I love a good Figma flow, so I've built one for each of the three above (see a sneak peek below). Want the entire thing? Comment "FLOWS" and I'll send them over. Also, tell me what you want to see - more of the above, or step-by-step how-to-build videos?
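For illustration only, the 'Ask Legal' pattern above can be sketched in a few lines of routing logic. Everything here is hypothetical (the topics, escalation rules, and function names are mine, not from any real deployment); a production version would sit behind the Slack/Teams API and a synced knowledge base rather than in-memory dicts:

```python
# Hypothetical sketch of an "Ask Legal" triage bot: answer from a knowledge
# base when possible, otherwise escalate to a named owner or a shared queue.

KNOWLEDGE_BASE = {
    "nda turnaround": "Standard NDAs are reviewed within 2 business days.",
    "data retention": "Customer data is retained per the retention policy.",
}

ESCALATION_RULES = {
    "privacy": "@bob",          # e.g. tag Bob on privacy questions
    "litigation": "@legal-leads",
}

def answer_question(question: str, copilot: bool = False) -> dict:
    """Return a knowledge-base answer, or an escalation record."""
    q = question.lower()
    for topic, answer in KNOWLEDGE_BASE.items():
        if topic in q:
            # Co-pilot mode: a lawyer approves the draft before it is sent.
            status = "pending_review" if copilot else "sent"
            return {"status": status, "answer": answer}
    for keyword, owner in ESCALATION_RULES.items():
        if keyword in q:
            return {"status": "escalated", "owner": owner}
    return {"status": "escalated", "owner": "@ask-legal-queue"}

print(answer_question("What's our NDA turnaround time?", copilot=True))
```

The point of the sketch is the shape of the workflow, not the matching logic: known questions get a canned, reviewable answer, and everything else lands with a human.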
-
Legal AI isn’t coming. It’s already a full market map.

Contracts, research, negotiations, patents – that’s the engine of legal work. It’s dense and high-risk. And AI helps navigate it with remarkable efficiency. Here’s a simple map of where AI fits:

1. Contract Management - tracks obligations, deadlines, and risky clauses across large contract portfolios. What this changes: missed details cost money.
2. Legal Research & Intelligence - scans case law and regulations in seconds. What this changes: better research, stronger decisions.
3. Document Drafting & Negotiation - generates drafts and compares versions during negotiations. What this changes: less formatting, more strategy.
4. Workflow-Specific Tools - automates tasks in compliance, litigation, due diligence, and e-discovery. What this changes: faster processes, fewer errors.
5. IP Management - manages trademarks, copyrights, and patent portfolios. What this changes: IP is often a company’s biggest asset.
6. IP Protection & Monetization - monitors infringements and identifies licensing opportunities. What this changes: protect ideas and turn them into revenue.
7. Patent Drafting & Review - supports precise claim writing and consistency checks. What this changes: small wording gaps can weaken protection.
8. Patent Research - searches global databases for prior art. What this changes: avoids costly rejections.

Let machines scan. And let humans decide. That balance is exactly where platforms like e! by Lexemo come in. Not “AI for everything,” but structured legal automation where AI is used deliberately and transparently.

If you’re thinking about automating legal processes, take a look: https://lnkd.in/g49NngZn

Have you ever had situations where AI helped you sort out a legal mess? Or avoid one altogether?

#LegalTech #Automation #AutoMate
-
Three major developments in the last week should have every HR leader, employer, and AI vendor paying attention:

1. The AI Civil Rights Act was reintroduced in the US Congress. Led by Senator Ed Markey and Representative Yvette D. Clarke, this legislation places hard guardrails around AI and algorithmic systems used in decisions related to hiring, housing, healthcare, and beyond. It demands transparency, bias testing, and accountability. Think of it as GDPR for bias, but with broader implications across HR, tech, and operations. “We will not allow AI to stand for Accelerating Injustice.” – Senator Ed Markey

2. California’s new workplace AI discrimination laws are now in effect. The new rules governing companies' use of automated decision-making technology (ADMT) will likely make companies liable for hiring practices where such a system violates anti-discrimination laws. As other U.S. states implement similar ADMT protections, companies deploying the technology will need to be proactive about record keeping and vetting third parties, while auditing their own tools to understand how the software functions. It’s no longer enough to trust your tools and vendors; you must prove they’re fair.

3. Insurers are backing away from covering AI risks. AIG, Great American, and WR Berkley are asking regulators to exclude AI-related liabilities from their policies. Why? Because the risks (from chatbots hallucinating to algorithmic bias in hiring) are seen as “too opaque, too unpredictable.” When insurers are pulling cover, it’s a warning sign: you own the risk.

👁 What this means for HR and recruitment business leaders: we’ve officially entered the age of AI Accountability. That means:
✅ You need visibility into how your AI systems work, especially if they’re used for hiring, performance management, or workforce planning.
✅ You must audit your HR tech stack (yes, that includes Workday, ATS platforms, and even AI resume screeners).
✅ You need to document fairness, not just assume it.
✅ You must rethink your contracts with AI vendors. If the tech goes wrong, insurers may not have your back.

🛡 If you haven’t already, it’s time to start building your AI Governance Playbook.
📌 Audit all AI tools in use
📌 Build an internal AI ethics committee
📌 Ensure legal, DEI, and HR alignment on tool deployment
📌 Partner only with vendors offering bias mitigation, auditability, and indemnification
-
Cutting through the AI noise - here are 5 use cases for generative AI in a law practice today:

1) Having AI draft initial responses to standard discovery requests, pulling directly from client documents and past cases - turning 3 hours of document review into 20 minutes of attorney verification.
2) Using AI to analyze deposition transcripts and build detailed witness chronologies, flagging inconsistencies and potential credibility issues that could be crucial at trial.
3) Feeding settlement agreements from similar cases to AI to generate initial settlement terms, helping attorneys start negotiations with data-backed proposals rather than gut instinct.
4) Having AI review client intake forms and past matters to spot potential conflicts of interest - moving beyond simple name matching to identify subtle relationship patterns.
5) Using AI to draft routine motions and pleadings by learning from the firm's document history, maintaining consistent arguments while adapting to case-specific facts.

The real value isn't replacing attorney judgment. It's eliminating the mechanical tasks that keep great lawyers from doing their best work.

What specific AI applications are you seeing succeed (or fail) in your practice?

#legaltech #innovation #law #business #learning
-
When judges sanction lawyers for using AI-generated fake citations, it might seem like an ethics story, but it is really a product story.

Across the country, lawyers are facing consequences after submitting filings with fake case citations created by generative AI. In one high-profile example, a large firm apologized to a bankruptcy court for unverified AI output and rewrote its internal policies. Judges are sending a clear message: using AI without verification isn’t innovation. It’s negligence.

What we are seeing in courtrooms is what happens when tools are deployed without clear frameworks and when governance lags behind innovation. These are not simply bad lawyers using bad technology. They are professionals missing the product counsel mindset: build, test, verify, and iterate responsibly.

AI is exposing something bigger than hallucinations. It is exposing organizational immaturity. For anyone building or scaling products that rely on AI, this moment carries an important message. Governance must be treated as a feature, not a policy. If your AI workflow lacks validation checkpoints, it is not a compliant product but a risk waiting to be exposed. Compliance should not be seen as a blocker. It is the trust layer that allows innovation to move faster. The firms writing real, working AI policies will outpace those treating AI as a shortcut, because trust compounds faster than speed.

Counsel must evolve into designers. Modern lawyers are not just issue spotters. They are system architects making sure innovation operates at the speed of integrity. The AI hallucination problem is not about fiction. It is about friction between ambition and accountability. The next generation of leaders will not just use AI better. They will turn trust itself into a product.

In the end, every AI output is a product, and every product quietly tells the truth about how it was built and about your judgment and professionalism.

-------- Olga V. Mack
-
Anthropic just released Legal Plugins for Claude. They went straight to the lawyers.

The Legal plugin can:
→ Automate contract review against your firm's playbook, flagging deviations and generating redlines with business impact analysis
→ Triage NDAs instantly into GREEN (standard approval), YELLOW (counsel review), or RED (significant issues) categories
→ Integrate with your existing tools (Slack, Box, Microsoft 365, CLMs) via MCP to handle vendor checks, compliance workflows, and templated responses

Is there still value in the application layer? Yes - but legal tech startups can't compete on plugins alone anymore. Simply building a Slack bot isn't enough of a moat when Anthropic offers these out of the box. The differentiation now comes from what foundation models can't easily replicate: robust governance frameworks, enterprise-grade security and compliance infrastructure, fast iteration cycles that respond to specific legal workflows, and purpose-built workspaces designed for how lawyers actually work.

But why now? It's clear that both Anthropic and OpenAI need to increase revenues quickly. They've taken on billions in funding and want to IPO. To get there, they need more cash flow - hence targeting business users. Legal is a natural focus area.

My takeaway: not all lawyers just do redlining and NDAs! So many legal AI tools focus on these tasks as if they were the entirety of legal work. But this is a step in the right direction. Especially amid the recent boom of DIY legal tech tools, it seems likely some companies will adopt this - especially those already holding Claude Enterprise subscriptions.

Note: the plugins are still in "Research Preview," and Anthropic explicitly warns against using them for regulated workloads for the time being.

I'll be doing a deep dive into this in my newsletter this Sunday. Don't miss it: https://lnkd.in/eNXHfEX3
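The GREEN/YELLOW/RED triage idea is worth pausing on, because it is really just a playbook comparison. As a minimal sketch (the clause names and rules below are invented for illustration and are not Anthropic's actual plugin logic):

```python
# Hypothetical NDA triage: compare extracted clauses against a playbook.
# Clause labels here are illustrative, not from any real product.

STANDARD_CLAUSES = {"mutual confidentiality", "2-year term", "delaware law"}
RED_FLAGS = {"non-compete", "ip assignment", "unlimited liability"}

def triage_nda(clauses: set) -> str:
    """Classify an NDA by its clause set against the playbook."""
    if clauses & RED_FLAGS:
        return "RED"      # significant issues: straight to counsel
    if clauses <= STANDARD_CLAUSES:
        return "GREEN"    # everything matches the playbook: standard approval
    return "YELLOW"       # deviations present: counsel review
```

In practice the hard part is the clause extraction (which is where the language model earns its keep); the routing itself, as the sketch shows, is simple set logic against a firm-maintained playbook.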
-
𝟐𝟎 𝐄𝐧𝐭𝐞𝐫𝐩𝐫𝐢𝐬𝐞 𝐀𝐈 𝐂𝐨𝐦𝐩𝐥𝐢𝐚𝐧𝐜𝐞 𝐑𝐞𝐪𝐮𝐢𝐫𝐞𝐦𝐞𝐧𝐭𝐬 𝐁𝐞𝐟𝐨𝐫𝐞 𝐘𝐨𝐮 𝐃𝐞𝐩𝐥𝐨𝐲 𝐀𝐈

Most enterprise AI failures are not technical. They are compliance failures. Before deploying AI into production, here are the 20 non-negotiables:

1. Appoint an AI Accountability Leader - assign a senior executive responsible for AI compliance, oversight, and reporting.
2. Establish a Cross-Functional AI Board - include legal, security, HR, data, and business teams for governance and approvals.
3. Define the Legal AI Role - clarify provider versus deployer obligations and compliance responsibilities.
4. Maintain Technical Documentation - document architecture, data sources, performance metrics, and intended-use limitations.
5. Disclose AI Usage Transparently - notify users about AI interactions and synthetic content.
6. Publish Model Transparency Reports - document purpose, performance across demographics, limits, and out-of-scope scenarios.
7. Implement Logging and Audits - track inputs, outputs, versions, and decisions for investigations and traceability.
8. Ensure Decision Explainability - provide meaningful explanations and enable human review of high-impact decisions.
9. Create a Comprehensive AI Inventory - document all AI systems, APIs, models, and embedded SaaS tools.
10. Develop an AI Acceptable Use Policy - define permitted uses, prohibited activities, and approved data types.
11. Classify AI Risk Levels - categorize systems into prohibited, high, limited, or minimal risk tiers.
12. Conduct Formal Risk Assessments - identify harms, discrimination risks, and safety issues before deployment.
13. Test for Bias Regularly - evaluate outputs across protected groups and document mitigation steps.
14. Review Third-Party AI Risk - assess vendor compliance, contracts, liabilities, and regulatory responsibilities.
15. Govern Training Data Legality - track licenses, avoid unauthorized scraping, and respect copyrights.
16. Perform Required DPIAs - assess high-risk personal data processing under GDPR and similar regulations.
17. Confirm a Lawful Data Basis - verify consent, contractual necessity, or legitimate interest before processing data.
18. Apply Data Minimization Rules - limit data usage and enforce strict retention schedules.
19. Secure AI Infrastructure Assets - protect pipelines, weights, APIs, and model endpoints with strong controls.
20. Support Data Subject Rights - enable access, correction, deletion, restriction, and automated-decision opt-outs.

The real shift in enterprise AI is this: from model performance to governance readiness. From proof of concept to regulatory durability. If your AI cannot pass audit, it cannot scale. Compliance is not friction. It is infrastructure.

PS: If you found this valuable, join my weekly newsletter where I document the real-world journey of AI transformation.
✉️ Free subscription: https://lnkd.in/exc4upeq

#EnterpriseAI #AIGovernance #ResponsibleAI
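Of the twenty, requirement 7 (logging and audits) is the easiest to start on today. As a minimal sketch of what an append-only audit record might capture (the field names are my own, not from any standard or regulation):

```python
# Hypothetical audit-trail helper: record model inputs, outputs, versions,
# and decisions so high-impact outcomes can be traced and investigated later.

import datetime

def log_ai_decision(log: list, model_version: str, user_input: str,
                    output: str, decision: str) -> dict:
    """Append one timestamped, versioned decision record to the audit log."""
    entry = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "model_version": model_version,   # which model produced the output
        "input": user_input,              # what the system was asked
        "output": output,                 # what the model returned
        "decision": decision,             # what action was ultimately taken
    }
    log.append(entry)
    return entry

audit_log = []
log_ai_decision(audit_log, "screening-model-v1.2", "candidate profile A",
                "score: 0.82", "advanced to interview")
```

Storing the decision alongside the raw model output matters: when an auditor (or a data subject exercising their rights) asks why something happened, you can show both what the model said and what a human did with it.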