The New York Times v. OpenAI started as a copyright dispute but has rapidly evolved into one of the most important cases yet on AI data governance and privacy. In a new Corporate Governance Insights article, Nelson Mullins attorneys Jeff Kelly, Scott Sherman, Adrianne Bauman-Cleven, and J. Matthew Gorga examine how a court order requiring OpenAI to preserve ChatGPT user logs has triggered complex questions around data retention, international compliance, and eDiscovery obligations. This decision impacts more than 400 million users and challenges long-held assumptions about deleted data, vendor privacy commitments, and the scope of legal discovery in the age of AI. For companies implementing AI tools, the case underscores the need to reassess data governance frameworks, review vendor agreements, and prepare for a new wave of discovery-related risks. Read the full article for insights on what this means for enterprise use of AI, litigation strategy, and legal risk management. Link to article: https://lnkd.in/epWEJVDX
NYT v OpenAI: AI data governance and privacy case
More Relevant Posts
-
Italy has become the first EU country to enact a national AI law, ahead of the bloc’s full regulations. The law imposes criminal penalties for misuse, like deepfakes or fraud, and demands human oversight in sectors such as healthcare and judicial decisions. This move is a strong reminder that the AI conversation is shifting from innovation to liability. For legal practitioners, that means staying ahead on regulation, ensuring your clients’ use of AI complies with evolving rules, and understanding how oversight requirements will interact with intellectual property, data use, and liability exposures. The broader insight: early adopters of AI will need more than technical readiness. They’ll need legal and compliance strategies baked in. As jurisdictions move faster than ever, the firms that best anticipate and interpret AI regulation will lead, not just survive, in the new landscape. #AIRegulation #LegalTech #Compliance #IPLaw
-
AI is already in your workplace. The question is not if your employees are using it, but how. Associations like the Association of Corporate Counsel are now bringing critical attention to the legal and compliance risks of AI in the enterprise. Their recent session, AI in the Workplace: Approaches to Address Legal and Copyright Risks, highlighted what many corporate counsel already know: 💠 Copyright exposure from AI-generated content 💠 Data privacy and compliance challenges that fall squarely on legal teams 💠 The need for practical frameworks to govern AI use responsibly Here is the reality. AI agents are no longer experimental tools. They are decision makers. They negotiate, approve, and trigger real business outcomes. And if they are making decisions, they must be governed with the same rigor as human coworkers. That is why we built AptlyDone.com. Our platform allows organizations to: ✅ Assign authority to AI agents with the same clarity applied to employees ✅ Set limits, expiration rules, and escalation pathways for digital approvals ✅ Maintain a complete, auditable trail of every AI and human delegation decision For legal, compliance, and risk leaders, this is not only about avoiding fines or failed audits. It is about building the governance infrastructure that allows innovation and accountability to grow together. The Association of Corporate Counsel is right: the risks are real. The good news is that the tools to manage them already exist. ➡️ Learn how Aptly ensures AI agents are governed like trusted colleagues at AptlyDone.com https://lnkd.in/gGNBFdY3 Roanie Levy Chris Drummer Shane Tierney Edima Elinewinga, CAE David Bamlango Motty John Marc DiGianni Nocera
🚨 AI is already in your workplace—are you ready for the risks? Employees are using AI tools whether policies are in place or not, and that creates hidden legal and compliance challenges for corporate counsel. Happening Today at 11:00 AM ET: AI in the Workplace: Approaches to Address Legal & Copyright Risks Featuring Roanie Levy, Legal Advisor at Copyright Clearance Center (CCC), this session will explore: - Copyright risks tied to AI-generated content - Data privacy and compliance challenges - Practical steps for AI governance and risk mitigation The risks are real — do you have the tools to manage them? 👉 Register for the webinar here: https://lnkd.in/ef2HnY_8 Sponsored by Copyright Clearance Center. #ACC #AI #LegalRisks #InhouseCounsel #Governance
-
OpenAI faces a potential billion-dollar fine as authors and publishers claim the company used pirated books for AI training. Citing internal messages about a deleted dataset, plaintiffs allege intentional copyright infringement and are pushing to access privileged legal communications. This case underscores the significant intellectual property and financial risks for enterprises leveraging large language models, especially following Anthropic’s recent $1.5 billion settlement over similar allegations. It’s a critical reminder for leaders to prioritize IP governance and partner due diligence in their AI strategy. #AIForCEO #AICopyright #RiskManagement For more articles like this, register for our weekly newsletter: https://lnkd.in/ejYfVBEQ
-
Businesses cannot treat AI like just another software tool. Instead, they must understand when and how disclosure, consent and oversight obligations apply. By preparing now, Utah businesses can turn compliance into a competitive advantage — demonstrating both innovation and responsibility in the AI era. Sponsored by Parsons Behle & Latimer
-
The EU Artificial Intelligence Act is a regulation passed by the EU (Regulation (EU) 2024/1689) that establishes a risk-based regulatory framework for AI systems. It categorizes uses of AI into different risk levels (unacceptable, high-risk, limited risk, minimal risk) and imposes corresponding obligations (transparency, human oversight, safety, etc.). The goal is to ensure AI respects fundamental rights, privacy, cybersecurity, and ethics, while also enabling innovation. Within the EU, Italy has moved quite swiftly and is now the first member state to adopt a comprehensive national law that aligns with and builds upon the EU AI Act. Key facts: On 17 September 2025, the Italian Parliament (Senate) definitively approved a national law on AI (Law No. 1146-B) that integrates many of the EU AI Act’s requirements. This law adds, among other things, additional obligations in certain sectors and imposes criminal penalties. Below is the link to what our Negar Modirrousta has put into writing on this issue: https://lnkd.in/eGqUXRp7
-
Conn Kavanaugh partner Michael J. Rossi shares his perspective with Massachusetts Lawyers Weekly on the ethical challenges AI presents for the legal profession, emphasizing that issues of data privacy and client confidences may prove more complex than concerns over inaccurate or fabricated case citations. Read more: https://lnkd.in/g_qXz97M #AI #ConnKavanaugh #MassLawyersWeekly
-
California's new AI laws are reshaping how U.S. businesses must operate. From deepfakes to data privacy, these regulations set a precedent for AI governance. It's essential to stay informed of continuing developments in the AI space. We cover these latest developments in the Jones Walker LLP AI Law and Policy Navigator. With Andrew R. Lee and Graham Ryan. #AIRegulation #CaliforniaLaw #BusinessCompliance https://lnkd.in/e6iBzFvR
-
💡 OpenAI just got a small legal win, but the bigger story is what it means for every AI company’s data trail A federal judge has lifted the order that forced OpenAI to indefinitely preserve all ChatGPT logs in The New York Times lawsuit. The company no longer has to store every user conversation, only those linked to accounts the NYT specifically flagged. ⚖️ This sounds procedural, but it’s actually strategic. It signals how courts may start differentiating between blanket data retention and targeted discovery. That distinction could shape the way AI firms balance transparency, privacy, and liability in the months ahead. For leaders building or deploying AI, this is a quiet reminder: governance is not just about what data you keep. It is also about what you no longer need to keep, and whether you can defend that decision. How should AI companies and enterprise users rethink their data retention policies in light of evolving legal scrutiny? #AI #OpenAI #Privacy #DataGovernance #Copyright #LegalTech #Leadership #TechPolicy
-
A global ripple: AI training litigation is crossing borders — India, US, UK, Canada — challenging traditional IP norms. Are you watching closely? Lately there’s been a clear surge in high-stakes data and IP litigation around AI training across multiple jurisdictions. In India, ANI Media has sued OpenAI before the Delhi High Court, alleging its news content was used to train ChatGPT without permission; the matter is ongoing with oral arguments underway [1]. In the United States, similar claims have been brought by major news publishers - The New York Times and Daily News - against OpenAI and Microsoft in the Southern District of New York [2], while a class action by authors against Anthropic in the Northern District of California [3] has moved toward a landmark settlement of US$1.5 billion. The United Kingdom [4] has seen Getty Images proceed against Stability AI over alleged copyright, database right and trade mark infringements (and passing off). And in Canada [5], a coalition of leading news outlets has sued OpenAI over alleged large-scale scraping of their content. Collectively, these cases reveal a defining question of our time - how do we balance the data appetite of AI with the rights of those who created it? As courts worldwide begin to draw these boundaries, we’re not just watching the evolution of law - we’re watching the legal architecture of AI being built in real time. #AILitigation #IntellectualProperty #CopyrightLaw #AIRegulation #DataRights #TechLaw #AIandLaw #LegalInnovation #ArtificialIntelligence #TMT #TechLitigation #CommercializationOfData -- [1] ANI Media Pvt. Ltd. vs Open AI OPCO LLC; Case No. CS(COMM) 1028/2024 [2] The New York Times Company vs Microsoft Corporation, Open AI et al.; Case No. 23-cv-11195 (SHS); Daily News LP, et al. vs Microsoft Corporation, Open AI et al.; Case No. 24-cv-3258 (SHS); The Center for Investigative Reporting, Inc. vs Open AI et al.; Case No. 24-cv-4872 (SHS) https://lnkd.in/gcT78V-A [3] Bartz et al v. Anthropic PBC; Case No. 3:24-cv-05417 https://lnkd.in/gpXYuqzw [4] Getty Images (US) Inc & Ors vs Stability AI Ltd; Case No: IL-2023-000007 https://lnkd.in/gHgurdyA [5] See Toronto Star Newspapers Limited et al. vs OpenAI Inc. et al.; Case No. CV-24-00732231-00C https://lnkd.in/gKd6RmFb