Microsoft AI chief Mustafa Suleyman recently sparked controversy by asserting in an interview that anything published on the open web becomes "freeware" for AI use. This bold claim challenges established norms and has significant implications for copyright law and AI ethics. It is particularly controversial given the ongoing legal battles faced by Microsoft and OpenAI, which have been accused of using copyrighted material without permission to train their AI models. Understanding the nuances of this issue is critical, as it touches on complex copyright laws, fair use interpretations, and the ethical use of online content.
⚖️ Copyright Laws: In the US, any created work is automatically protected by copyright, and publishing it on the web does not waive these rights.
🤖 Fair Use Misconceptions: Fair use is determined by courts based on specific criteria, including the purpose of use, the nature of the work, the amount used, and the effect on the market, not by a "social contract."
📄 Robots.txt: A robots.txt file can specify which bots are allowed to scrape a site, but it is not legally binding, and compliance is voluntary.
📉 Legal Battles: Microsoft and OpenAI face multiple lawsuits for allegedly using copyrighted content without permission, highlighting the ongoing legal disputes over AI training practices.
🌐 Ethical Considerations: The ethical use of online content by AI companies remains a hotly debated issue, with significant implications for content creators and AI developers alike.
Suleyman's comments underscore the urgent need for clear guidelines and robust legal frameworks to govern the use of online content in AI development. Such measures are crucial to ensuring that the rights of content creators are respected and that AI companies operate within the bounds of the law.
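Since robots.txt compliance is voluntary, a concrete example makes the point: the file below asks two publicly documented AI crawlers (OpenAI's GPTBot and Common Crawl's CCBot) not to fetch anything, while leaving the site open to other bots. This is an illustrative sketch; nothing technically or legally forces a scraper to honor it.

```
# robots.txt: a request, not an enforceable rule
User-agent: GPTBot      # OpenAI's web crawler
Disallow: /

User-agent: CCBot       # Common Crawl's crawler
Disallow: /

User-agent: *           # all other bots
Disallow:               # empty Disallow = nothing is blocked
```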
#AI #Copyright #FairUse #MicrosoftAI #OpenAI #WebContent #DataEthics #LegalIssues #AITraining #TechNews
AI and Digital Rights
Summary
AI and digital rights refers to the rules and protections that govern how artificial intelligence uses, learns from, and impacts digital information, especially content created by humans online. As AI systems increasingly rely on massive amounts of data—including personal, artistic, and creative works—questions around privacy, ownership, consent, and fairness are becoming central to both technology development and society.
- Protect your content: Take steps to understand how your online creations—like art, writing, or music—can be used by AI and explore tools or platforms that support creator rights and enforce transparent usage policies.
- Stay informed: Keep up with changing regulations and debates about AI and digital rights so you know how your privacy and data may be affected, and what options you have to exercise control.
- Advocate for fairness: Support efforts to establish clear ethical guidelines and legal frameworks that ensure AI respects individual and community rights, and prevents discrimination or exploitation.
-
At first glance, the Studio Ghibli style AI-generated art seems harmless. You upload a photo, the model processes it, and you get a stunning, anime-style transformation. But there's something far more complex beneath the surface—a quiet trade-off of identity, privacy, and control.

Today, we casually give away fragments of ourselves:
- Our faces to AI art apps
- Our health data to wearables
- Even our genetic blueprints to direct-to-consumer biotech services

All in exchange for a few minutes of novelty or convenience. And while frameworks like India’s Digital Personal Data Protection Act (DPDPA) attempt to address this through “consent,” we must ask: what does consent even mean in an era of opaque AI systems designed to extract value far beyond that initial interaction?

Because it’s not about the one image you uploaded. It’s about the aggregated behavioral and biometric insights these platforms derive from millions of us. That data trains models that can infer, profile, and yes—discriminate. Not just individually, but at community and population levels.

This is no longer just a personal privacy issue. This is about digital sovereignty. Are we unintentionally allowing global AI systems to construct intimate, predictive bio-digital profiles of Indian citizens—only for that value to flow outward? And this isn’t just India’s challenge. Globally, these concerns resonate, creating complex challenges for cross-border data flows and requiring companies to navigate a patchwork of regulations like GDPR.

The real risk isn’t that your selfie becomes a meme. It’s that your data contributes to shaping algorithms that may eventually determine what insurance you're offered, which job you’re filtered out of, or how your community is policed or advertised to, all without your knowledge or say. We need to go beyond checkbox consent.
We need:
🔐 Privacy-by-design in every product
🛡️ Stronger enforcement of rights across borders
🧠 Collective awareness about how predictive analytics can influence entire societies

Let’s be clear that innovation is critical. But if we don’t anchor it within ethics, rights, and sovereignty, we risk building tools that define and disadvantage us, rather than empower us.

#Cybersecurity #PrivacyMatters #AIethics #DPDPA #DigitalSovereignty #DataProtection #AIresponsibility #IndiaTech
-
A painter’s masterpiece becomes fodder for an AI model, scraped, dissected, and absorbed without the artist’s consent. The UK government is poised to legalize what amounts to wholesale appropriation of creative works. Its proposed copyright legislation explicitly permits AI companies to train models on copyrighted material without permission or compensation, forcing creators to opt out rather than opt in, a fundamentally different approach from previous digital transformations.

This has triggered opposition from artists, authors, musicians, and creative professionals who reject having their work harvested as "training data" without compensation. When AI ingests thousands of books, songs, or artworks, it learns to mimic styles and generate content that could devalue or replace human-made work. If AI can produce a symphony like Mozart, a novel like Rushdie, or artwork like Banksy, all without attribution or payment, what happens to the economic system sustaining creative professionals?

The UK government argues these changes are necessary to secure Britain’s place as a global AI hub, warning that without them, companies might relocate to jurisdictions with looser regulations. Ministers frame it as a pragmatic economic choice. In response to pressure, the government has promised an economic impact assessment and required AI companies to publish transparency reports. Yet critics remain skeptical, seeing these steps as insufficient to address the power imbalance between individual creators and tech giants.

This debate is not confined to Britain. In India, where the creative economy and tech sector are both booming, the stakes are just as high. The Copyright Act of 1957, even with its 2012 digital amendments, needs urgent reconsideration to meet AI’s challenges.
Without smart intervention, India risks either slowing tech growth or weakening the cultural industries that define its global influence. At this crossroads, the central question is not whether AI should learn from human creativity, but how to ensure the value it generates flows back to sustain the creative work it depends on. In chasing technological progress, are we eroding the very foundations of human creativity? #ai
-
I'm increasingly convinced that we need to treat "AI privacy" as a distinct field within privacy, separate from but closely related to "data privacy". Just as the digital age required the evolution of data protection laws, AI introduces new risks that challenge existing frameworks, forcing us to rethink how personal data is ingested and embedded into AI systems. Key issues include:

🔹 Mass-scale ingestion – AI models are often trained on huge datasets scraped from online sources, including publicly available and proprietary information, without individuals' consent.
🔹 Personal data embedding – Unlike traditional databases, AI models compress, encode, and entrench personal data within their training, blurring the lines between the data and the model.
🔹 Data exfiltration & exposure – AI models can inadvertently retain and expose sensitive personal data through overfitting, prompt injection attacks, or adversarial exploits.
🔹 Superinference – AI uncovers hidden patterns and makes powerful predictions about our preferences, behaviours, emotions, and opinions, often revealing insights that we ourselves may not even be aware of.
🔹 AI impersonation – Deepfake and generative AI technologies enable identity fraud, social engineering attacks, and unauthorized use of biometric data.
🔹 Autonomy & control – AI may be used to make or influence critical decisions in domains such as hiring, lending, and healthcare, raising fundamental concerns about autonomy and contestability.
🔹 Bias & fairness – AI can amplify biases present in training data, leading to discriminatory outcomes in areas such as employment, financial services, and law enforcement.

To date, privacy discussions have focused on data: how it's collected, used, and stored. But AI challenges this paradigm. Data is no longer static. It is abstracted, transformed, and embedded into models in ways that challenge conventional privacy protections.
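The "data exfiltration & exposure" point can be made concrete with a toy example. Below is a minimal, hypothetical sketch in plain Python (not a real LLM): a character-level trigram model trained on text that happens to contain one person's identifier will regurgitate it verbatim when prompted with a short prefix. Training-data extraction attacks on large models exploit the same failure mode at scale.

```python
# Toy illustration of training-data memorization, not a real language model.
from collections import defaultdict

def train_trigram(text):
    """Map each two-character context to the characters that follow it."""
    model = defaultdict(list)
    for i in range(len(text) - 2):
        model[text[i:i+2]].append(text[i+2])
    return model

def complete(model, prefix, length=40):
    """Greedily extend a prefix using the most common continuation."""
    out = prefix
    for _ in range(length):
        continuations = model.get(out[-2:])
        if not continuations:
            break
        out += max(set(continuations), key=continuations.count)
    return out

# A training corpus that accidentally contains one person's "secret".
corpus = "public text ... SSN: 123-45-6789 ... more public text"
model = train_trigram(corpus)

# An adversary who knows only a short prefix can pull the rest back out.
print(complete(model, "SS"))  # the memorized digits leak verbatim
```

The model never stores the corpus as a document, yet the sensitive string is fully recoverable, which is exactly why "the data is embedded in the weights" does not make it private.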
If "AI privacy" is about more than just the data, should privacy rights extend beyond inputs and outputs to the models themselves? If a model learns from us, should we have rights over it? #AI #AIPrivacy #Dataprivacy #Dataprotection #AIrights #Digitalrights
-
You don’t get an alert when your voice is copied. There’s no email when your art is ingested. No notification when your words are modeled into a machine that speaks them back, slightly changed, but still yours. It just happens. Quietly. Permanently. Without your name.

We’re in the middle of a structural reckoning between human creativity and Pac-Man-scale extraction. AI systems are now trained on data gathered silently from the public web: books, artwork, music, journalism, software, and lesson plans, much of it scraped without consent, attribution, or compensation. Lawsuits are piling up. But the question isn’t just what’s legal. It’s: what’s enforceable?

That’s the central argument of this (very long) article I’ve just published. As a technologist, not a lawyer, I’ve spent years thinking about how to make digital rights operational, not symbolic: rights that don’t rely on post-hoc lawsuits but are enforceable at the point of interaction, at machine speed. Because the truth is, today’s systems aren’t designed to recognize ownership, honor licensing terms, or even log who took what. The burden falls entirely on the creator, and most creators never stood a chance.

At Synovient™, we’re addressing this architecturally. We’ve built infrastructure where provenance is cryptographically embedded, content travels in self-governing capsules, terms can’t be stripped, and access requires compliance, not unverified trust. Scraping may continue, but what’s scraped will be unusable without permission. The future of AI can’t be built on appropriation. It has to be governed by architecture.

The full article is dense, technical, and long by design. It’s written for people building the systems that govern digital life: CTOs, policymakers, AI researchers, and architects. If that’s you, or you’ve ever wondered how to fix this at scale, I invite you to read it.
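As a thought experiment, the "terms can't be stripped" property can be sketched in a few lines. This is a hypothetical illustration, not Synovient's actual design: a production system would use asymmetric signatures and key infrastructure, whereas this sketch uses Python's stdlib HMAC as a stand-in. The point is that once provenance and license terms are cryptographically bound to the content, altering or stripping the terms becomes detectable.

```python
# Hypothetical "capsule" sketch: content + terms bound by a keyed MAC.
import hashlib
import hmac
import json

CREATOR_KEY = b"creator-secret-key"  # placeholder; real systems would use PKI

def _payload(content: bytes, terms: dict) -> bytes:
    """Canonical byte encoding of the content hash and license terms."""
    return json.dumps(
        {"content_sha256": hashlib.sha256(content).hexdigest(), "terms": terms},
        sort_keys=True,
    ).encode()

def seal_capsule(content: bytes, terms: dict) -> dict:
    """Bind content, terms, and provenance tag into one tamper-evident unit."""
    tag = hmac.new(CREATOR_KEY, _payload(content, terms), hashlib.sha256).hexdigest()
    return {"content": content.decode(), "terms": terms, "tag": tag}

def verify_capsule(capsule: dict) -> bool:
    """Reject the capsule if the content or terms were altered or stripped."""
    expected = hmac.new(
        CREATOR_KEY,
        _payload(capsule["content"].encode(), capsule["terms"]),
        hashlib.sha256,
    ).hexdigest()
    return hmac.compare_digest(expected, capsule["tag"])

capsule = seal_capsule(b"My artwork description", {"ai_training": "forbidden"})
print(verify_capsule(capsule))           # intact capsule verifies

capsule["terms"]["ai_training"] = "allowed"  # a scraper rewrites the terms
print(verify_capsule(capsule))           # tampering is now detectable
```

A compliant consumer checks the tag before use; a non-compliant scraper can still copy the bytes, which is why such schemes pair the cryptography with access controls and encryption rather than relying on verification alone.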
-
Why Morocco Needs Digital Rights Protection in the Age of AI

Meta Just Refused My Data-Use Objection — Here’s What That Reveals

Recently, I contacted Meta (Facebook) with a request: to stop using my personal information to train generative AI models. Their response was clear: “We cannot fulfill your request because this option is not available in your jurisdiction.” In other words, because I live in Morocco, Meta says I have no legal right to object.

Morocco does have Law 09-08 on personal data protection, but it was written long before AI and generative models existed. It does not give citizens the right to stop companies like Meta from using their data for AI training. Geography now determines digital rights: in the EU or UK, users can opt out; in Morocco, we cannot.

This raises a critical question: in the AI era, are digital rights now human rights? Control over personal and creative data affects academic freedom, intellectual property, creativity, and digital dignity. Without legal safeguards, individuals risk becoming involuntary contributors to systems shaping education, journalism, creativity, and public discourse.

What needs to change:
• A clear right to object to AI use of personal data, adapted to Morocco’s context
• Transparency about how platforms use personal content
• Effective enforcement mechanisms and public awareness campaigns

Morocco cannot be left behind in protecting citizens’ rights in the digital age.

#DigitalRights #Morocco #AI #Privacy #TechPolicy #DataProtection #AIethics #Law0908
-
Where do you start in AI governance? I’m often asked how to begin, where to study, and what resources truly matter. One of the libraries I turn to again and again is the Publications and Case Law Library of the European Union Agency for Fundamental Rights (FRA). It’s a goldmine for anyone working at the intersection of AI, ethics, and law.

A few (free) publications I’ve found particularly valuable include:
🔹 Bias in Algorithms: Artificial Intelligence and Discrimination (attached)
🔹 Getting the Future Right: Artificial Intelligence and Fundamental Rights
🔹 Online Content Moderation: Current Challenges in Detecting Hate Speech
🔹 GDPR in Practice: Experiences of Data Protection Authorities

You can download the publications here: https://lnkd.in/dpikHaZk

One example that illustrates the value of FRA comes from data collected between 2015 and 2022. In its 2022 report, FRA warned that its findings “corroborate the need for more comprehensive and thorough assessment of algorithms in terms of bias before such algorithms are used for decision-making that can have an impact on people.” This applies across domains, from predictive policing and migration decisions to education and the allocation of social resources (FRA, 2022, pp. 77–78, Bias in Algorithms: Artificial Intelligence and Discrimination).

This principle is now codified in the EU Artificial Intelligence Act, which requires that all decisions substantially affecting people include human oversight. The Act specifies that “high-risk AI systems must be designed in such a way that they can be effectively overseen by natural persons during their period of use” (EU AI Act, Articles 14 and 26; https://lnkd.in/dH8-KMY4).

Beyond the reports, the Case Law Database is worth bookmarking.
It offers up-to-date decisions covering everything from individual GDPR violations and AI-related worker surveillance to major actions involving Meta, OpenAI, Microsoft and others launching tech inside the EU and shaping Europe’s digital accountability framework. Access the case-law database here: https://lnkd.in/dC8VNnzf

The Fundamental Rights Report 2025, also newly published by the FRA, provides an essential overview of the state of fundamental rights in the EU, highlighting key developments across:
🔹 Support for human rights systems and defenders
🔹 Data protection, privacy and new technologies
🔹 Equality, non-discrimination and racism

I find that this annual publication remains a valuable resource for anyone seeking to stay informed about Europe’s shifting ecosystem of workplace rights, regulation, and responsibility, particularly in the context of AI. The 2025 edition is found here: https://lnkd.in/ddsGRy8D

#AIgovernance #AIethics #FundamentalRights #GDPR #AIpolicy #ResponsibleAI #AIlaw #AIregulation Kompass Education
-
The UK wants to rewrite copyright law—for AI companies, not for creators. Their latest proposal? A shiny new exception that allows AI companies to train models on copyrighted works by default. Opt-out? Sure, if you enjoy wasting your life filling out forms and fighting a system that barely works.

And we know that opt-out doesn't work. Opt-out is an illusion. A huge percentage of people don’t even realize their works have been used: according to ALCS UK, only 8% of writers knew their work had been used as training data. My new article for Forbes!

The government says this will attract tech investment and “boost innovation.” Translation: creators' work will be scraped, exploited, and fed into machines to build competitors that devalue their original creations. AI doesn’t just use its training data—it competes with it. It’s hard to call this a win for innovation when it dismantles a £126 billion creative industry. Music, films, art, books—the foundation of culture, turned into free training data.

As Ed Newton-Rex puts it: “Generative AI competes with its training data. This would allow AI companies to exploit people's work to build highly scalable competitors to them.”

Meanwhile, creative industries are fighting back. From Paul McCartney to Getty Images, creators are calling this out for what it is: a massive giveaway to big tech that threatens livelihoods, originality, and the future of creativity.

We’re at a crossroads. Will the UK lead the way in balancing AI innovation with creator protection? Or will it sacrifice its creative economy to appease AI companies? This isn’t just about copyright reform. It’s about the value of human creativity in an AI-driven world. https://lnkd.in/gr7UYBwz

#AITrainingData #CopyrightReform #OptOutSystem #AIandCopyright #CreatorsRights #IntellectualProperty #GenerativeAI #UKPolicy
-
The protection of human rights in the digital world is one of the existential challenges for the human rights movement. And that challenge has intensified with the speed of development of AI technologies, the growing influence of the online world on our ability to exercise our rights and, more recently, the alignment of most of the tech companies with the Trump Administration's hostile approach to rights and regulation in the digital world. Freedom of expression has effectively been privatized at a time when many governments around the world are backsliding on their human rights obligations, and international norms and institutions are being marginalized.

The new report by Irene Khan, the UN Special Rapporteur on Freedom of Opinion and Expression, which she presented to the UN General Assembly this week, lays out many of these issues in compelling detail and is well worth a read. The report presents recommendations to governments, companies and investors, amongst others.

“Against a rising tide of hate and lies on social media, companies have rolled back their policies and tools to combat disinformation and hate speech. When large digital platforms reject international human rights norms, they undermine their own legitimacy and effectiveness as global companies.”

https://lnkd.in/eTAuX5WC