The European Commission published its first draft of the “Code of Practice on Transparency of AI‑Generated Content,” designed as a tool to help organizations demonstrate alignment with the transparency requirements (Art. 50) of the AI Act. Article 50 of the AI Act includes obligations for providers to mark AI-generated or manipulated content in a machine-readable format, and for users who deploy generative AI systems for professional purposes to clearly label deepfakes and AI-generated text publications on matters of public interest.

The document is divided into two sections. The first section covers rules for marking and detecting AI content, applicable to providers of generative AI systems, including commitments to:
- Use multi-layered, machine-readable marking of AI-generated content
- Use imperceptible watermarks interwoven within content
- Adopt a digitally signed “manifest/provenance certificate” for content that cannot securely carry metadata (see the sketch below)
- Offer free detection interfaces/tools, including confidence scoring, and complementary forensic detection that does not rely on active marking
- Test against common transformations and adversarial attacks
- Use open standards and shared/aggregated verifiers to enable cross-platform detection and lower compliance friction

The second section covers labelling deepfakes and certain AI-generated or manipulated text on matters of public interest and is applicable to deployers of generative AI systems, including:
- Deepfake labelling
- Modality-specific labelling rules for real-time video, non-real-time video, images, multimodal content, and audio-only content
- Operational governance: internal compliance documentation, staff training, accessibility measures, and mechanisms to flag and fix missing or incorrect labels
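To make the “manifest/provenance certificate” idea concrete, here is a minimal sketch in Python of how a provider might issue a signed manifest for an asset that cannot carry embedded metadata. It uses an HMAC over a JSON manifest purely for illustration; a production system would follow an open standard such as C2PA and use asymmetric signatures. All field names and keys below are assumptions, not taken from the draft Code.

```python
import base64
import hashlib
import hmac
import json
from datetime import datetime, timezone

# Hypothetical provider signing key; a real deployment would use an
# asymmetric key pair (e.g. Ed25519) with a published verification key.
SIGNING_KEY = b"example-provider-secret"

def issue_manifest(asset_bytes: bytes, generator: str) -> dict:
    """Create a signed provenance manifest for content that cannot
    securely carry embedded metadata (illustrative fields only)."""
    manifest = {
        "content_sha256": hashlib.sha256(asset_bytes).hexdigest(),
        "generator": generator,
        "ai_generated": True,
        "issued_at": datetime.now(timezone.utc).isoformat(),
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    signature = hmac.new(SIGNING_KEY, payload, hashlib.sha256).digest()
    manifest["signature"] = base64.b64encode(signature).decode()
    return manifest

def verify_manifest(asset_bytes: bytes, manifest: dict) -> bool:
    """Check that the manifest matches the asset and the signature is valid."""
    claimed = dict(manifest)
    signature = base64.b64decode(claimed.pop("signature"))
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).digest()
    return (
        hmac.compare_digest(signature, expected)
        and claimed["content_sha256"] == hashlib.sha256(asset_bytes).hexdigest()
    )

if __name__ == "__main__":
    asset = b"...raw bytes of a generated audio clip..."
    m = issue_manifest(asset, generator="example-model-v1")
    print(verify_manifest(asset, m))  # True
```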
How Platforms Regulate AI Content
Explore top LinkedIn content from expert professionals.
Summary
Platforms regulate AI content by using rules, technology, and policies to label, trace, and manage AI-generated material across digital spaces. This ensures users can recognize when content is produced by AI and helps prevent misuse, misinformation, or violations of terms of service by making AI activity transparent and accountable.
- Check platform policies: Always review the terms of service before sharing or reusing AI-generated content, as rules vary widely and may restrict copying, redistribution, or modification.
- Understand labeling systems: Pay attention to visible labels and hidden watermarks embedded in AI content, which are designed to help people identify and trace digital material produced by artificial intelligence.
- Promote user awareness: Encourage clear explanations of how algorithms work and what their limitations are, so users can make informed choices and retain control when interacting with AI systems.
-
Algorithmic transparency refers to the principle that the operations and decision-making processes of algorithms should be open and understandable to the people who interact with or are impacted by them. It is an aspect of accountability and fairness that seeks to mitigate the ‘black box’ nature of complex AI systems.

For high-risk AI systems, strict transparency requirements will apply under the AI Act, such as adequately informing users when they interact with an AI system and making sure that its capabilities and limitations are clearly outlined. The AI Act will also require that users are aware of the AI's decision-making parameters: companies must not only disclose how the algorithm works but also explain the rationale behind its decisions. This is particularly important for high-risk AI systems, where the consequences of error could be catastrophic. Transparency, in this context, evolves from a mere buzzword into a structural necessity.

The AI Act also addresses transparency for emotion recognition, biometric categorisation, and deepfakes. For the former, the Act requires that people exposed to these AI systems must be informed, except in cases where the technology is used for criminal investigations; this exception raises ethical questions about balancing privacy with security. For the latter, deepfake content must come with a disclosure that it is not authentic, though exceptions exist for legal or artistic purposes. These carve-outs have provoked questions about the potential stifling of creative or journalistic endeavours.

While the AI Act has taken the spotlight in AI regulation, the Digital Services Act’s provisions on recommender systems echo the AI Act's call for transparency. Recommender systems, a subset of AI technologies, must also outline their main parameters in "plain and intelligible language," echoing the AI Act's push for clear, comprehensible explanations. The DSA even mandates an explanation of why certain parameters are considered more important than others, extending the notion of transparency into the realm of accountability.

Both acts show a commitment to user agency. The AI Act ensures that the user retains a degree of control when interacting with high-risk AI systems, including an ‘off switch’, while the DSA promotes user agency by compelling platforms to allow users to modify their preferences. The AI Act introduces obligatory risk assessments for high-risk applications, mirroring the DSA's requirements for platforms to conduct comprehensive risk assessments. Here, we witness two regulatory streams converging into a river of algorithmic accountability, encouraging a more nuanced, ethical approach to AI development and implementation.

Laws on algorithmic transparency reflect a paradigm shift in our approach to the ethical and social implications of AI. The importance of such legislation will only intensify as AI becomes increasingly interwoven into the fabric of our lives.
-
Using #LLM Outputs Across Different #AI Platforms: Terms of Service Analysis

As AI tools become increasingly integrated into workflows, a crucial question arises: does using one LLM’s output and pasting it into another violate terms of service? This report examines the legal and policy implications of transferring AI-generated content across platforms like #Grok, #Perplexity, #ChatGPT, and #Google #Gemini.

🚨 Key Findings
🔹 Perplexity AI – Among the most restrictive, claiming ownership of API outputs and prohibiting copying, caching, or creating derivative works. Their restrictive policies align with their “answer engine” business model and ongoing copyright lawsuits from publishers like Dow Jones.
🔹 Google Gemini – Similar restrictions on redistribution, but more transparency with citation metadata. Google differentiates between free and paid API tiers, impacting how user data is used.
🔹 Grok (xAI) – More permissive, allowing broader use of outputs, provided users attribute Grok as the source. This aligns with Elon Musk’s stance on AI openness.
🔹 ChatGPT (OpenAI) – Unclear stance on output ownership. However, legal precedents suggest OpenAI does not have strong intellectual property claims on ChatGPT’s outputs, though terms of service may still restrict certain uses.

⚠️ Potential Consequences
Violating an LLM’s terms could lead to:
❌ Account suspension or bans
⚖️ Legal action in extreme cases
🚀 Risks of “jailbreaking” if it circumvents intended platform controls

Conclusion: Copy-pasting outputs across LLMs may violate terms on some platforms (especially Perplexity and Gemini), while others (like Grok) are more lenient. To ensure compliance, always review the latest TOS before using AI-generated content across multiple platforms.

📌 What are your thoughts on AI-generated content ownership? Should LLM outputs be freely transferable? Drop your insights below! 👇

#AI #LLM #ArtificialIntelligence #LegalTech #MachineLearning #AICompliance #PerplexityAI #ChatGPT #GoogleGemini #GrokAI #AIRegulations
-
On September 1, China became the first country in the world to enforce a comprehensive AI content labeling system. Every piece of AI-generated content - text, images, audio, video, even virtual environments - must now carry two identifiers:
🔹 a visible mark for the user (e.g., “AI-generated”)
🔹 a hidden watermark embedded in metadata

Visible labels inform people. Hidden watermarks make manipulation harder. Together, they create the first large-scale infrastructure for AI traceability - deployed across platforms that reach over a billion users daily.

Technically, this is groundbreaking because:
✔️ It standardises watermarking at the file level, making every AI asset traceable across platforms.
✔️ It forces real-time compliance at scale: platforms must scan, tag, and log billions of uploads, retaining records for six months.
✔️ Even if a visible label is cropped, the metadata watermark persists.

But this isn’t only about technology. It’s also about control.
📌 The law sits under the Qinglang campaign against misinformation and fraud.
📌 Yet analysts warn it also strengthens censorship: by branding content as “AI-generated,” authorities can discredit inconvenient narratives and push platforms toward over-policing.

In other parts of the world:
🔸 The EU’s AI Act mandates AI labeling, but with exceptions for satire and art, aiming to protect trust while safeguarding free expression.
🔸 The US relies on voluntary watermarking pledges by OpenAI, Google, and Meta under a 2023 White House initiative.

Why it matters globally: China has operationalised what others are still debating - a nationwide AI authenticity standard. The technical infrastructure proves it’s possible. The political implications remind us of its risks.

#AI #AIGovernance #DigitalSovereignty #Innovation #Stratedge
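As a rough illustration of the dual-identifier idea described above, here is a minimal Python sketch that stamps a generated image with a visible “AI-generated” caption and a hidden marker in the file’s metadata. It uses Pillow and PNG text chunks as stand-ins; the actual GB 45438-2025 label placement and metadata fields are not reproduced here, so everything below is an assumption for illustration.

```python
# Illustrative only: uses Pillow (pip install Pillow); the real Chinese
# standard defines its own label placement and metadata fields.
from PIL import Image, ImageDraw
from PIL.PngImagePlugin import PngInfo

def stamp_ai_image(in_path: str, out_path: str, generator: str) -> None:
    img = Image.open(in_path).convert("RGB")

    # 1) Visible identifier: draw a small "AI-generated" caption in a corner.
    draw = ImageDraw.Draw(img)
    draw.text((10, img.height - 20), "AI-generated", fill=(255, 255, 255))

    # 2) Hidden identifier: embed provenance info as PNG metadata, which
    #    survives cropping of the visible label (though not re-encoding).
    meta = PngInfo()
    meta.add_text("ai_generated", "true")
    meta.add_text("generator", generator)
    img.save(out_path, "PNG", pnginfo=meta)

def read_hidden_marker(path: str) -> dict:
    """Return any AI-provenance metadata found in the PNG text chunks."""
    info = Image.open(path).info
    return {k: v for k, v in info.items() if k in ("ai_generated", "generator")}

if __name__ == "__main__":
    stamp_ai_image("generated.png", "generated_labeled.png", "example-model-v1")
    print(read_hidden_marker("generated_labeled.png"))
```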
-
I just published a piece in Tech Policy Press exploring how far AI-powered content classification has come—and what that means for platform accountability.

LLM-based systems like CoPE (the 9B-parameter model Samidh Chakrabarti and I developed at Zentropi) can now interpret policy documents with accuracy matching GPT-4o, at sub-200ms latency on consumer hardware. Policy changes that used to require months of retraining and relabeling? Now they're document edits.

As a demonstration of this, I built a labeler to block requests for AI-generated non-consensual intimate imagery. It took about an hour—30 minutes to a first draft, another 30 refining edge cases. It handles euphemistic language, hypothetical framing, and multilingual variants.

This is just one example, but the broader implication is clear: when platforms fail to address foreseeable harms, that's increasingly a choice rather than a technical constraint. The bottleneck of policy interpretation—one of the historically legitimate reasons this work was so hard—is being broken down.

We have a long way to go. But the excuses for inaction are fading fast. https://lnkd.in/dHH3Bmzs
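To show what “policy documents as classifier configuration” can look like in practice, here is a minimal sketch that asks a general-purpose LLM to label a request against a policy passed in as plain text. This is not CoPE or the author’s labeler; the model name, prompt format, and policy wording are assumptions, and the OpenAI Python client is used purely as a generic example.

```python
# Illustrative sketch only (not CoPE): pip install openai, set OPENAI_API_KEY.
from openai import OpenAI

client = OpenAI()

# The "classifier" is just a policy document; editing this text changes
# behavior without any retraining or relabeling.
POLICY = """Label a request as VIOLATING if it asks for AI-generated intimate
imagery of a real, identifiable person without their consent, including
euphemistic, hypothetical, or indirect phrasings. Otherwise label it ALLOWED."""

def label_request(text: str, model: str = "gpt-4o-mini") -> str:
    """Return 'VIOLATING' or 'ALLOWED' for a user request, per the policy above."""
    response = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system", "content": POLICY},
            {"role": "user", "content": f"Request: {text}\nAnswer with one word."},
        ],
        temperature=0,
    )
    return response.choices[0].message.content.strip().upper()

if __name__ == "__main__":
    print(label_request("Make a realistic photo of my ex without clothes"))
```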
-
Most companies have an AI policy. Few have one that actually stops sensitive data leakage and protects the company. A policy that says "use AI responsibly" is not a policy. It's a wish.

Here are 10 things your responsible AI policy needs:

𝟭/ 𝗔𝗽𝗽𝗿𝗼𝘃𝗲𝗱 𝗧𝗼𝗼𝗹𝘀 𝗟𝗶𝘀𝘁
Name specific tools employees can use. If it's not on the list, it's not approved. Update quarterly. Specify by department.

𝟮/ 𝗗𝗮𝘁𝗮 𝗖𝗹𝗮𝘀𝘀𝗶𝗳𝗶𝗰𝗮𝘁𝗶𝗼𝗻 𝗥𝘂𝗹𝗲𝘀
Mirror your existing classification scheme (see the sketch after this post):
→ Public: Any approved tool
→ Internal: Enterprise agreements only
→ Confidential: Approved enterprise tools with protections enabled
→ Restricted (PII, PHI, PCI): Never enters external AI systems

𝟯/ 𝗛𝘂𝗺𝗮𝗻 𝗥𝗲𝘃𝗶𝗲𝘄 𝗥𝗲𝗾𝘂𝗶𝗿𝗲𝗺𝗲𝗻𝘁𝘀
Define where humans stay in the loop: customer-facing content, legal docs, financial decisions, hiring, ethical edge cases. AI drafts. Humans approve. AI never has final authority over decisions affecting someone's rights, pay, or employment.

𝟰/ 𝗗𝗶𝘀𝗰𝗹𝗼𝘀𝘂𝗿𝗲 𝗦𝘁𝗮𝗻𝗱𝗮𝗿𝗱𝘀
Decide when you'll disclose AI involvement. Default: disclose when AI was materially relied upon in regulated or customer-impacting contexts.

𝟱/ 𝗜𝗣 𝗮𝗻𝗱 𝗖𝗼𝗻𝗳𝗶𝗱𝗲𝗻𝘁𝗶𝗮𝗹𝗶𝘁𝘆
Clarify what can't go into prompts. Who owns AI-generated content? What if trade secrets enter a public model?

𝟲/ 𝗕𝗶𝗮𝘀 𝗚𝘂𝗮𝗿𝗱𝗿𝗮𝗶𝗹𝘀
Make bias controls use-case based: hiring, credit/pricing, claims/approvals, targeting that could create discriminatory outcomes. Define who signs off.

𝟳/ 𝗜𝗻𝗰𝗶𝗱𝗲𝗻𝘁 𝗥𝗲𝗽𝗼𝗿𝘁𝗶𝗻𝗴
When AI goes wrong: who to contact, what to document, how fast to report, what triggers escalation.

𝟴/ 𝗧𝗿𝗮𝗶𝗻𝗶𝗻𝗴 𝗥𝗲𝗾𝘂𝗶𝗿𝗲𝗺𝗲𝗻𝘁𝘀
A policy nobody understands is a policy nobody follows. Mandatory training before access. Role-specific guidance. Annual refreshers.

𝟵/ 𝗚𝗼𝘃𝗲𝗿𝗻𝗮𝗻𝗰𝗲
Someone has to own this: who maintains the policy, approves tools, audits compliance, and how often it's reviewed.

𝟭𝟬/ 𝗔𝘂𝗱𝗶𝘁 𝗮𝗻𝗱 𝗘𝗻𝗳𝗼𝗿𝗰𝗲𝗺𝗲𝗻𝘁
Policies fail at the enforcement layer. Define: access controls, logging, periodic spot checks, and consequences (coaching → access removal → HR escalation).

Companies that skip policy work now will spend 10x more cleaning up problems later. Save this for when you create or update your AI policy.
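As a hedged sketch of what item 2 could look like as policy-as-code, here is a small Python check that decides whether data of a given classification may be sent to a given AI tool. The classification tiers mirror the list above; the tool names and the mapping itself are hypothetical examples, not a recommendation.

```python
# Hypothetical policy-as-code sketch for the data classification rules above.
from enum import Enum

class Classification(Enum):
    PUBLIC = "public"
    INTERNAL = "internal"
    CONFIDENTIAL = "confidential"
    RESTRICTED = "restricted"  # PII, PHI, PCI

# Which classifications each (hypothetical) approved tool may receive.
ALLOWED_CLASSIFICATIONS = {
    "public-chatbot": {Classification.PUBLIC},
    "enterprise-copilot": {Classification.PUBLIC, Classification.INTERNAL,
                           Classification.CONFIDENTIAL},
    # Restricted data never enters any external AI system, so it is
    # intentionally absent from every tool's allow-set.
}

def may_send(tool: str, classification: Classification) -> bool:
    """Return True only if policy allows this classification to reach this tool."""
    return classification in ALLOWED_CLASSIFICATIONS.get(tool, set())

if __name__ == "__main__":
    print(may_send("public-chatbot", Classification.PUBLIC))          # True
    print(may_send("enterprise-copilot", Classification.RESTRICTED))  # False
```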
-
On September 1, 2025, China's new mandatory national standard for AI-generated content labeling (GB 45438-2025) took full effect. The law mandates that every piece of high-risk AI-generated content, from a deepfake video to a synthesized voice clip, must carry both:
• A visible, prominent label
• A persistent, hidden watermark

This forced platforms like WeChat and Douyin to be proactive. They must scan, tag, and log a torrent of content, ensuring its origin is traceable.

Meanwhile, in the West, social media platforms from Meta to X and YouTube have largely relied on a patchwork of unenforced voluntary commitments. While they are implementing "Made with AI" labels and some auto-detection, the system is fundamentally broken because it is not universal.

• 𝗩𝗼𝗹𝘂𝗻𝘁𝗮𝗿𝘆 𝗣𝗹𝗲𝗱𝗴𝗲𝘀 𝗔𝗿𝗲𝗻'𝘁 𝗔𝗹𝘄𝗮𝘆𝘀 𝗙𝗼𝗹𝗹𝗼𝘄𝗲𝗱: There is no legal mandate to enforce labeling on content from external, open-source AI models, nor is there unified, cross-platform cooperation.
• 𝗧𝗵𝗲 𝗕𝘂𝗿𝗱𝗲𝗻 𝗶𝘀 𝗼𝗻 𝗨𝘀𝗲𝗿𝘀: Policies often require users to manually disclose when they upload certain AI-generated content. If they don't, the content remains unlabeled.
• 𝗖𝗼𝗻𝗳𝘂𝘀𝗶𝗼𝗻 𝗙𝗹𝗼𝘂𝗿𝗶𝘀𝗵𝗲𝘀: The sheer volume of content, combined with a lack of standardized, enforced labeling, allows ambiguity to thrive and makes misinformation harder to fight.

China's move is a powerful case study. It proves that a comprehensive, end-to-end AI traceability system is technically possible and can be deployed at massive scale. The crucial question is whether the West, valuing free expression and innovation, can achieve the same level of transparency without resorting to a centralized, government-mandated model. We have the tools, but do we have the will? https://lnkd.in/eeG4NkGj

#AI #ArtificialIntelligence #watermarking #Regulation #Technology #Policy #DigitalEthics
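For a rough sense of the platform-side obligation (scan, tag, log, retain), here is a minimal Python sketch of an upload check that looks for a provenance marker, applies a “Made with AI” label when one is found or the uploader discloses it, and appends an audit record. The marker field, retention window, and log format are all assumptions for illustration, not taken from GB 45438-2025 or any platform’s actual system.

```python
# Hypothetical upload-pipeline sketch; field names and retention policy are
# illustrative only.
import json
import time
from dataclasses import dataclass

RETENTION_SECONDS = 183 * 24 * 3600  # roughly six months, as described above

@dataclass
class Upload:
    content_id: str
    metadata: dict           # parsed file metadata / provenance fields
    user_disclosed_ai: bool  # uploader self-declared the content as AI-generated

def process_upload(upload: Upload, audit_log: list) -> dict:
    """Scan for an AI-provenance marker, tag the content, and log the decision."""
    marker_found = upload.metadata.get("ai_generated") == "true"
    label = "Made with AI" if (marker_found or upload.user_disclosed_ai) else None

    record = {
        "content_id": upload.content_id,
        "marker_found": marker_found,
        "user_disclosed": upload.user_disclosed_ai,
        "label_applied": label,
        "logged_at": time.time(),
        "retain_until": time.time() + RETENTION_SECONDS,
    }
    audit_log.append(json.dumps(record))
    return {"content_id": upload.content_id, "label": label}

if __name__ == "__main__":
    log: list = []
    result = process_upload(
        Upload("vid-001", {"ai_generated": "true"}, user_disclosed_ai=False), log
    )
    print(result, len(log))
```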
-
Are your AI guardrails killing your best ideas?

Most AI governance feels like this: Legal pulls the handbrake. Marketing loses speed. Brand risk goes down, but so does originality.

The issue is not the AI. It is how your system frames limits. Too often, guardrails are written as "thou shalt not" lists. They block entire content paths instead of steering the pipeline. Result: safe, same, forgettable.

A smarter pattern is "guardrails, not handcuffs," built into Ryza:

1) Turn policy into scenario rules, not blanket bans
Translate approvals, risk thresholds, and compliance rules into concrete if/then patterns (see the sketch after this post). Example: restrict claims in regulated segments, but leave storytelling range wide elsewhere. You get precision control without suffocating the brand voice.

2) Separate creative exploration from final approvals
Let teams ideate wide inside Ryza, then apply stricter checks at the publish stage. This keeps the system open for discovery, while your risk lens tightens near impact. Creativity first, filtration second.

3) Make guardrails transparent to creators
Show writers what the system is blocking and why. Feedback in plain language teaches the team to work with the rails, not around them. Over time, content comes out closer to approval on the first pass.

4) Treat guardrails as living, not fixed
Review flagged content patterns monthly. If you are blocking the same harmless things, loosen. If new risk shows up in the pipeline, tighten with targeted rules, not blanket freezes.

Core idea: Your AI limits should guide creativity into the brand, not away from it.

P.S.: If you want to see how Ryza Content turns policies into adaptive guardrails, book a short demo with our team.

#BrandSafety, #AIGovernance, #MarketingOperations, #ContentStrategy, #RefreshWithRyza
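As a hedged illustration of “scenario rules, not blanket bans” (not Ryza’s actual rule engine or API), here is a tiny Python sketch of if/then guardrail rules that apply only in matching scenarios, such as regulated segments, while leaving other content untouched. The rule fields, segment names, and thresholds are assumptions.

```python
# Hypothetical scenario-rule sketch; not Ryza's rule format.
from dataclasses import dataclass, field

@dataclass
class ContentDraft:
    segment: str                                 # e.g. "healthcare", "lifestyle"
    text: str
    claims: list = field(default_factory=list)   # factual/efficacy claims made

@dataclass
class Rule:
    name: str
    applies_to_segments: set   # which scenarios the rule is scoped to
    max_claims: int            # cap on unreviewed claims in regulated segments

RULES = [
    Rule("regulated-claims-cap", {"healthcare", "finance"}, max_claims=0),
]

def evaluate(draft: ContentDraft) -> list:
    """Return plain-language flags for rules the draft violates, or [] if clear."""
    flags = []
    for rule in RULES:
        if draft.segment in rule.applies_to_segments and len(draft.claims) > rule.max_claims:
            flags.append(
                f"{rule.name}: {len(draft.claims)} claim(s) need compliance review "
                f"before publishing in the '{draft.segment}' segment."
            )
    return flags

if __name__ == "__main__":
    print(evaluate(ContentDraft("healthcare", "Our app cures insomnia", ["cures insomnia"])))
    print(evaluate(ContentDraft("lifestyle", "Sleep better with our new playlist")))
```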
-
Who Owns What AI Creates?

Last year, a global retailer rolled out an AI-powered tool to help its marketing team generate product descriptions and social media content. The results were impressive: faster turnaround, consistent tone, and a noticeable lift in engagement. But when the legal team reviewed the program, red flags appeared.

❌ Ownership of content: Because the text was generated entirely by AI, much of it was not eligible for copyright protection under U.S. law. Without human creative input, the retailer couldn’t enforce IP rights if competitors copied their campaigns.
❌ Licensing of the tool: The AI platform’s terms of service granted the provider certain rights to reuse outputs. That meant some of the “unique” marketing language might not remain exclusive to the retailer.
❌ Training data risks: There was no guarantee that the AI hadn’t been trained on copyrighted material. If a rights holder challenged the use of their works, the retailer could be exposed to litigation.

This isn’t a hypothetical anymore... cases are already moving through the courts. Perplexity recently lost a bid to dismiss a lawsuit brought by News Corp over alleged misuse of proprietary content. Meanwhile, OpenAI and other AI companies are leaning heavily on “fair use” defenses, with mixed results. Businesses relying on AI outputs without reviewing their contracts and compliance posture could be walking straight into the same risks.

What businesses should do now:
✅ Document human input – Ensure employees edit, arrange, or meaningfully shape AI outputs so they qualify for copyright protection.
✅ Audit contracts – Review the licensing terms of every AI tool in use. Know who owns the outputs, and what rights the provider retains.
✅ Protect your innovations – Use trade secrets, patents, and airtight NDAs to safeguard proprietary data and models.
✅ Monitor litigation trends – Laws around AI, copyright, and fair use are evolving rapidly across jurisdictions. What’s unprotectable in the U.S. may be protectable elsewhere, and vice versa.
✅ Lead with ethics and transparency – Beyond the legal risks, businesses face reputational harm if creators, regulators, or consumers believe their use of AI is exploitative or opaque.

AI is no longer a futuristic add-on; it’s woven into daily business. But the line between innovation and infringement has never been thinner. Companies that treat AI as a business tool with legal guardrails, not as a magic shortcut, will be the ones that unlock its full potential without sacrificing their intellectual property.

👉 How is your organization navigating the IP risks and rewards of AI?
-
AI Governance: Map, Measure and Manage

1. Governance Framework:
- Contextualization: Implement policies and practices to foster risk management in development cycles.
- Policies and Principles: Ensure generative applications comply with responsible AI, security, privacy, and data protection policies, updating them based on regulatory changes and stakeholder feedback.
- Pre-Trained Models: Review model information, capabilities, and limitations, and manage risks.
- Stakeholder Coordination: Involve diverse internal and external stakeholders in policy and practice development.
- Documentation: Provide transparency materials to explain application capabilities, limitations, and responsible usage guidelines.
- Pre-Deployment Reviews: Conduct risk assessments pre-deployment and throughout the development cycle, with additional reviews for high-impact uses.

🎯 Map
2. Risk Mapping:
- Critical Initial Step: Inform decisions on planning, mitigations, and application appropriateness.
- Impact Assessments: Identify potential risks and mitigations as per the Responsible AI Standard.
- Privacy and Security Reviews: Analyze privacy and security risks to inform risk mitigations.
- Red Teaming: Conduct in-depth risk analysis and identification of unknown risks.

🎯 Measure
3. Risk Measurement:
- Metrics for Risks: Establish metrics to measure identified risks.
- Mitigation Performance Testing: Assess the effectiveness of risk mitigations.

🎯 Manage
4. Risk Management:
- Risk Mitigation: Manage risks at the platform and application levels, with mechanisms for incident response and application rollback.
- Controlled Release: Deploy applications to limited users initially, followed by phased releases to ensure intended behavior (a minimal rollout sketch follows below).
- User Agency: Design applications to promote user agency, encouraging users to edit and verify AI outputs.
- Transparency: Disclose AI roles and label AI-generated content.
- Human Oversight: Enable users to review AI outputs and verify information.
- Content Risk Management: Incorporate content filters and processes to address problematic prompts.
- Ongoing Monitoring: Monitor performance and collect feedback to address issues.
- Defense in Depth: Implement controls at every layer, from platform to application level.

Source: https://lnkd.in/eZ6HiUH8
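As a hedged sketch of the “Controlled Release” practice above (not taken from the cited source), here is a small Python example of a deterministic percentage rollout: each user is consistently assigned to the new AI feature based on a hash of their ID, so exposure can be expanded in phases while behavior is monitored. The feature name and rollout stages are assumptions.

```python
# Hypothetical phased-rollout sketch; feature names and percentages are examples.
import hashlib

def in_rollout(user_id: str, feature: str, rollout_percent: int) -> bool:
    """Deterministically bucket a user into [0, 100) and admit them if their
    bucket falls below the current rollout percentage."""
    digest = hashlib.sha256(f"{feature}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100
    return bucket < rollout_percent

if __name__ == "__main__":
    # Phase 1: 5% of users see the generative feature; later phases raise the cap.
    for phase in (5, 25, 100):
        exposed = sum(in_rollout(f"user-{i}", "gen-ai-summaries", phase) for i in range(10_000))
        print(f"rollout {phase}% -> {exposed} of 10000 users exposed")
```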