AI Industry Transparency Guidelines


Summary

AI industry transparency guidelines are rules and frameworks that require companies developing artificial intelligence systems to publicly share details about how their models are built, trained, and used. These guidelines help ensure accountability, safety, and clarity for both AI providers and the public, covering areas like data sources, intended uses, and measures to prevent misuse.

  • Document key details: Keep clear records about your AI model’s design, data sources, intended purposes, and steps taken to address risks like bias and copyright infringement.
  • Label AI content: Mark AI-generated content and deepfakes so users can easily tell what's produced by artificial intelligence, using digital certificates and watermarks when needed (a minimal labeling sketch follows this list).
  • Report and verify: Set up procedures for reporting incidents, conducting risk assessments, and protecting whistleblowers to maintain accountability and build public trust.
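As a concrete illustration of the labeling guidance above, here is a minimal sketch that pairs generated text with a visible label and a machine-readable marker. The function name and field names are illustrative assumptions, not a standard schema; real deployments would typically follow an open provenance standard such as C2PA.

```python
import json

def label_ai_content(text: str, model_name: str) -> dict:
    """Pair generated text with a visible label and a machine-readable
    marker. The field names are illustrative, not a standard schema."""
    return {
        "display_text": f"[AI-generated] {text}",
        "metadata": {
            "ai_generated": True,      # machine-readable flag
            "generator": model_name,   # which system produced it
        },
    }

print(json.dumps(label_ai_content("Quarterly summary...", "example-model"), indent=2))
```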
  • Dr. Barry Scannell

    AI Law & Policy | Partner in Leading Irish Law Firm William Fry | Member of Irish Government’s Artificial Intelligence Advisory Council | PhD in AI & Copyright | LinkedIn Top Voice in AI | Global Top 200 AI Leaders 2025

    Yesterday, the AI Office published the third draft of the General-Purpose AI Code of Practice, a key regulatory instrument for AI providers seeking to align with the EU AI Act. Developed with input from 1,000 stakeholders, the draft refines previous versions by clarifying compliance requirements and introducing a structured approach to regulation. GPAI providers must meet baseline obligations on transparency and copyright compliance, while models classified as having systemic risk face additional commitments under Article 51 of the AI Act. The final version, expected in May 2025, aims to facilitate compliance while ensuring AI models adhere to safety, security, and accountability standards.

    The Code introduces the Model Documentation Form, requiring AI providers to disclose key details such as model architecture, parameter size, training methodologies, and data sources. Transparency obligations include specifying the provenance of training data, documenting measures to mitigate bias, and reporting compute power and energy consumption. GPAI providers must also outline their models’ intended uses, with additional requirements for systemic-risk models, including adversarial testing and evaluation strategies. Documentation must be retained for twelve months after a model is retired, with copyright compliance mandatory for all providers, including open-source AI.

    GPAI providers must establish formal copyright policies and comply with strict data collection rules. Web crawlers cannot bypass paywalls, access piracy sites, or ignore the Robots Exclusion Protocol (a compliance sketch follows this post). The Code also requires providers to prevent AI-generated copyright infringement, mandate compliance in acceptable use policies, and implement mechanisms for rightsholders to submit copyright complaints. Providers must maintain a point of contact for copyright inquiries and ensure their policies are transparent.

    For AI models with systemic risk, the Code introduces a Safety and Security Framework, aligning with the AI Act’s high-risk requirements. Providers must assess risks in areas such as cyber threats, manipulation, and autonomous AI behaviours. They must define risk acceptance criteria, anticipate risk escalations, and conduct assessments at key development milestones. If risks are identified, development may need to be paused while safeguards are implemented.

    GPAI providers must introduce technical safeguards, including input filtering, API access controls, and security measures meeting at least the RAND SL3 standard. From 2 November 2025, systemic-risk models must undergo external risk assessments before release. Providers must maintain a Safety and Security Model Report, report AI-related incidents within strict timeframes, and implement governance structures ensuring responsibility at all levels. Whistleblower protections are also required. With the final version expected in May 2025, AI providers have a short window to prepare before the AI Act takes full effect in August.
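The crawler obligations above map onto a long-standing web convention. The sketch below checks a site's robots.txt before fetching a page for a training corpus, using Python's standard-library robotparser; the user-agent string and URLs are placeholders, not anything named in the draft Code.

```python
from urllib import robotparser

# Hypothetical crawler identity and target URL, for illustration only.
USER_AGENT = "example-gpai-crawler"
TARGET_URL = "https://example.com/articles/some-page"

# Fetch and parse the site's robots.txt before collecting training data.
rp = robotparser.RobotFileParser()
rp.set_url("https://example.com/robots.txt")
rp.read()  # performs the HTTP fetch of robots.txt

if rp.can_fetch(USER_AGENT, TARGET_URL):
    print("Allowed: page may be fetched for the training corpus.")
else:
    print("Disallowed: skip this page.")  # respect the Robots Exclusion Protocol
```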

  • Kevin Klyman

    AI Policy @ Stanford + Harvard

    Our paper on transparency reports for large language models has been accepted to AI Ethics and Society! We’ve also released transparency reports for 14 models. If you’ll be in San Jose on October 21, come see our talk on this work. These transparency reports can help with:
    🗂️ data provenance
    ⚖️ auditing & accountability
    🌱 measuring environmental impact
    🛑 evaluations of risk and harm
    🌍 understanding how models are used

    Mandatory transparency reporting is among the most common AI policy proposals, but there are few guidelines available describing how companies should actually do it. In February, we released our paper, “Foundation Model Transparency Reports,” where we proposed a framework for transparency reporting based on existing transparency reporting practices in pharmaceuticals, finance, and social media. We drew on the 100 transparency indicators from the Foundation Model Transparency Index to make each line item in the report concrete. At the time, no company had released a transparency report for their top AI model, so in providing an example we had to build a chimera transparency report with best practices drawn from 10 different companies.

    In May, we published v1.1 of the Foundation Model Transparency Index, which includes transparency reports for 14 models, including OpenAI’s GPT-4, Anthropic’s Claude 3, Google’s Gemini 1.0 Ultra, and Meta’s Llama 2. The transparency reports are available as spreadsheets on our GitHub and in an interactive format on our website. We worked with companies to encourage them to disclose additional information about their most powerful AI models and were fairly successful – companies shared more than 200 new pieces of information, including potentially sensitive information about data, compute, and deployments.

    🔗 Links to these resources in comment below!

    Thanks to my coauthors Rishi Bommasani, Shayne Longpre, Betty Xiong, Sayash Kapoor, Nestor Maslej, Arvind Narayanan, Percy Liang at Stanford Institute for Human-Centered Artificial Intelligence (HAI), MIT Media Lab, and Princeton Center for Information Technology Policy.
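A minimal sketch of how the report sections listed above might be encoded for machine-readable publication. The dataclass fields mirror the post's list, but the schema itself is an assumption for illustration, not the paper's published format.

```python
from dataclasses import dataclass, field

@dataclass
class TransparencyReport:
    """Illustrative schema mirroring the report sections listed above.
    Field names are assumptions, not the paper's published format."""
    model_name: str
    data_provenance: dict = field(default_factory=dict)        # sources, licenses
    audit_and_accountability: dict = field(default_factory=dict)
    environmental_impact: dict = field(default_factory=dict)   # e.g. energy use
    risk_evaluations: dict = field(default_factory=dict)       # harms, red-teaming
    usage: dict = field(default_factory=dict)                  # how the model is used

report = TransparencyReport(
    model_name="example-model",
    environmental_impact={"training_energy_kwh": "undisclosed"},
)
print(report)
```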

  • Victoria Beckman

    Associate General Counsel - Cybersecurity & Privacy

    The European Commission published its first draft of the “Code of Practice on Transparency of AI‑Generated Content”, designed as a tool to help organizations demonstrate alignment with the transparency requirements (Art. 50) of the AI Act. Article 50 of the AI Act includes obligations for providers to mark AI-generated or manipulated content in a machine-readable format, and for users who deploy generative AI systems for professional purposes to clearly label deepfakes and AI-text publications on matters of public interest.

    The document is divided into two sections. The first section covers rules for marking and detecting AI content, applicable to providers of generative AI systems, including to:
    - Use multi‑layered machine-readable marking of AI‑generated content
    - Use imperceptible watermarks interwoven within content
    - Adopt a digitally signed “manifest/provenance certificate” for content that can’t securely carry metadata
    - Offer free detection interfaces/tools, including confidence scoring, and complementary forensic detection that does not rely on active marking
    - Test against common transformations and adversarial attacks
    - Use open standards and shared/aggregated verifiers to enable cross-platform detection and lower compliance friction

    The second section covers labelling deepfakes and certain AI-generated or manipulated text on matters of public interest, and is applicable to deployers of generative AI systems, including:
    - Deepfake labelling
    - Modality‑specific labelling rules for real-time video, non-real-time video, images, multimodal content, and audio-only content
    - Operational governance: encourages internal compliance documentation, staff training, accessibility measures, and mechanisms to flag and fix missing or incorrect labels
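To make the "digitally signed manifest/provenance certificate" idea concrete, here is a minimal sketch that signs a provenance record for content that cannot carry embedded metadata. The JSON fields and the HMAC-based signature are illustrative assumptions; a production system would use asymmetric signatures under an open standard such as C2PA.

```python
import hashlib
import hmac
import json

# Illustrative signing secret; a real provider would use an asymmetric
# key pair under a standard such as C2PA, not a shared HMAC key.
SIGNING_KEY = b"demo-provider-key"

def provenance_certificate(content: bytes, generator: str) -> dict:
    """Build and sign a minimal provenance record for content
    that cannot securely carry embedded metadata."""
    record = {
        "ai_generated": True,
        "generator": generator,
        "content_sha256": hashlib.sha256(content).hexdigest(),
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return record

cert = provenance_certificate(b"<audio bytes>", generator="example-tts-model")
print(json.dumps(cert, indent=2))
```

A verifier holding the key recomputes the HMAC over the record (minus the signature field) and compares, so any tampering with the provenance claims is detectable.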

  • Kevin Pomfret

    Attorney, Author | Space, AI, Digital Twins, Smart Cities, Mobility, Autonomy

    Businesses that offer generative AI systems or services in California should be aware that the state's Generative AI Training Data Transparency Act takes effect on January 1, 2026. It imposes documentation and disclosure obligations on developers of such systems released on or after January 1, 2022. Specifically, covered developers must post on their website documentation describing the data used to train, test, validate, or fine-tune the system, including:
    · Sources or owners of datasets and how they support the system’s intended purpose.
    · The size of datasets (ranges permitted; estimates for dynamic datasets).
    · Types and characteristics of data points and labeling practices.
    · Whether datasets include copyrighted, trademarked, or patented material versus public domain content.
    · Whether datasets were purchased or licensed.
    · Whether datasets include personal information or aggregate consumer information as defined under California law.
    · Any cleaning, processing, or modifications performed and their purposes.
    · Data collection periods, including whether collection is ongoing, and the dates first used in development.
    · Whether synthetic data generation was used, and the functional need for it if included.

    There are certain exemptions, including if the system (i) is made available only to a federal entity exclusively for national security, military, or defense purposes, or (ii) is made available solely to a hospital medical staff member.

    Businesses offering generative AI systems and services in California should consider taking the following next steps (a disclosure-template sketch follows this list):
    · Conducting a data provenance and licensing assessment for all covered systems released since January 1, 2022.
    · Building a standardized disclosure template aligned with the statute’s enumerated elements to support publication before January 1, 2026 and at each substantial modification.
    · Establishing change‑management triggers so that retraining or fine‑tuning that materially affects performance prompts updated disclosures.
    · Mapping exemptions, if any apply, and documenting the basis for relying on them.

    #geospatiallaw #geoai
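A minimal sketch of the "standardized disclosure template" next step, with one field per enumerated element as summarized above. The field names and types are illustrative assumptions, not statutory language.

```python
from dataclasses import dataclass

@dataclass
class TrainingDataDisclosure:
    """Illustrative template tracking the elements summarized above.
    Field names are assumptions, not statutory language."""
    dataset_sources_or_owners: list[str]
    supports_intended_purpose: str
    dataset_size_range: str                # ranges/estimates permitted
    data_point_types_and_labeling: str
    includes_ip_protected_material: bool   # copyrighted/trademarked/patented
    purchased_or_licensed: bool
    includes_personal_information: bool
    cleaning_and_processing_steps: str
    collection_period: str                 # note whether collection is ongoing
    first_used_in_development: str
    synthetic_data_used: bool
    synthetic_data_rationale: str = ""

disclosure = TrainingDataDisclosure(
    dataset_sources_or_owners=["example-public-corpus"],
    supports_intended_purpose="general-purpose text generation",
    dataset_size_range="1-10 TB",
    data_point_types_and_labeling="web text; no human labels",
    includes_ip_protected_material=True,
    purchased_or_licensed=False,
    includes_personal_information=True,
    cleaning_and_processing_steps="deduplication, PII filtering",
    collection_period="2020-2024 (ongoing)",
    first_used_in_development="2024-06",
    synthetic_data_used=False,
)
print(disclosure)
```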

  • Raymond Sun

    Tracking AI regulation and culture trends | Tech Lawyer & Developer | @techieray @LegalQuants

    “Trust but verify”.

    ^ That’s the 3-word summary of the policy approach proposed by the Joint California Policy Working Group on AI Frontier Models (attached below). Even if you’re not based in California, this is a fantastic rulebook on AI policy and regulation. It's one of the more nuanced and deeply-thought papers that cuts past the generic “regulation v innovation” debate and dives straight into a specific policy solution for governing frontier models (with wisdom drawn from historical analogies in tobacco, energy, pesticides and car safety). Here’s my quick summary of the “trust but verify” model.

    1️⃣ TRANSPARENCY
    In a nutshell, the “trust but verify” approach is rooted in transparency, which is essential for building “trust”. But transparency is such a broad concept, so the paper neatly breaks it down in terms of:
    ▪️ Data acquisition
    ▪️ Safety practices
    ▪️ Security practices
    ▪️ Pre-deployment testing
    ▪️ Downstream impact
    ▪️ Accountability for openness
    There’s nuance and different transparency mechanisms for each area. However, transparency alone doesn’t guarantee accountability or redress. In fact, the paper warns us about “transparency washing” – i.e. where policymakers (futilely) pursue transparency for the sake of it without achieving anything. Transparency needs to be tested and verified (hence the “verify”).

    2️⃣ THIRD PARTY RISK ASSESSMENT
    This supports the “verify” aspect, and the idea of “evidence-based transparency” (i.e. transparency that you can actually trust). This is not just about audits and evaluations, but also specific things like:
    ▪️ researcher protections (i.e. safe harbour / indemnity protections for public interest safety research)
    ▪️ responsible disclosure (i.e. infrastructure is needed to communicate identified vulnerabilities to affected parties)

    3️⃣ WHISTLEBLOWER PROTECTION
    This means legal safeguards protecting whistleblowers from retaliation when they report misconduct, fraud, illegal activities, etc. It might be the secret to driving *real* corporate accountability in AI.

    4️⃣ ADVERSE EVENT REPORTING
    A reporting regime for AI-related incidents (similar to data breach reporting regimes) helps with identification and enforcement, regulatory coordination and information sharing, and analytics.

    5️⃣ SCOPE
    What type of frontier models should be regulated? The paper suggests these guiding principles (a threshold sketch follows this post):
    ▪️ "Generic developer-level thresholds seem to be generally undesirable given the current AI landscape"
    ▪️ "Compute thresholds are currently the most attractive cost-level thresholds, but they are best combined with other metrics for most regulatory intents"
    ▪️ "Thresholds based on risk evaluation results and observed downstream impact are promising for safety and corporate governance policy, but they have practical issues"

    👓 Want more? See my map which tracks AI laws and policies around the world (see link in 'Visit my website'). #ai #tech #airegulation #policy #california
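As a toy illustration of "compute thresholds ... best combined with other metrics": the sketch below treats a model as in-scope when its training compute crosses a threshold or when risk evaluations and downstream impact indicate elevated risk. The 1e26 FLOP figure and the scoring fields are assumptions for illustration, not the working group's actual criteria.

```python
# Toy scope check combining a compute threshold with other metrics.
# The numbers below are illustrative assumptions, not the paper's criteria.
COMPUTE_THRESHOLD_FLOP = 1e26
RISK_SCORE_THRESHOLD = 0.7

def in_regulatory_scope(training_flop: float,
                        risk_eval_score: float,
                        downstream_incidents: int) -> bool:
    """A model is in scope if it crosses the compute threshold OR if
    risk evaluations / observed downstream impact indicate elevated risk."""
    return (
        training_flop >= COMPUTE_THRESHOLD_FLOP
        or risk_eval_score >= RISK_SCORE_THRESHOLD
        or downstream_incidents > 0
    )

print(in_regulatory_scope(3e25, 0.4, 0))  # False: below all thresholds
print(in_regulatory_scope(3e25, 0.8, 0))  # True: risk evaluation triggers scope
```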

  • Sona Sulakian

    CEO & Co-founder at Pincites

    Onboarding an AI vendor? Don't sign until you've reviewed this checklist. From our analysis of 50+ AI addendums, these are the clauses that actually matter (a structured version is sketched after this post). Not all issues will be relevant to every deal. So always start with the basics:
    - What data are they collecting?
    - What can they actually do with it?
    Force the issue by deleting any usage data or aggregated data rights on a first pass.

    𝐆𝐨𝐯𝐞𝐫𝐧𝐚𝐧𝐜𝐞 & 𝐏𝐞𝐫𝐦𝐢𝐬𝐬𝐢𝐨𝐧𝐬
    🔹 No AI use without prior written approval; unapproved use = material breach
    🔹 No high-risk or automated decision-making AI unless required for services
    🔹 Must comply with all AI laws and related policies
    🔹 Support transparency and documentation if buyer requests it

    𝐃𝐚𝐭𝐚 & 𝐈𝐏
    🔹 Buyer owns all AI inputs, outputs, and related IP
    🔹 Vendor cannot use buyer data to train, fine-tune, or improve any AI
    🔹 All AI data and outputs are confidential information
    🔹 On termination, vendor must return or destroy buyer data and certify deletion

    𝐒𝐞𝐜𝐮𝐫𝐢𝐭𝐲 & 𝐂𝐨𝐦𝐩𝐥𝐢𝐚𝐧𝐜𝐞
    🔹 Maintain strong security controls: MFA, least-privilege, audits, and incident response
    🔹 Periodically test and validate AI systems for confidentiality, integrity, and reliability

    𝐏𝐞𝐫𝐟𝐨𝐫𝐦𝐚𝐧𝐜𝐞 & 𝐄𝐭𝐡𝐢𝐜𝐬
    🔹 Ensure AI outputs are accurate, reliable, and ethically developed
    🔹 Test for and mitigate bias in training data and outputs
    🔹 Don’t generate illegal, offensive, or harmful content
    🔹 Clearly label AI-generated audio, images, video, or text

    𝐑𝐢𝐬𝐤 & 𝐋𝐢𝐚𝐛𝐢𝐥𝐢𝐭𝐲
    🔹 Warrant that AI systems are accurate, secure, bias-free, and virus-free
    🔹 Indemnify buyer for IP infringement, contract breaches, or violations of law
    🔹 Maintain robust cyber insurance and assume full liability for AI errors or misuse

    𝐍𝐨𝐭 𝐬𝐭𝐚𝐧𝐝𝐚𝐫𝐝 𝐲𝐞𝐭, 𝐛𝐮𝐭...
    🔹 Conduct third party AI audits
    🔹 Maintain AI insurance
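A minimal sketch of the checklist above as structured data that a review workflow could use to flag missing clauses. The category names follow the post; everything else (the keyword stand-ins, the naive substring matching) is an assumption for illustration, far simpler than real contract review.

```python
# Illustrative encoding of the checklist above; the keywords are
# simplistic stand-ins for real clause-detection logic.
CHECKLIST = {
    "Governance & Permissions": ["prior written approval", "comply with all AI laws"],
    "Data & IP": ["buyer owns", "certify deletion"],
    "Security & Compliance": ["MFA", "least-privilege", "incident response"],
    "Performance & Ethics": ["mitigate bias", "label AI-generated"],
    "Risk & Liability": ["indemnify", "cyber insurance"],
}

def missing_clauses(addendum_text: str) -> dict[str, list[str]]:
    """Flag checklist items whose keywords never appear in the addendum."""
    lowered = addendum_text.lower()
    return {
        category: [item for item in items if item.lower() not in lowered]
        for category, items in CHECKLIST.items()
    }

gaps = missing_clauses("Vendor shall maintain MFA and indemnify buyer...")
for category, items in gaps.items():
    if items:
        print(f"{category}: missing {items}")
```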

  • Daniel Schwarcz

    Professor at University of Minnesota

    At the end of September, Governor Gavin Newsom signed the Transparency in Frontier Artificial Intelligence Act (S.B. 53), requiring large AI companies to report the risks their systems pose and the safeguards they have in place. Unlike last year’s vetoed S.B. 1047, this new version de-emphasizes liability. It explicitly caps financial penalties—even for catastrophic AI failures—and focuses instead on transparency and reporting. As Senator Scott Wiener explained, “Whereas SB 1047 was more of a liability-focused bill, SB 53 is more focused on transparency.”

    In this new piece for Institute for Law & AI (LawAI), https://lnkd.in/gAig3vSz, Josephine Wolff and I argue that SB 53's basic approach makes sense, as expanding liability for AI harms won’t necessarily make AI systems safer or more secure. Liability almost always brings insurers into the picture—and as we’ve seen in the cyber insurance market, insurers often struggle to model or mitigate complex, evolving risks. When that happens, insurance helps firms manage liability exposure, not safety risk.

    California’s transparency-first approach is a smarter place to start. By requiring companies to report on AI risks and incidents, regulators can help build the data needed to understand what works—and what doesn’t—when it comes to preventing AI-related harms. That kind of foundation is critical if we want policy, regulation, and insurance to actually make emerging technologies safer.

  • Martin Ebers

    Robotics & AI Law Society (RAILS)

    European Commission: First Draft Code of Practice on #Transparency of #AI-Generated Content

    This first draft of the Code addresses key considerations for providers and deployers of AI systems generating content falling within the scope of Article 50(2) and (4). Article 50 of the AI Act includes obligations for providers to mark AI-generated or manipulated content in a machine-readable format, and for users who deploy generative AI systems for professional purposes to clearly label deepfakes and AI-text publications on matters of public interest. To help providers and deployers meet these requirements, the Commission is facilitating the development of a voluntary Code of Practice drafted by independent experts, ahead of these rules entering into application.

    The draft Code of Practice consists of two sections. The first section covers rules for marking and detecting AI content, applicable to providers of generative AI systems. The second section covers labelling deepfakes and certain AI-generated or manipulated text on matters of public interest, and is applicable to deployers of generative AI systems.

    The Commission will collect feedback on the first draft from participants and observers to the Code of Practice until 23 January. The second draft will be drawn up by mid-March 2026, with the Code expected to be finalised by June 2026. The rules covering the transparency of AI-generated content will become applicable on 2 August 2026.
