Data Privacy Risks on Open Platforms

Explore top LinkedIn content from expert professionals.

Summary

Data privacy risks on open platforms refer to the danger of sensitive information being exposed, misused, or accessed by unauthorized parties when using publicly accessible tools, social media, or cloud-based systems. As more people and businesses interact with open platforms, personal details, credentials, and confidential data can inadvertently become available to others, leading to potential security breaches and identity theft.

  • Review sharing habits: Always double-check what information you share online, especially in public AI tools or social platforms, to prevent accidental exposure of confidential data.
  • Secure developer workflows: Avoid pasting credentials or other sensitive information into open online utilities or code-formatting sites, as these platforms may store your input and expose it publicly.
  • Monitor third-party risk: Regularly assess which external apps and plugins have access to your data and limit the amount of information shared with vendors or integrated services.
Summarized by AI based on LinkedIn member posts
  • View profile for Jon Nordmark

    Co-founder, CEO @ Iterate.ai ( 🔐 𝗣𝗿𝗶𝘃𝗮𝘁𝗲 𝗔𝗜 ) + co-founder, CEO @ eBags ( $1.6B products sold )

    30,455 followers

    𝗣𝘂𝗯𝗹𝗶𝗰 𝗔𝗜 may create the 𝗯𝗶𝗴𝗴𝗲𝘀𝘁 𝗮𝗰𝗰𝗶𝗱𝗲𝗻𝘁𝗮𝗹 𝗱𝗮𝘁𝗮 𝗹𝗲𝗮𝗸 𝗶𝗻 𝗯𝘂𝘀𝗶𝗻𝗲𝘀𝘀 𝗵𝗶𝘀𝘁𝗼𝗿𝘆. Public AI includes ChatGPT, Gemini, Grok, Anthropic, DeepSeek, Perplexity — any chat system that runs on giant 𝘀𝗵𝗮𝗿𝗲𝗱 𝗚𝗣𝗨 𝗳𝗮𝗿𝗺𝘀. And while those public models do amazing things, we need to talk about something most leaders underestimate. — 𝗣𝘂𝗯𝗹𝗶𝗰 𝗔𝗜 𝗶𝘀𝗻’𝘁 𝗮 𝘃𝗮𝘂𝗹𝘁. — 𝗜𝘁’𝘀 𝗮 𝘃𝗮𝗰𝘂𝘂𝗺. People are pasting their identities, internal documents, and even corporate secrets into systems designed for scale — 𝗻𝗼𝘁 𝗽𝗿𝗶𝘃𝗮𝗰𝘆. It feels private. — But — It isn’t. Analogy: Using Public AI is like whispering confidential strategy into a megaphone because you thought it was turned off. Now pause for a moment and watch the video. The colored lines represent 𝗺𝗶𝗹𝗹𝗶𝗼𝗻𝘀 𝗼𝗳 𝗽𝗿𝗼𝗺𝗽𝘁𝘀 from 𝗺𝗶𝗹𝗹𝗶𝗼𝗻𝘀 𝗼𝗳 𝗽𝗲𝗼𝗽𝗹𝗲 working at 𝘁𝗵𝗼𝘂𝘀𝗮𝗻𝗱𝘀 𝗼𝗳 𝗰𝗼𝗺𝗽𝗮𝗻𝗶𝗲𝘀 — running across: — 𝘀𝗵𝗮𝗿𝗲𝗱 𝗵𝗮𝗿𝗱𝘄𝗮𝗿𝗲 (1,000s of NVIDIA GPUs powering Public AI) — 𝘀𝗵𝗮𝗿𝗲𝗱 𝗺𝗼𝗱𝗲𝗹𝘀 (like ChatGPT’s GPT-5) — 𝘀𝗵𝗮𝗿𝗲𝗱 𝗶𝗻𝗳𝗿𝗮𝘀𝘁𝗿𝘂𝗰𝘁𝘂𝗿𝗲 "𝑺𝒉𝒂𝒓𝒆𝒅" is the operative word... the word to pay attention to. T.H.A.T. is Public AI: powerful, massive, centralized… but fundamentally risky. Especially for CISOs, Boards, and CEOs responsible for safeguarding PII, HIPAA-sensitive, and financial data. When your data enters a Public LLM, it moves across the world: — It gets logged. — It gets cached. — It gets stored. — And sometimes, it gets trained on. Even when vendors like OpenAI (ChatGPT), Google (Gemini), DeepSeek, Anthropic, and Perplexity say they don’t train on your data, 𝗼𝘁𝗵𝗲𝗿 𝗿𝗶𝘀𝗸𝘀 𝗿𝗲𝗺𝗮𝗶𝗻: — logging — retention — global routing — caching — prompt injection — model leakage — subpoena exposure Training isn’t the only danger. It’s just one of many. You may think you “deleted” something… but think again. That’s why this series exists: To break down the overlooked risks of Public AI — and highlight the safer path. That safer path is Private AI: Models that run behind your firewall, on your hardware, with your controls. Tomorrow we begin the list. 𝗣𝗿𝗶𝘃𝗮𝘁𝗲 𝗔𝗜 🔒 𝗸𝗲𝗲𝗽𝘀 𝘆𝗼𝘂𝗿 𝗱𝗮𝘁𝗮 𝗯𝗲𝗵𝗶𝗻𝗱 𝘆𝗼𝘂𝗿 𝗳𝗶𝗿𝗲𝘄𝗮𝗹𝗹 — 𝗻𝗼𝘁 𝗶𝗻 𝘀𝗼𝗺𝗲𝗼𝗻𝗲 𝗲𝗹𝘀𝗲’𝘀 𝗹𝗼𝗴𝘀. — I’ve added a simple explainer in the comments. (This post is part of my 27-part Public AI Risk Series.) #PrivateAI #EnterpriseAI #CyberSecurity #BoardDirectors #AICompliance
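
    The "Private AI" alternative the post describes (models running behind your own firewall, on your own hardware) can be illustrated with a minimal sketch: serve an open-weight model locally so prompts never reach a shared public endpoint or a vendor's logs. This assumes the Hugging Face transformers library is installed and that the example model below, which is an assumption rather than a recommendation, fits in local memory.

    ```python
    # Minimal sketch: run an open-weight model locally so prompts stay on
    # your own hardware instead of going to a shared public endpoint.
    # Assumes `transformers` and `torch` are installed; the model name is
    # an example only.
    from transformers import pipeline

    generator = pipeline("text-generation", model="Qwen/Qwen2.5-0.5B-Instruct")

    # The prompt never leaves this machine: no third-party logging,
    # caching, retention, or training policies apply to it.
    prompt = "Summarize the key risks of pasting internal documents into public AI tools."
    result = generator(prompt, max_new_tokens=150)
    print(result[0]["generated_text"])
    ```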

  • View profile for Murtuza Lokhandwala

    IT Service Delivery Leader | Project Manager IT | Major Incident & Problem Management | IT Infrastructure | ITIL | Cybersecurity | SLA & Operations Excellence | 14+ Years

    5,659 followers

    Think Before You Share: The Hidden Cybersecurity Risks of Social Media 🚨🔐 In an era where data is the new currency, every post, check-in, or status update can serve as an intelligence goldmine for cybercriminals. What seems like harmless sharing—your vacation photos, workplace updates, or even a "fun fact" about your first pet—can be weaponized against you. 🔥 How Oversharing Exposes You to Cyber Threats 🔹 Geo-Tagging & Real-Time Location Leaks Sharing your location makes you an easy target. Cybercriminals use this data to track routines, monitor absences, or even launch physical security threats such as home burglaries. 🔹 Social Engineering & Credential Harvesting Those "what’s your mother’s maiden name?" or "which city were you born in?" quiz posts are a hacker’s playground. Attackers scrape these responses to guess password security questions or craft highly convincing phishing emails. 🔹 Metadata & Digital Fingerprinting Every photo you upload contains EXIF metadata (including GPS coordinates and device details). Attackers can extract this information, identify locations, and even map out behavior patterns for targeted cyberattacks. 🔹 OSINT (Open-Source Intelligence) Reconnaissance Threat actors don’t need sophisticated hacking tools when your social media profile provides a full dossier on your life. They correlate job roles, connections, and public interactions to execute whaling attacks, corporate espionage, or deepfake impersonations. 🔹 Dark Web Data Correlation Your exposed social media details can be cross-referenced with breached databases. If your credentials have been compromised in past data leaks, attackers can launch credential stuffing attacks to hijack your accounts. 🔐 Cyber-Hygiene: Best Practices for Social Media Security ✅ Restrict Profile Visibility – Limit exposure by setting profiles to private and segmenting audiences for sensitive updates. ✅ Sanitize Metadata Before Uploading – Use tools to strip EXIF data from images before posting. ✅ Implement Multi-Factor Authentication (MFA) – Enforce adaptive authentication to prevent unauthorized account access. ✅ Zero-Trust Mindset – Assume any publicly shared data can be aggregated, exploited, or weaponized against you. ✅ Monitor for Breach Exposure – Regularly check if your credentials are compromised using breach notification services like Have I Been Pwned. 🔎 The Internet doesn’t forget. Every post contributes to your digital footprint—control it before someone else does. Have you ever reconsidered a social media post due to security concerns? Drop your thoughts below! 👇 #CyberSecurity #SocialMediaThreats #Infosec #PrivacyMatters #DataProtection #Phishing #ThreatIntelligence #ZeroTrust #CyberThreats #cybersecuritytips #cybersecurityawareness #informationsecurity #networking #networksecurity #cyberattacks #CyberRisk #CyberHygiene #ITSecurity #InsiderThreats #informationtechnology #technicalsupport
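
    The "Sanitize Metadata Before Uploading" item above is easy to automate locally. A minimal sketch assuming the Pillow imaging library (any EXIF-stripping tool works just as well); the file names are placeholders:

    ```python
    # Minimal sketch: strip EXIF metadata (GPS coordinates, device details)
    # from a photo before posting it. Assumes the Pillow library is
    # installed; file paths are placeholders.
    from PIL import Image

    def strip_exif(src_path: str, dst_path: str) -> None:
        with Image.open(src_path) as img:
            # Copy only the pixel data into a fresh image; the original
            # EXIF block (including any embedded GPS tags) is left behind.
            clean = Image.new(img.mode, img.size)
            clean.putdata(list(img.getdata()))
            clean.save(dst_path)

    strip_exif("vacation_photo.jpg", "vacation_photo_clean.jpg")
    ```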

  • View profile for Keith King

    Former White House Lead Communications Engineer, U.S. Dept of State, and Joint Chiefs of Staff in the Pentagon. Veteran U.S. Navy, Top Secret/SCI Security Clearance. Over 14,000+ direct connections & 40,000+ followers.

    39,999 followers

    Developer Tools Accidentally Expose Sensitive Credentials Across Banking, Government, and Healthcare A major data leak reveals how simple coding utilities can become high-risk attack vectors. Introduction Two popular online code-formatting tools—JSONFormatter and CodeBeautify—have inadvertently exposed thousands of sensitive credentials over multiple years. Developers using these sites to clean up JSON or reformat code unintentionally saved links containing embedded secrets, leaving them publicly accessible. The exposure spans banks, governments, healthcare systems and even cybersecurity firms, creating a high-impact security incident with wide operational implications. What Happened Scope of Exposure • watchTowr researchers uncovered five years of JSONFormatter data and a full year of CodeBeautify data. • Saved links publicly exposed everything users pasted—including secrets and authentication details. • As of publication, the links remain accessible to anyone. Types of Compromised Information • Active Directory usernames and passwords. • Cloud and database credentials. • SSH private keys and session recordings. • API tokens, CI/CD secrets and code repository access keys. • Payment gateway keys and KYC-sensitive PII. • Bank and stock exchange system credentials, including AWS access keys for a major global trading platform. • Highly identifiable data from at least one cybersecurity company. Why It Happened • The platforms are designed for convenience, not security. • Saved-formatting URLs embed whatever users paste—and are not protected by authentication. • Developers often use these tools without realizing they are publishing confidential data to the open internet. • No automated scrubbing, encryption or expiration policy prevents long-term exposure. Industry Impact • High-risk sectors—including government agencies, financial institutions and healthcare providers—now face potential breaches. • Attackers could weaponize this data to compromise networks, manipulate databases or hijack cloud infrastructure. • Third-party risk expands: these leaks did not originate inside affected organizations but through developer workflows. Why This Matters This incident highlights a systemic issue in modern software development: convenience tools can inadvertently become severe security liabilities. As AI-driven coding accelerates, reliance on quick online utilities grows, increasing exposure risks. Organizations must treat developer tooling as part of their attack surface, enforce strict secret-handling policies and integrate automated scanning to detect leaked credentials before adversaries exploit them. I share daily insights with 34,000+ followers across defense, tech, and policy. If this topic resonates, I invite you to connect and continue the conversation. Keith King https://lnkd.in/gHPvUttw
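
    The workflow fix here is straightforward: format and validate JSON locally instead of pasting it into a public web utility that may persist a shareable link. A minimal sketch using only the Python standard library (the default file name is a placeholder); `python -m json.tool file.json` or `jq . file.json` achieve the same from the command line.

    ```python
    # Minimal sketch: pretty-print and validate JSON locally rather than
    # pasting it into an online formatter. Standard library only; the
    # default file name is a placeholder.
    import json
    import sys

    def format_json_file(path: str) -> None:
        with open(path, "r", encoding="utf-8") as f:
            data = json.load(f)  # raises json.JSONDecodeError on invalid input
        print(json.dumps(data, indent=2, sort_keys=True))

    if __name__ == "__main__":
        format_json_file(sys.argv[1] if len(sys.argv) > 1 else "payload.json")
    ```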

  • View profile for Naveen Bachkethi

    VP Engineering, GenAI DLP @ Concentric AI | Founder & CEO, Swift Security (Acquired) | GenAI Data Security Leader

    17,036 followers

    The OpenAI–Mixpanel breach (https://lnkd.in/giZuQ3mP) is a warning sign for every company using public SaaS and GenAI tools, not because of what leaked, but because of what it revealed. Last week, OpenAI confirmed that a breach at its analytics provider Mixpanel exposed user names, email addresses, and metadata, even though core data, API keys, and chat content were not affected. On the surface, it feels minor. But in security, metadata is rarely “just metadata.” It's identity. It’s behavior. It’s a map of who uses what, from where, and when. And in the GenAI era, it’s often the connective tissue between people, systems, and enterprise workflows. 👉 The real insight: As organizations integrate public SaaS + GenAI applications, their attack surface no longer stops at their own infrastructure. It now extends to every analytics script, plugin, browser extension, and third-party system stitched into their workflow. We’ve spent years hardening core data systems. But very few companies have hardened the data exhaust: the telemetry, user metadata, prompts, logs, and behavioral signals that flow silently to vendors. This incident highlights three truths: 1️⃣ Data security must move upstream. We must classify and monitor every type of data, not just the obviously sensitive fields. 2️⃣ Vendor ecosystems are now part of your security perimeter. A single compromised SaaS vendor can create a breach path into hundreds of enterprises. 3️⃣ GenAI adoption amplifies the blast radius. Because AI systems rely heavily on prompts, analytics, and context, they naturally create more metadata than traditional apps and more opportunities for leakage. As someone building in the GenAI security space, I see this as a pivotal moment for our industry. AI is accelerating faster than governance practices are keeping up. We cannot keep treating SaaS telemetry as harmless or optional. It’s part of the risk model. The organizations that win with GenAI will be the ones that: 1. Know where their data flows 2. Understand what leaves their environment 3. Minimize what 3rd-party vendors can see 4. And embed security, visibility, and governance into every stage of their AI journey Breaches like this aren’t outliers, they are signals. Signals that the future of AI adoption must be paired with a new generation of security practices. Because the question isn’t just “How do we secure AI?” It’s “How do we secure everything that goes to AI?” Good News: Concentric AI can help!
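
    One way to act on "minimize what 3rd-party vendors can see" is to scrub direct identifiers from telemetry before it leaves your environment. The sketch below is illustrative only: the event fields, salt, and eventual vendor call are assumptions, not any specific analytics provider's API.

    ```python
    # Minimal sketch: replace direct identifiers in an analytics event with
    # salted hashes before handing it to a third-party vendor. Field names
    # and the event shape are hypothetical.
    import hashlib

    IDENTIFYING_FIELDS = {"name", "email", "ip_address"}

    def scrub_event(event: dict, salt: str) -> dict:
        scrubbed = {}
        for key, value in event.items():
            if key in IDENTIFYING_FIELDS:
                # The vendor can still count distinct users, but never sees
                # the raw identifier.
                digest = hashlib.sha256((salt + str(value)).encode()).hexdigest()
                scrubbed[key] = digest[:16]
            else:
                scrubbed[key] = value
        return scrubbed

    event = {"event": "prompt_submitted", "email": "jane.doe@example.com",
             "ip_address": "203.0.113.7", "feature": "summarize"}
    print(scrub_event(event, salt="rotate-this-salt-regularly"))
    ```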

  • View profile for Luiza Jarovsky, PhD

    Co-founder of the AI, Tech & Privacy Academy (1,400+ participants), Author of Luiza’s Newsletter (92,000+ subscribers), Mother of 3

    128,831 followers

    🚨 AI Privacy Risks & Mitigations Large Language Models (LLMs), by Isabel Barberá, is the 107-page report about AI & Privacy you were waiting for! [Bookmark & share below]. Topics covered: - Background "This section introduces Large Language Models, how they work, and their common applications. It also discusses performance evaluation measures, helping readers understand the foundational aspects of LLM systems." - Data Flow and Associated Privacy Risks in LLM Systems "Here, we explore how privacy risks emerge across different LLM service models, emphasizing the importance of understanding data flows throughout the AI lifecycle. This section also identifies risks and mitigations and examines roles and responsibilities under the AI Act and the GDPR." - Data Protection and Privacy Risk Assessment: Risk Identification "This section outlines criteria for identifying risks and provides examples of privacy risks specific to LLM systems. Developers and users can use this section as a starting point for identifying risks in their own systems." - Data Protection and Privacy Risk Assessment: Risk Estimation & Evaluation "Guidance on how to analyse, classify and assess privacy risks is provided here, with criteria for evaluating both the probability and severity of risks. This section explains how to derive a final risk evaluation to prioritize mitigation efforts effectively." - Data Protection and Privacy Risk Control "This section details risk treatment strategies, offering practical mitigation measures for common privacy risks in LLM systems. It also discusses residual risk acceptance and the iterative nature of risk management in AI systems." - Residual Risk Evaluation "Evaluating residual risks after mitigation is essential to ensure risks fall within acceptable thresholds and do not require further action. This section outlines how residual risks are evaluated to determine whether additional mitigation is needed or if the model or LLM system is ready for deployment." - Review & Monitor "This section covers the importance of reviewing risk management activities and maintaining a risk register. It also highlights the importance of continuous monitoring to detect emerging risks, assess real-world impact, and refine mitigation strategies." - Examples of LLM Systems’ Risk Assessments "Three detailed use cases are provided to demonstrate the application of the risk management framework in real-world scenarios. These examples illustrate how risks can be identified, assessed, and mitigated across various contexts." - Reference to Tools, Methodologies, Benchmarks, and Guidance "The final section compiles tools, evaluation metrics, benchmarks, methodologies, and standards to support developers and users in managing risks and evaluating the performance of LLM systems." 👉 Download it below. 👉 NEVER MISS my AI governance updates: join my newsletter's 58,500+ subscribers (below). #AI #AIGovernance #Privacy #DataProtection #AIRegulation #EDPB
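
    The report's risk-estimation step (combining the probability and severity of a risk into a final risk level) can be illustrated with a small scoring helper. The 1-to-4 scales, band thresholds, and example risks below are illustrative assumptions, not values taken from the report.

    ```python
    # Illustrative sketch of a probability x severity risk estimation, in
    # the spirit of the report's risk-evaluation step. The 1-4 scales,
    # band thresholds, and example risks are assumptions, not the report's.
    def risk_level(probability: int, severity: int) -> str:
        """Both inputs are scored from 1 (low) to 4 (very high)."""
        score = probability * severity
        if score >= 12:
            return "high"
        if score >= 6:
            return "medium"
        return "low"

    example_risks = {
        "model memorizes and regurgitates personal data": (2, 4),
        "prompt logs retained by a sub-processor": (3, 3),
        "verbose errors reveal system prompt contents": (3, 1),
    }
    for name, (p, s) in example_risks.items():
        print(f"{name}: {risk_level(p, s)} (score {p * s})")
    ```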

  • View profile for Jon Krohn

    Co-Founder of Y Carrot 🥕 Fellow at Lightning A.I. ⚡️ SuperDataScience Host 🎙️

    44,478 followers

    Consumers and enterprises dread that Generative A.I. tools like ChatGPT breach privacy by using conversations as training data, storing PII and potentially surfacing confidential data as responses. Prof. Raluca Ada Popa has all the solutions. Today's guest, Raluca: • Is Associate Professor of Computer Science at University of California, Berkeley. • Specializes in computer security and applied cryptography. • Her papers have been cited over 10,000 times. • Is Co-Founder and President of Opaque Systems, a confidential computing platform that has raised over $31m in venture capital to enable collaborative analytics and A.I., including allowing you to securely interact with Generative A.I. • Previously co-founded PreVeil, a now-well-established company that provides end-to-end document and message encryption to over 500 clients. • Holds a PhD in Computer Science from MIT. Despite Raluca being such a deep expert, she does such a stellar job of communicating complex concepts simply that today’s episode should appeal to anyone who wants to dig into the thorny issues around data privacy and security associated with Large Language Models (LLMs) and how to resolve them. In the episode, Raluca details: • What confidential computing is and how to do it without sacrificing performance. • How you can perform inference with an LLM (or even train an LLM!) without anyone — including the LLM developer! — being able to access your data. • How you can use commercial generative models like OpenAI’s GPT-4 without OpenAI being able to see sensitive or personally-identifiable information you include in your API query. • The pros and cons of open-source versus closed-source A.I. development. • How and why you might want to seamlessly run your compute pipelines across multiple cloud providers. • Why you should consider a career that blends academia and entrepreneurship. Many thanks to Amazon Web Services (AWS) and Modelbit for supporting this episode of SuperDataScience, enabling the show to be freely available on all major podcasting platforms and on YouTube — see comments for details ⬇️ #superdatascience #generativeai #ai #machinelearning #privacy #confidentialcomputing
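
    Confidential computing, as discussed in the episode, addresses this at the infrastructure layer. A far more limited client-side mitigation is to pseudonymize obvious PII before a prompt ever reaches a hosted model. The sketch below only catches pattern-matchable identifiers and uses a hypothetical `call_llm` stand-in, so treat it as an illustration of the problem rather than a substitute for the approaches described in the episode.

    ```python
    # Minimal sketch: swap obvious PII for placeholders before sending a
    # prompt to a hosted LLM, then restore it in the response. `call_llm`
    # is a hypothetical stand-in; the regexes only catch emails and phone
    # numbers, so this is not a complete PII filter.
    import re

    PATTERNS = {
        "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
        "PHONE": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
    }

    def pseudonymize(text: str):
        mapping = {}
        for label, pattern in PATTERNS.items():
            for i, match in enumerate(pattern.findall(text)):
                token = f"<{label}_{i}>"
                mapping[token] = match
                text = text.replace(match, token)
        return text, mapping

    def restore(text: str, mapping: dict) -> str:
        for token, original in mapping.items():
            text = text.replace(token, original)
        return text

    prompt = "Draft a polite reply to anna.keller@example.com about invoice 4471."
    safe_prompt, mapping = pseudonymize(prompt)
    print(safe_prompt)                      # PII replaced with placeholders
    # response = call_llm(safe_prompt)      # hypothetical hosted-LLM call
    # print(restore(response, mapping))     # re-insert PII locally
    ```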

  • View profile for Nico Orie

    VP People & Culture

    17,698 followers

    OpenClaw, MCP, and the Architecture of AI Risk Autonomous AI agents are no longer just experiments — they’re starting to act inside real systems. OpenClaw (formerly MoltBot/Clawdbot) is a good example. It can access files, connect to apps, run workflows, and even remember information across sessions. Most of this is powered by the Model Context Protocol (MCP) — a protocol that lets AI agents interact with your local and cloud systems. MCP is powerful, but it also opens up new risks. AI researcher Simon Willison calls the resulting combination the “Lethal Trifecta” — three things that together create a big security problem: 1. Access to private data 2. Exposure to untrusted content (like emails or web pages) 3. Ability to act externally (send messages, call APIs, automate actions) When all three are present, attackers don’t need to hack anything in the traditional way. They can hide malicious instructions in normal content, and the AI will execute them automatically. Add persistent memory, and a malicious instruction planted today could run weeks later. There’s another risk: employees using tools like OpenClaw privately. Like early “shadow IT,” people may install these AI tools on their own devices, connect them to internal apps — without IT or security oversight. AI is moving from answering questions to taking actions. And action changes everything. To stay safe: • Audit all MCP integrations • Enforce least-privilege access • Sandbox agent environments • Require human approval for risky actions • Confirm policies on private AI use. AI agents are becoming operational actors. And operational actors need operational controls. Source https://lnkd.in/e5k7ZYi4
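
    The "require human approval for risky actions" control can be sketched as a thin gate between the agent and its tools. The tool names and risky-action list below are illustrative assumptions, not part of MCP, OpenClaw, or any specific agent framework.

    ```python
    # Minimal sketch: require explicit human confirmation before an agent
    # executes an externally visible action. Tool names and the risky set
    # are illustrative, not tied to MCP or any particular framework.
    RISKY_TOOLS = {"send_email", "post_message", "call_external_api", "delete_file"}

    def guarded_call(tool_name: str, args: dict, tool_impl):
        if tool_name in RISKY_TOOLS:
            print(f"Agent requests: {tool_name}({args})")
            if input("Approve this action? [y/N] ").strip().lower() != "y":
                return {"status": "blocked", "reason": "human rejected the action"}
        return tool_impl(**args)

    def send_email(to: str, body: str):
        return {"status": "sent", "to": to}

    # An outbound action pauses for approval; a read-only tool would not.
    print(guarded_call("send_email", {"to": "cfo@example.com", "body": "Q3 numbers"}, send_email))
    ```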

  • View profile for Jamal Ahmed

    I Help Professionals Escape Stagnant Careers and Build Six-Figure Data Privacy Careers | Privacy Leader of the Year | AI Governance Expert | Global Keynote Speaker | Bestselling Author | 73,786+ Careers Elevated 🔥

    35,715 followers

    🚨 OpenAI had to withdraw its chat sharing feature. Here’s the privacy lesson everyone’s ignoring: Most people will shrug this off as “tech moves fast.” But if you're in privacy, this is a wake-up call. Even anonymised data becomes dangerous when shared without context, safeguards, or real-world risk modelling. OpenAI didn’t just roll out a flawed feature; they exposed the limits of consent. ☑️ Multiple opt-ins ☑️ Anonymisation ☑️ User choice Still led to people accidentally revealing mental health issues, workplace problems, and more, all indexed on Google. Here’s what you need to take from this: → Privacy by Design isn’t a buzzword. It’s a responsibility. → Leading privacy pros test for the worst-case scenario, not the perfect user. So what should you do? → Never trust UX to do the job of governance. → Audit for real-world behaviour, not internal assumptions. Privacy isn’t about permission. It’s about protection. And this? This was a failure to protect. Let’s stop building for what users should do and start building for what they will do.
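
    One concrete "build for what users will do" control is keeping shared conversations out of search engines by default, no matter how many opt-ins the user clicked. The sketch below is an illustrative assumption about how such a feature could be hardened, not a description of OpenAI's implementation; it uses Flask and the standard X-Robots-Tag response header.

    ```python
    # Illustrative sketch: serve shared conversations as non-indexable by
    # default, so "share a link" never silently becomes "publish to search
    # engines". The Flask route and in-memory store are placeholders.
    from flask import Flask, abort

    app = Flask(__name__)
    SHARED_CHATS = {"abc123": {"text": "example shared conversation", "indexable": False}}

    @app.route("/share/<chat_id>")
    def shared_chat(chat_id):
        chat = SHARED_CHATS.get(chat_id)
        if chat is None:
            abort(404)
        headers = {}
        if not chat["indexable"]:  # default-deny: crawlers stay out
            headers["X-Robots-Tag"] = "noindex, nofollow"
        return chat["text"], 200, headers
    ```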

  • View profile for Pranav Bhaskar Tiwari

    Technology Law & Policy | Trust & Safety | Public Policy | Government Relations

    6,843 followers

    𝐂𝐚𝐧 𝐜𝐲𝐛𝐞𝐫𝐬𝐞𝐜𝐮𝐫𝐢𝐭𝐲 𝐜𝐨𝐦𝐞 𝐚𝐭 𝐭𝐡𝐞 𝐜𝐨𝐬𝐭 𝐨𝐟 𝐢𝐧𝐜𝐥𝐮𝐬𝐢𝐨𝐧, 𝐢𝐧𝐧𝐨𝐯𝐚𝐭𝐢𝐨𝐧 & 𝐢𝐧𝐝𝐢𝐯𝐢𝐝𝐮𝐚𝐥 𝐫𝐢𝐠𝐡𝐭𝐬? Recently, the Government released the 𝐃𝐫𝐚𝐟𝐭 𝐓𝐞𝐥𝐞𝐜𝐨𝐦 𝐂𝐲𝐛𝐞𝐫𝐬𝐞𝐜𝐮𝐫𝐢𝐭𝐲 𝐀𝐦𝐞𝐧𝐝𝐦𝐞𝐧𝐭 𝐑𝐮𝐥𝐞𝐬, 𝟐𝟎𝟐𝟓, aiming to combat #fraud by expanding #securityobligations from telecom operators to almost all digital platforms using mobile numbers. At The Dialogue, we hosted a #MultistakeholderConsultation & submitted detailed written comments highlighting risks of legal overreach, mass exclusion, & unchecked executive power. I had the privilege of authoring this submission. 𝐊𝐞𝐲 𝐜𝐨𝐧𝐜𝐞𝐫𝐧𝐬 from our analysis: 𝟏. 𝐋𝐞𝐠𝐢𝐬𝐥𝐚𝐭𝐢𝐯𝐞 𝐂𝐨𝐦𝐩𝐞𝐭𝐞𝐧𝐜𝐞: The draft introduces a new category ‘Telecom Identifier User Entities (TIUEs)’ which includes social media platforms, e-commerce, fintech apps, etc. But the parent Telecom Act, 2023 has no such mandate. Creating new regulated categories through delegated legislation risks being struck down as ultra vires. 𝟐. 𝐃𝐢𝐬𝐜𝐨𝐧𝐧𝐞𝐜𝐭𝐢𝐨𝐧 𝐖𝐢𝐭𝐡𝐨𝐮𝐭 𝐒𝐚𝐟𝐞𝐠𝐮𝐚𝐫𝐝𝐬: The rules allow the government to suspend mobile identifiers used on platforms without notice, review, or appeal, unlike Section 69A of the IT Act, which has checks & balances. This means your phone number could be blocked across apps & services without due process, disrupting banking, health, education & more. 𝟑. 𝐒𝐡𝐚𝐫𝐞𝐝 𝐃𝐞𝐯𝐢𝐜𝐞𝐬, 𝐋𝐨𝐬𝐭 𝐀𝐜𝐜𝐞𝐬𝐬: Millions in India, especially women & low-income users, access the internet through shared SIMs or devices. The rules assume a one-to-one relationship between a user & their mobile number, ignoring social realities & risking wrongful denial of access. 𝟒. 𝐔𝐧𝐟𝐮𝐧𝐝𝐞𝐝 𝐌𝐚𝐧𝐝𝐚𝐭𝐞𝐬 𝐟𝐨𝐫 𝐒𝐭𝐚𝐫𝐭𝐮𝐩𝐬: TIUEs would need to use a Mobile Number Validation (MNV) platform priced at ₹3 per verification. While this may seem nominal, small businesses & startups would face huge compliance & integration costs, stifling innovation & competition. 𝟓. 𝐑𝐞𝐠𝐮𝐥𝐚𝐭𝐨𝐫𝐲 𝐂𝐨𝐧𝐟𝐮𝐬𝐢𝐨𝐧: The rules duplicate existing frameworks under CERT-In, MeitY, RBI, & DPDP Act, creating overlapping mandates, compliance fatigue, & increased risk of enforcement confusion. 𝟔. 𝐏𝐫𝐢𝐯𝐚𝐜𝐲 𝐚𝐭 𝐑𝐢𝐬𝐤: The MNV system opens up metadata trails linking numbers to online services. Without strong guardrails, this undermines user privacy & enables misuse of sensitive data. 💡 𝐎𝐮𝐫 𝐫𝐞𝐜𝐨𝐦𝐦𝐞𝐧𝐝𝐚𝐭𝐢𝐨𝐧𝐬 ✅ Withdraw & rework the Rules through wider inter-ministerial & public consultation. ✅ Launch a voluntary sandbox phase for high-risk sectors. ✅ Build gender-sensitive access frameworks & consult civil society. ✅ Ensure legal & constitutional alignment. ✅ Codify clear procedural safeguards for any identifier blocking. Link to our complete submission in comments. Kazim Rizvi Garima Saxena Akriti Jayant #Telecom #TechPolicy #Blocking #ActualKnowledge #Equity #Access #LinkedInInsiderConnect
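
    To make the compliance-cost concern concrete: at ₹3 per check, the MNV fee scales linearly with verification volume, as the illustrative arithmetic below shows. The monthly volumes are assumptions, not figures from the submission.

    ```python
    # Illustrative arithmetic only: what a Rs 3-per-verification MNV fee
    # implies at different scales. The monthly volumes are assumptions.
    FEE_PER_CHECK_INR = 3

    for monthly_checks in (50_000, 1_000_000, 20_000_000):
        annual_cost = monthly_checks * FEE_PER_CHECK_INR * 12
        print(f"{monthly_checks:>10,} checks/month -> Rs {annual_cost:>13,} per year")
    ```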

  • View profile for Mateusz Kupiec, FIP, CIPP/E, CIPM

    Institute of Law Studies, Polish Academy of Sciences || Privacy Lawyer at Traple Konarski Podrecki & Partners || DPO || I know GDPR. And what is your superpower?🤖

    26,518 followers

    ‼️The European Data Protection Board has just published its draft Guidelines 3/2025 (version 1.0) on the interplay between the #DSA and the #GDPR. 📍The guidelines stress that the DSA often refers to GDPR concepts such as profiling, special categories of data, or transparency obligations. The EDPB outlines several areas of interplay. Content moderation under the DSA inevitably involves processing personal data, which must be based on lawful grounds under the GDPR. Notice-and-action mechanisms, complaint handling, and account suspensions also require strict adherence to data minimisation and transparency principles. On advertising, the prohibition in Article 26 DSA on using special categories of data for targeting complements GDPR restrictions, reinforcing a layered protection regime. Recommender systems, meanwhile, raise risks of automated decision-making that could trigger Article 22 GDPR. 📍For me, the most striking part of the guidelines concerns minors. Article 28 DSA obliges providers of online platforms accessible to minors to ensure a high level of privacy, safety, and security. The EDPB clarifies that these duties can justify certain data processing under Article 6(1)(c) GDPR, but only if strictly necessary and proportionate. Crucially, Article 28(3) DSA specifies that platforms are not required to process additional personal data simply to establish whether a user is a minor. 📍The guidelines strongly discourage intrusive age assurance methods such as scanning government IDs or permanently storing age data. Instead, platforms should apply privacy-preserving approaches, for example by confirming only that a user meets a threshold age without revealing their exact identity or date of birth. The EDPB emphasises that age assurance must be risk-based: stricter methods may be justified if the platform exposes children to high risks (e.g. harmful or manipulative content), while lighter-touch measures may suffice where risks are low. 📍Another important clarification is that providers must not nudge minors into choosing recommender systems based on profiling. Non-profiling options should be presented neutrally, and once selected, the platform should not continue processing data for profiling in the background. Similarly, advertisements cannot be targeted at minors on the basis of profiling, even if other GDPR grounds might otherwise permit such processing. 📍The guidelines also recognise that protecting children online must go beyond technical measures. Providers should adapt their services to address risks to minors’ wellbeing, including exposure to harmful content, pressure from personalised recommendations, and misuse of sensitive data. At the same time, measures must be designed with the GDPR principles of minimisation, proportionality, and privacy by design and by default firmly in mind. #privacy #rodo #ecommerce #platforms
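
    The data-minimised age assurance the EDPB favours (confirming only that a user meets a threshold age, without the platform learning their identity or date of birth) can be sketched as a separation of roles: a verifier checks the birth date and the platform receives nothing but a boolean claim. The sketch below is illustrative only; a real deployment would add signed attestations and an accredited verifier.

    ```python
    # Illustrative sketch of data-minimised age assurance: the verifier
    # sees the date of birth, the platform only ever receives a yes/no
    # "meets threshold" claim. Not a real age-assurance protocol.
    from datetime import date

    def verifier_check(date_of_birth: date, threshold_years: int, today: date) -> dict:
        """Runs on the trusted verifier's side; the DOB never leaves it."""
        age = today.year - date_of_birth.year - (
            (today.month, today.day) < (date_of_birth.month, date_of_birth.day)
        )
        return {"meets_threshold": age >= threshold_years}  # no DOB, no exact age

    # Platform side: the boolean claim is all that is ever stored.
    claim = verifier_check(date(2012, 5, 17), threshold_years=16, today=date(2025, 1, 10))
    print(claim)  # {'meets_threshold': False}
    ```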
