Something I contributed to recently was just published, and it matters.

The Advanced Study Institute of Asia at SGT University released a compliance report assessing 14 AI platforms widely used by minors in India and evaluating them against the Digital Personal Data Protection Act 2023. I was brought in as a reviewer, specifically on the Trust and Safety compliance lens.

The findings are difficult to ignore. 71% of all assessments across 14 platforms and 14 DPDP criteria were found to be outright non-compliant. Only 13% achieved even relative compliance. Instagram, Canva, and xAI Grok scored 100% non-compliance across all criteria assessed. ChatGPT and Perplexity were not far behind.

The most pervasive failure is something deceptively simple: the DPDP Act defines a child as anyone under 18. All platforms reviewed set 13 as the minimum age. That five-year gap is not a technicality. It affects millions of Indian teenagers who are using these tools daily in schools, for homework, and for learning, with essentially no meaningful data protection in place. Parental consent mechanisms are either absent or rely on self-declaration, which anyone can bypass in seconds. Behavioural tracking continues unchecked. Grievance redressal is inaccessible or non-functional for most platforms.

India has a compliance deadline approaching. Platforms have time to fix this, but the window is not open indefinitely.

The full report is published by the Centre for Law and Critical Emerging Technologies at ASIA. Authored by Shivani Singh and Sonal Lalwani, reviewed by Neeti Goutam and me. If you work in Trust and Safety, policy, EdTech, or data governance, it is worth a read. https://lnkd.in/g-TYc6TS
India AI Platforms Non-Compliant with Data Protection Act
More Relevant Posts
Day 17/20: My Machine Learning & AI Journey × Digital Rights × Platform Accountability

We audit financial institutions. We audit governments. We audit corporations. But who audits the algorithm?

As I continue this journey into machine learning, one concept has become increasingly important to me: algorithmic auditing. The process of systematically examining an AI system to assess whether it is performing fairly, accurately and without causing disproportionate harm to specific groups.

Think of it like a financial audit. A financial auditor does not just ask whether the numbers add up. They ask whether the right processes were followed, whether anything was hidden and whether the outcomes are legitimate. An algorithmic audit asks the same questions of an AI system. Is the model accurate across all groups, or only some? Who bears the cost of its errors? And can its decisions be explained and challenged?

And here is why this matters beyond the technical. Last week I sat in a room where a commissioner acknowledged that there is not enough data about technology-facilitated gender-based violence. My response as a data analyst was that the data exists. It is just uncollected, unreported and siloed. But even when data is collected, an unaudited system can take that data and reproduce the very inequalities it was supposed to address. A system trained on historical reporting patterns will learn that certain communities report less. And it will treat that silence as absence, not as a symptom of a broken system.

Algorithmic auditing interrupts that cycle. It forces the question not just of what the system is doing, but of what the system is doing to whom. And in the hands of advocates, lawyers and civil society organisations, audit findings become evidence. They become the basis for legal challenges, policy demands and platform accountability.

In Africa specifically, where enforcement of digital rights frameworks remains uneven and where AI systems are increasingly deployed in high-stakes contexts, independent algorithmic auditing is not a luxury. It is a democratic necessity. Because a system that cannot be audited cannot be held accountable. And a system that cannot be held accountable will always serve the people who built it more than the people it was built for.

Day 17 of 20. Because you can't build safe platforms on unsafe systems.

Benardine Atuhairwe
Digital Rights Advocate | Lawyer | Data & Policy Analyst
Africa Agility Foundation, Tshepiso Raphunga, Amoding Barbara Kelis, Lilian Olivia, Collaboration on International ICT Policy for East and Southern Africa (CIPESA)

#BenardineInsights #AfricaAgility #AIforAccountability #DigitalRights #WomenInTech #DataEthics #LearningInPublic #PlatformAccountability #GIT20DayChallenge #AIEthics #AIandSociety #AlgorithmicAuditing
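To make "accurate across all groups, or only some?" concrete, here is a minimal, hypothetical slice of what an algorithmic audit computes: error rates disaggregated by group rather than a single overall accuracy figure. The data and group labels below are invented toy values, not drawn from any real audit.

```python
# Hypothetical minimal slice of an algorithmic audit: disaggregate a model's
# error rates by group instead of reporting one overall accuracy number.
import numpy as np

# Toy data: y_true = actual outcome, y_pred = model decision, group = protected attribute.
y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0, 1, 0, 1, 1])
y_pred = np.array([1, 0, 0, 1, 1, 0, 1, 0, 0, 1, 1, 0])
group  = np.array(["A", "A", "A", "A", "A", "A", "B", "B", "B", "B", "B", "B"])

for g in np.unique(group):
    m = group == g
    accuracy = (y_true[m] == y_pred[m]).mean()
    # False positive rate: how often the model wrongly flags members of this group.
    negatives = (y_true[m] == 0)
    fpr = (y_pred[m][negatives] == 1).mean() if negatives.any() else float("nan")
    print(f"group {g}: accuracy={accuracy:.2f}, false_positive_rate={fpr:.2f}")
```

A real audit goes much further (data provenance, explanation, contestability), but even this small disaggregation step surfaces the "to whom" question the post is making.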
“We have consent.”

That was the confident reply. I asked, “Where is it stored?” They paused.

The company had:
- A consent line in the website footer
- A checkbox during signup
- A clause inside the terms and conditions

But they did not have:
- A timestamped consent log
- A record of exactly which notice was shown
- Version control of the consent language
- A mechanism to withdraw consent
- A system to stop processing after withdrawal

Under the DPDPA, consent is not a feeling. It is a verifiable record. It must be:
- Specific
- Informed
- Unambiguous
- Purpose-linked
- Demonstrable

The real compliance test is simple: if tomorrow a Data Principal asks, “Prove what I agreed to,” can you?

Consent is not about collecting data. It is about documenting legitimacy.

PS: AI was used to generate this post.

#DPDPA #ConsentManagement #DataProtection #PrivacyCompliance #DigitalGovernance #AIGovernance #RiskAndCompliance #ResponsibleAI
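For illustration, a consent log that could actually answer "prove what I agreed to" might look something like the hypothetical sketch below. The field names are invented, not taken from the DPDPA text, but they map onto the gaps listed above: a timestamp, the exact notice version shown, a purpose link, and a withdrawal state that halts processing.

```python
# Hypothetical sketch of a demonstrable consent record. Field names are invented
# for illustration; the point is that consent is stored as evidence, not a checkbox.
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional
import uuid

@dataclass
class ConsentRecord:
    data_principal_id: str
    purpose: str                       # purpose-linked: one record per purpose
    notice_version: str                # exactly which notice text was shown
    collected_at: datetime
    channel: str                       # e.g. "signup_form_v3"
    withdrawn_at: Optional[datetime] = None
    record_id: str = field(default_factory=lambda: str(uuid.uuid4()))

    @property
    def active(self) -> bool:
        # Processing must stop once consent is withdrawn.
        return self.withdrawn_at is None

log: list[ConsentRecord] = []
log.append(ConsentRecord(
    data_principal_id="dp-1029",
    purpose="order_fulfilment",
    notice_version="privacy-notice-2026-01",
    collected_at=datetime.now(timezone.utc),
    channel="signup_form_v3",
))

# "Prove what I agreed to": query the log instead of pointing at a checkbox.
print([(r.purpose, r.notice_version, r.active) for r in log
       if r.data_principal_id == "dp-1029"])
```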
🚨 New IAPP article (CPE eligible), and it just became even more timely.

While a new study published late last week shows that LLM-powered agents can now deanonymize pseudonymous users at scale, Roy Kamp and I had already been working on a related question: what does “identifiability” actually mean in an age where systems can correlate, infer and reconnect data across platforms?

The study demonstrates that with sufficient access, models can link accounts using writing style, contextual clues and public breadcrumbs, sometimes in minutes. That is not a niche scenario. It is a capability shift. And it reinforces the core message of our article:
👉 Identifiability is not theoretical. It is architectural.
👉 Pseudonymization is not a lawful basis. It is a technical measure.
👉 Whether data is “personal” depends on what your system can realistically do.

The CJEU’s SRB judgment tells us that identifiability is contextual and practical. The new research shows just how powerful “realistically available means” have become. Put differently: the more capable our systems are, the narrower the space for claiming true anonymization.

For privacy professionals, CISOs, AI engineers and policymakers, this changes the threat model:
• Pseudonymity is fragile, and “anonymized” is rarely absolute.
• The vendors who said ab initio that their data was not anonymized were the ones on the right track: treat the data as personal, and at least it will be safeguarded by the GDPR and similar laws. I know, it feels uncomfortable.
• Compliance lives in architecture, access controls and model design.
• Engineers are now defining the legal reality through technical choices.

We didn’t write our article in response to this study. But the study makes our argument even more urgent.

📖 “AI training after the SRB ruling: A practical playbook for engineers who now define compliance” - full link under the first comment.
📅 Published 11 March 2026
🎓 Eligible for CPE credit

Roy and I sometimes feel like Dupond and Dupont chasing the ever-moving target of anonymization and de-identification. This time, though, the target may be chasing us. Stay safe out there!
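The linking capability the study describes can be illustrated, crudely, without any LLM at all: writing style alone carries signal. The sketch below is a hypothetical toy stylometric comparison using character n-grams, not the method from the study, and the example texts are invented.

```python
# Hypothetical illustration only: a crude stylometric similarity check.
# Not the method from the cited study; real LLM-based linking is far more capable.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Posts from a "known" account and from a pseudonymous account (toy data).
known_account = ["Honestly, the rollout was a mess, classic scope creep imo.",
                 "imo the fix is simple: freeze the spec, then iterate."]
pseudonymous = ["the migration was a mess, classic scope creep imo",
                "freeze the interface first, then iterate, simple fix imo"]

# Character n-grams capture punctuation habits, fillers and spelling quirks,
# which survive topic changes better than word-level features do.
vec = TfidfVectorizer(analyzer="char_wb", ngram_range=(3, 5))
X = vec.fit_transform([" ".join(known_account), " ".join(pseudonymous)])

score = cosine_similarity(X[0], X[1])[0, 0]
print(f"stylistic similarity: {score:.2f}")  # higher => more likely the same author
```

If a toy script can extract this much signal, "realistically available means" in the hands of capable agents is clearly a much broader category, which is the article's point about architecture defining identifiability.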
The EU AI Act's biggest obligations land in August 2026. That's 5 months away. Most privacy professionals aren't ready yet.

🟠 Here's what's actually changing:
🔶 High-risk AI systems need conformity assessments
🔶 Transparency rules kick in for certain AI interactions
🔶 GPAI model providers face increased enforcement from August 2026
🔶 Organisations must map and classify their AI systems

This isn't just a tech team problem. Privacy professionals are being pulled into AI governance work right now. And the gap between those who understand the AI Act and those who don't is widening fast.

🟠 That's why AIGP certification is gaining serious traction. It's the only credential that directly maps to AI governance obligations. Hiring managers are starting to ask for it. And it signals you can bridge privacy and AI risk. Not just in theory. In practice.

🟠 If you're preparing for AIGP, Privacy Prep covers it.
🔶 3,552 practice questions across all 8 IAPP certs
🔶 Dedicated AIGP flashcards and daily challenges
🔶 Study on the go for £9.99/month or £79.99/year

Click the link in my bio to download the app.

#AIGovernance #PrivacyCareers #EUAIAct #AIGP #PrivacyPrep
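For the "map and classify" item, a minimal AI-system inventory might look like the hypothetical sketch below. The risk tiers mirror the AI Act's broad categories (prohibited, high-risk, limited transparency, minimal), but the field names and the gap check are invented for illustration rather than prescribed by the Act.

```python
# Hypothetical sketch of a minimal AI-system inventory entry; field names are
# invented for illustration and are not prescribed by the EU AI Act.
from dataclasses import dataclass, field
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "prohibited"
    HIGH_RISK = "high_risk"      # conformity assessment needed before deployment
    LIMITED = "limited"          # transparency obligations (e.g. chatbots, deepfakes)
    MINIMAL = "minimal"

@dataclass
class AISystemRecord:
    name: str
    owner: str                   # accountable business owner
    purpose: str
    uses_gpai_model: bool
    risk_tier: RiskTier
    conformity_assessment_done: bool = False
    notes: list[str] = field(default_factory=list)

inventory = [
    AISystemRecord("CV screening assistant", "HR", "rank job applicants",
                   uses_gpai_model=True, risk_tier=RiskTier.HIGH_RISK),
    AISystemRecord("Support chatbot", "CX", "answer customer questions",
                   uses_gpai_model=True, risk_tier=RiskTier.LIMITED),
]

# A simple gap check: high-risk systems still awaiting a conformity assessment.
gaps = [s.name for s in inventory
        if s.risk_tier is RiskTier.HIGH_RISK and not s.conformity_assessment_done]
print("High-risk systems missing conformity assessment:", gaps)
```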
📊 Market Intelligence Update: What if your AI compliance strategy just got a federal makeover?

Problem: AI regulations are currently a fragmented patchwork of state laws, leading to uncertainty and high compliance costs for national enterprises.

Opportunity: The proposed federal preemption could streamline regulations, reduce complexity, and create a more predictable environment for AI innovation, focusing on child safety while accelerating development.

Insight: This shift centralizes power and simplifies operations for large firms, but risks stifling the state-level regulatory innovation that could drive adaptive progress, setting the stage for significant legal battles.

Dive deeper into the implications for your business: https://lnkd.in/g_AdAzbS

#SignalDailyNews #EnterpriseTech

Full Strategy Report 👇 https://lnkd.in/gzVxrMXp
80 percent of AI devs ignore pseudonymised data risks.

The UK Court of Appeal just ruled that pseudonymised data counts as personal data, even if hackers can't re-identify it. This hits AI training hard[1].

We feed pseudonymised datasets into our agents at OpenClaw. This week we audited our flows after the DSG Retail case. Turns out our safeguards needed tightening.

The Data Use and Access Act 2025 adds complaint rules from June: users complain to you first, before the ICO. AI firms must build internal handling now[2][8].

Bold prediction: UK AI Act enforcement ramps up by Q3 2027. Expect fines on non-compliant agents processing any personal data. The EU follows suit in Q4.

Developers: pseudonymisation is no longer your free pass. Audit your agent pipelines today.

How are you updating data flows for pseudonymised training data? Need compliant AI agent tools? DM me or check OpenClaw.

#AIAgents #AICompliance #OpenClaw
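To see why pseudonymised data stays personal data, consider a minimal, hypothetical pseudonymisation step: identifiers are replaced with keyed tokens, and anyone holding the key (or enough auxiliary data) can re-link the records. This is a generic sketch, not OpenClaw's pipeline.

```python
# Hypothetical sketch of keyed pseudonymisation. The mapping is reproducible for
# anyone holding `secret_key`, which is why the output remains personal data.
import hmac, hashlib

secret_key = b"rotate-me-and-keep-in-a-kms"   # assumption: key retained by the controller

def pseudonymise(user_id: str) -> str:
    """Deterministic token for a user id; stable across datasets that share the key."""
    return hmac.new(secret_key, user_id.encode(), hashlib.sha256).hexdigest()[:16]

training_rows = [
    {"user_id": "alice@example.com", "prompt": "reset my password"},
    {"user_id": "bob@example.com",   "prompt": "cancel my order"},
]

pseudonymised = [
    {"user_ref": pseudonymise(r["user_id"]), "prompt": r["prompt"]}
    for r in training_rows
]
print(pseudonymised)
```

The tokens look anonymous, but the controller can regenerate them at will, and the prompts themselves may still identify someone. That is the gap an audit of training flows has to cover.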
As organizations rush to adopt AI, a critical legal reality is finally being clarified at the federal level: if you input confidential, protected, or privileged information into AI, you may have just disclosed it to a third party. Let that sink in.

Recent federal rulings and guidance are reinforcing what compliance laws have always required:
- AI platforms are not private environments
- They are considered third parties
- And disclosure = potential violation

In fact, a 2026 federal court ruling made it explicit: information shared with a public AI tool is treated the same as information shared with any other third party, meaning confidentiality protections can be lost. That includes attorney-client privilege, which is considered waived once information is disclosed outside that protected relationship. Even worse? Sharing legal strategy or sensitive communications with AI may make them discoverable in court.

Now let’s talk about the implications for regulated environments:

HIPAA (Healthcare): Protected Health Information (PHI) cannot be disclosed to unauthorized third parties. AI tools without proper agreements (like a BAA) are, legally, unauthorized recipients.
- A major HHS update takes effect in 2026, including updated Notices of Privacy Practices (NPPs) by February 16, 2026, which regulate how PHI is handled in AI contexts, particularly regarding reproductive health.

FERPA (Education): Student records are protected. Inputting identifiable student data into AI? That’s not innovation; that’s exposure.
- The Texas Responsible Artificial Intelligence Governance Act (TRAIGA), which took effect January 1, 2026, explicitly prohibits certain AI uses that could lead to FERPA violations and imposes strict transparency requirements for governmental entities, including schools.

Attorney-Client Privilege: Gone the moment material is shared with a system that is not your attorney and may store, process, or even share that data.

So yes, we now find ourselves in a very real paradox: organizations promoting AI use for efficiency while simultaneously being bound by laws that prohibit exactly how it is often being used. Innovation without governance isn’t progress; it’s liability.

The takeaway isn’t fear, it’s accountability:
✔️ Know what your AI tools do with your data
✔️ Implement compliant, enterprise-grade solutions
✔️ Train your teams before the risk becomes real

Because “we didn’t realize” is not a defense in compliance. And neither is “but it made things faster.”

#AIethics #HIPAA #DataPrivacy #Compliance #HigherEducation #LegalRisk #DigitalResponsibility #Leadership #FERPA
We’re running out of "Real Data." It’s time to start faking it.

In March 2026, the data world is facing a paradox: we need more data than ever to train specialized models, but privacy laws and "web-scraping burnout" have made high-quality human data incredibly scarce. The solution isn’t to dig deeper into sensitive databases. It’s to build synthetic datasets.

What is synthetic data? It’s not "fake" data in the way we used to think (like Mockaroo). It’s data generated by AI to match the statistical properties of the real thing: it preserves the patterns, correlations, and outliers of your real customers without containing a single real PII (Personally Identifiable Information) point.

Why every data analyst needs to master this in 2026:
✅ Bypass the "legal wall": share a 1-million-row synthetic dataset with a third-party vendor without exposing a single real record.
✅ Fix class imbalance: need more data on rare fraud cases? Generate it.
✅ Accelerate innovation: don’t wait 6 months for "compliance approval." Use a synthetic twin today.

The era of "collect everything" is over. We are now in the era of "generate exactly what you need." The best analysts this year won’t just be the ones who can query data; they’ll be the ones who can architect the data they don’t have.

#SyntheticData #DataPrivacy2026 #DPDP #AIModels #DataScience #Innovation
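A minimal sketch of the idea, assuming purely numeric columns and using a simple Gaussian mixture as the generator; production synthetic-data tools use far richer models (copulas, GANs, diffusion) and add explicit privacy guarantees such as differential privacy.

```python
# Minimal sketch: fit a generative model on real numeric data, then sample
# synthetic rows. Assumes numeric columns only; real tools handle mixed types,
# rare categories and formal privacy guarantees explicitly.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Stand-in for a real table: [age, monthly_spend], with correlation between them.
age = rng.normal(40, 12, size=1000)
spend = 20 + 1.5 * age + rng.normal(0, 10, size=1000)
real = np.column_stack([age, spend])

# Fit the generator on the real data...
gm = GaussianMixture(n_components=5, random_state=0).fit(real)

# ...and sample a synthetic twin that mimics marginals and correlations
# without copying any individual row.
synthetic, _ = gm.sample(2000)

print("real corr:     ", np.corrcoef(real.T)[0, 1].round(2))
print("synthetic corr:", np.corrcoef(synthetic.T)[0, 1].round(2))
```

The check at the end compares the correlation structure of the real and synthetic tables; preserving that structure (not the individual rows) is what makes a synthetic twin useful.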