Data Protection Practices

Explore top LinkedIn content from expert professionals.

  • Armand Ruiz

    building AI systems

    204,368 followers

How To Handle Sensitive Information in Your Next AI Project

It's crucial to handle sensitive user information with care. Whether it's personal data, financial details, or health information, understanding how to protect and manage it is essential to maintain trust and comply with privacy regulations. Here are 5 best practices to follow:

1. Identify and Classify Sensitive Data
Start by identifying the types of sensitive data your application handles, such as personally identifiable information (PII), sensitive personal information (SPI), and confidential data. Understand the specific legal requirements and privacy regulations that apply, such as the GDPR or the California Consumer Privacy Act.

2. Minimize Data Exposure
Only share the necessary information with AI endpoints. For PII such as names, addresses, or social security numbers, consider redacting this information before making API calls, especially if the data could be linked to sensitive applications like healthcare or financial services.

3. Avoid Sharing Highly Sensitive Information
Never pass sensitive personal information, such as credit card numbers, passwords, or bank account details, through AI endpoints. Instead, use secure, dedicated channels for handling and processing such data to avoid unintended exposure or misuse.

4. Implement Data Anonymization
When dealing with confidential information, like health conditions or legal matters, ensure that the data cannot be traced back to an individual. Anonymize the data before using it with AI services to maintain user privacy and comply with legal standards.

5. Regularly Review and Update Privacy Practices
Data privacy is a dynamic field with evolving laws and best practices. To ensure continued compliance and protection of user data, regularly review your data handling processes, stay updated on relevant regulations, and adjust your practices as needed.
Remember, safeguarding sensitive information is not just about compliance — it's about earning and keeping the trust of your users.
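The redaction step in practice 2 can be sketched in a few lines. This is a minimal illustration under stated assumptions: the regex patterns and placeholder labels below are invented for the example, and a production system should rely on a vetted PII-detection library rather than hand-rolled patterns.

```python
import re

# Illustrative patterns only -- real PII detection needs a vetted library.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace anything matching a PII pattern with a typed placeholder,
    so the original value never reaches the AI endpoint."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

prompt = "Reach John at john.doe@example.com, phone 555-867-5309, SSN 123-45-6789."
safe_prompt = redact(prompt)
# safe_prompt: "Reach John at [EMAIL], phone [PHONE], SSN [SSN]."
```

The key design point is that redaction happens client-side, before the API call, so the endpoint only ever sees typed placeholders.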

  • Marc Beierschoder

    Partner at Deloitte | Enterprise AI & Data | Turning AI ecosystems into measurable enterprise growth | Ecosystem & Strategic Accounts

    144,194 followers

66% of AI users say data privacy is their top concern. What does that tell us? Trust isn’t just a feature - it’s the foundation of AI’s future.

When breaches happen, the cost isn’t measured in fines or headlines alone - it’s measured in lost trust. I recently spoke with a healthcare executive who shared a haunting story: after a data breach, patients stopped using their app - not because they didn’t need the service, but because they no longer felt safe.

This isn’t just about data. It’s about people’s lives - trust broken, confidence shattered. Consider the October 2023 incident at 23andMe: unauthorized access exposed the genetic and personal information of 6.9 million users. Imagine seeing your most private data compromised.

At Deloitte, we’ve helped organizations turn privacy challenges into opportunities by embedding trust into their AI strategies. For example, we recently partnered with a global financial institution to design a privacy-by-design framework that not only met regulatory requirements but also restored customer confidence. The result? A 15% increase in customer engagement within six months.

How can leaders rebuild trust when it’s lost?

✔️ Turn Privacy into Empowerment: Privacy isn’t just about compliance. It’s about empowering customers to own their data. When people feel in control, they trust more.
✔️ Proactively Protect Privacy: AI can do more than process data - it can safeguard it. Predictive privacy models can spot risks before they become problems, demonstrating your commitment to trust and innovation.
✔️ Lead with Ethics, Not Just Compliance: Collaborate with peers, regulators, and even competitors to set new privacy standards. Customers notice when you lead the charge for their protection.
✔️ Design for Anonymity: Techniques like differential privacy ensure sensitive data remains safe while enabling innovation. Your customers shouldn’t have to trade their privacy for progress.

Trust is fragile, but it’s also resilient when leaders take responsibility. AI without trust isn’t just limited - it’s destined to fail. How would you regain trust in this situation? Let’s share and inspire each other 👇

#AI #DataPrivacy #Leadership #CustomerTrust #Ethics

  • Pooja Jain

    Storyteller | Lead Data Engineer@Wavicle| Linkedin Top Voice 2025,2024 | Linkedin Learning Instructor | 2xGCP & AWS Certified | LICAP’2022

    191,382 followers

Do you think Data Governance is all show, no impact?

→ Polished policies ✓
→ Fancy dashboards ✓
→ Impressive jargon ✓

But here's the reality check: most data governance initiatives look great in boardroom presentations yet fail to move the needle where it matters.

The numbers don't lie. Poor data quality bleeds organizations dry - $12.9 million annually, according to Gartner. Yet those who get governance right see 30% higher ROI by 2026. What's the difference?

❌ It's not about the theater of governance.
✅ It's about data engineers who embed governance principles directly into solution architectures, making data quality and compliance invisible infrastructure rather than visible overhead.

Here’s a 6-step roadmap to build a resilient, secure, and transparent data foundation:

1️⃣ Establish Roles & Policies
Define clear ownership, stewardship, and documentation standards. This sets the tone for accountability and consistency across teams.

2️⃣ Access Control & Security
Implement role-based access, encryption, and audit trails. Stay compliant with GDPR/CCPA and protect sensitive data from misuse.

3️⃣ Data Inventory & Classification
Catalog all data assets. Tag them by sensitivity, usage, and business domain. Visibility is the first step to control.

4️⃣ Monitoring & Data Quality Framework
Set up automated checks for freshness, completeness, and accuracy. Use tools like dbt tests, Great Expectations, and Monte Carlo to catch issues early.

5️⃣ Lineage & Impact Analysis
Track data flow from source to dashboard. When something breaks, know what’s affected and who needs to be informed.

6️⃣ SLA Management & Reporting
Define SLAs for critical pipelines. Build dashboards that report uptime, latency, and failure rates - because the business cares about reliability, not tech jargon.

With the rise of AI innovations, it's important to emphasise the governance aspects data engineers need to implement for robust data management.

Do not underestimate the power of Data Quality and Validation by adopting:
↳ Automated data quality checks
↳ Schema validation frameworks
↳ Data lineage tracking
↳ Data quality SLAs
↳ Monitoring & alerting setup

It's equally important to consider the following Data Security & Privacy aspects:
↳ Threat Modeling
↳ Encryption Strategies
↳ Access Control
↳ Privacy by Design
↳ Compliance Expertise

Some incredible folks to follow in this area: Chad Sanderson, George Firican 🎯, Mark Freeman II, Piotr Czarnas, Dylan Anderson. Who else would you like to add?

▶️ Stay tuned with me (Pooja) for more on Data Engineering.
♻️ Reshare if this resonates with you!
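The automated freshness and completeness checks from step 4 can be sketched in plain Python. This is a toy illustration of the idea only - in practice these rules live in tools like dbt tests, Great Expectations, or Monte Carlo, and the function names and thresholds here are invented for the example.

```python
from datetime import datetime, timedelta, timezone

def check_freshness(last_loaded: datetime, max_age: timedelta) -> bool:
    """Freshness: was the table loaded within its allowed window?"""
    return datetime.now(timezone.utc) - last_loaded <= max_age

def check_completeness(rows: list[dict], required: list[str]) -> list[str]:
    """Completeness: return required fields that are missing or null in any row."""
    return [f for f in required if any(r.get(f) in (None, "") for r in rows)]

rows = [
    {"id": 1, "email": "a@example.com"},
    {"id": 2, "email": None},  # incomplete row -> its field gets flagged
]
failed_fields = check_completeness(rows, ["id", "email"])
# failed_fields == ["email"]
```

Wiring checks like these into the pipeline (and alerting on failures) is what turns governance from boardroom theater into invisible infrastructure.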

  • Sid Trivedi

    Partner at Foundation Capital

    17,903 followers

$400M – that’s the price tag when sensitive #data ends up in the wrong hands.

On May 11th, Coinbase – the largest US-based #crypto exchange (100M+ users, $330B in assets) – received a ransom demand for $20M. A threat actor claimed to have internal account documentation and customer data. Coinbase has refused to pay, instead boldly offering a $20M reward for information on the attackers.

Coinbase’s May 14th SEC disclosure revealed the troubling root cause: overseas support agents were bribed to leak customer data, enabling targeted social-engineering attacks. While passwords and private keys appear safe, personal details – emails, phone numbers, addresses, government IDs, and account data – might have been compromised. The company is estimating a cost of $180M-$400M for remediation and voluntary customer reimbursements relating to the incident.

This breach underscores a critical truth: insider access to sensitive data remains a massive, underestimated threat. Coinbase’s detection tools worked – identifying unauthorized access and firing the responsible individuals months earlier – but the data had already escaped. Identity management, DLP, and proactive data monitoring have never mattered more. AI agents add powerful new capabilities but also complicate the risk picture.

If you’re a #founder building solutions around identity, insider risk, or data protection, I’d love to connect.

  • Amine El Gzouli

    Amazon Security | Sr. Security and Compliance Specialist | Turning InfoSec compliance into a growth engine: Reduce risk, cut red tape, and move at business speed

    5,372 followers

“We are ISO 27001 certified - are we DORA compliant?” Not so fast.

ISO 27001 and DORA both focus on cybersecurity and risk management, but they serve very different purposes. If you're a financial institution or an ICT provider working with financial institutions in the EU, DORA compliance is mandatory, and ISO 27001 alone won’t get you there. Let’s break it down:

1. Regulatory vs. Voluntary Framework
↳ ISO 27001 – A voluntary international standard for information security management.
↳ DORA – A mandatory EU regulation for financial entities and their ICT providers, with strict oversight and penalties for non-compliance.

2. Scope and Focus
↳ ISO 27001 – Offers a customizable scope tailored to organizational needs, focusing on information security (confidentiality, integrity, availability) based on specific risk assessments and chosen controls.
↳ DORA – Enforces a standardized scope across financial entities, extending beyond security to operational resilience. It ensures institutions can withstand, respond to, and recover from ICT disruptions while maintaining service continuity.

3. Key Compliance Gaps

🔸 Incident Reporting
↳ ISO 27001 – Requires incident management but doesn’t impose strict deadlines or mandate reporting to regulators, as it is a flexible standard.
↳ DORA – 4 hours to report a major incident, 72 hours for an update, 1 month for a root cause analysis.

🔸 Security Testing
↳ ISO 27001 – Requires vulnerability management but leaves testing methods and frequency to organizational risk decisions.
↳ DORA – Annual resilience testing, threat-led penetration testing every 3 years, continuous vulnerability scanning.

🔸 Third-Party Risk Management
↳ ISO 27001 – Covers supplier risk, but with general security controls.
↳ DORA – Enforces contractual obligations, exit strategies, and regulatory audits for ICT providers working with financial institutions.

4. How can financial institutions and ICT providers address the delta?

✅ Perform a DORA Gap Analysis – Identify missing controls beyond ISO 27001. (Hopefully, you're not still at this stage now that DORA has been mandatory since January 17, 2025.)
✅ Upgrade Incident Response – Implement real-time monitoring and reporting mechanisms to meet DORA’s deadlines.
✅ Enhance Security Testing – Introduce formalized resilience testing and threat-led penetration testing.
✅ Strengthen Third-Party Risk Management – Update contracts, prepare for regulatory audits, and ensure exit strategies comply with DORA.
✅ Improve Business Continuity Planning – Move from cybersecurity alone to full digital operational resilience.

💡 ISO 27001 is just the tip of the iceberg - beneath the surface lie significant gaps that only DORA addresses.

👇 What’s the biggest challenge in aligning with DORA? Let’s discuss.
♻️ Repost to help someone.
🔔 Follow Amine El Gzouli for more.
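The incident-reporting deadlines above are concrete enough to encode. A minimal sketch of tracking a major incident's reporting clock, assuming "1 month" is approximated as 30 days for illustration (the regulation's exact counting rules should be checked before relying on this):

```python
from datetime import datetime, timedelta

def dora_reporting_deadlines(classified_at: datetime) -> dict[str, datetime]:
    """Deadlines relative to the moment an incident is classified as major:
    initial report in 4 hours, intermediate update in 72 hours,
    root-cause report in 1 month (approximated as 30 days here)."""
    return {
        "initial_report": classified_at + timedelta(hours=4),
        "intermediate_update": classified_at + timedelta(hours=72),
        "root_cause_report": classified_at + timedelta(days=30),
    }

deadlines = dora_reporting_deadlines(datetime(2025, 3, 1, 9, 0))
# deadlines["initial_report"] == datetime(2025, 3, 1, 13, 0)
```

Feeding these timestamps into alerting is one way to make "real-time monitoring and reporting mechanisms" operational rather than aspirational.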

  • Katharina Koerner

    AI Governance & Security I Trace3 : All Possibilities Live in Technology: Innovating with risk-managed AI: Strategies to Advance Business Goals through AI Governance, Privacy & Security

    44,502 followers

Today, the National Institute of Standards and Technology (NIST) published its finalized Guidelines for Evaluating ‘Differential Privacy’ Guarantees to De-Identify Data (NIST Special Publication 800-226), a very important publication in the field of privacy-preserving machine learning (PPML). See: https://lnkd.in/gkiv-eCQ

The Guidelines aim to assist organizations in making the most of differential privacy, a technology that has been increasingly utilized to protect individual privacy while still allowing valuable insights to be drawn from large datasets. They cover:

I. Introduction to Differential Privacy (DP)
- De-Identification and Re-Identification: Discusses how DP helps prevent the identification of individuals from aggregated data sets.
- Unique Elements of DP: Explains what sets DP apart from other privacy-enhancing technologies.
- Differential Privacy in the U.S. Federal Regulatory Landscape: Reviews how DP interacts with existing U.S. data protection laws.

II. Core Concepts of Differential Privacy
- Differential Privacy Guarantee: Describes the foundational promise of DP, which is to provide a quantifiable level of privacy by adding statistical noise to data.
- Mathematics and Properties of Differential Privacy: Outlines the mathematical underpinnings and key properties that ensure privacy.
- Privacy Parameter ε (Epsilon): Explains the role of the privacy parameter in controlling the level of privacy versus data usability.
- Variants and Units of Privacy: Discusses different forms of DP and how privacy is measured and applied to data units.

III. Implementation and Practical Considerations
- Differentially Private Algorithms: Covers basic mechanisms like noise addition and the common elements used in creating differentially private data queries.
- Utility and Accuracy: Discusses the trade-off between maintaining data usefulness and ensuring privacy.
- Bias: Addresses potential biases that can arise in differentially private data processing.
- Types of Data Queries: Details how different types of data queries (counting, summation, average, min/max) are handled under DP.

IV. Advanced Topics and Deployment
- Machine Learning and Synthetic Data: Explores how DP is applied in ML and the generation of synthetic data.
- Unstructured Data: Discusses challenges and strategies for applying DP to unstructured data.
- Deploying Differential Privacy: Provides guidance on different models of trust and query handling, as well as potential implementation challenges.
- Data Security and Access Control: Offers strategies for securing data and controlling access when implementing DP.

V. Auditing and Empirical Measures
- Evaluating Differential Privacy: Details how organizations can audit and measure the effectiveness and real-world impact of DP implementations.

Authors: Joseph Near, David Darais, Naomi Lefkovitz, Gary Howarth, PhD
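The noise-addition mechanism and the role of ε described in sections II-III can be illustrated with the classic Laplace mechanism for a counting query, whose sensitivity is 1. This is a minimal sketch of the textbook technique, not code taken from SP 800-226 itself:

```python
import random

def dp_count(true_count: int, epsilon: float, rng: random.Random) -> float:
    """Differentially private count via the Laplace mechanism.
    A counting query has sensitivity 1: adding or removing one person
    changes the count by at most 1, so the noise scale is 1/epsilon."""
    scale = 1.0 / epsilon
    # The difference of two i.i.d. exponential draws is Laplace-distributed.
    noise = rng.expovariate(1.0 / scale) - rng.expovariate(1.0 / scale)
    return true_count + noise

rng = random.Random(0)
# Smaller epsilon -> larger noise scale -> more privacy, less accuracy:
# this is exactly the utility/accuracy trade-off the Guidelines discuss.
answers = [dp_count(1000, epsilon=0.5, rng=rng) for _ in range(5)]
```

Each query returns the true count plus fresh noise, so repeated queries average toward the truth - which is why the Guidelines also stress tracking the total privacy budget across queries.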

  • Kashmir Hill

    Technology reporter at The New York Times and author of “Your Face Belongs To Us”

    13,056 followers

As an investigative reporter, when I come across things that I think are wrong in the world, I start looking into them, and if the facts suggest that there is indeed a problem, I write a story, hoping that it will lead to change. Last year, the thing that I looked at was automakers collecting data about how and where people were driving their cars. And change has come.

The most egregious practice I found was at General Motors, which was collecting information -- as often as every 3 seconds -- from people's cars about when they sped, slammed on the brakes, or made harsh turns, as well as when, where, and how far they drove. G.M. was selling that information to the insurance industry, which was using it to give people risk scores that could affect whether they could get auto insurance and how much they would pay for it. The vast majority of consumers had NO IDEA this was happening until I reported it in March of last year: https://lnkd.in/eAc_cTSh

This line of reporting has had real impact. There is a federal class action lawsuit happening, but also, this week the Federal Trade Commission banned G.M. from selling individual drivers' behavior and location for five years, and put the automaker under a 20-year consent decree saying, essentially, it can't do creepy stuff like this with people's data without their explicit consent: https://lnkd.in/ey6FxP4q

Another happening this week: the Texas attorney general, which has also sued G.M. for privacy violations, filed a lawsuit against an Allstate subsidiary called Arity: https://lnkd.in/ec2EEKQp. Arity was doing something similar -- gathering data about people's driving from smartphone apps, such as Life360, and selling it to insurance companies, a practice I wrote about in June: https://lnkd.in/eucJKfnX

I'm not opposed to people's driving being monitored to determine their insurance rates if they are aware and consent to it. It could lead to safer roads. BUT THEY NEED TO KNOW THAT IT IS HAPPENING.

  • Jen Easterly

    CEO, RSAC | Leader | Speaker | Advisor | Optimist | #MoveFast&BuildThings

    122,732 followers

On 13 Nov, the Cybersecurity and Infrastructure Security Agency (CISA) & the Federal Bureau of Investigation (FBI) released a statement (https://lnkd.in/ezrFy_4j) on the US government's investigation into PRC targeting of telco infrastructure:

“PRC-affiliated actors have compromised networks at multiple telecommunications companies to enable the theft of customer call records data, the compromise of private communications of a limited number of individuals who are primarily involved in government or political activity, and the copying of certain information that was subject to U.S. law enforcement requests pursuant to court orders. We expect our understanding of these compromises to grow as the investigation continues."

With the investigation ongoing, folks should take basic steps now to protect their personal communications. With gratitude to CISA's Senior Technical Advisor Bob Lord (https://lnkd.in/e-WxWiFF), consider the below steps:

- Enable FIDO authentication or FIDO (https://lnkd.in/ezzyha7t) for email & social media accounts
- Migrate off SMS MFA for all other logins. Migrate to FIDO/passkeys if you can, otherwise to an authenticator app
- Use a password manager for all passwords. Use a strong passphrase (https://lnkd.in/ebPpTAU5) for the vault password
- Set a telco PIN to reduce the chances of a SIM-swap attack
- Update the OS and all apps, and turn on auto-update

Additional tips:

1. Encrypt all text and voice communications (some options):
- Signal works well on iPhones & Android phones.
- iMessage is great if all your contacts are within the Apple ecosystem, though that’s limiting.
- Collaboration suites like Google Workspace or Teams can work but don’t always encrypt as you might assume. For example, Teams encrypts data point-to-point, meaning it’s decrypted on Microsoft’s servers before being re-encrypted to the recipient. If you want end-to-end encryption, there’s an option, but it’s off by default and only supports two people on the call.
- WhatsApp might be OK for some people based on their threat model, but understand the metadata it keeps (https://lnkd.in/eQkP-Ety) & how it's used (https://lnkd.in/eiZmxgi4).

2. If you use an iPhone, disable these carrier-provided services that increase the attack surface:
- Disable: Settings > Apps > Messages > Send as Text Message
- Disable: Settings > Apps > Messages > RCS Messaging > RCS Messaging

3. Protect DNS lookups (some options):
- Apple iCloud Private Relay
- Cloudflare’s 1.1.1.1 resolver
- Quad9’s 9.9.9.9 resolver

4. Use recent hardware: Apple (iPhone 13 or newer) or Google (Pixel 6 or newer).

5. Depending on your threat model, consider enabling Lockdown Mode on iPhones: it will disable some features, but it’s manageable.

  • Luiza Jarovsky, PhD

    Co-founder of the AI, Tech & Privacy Academy (1,400+ participants), Author of Luiza’s Newsletter (91,000+ subscribers), Mother of 3

    126,704 followers

🚨 AI Privacy Risks & Mitigations – Large Language Models (LLMs), by Isabel Barberá, is the 107-page report about AI & Privacy you were waiting for! [Bookmark & share below.] Topics covered:

- Background
"This section introduces Large Language Models, how they work, and their common applications. It also discusses performance evaluation measures, helping readers understand the foundational aspects of LLM systems."

- Data Flow and Associated Privacy Risks in LLM Systems
"Here, we explore how privacy risks emerge across different LLM service models, emphasizing the importance of understanding data flows throughout the AI lifecycle. This section also identifies risks and mitigations and examines roles and responsibilities under the AI Act and the GDPR."

- Data Protection and Privacy Risk Assessment: Risk Identification
"This section outlines criteria for identifying risks and provides examples of privacy risks specific to LLM systems. Developers and users can use this section as a starting point for identifying risks in their own systems."

- Data Protection and Privacy Risk Assessment: Risk Estimation & Evaluation
"Guidance on how to analyse, classify and assess privacy risks is provided here, with criteria for evaluating both the probability and severity of risks. This section explains how to derive a final risk evaluation to prioritize mitigation efforts effectively."

- Data Protection and Privacy Risk Control
"This section details risk treatment strategies, offering practical mitigation measures for common privacy risks in LLM systems. It also discusses residual risk acceptance and the iterative nature of risk management in AI systems."

- Residual Risk Evaluation
"Evaluating residual risks after mitigation is essential to ensure risks fall within acceptable thresholds and do not require further action. This section outlines how residual risks are evaluated to determine whether additional mitigation is needed or if the model or LLM system is ready for deployment."

- Review & Monitor
"This section covers the importance of reviewing risk management activities and maintaining a risk register. It also highlights the importance of continuous monitoring to detect emerging risks, assess real-world impact, and refine mitigation strategies."

- Examples of LLM Systems’ Risk Assessments
"Three detailed use cases are provided to demonstrate the application of the risk management framework in real-world scenarios. These examples illustrate how risks can be identified, assessed, and mitigated across various contexts."

- Reference to Tools, Methodologies, Benchmarks, and Guidance
"The final section compiles tools, evaluation metrics, benchmarks, methodologies, and standards to support developers and users in managing risks and evaluating the performance of LLM systems."

👉 Download it below.
👉 NEVER MISS my AI governance updates: join my newsletter's 58,500+ subscribers (below).

#AI #AIGovernance #Privacy #DataProtection #AIRegulation #EDPB
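The probability-times-severity logic in the Risk Estimation & Evaluation section can be sketched as a simple scoring matrix. The 1-4 scales, thresholds, and example risk names below are illustrative assumptions for this sketch, not values taken from the report:

```python
def risk_level(probability: int, severity: int) -> str:
    """Combine probability and severity (each rated 1-4) into a final
    risk level used to prioritize mitigation. Thresholds are illustrative."""
    score = probability * severity
    if score >= 12:
        return "critical"
    if score >= 6:
        return "high"
    if score >= 3:
        return "medium"
    return "low"

# Hypothetical LLM-system risks: (probability, severity)
risks = {
    "training data memorization of PII": (2, 4),
    "prompt injection exfiltrating context": (3, 4),
    "re-identification from model outputs": (2, 3),
}
ranked = sorted(risks, key=lambda r: risks[r][0] * risks[r][1], reverse=True)
# ranked[0] == "prompt injection exfiltrating context" (score 12 -> critical)
```

Ranking risks this way gives the prioritized list that the report's risk-control and residual-risk steps then work through.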
