Fairpatterns

Technology, Information and Internet

Paris, Île-de-France · 3,010 followers

AI Native Human Safety Tech

About

Fair Patterns is the B2B antivirus for humans: a multimodal AI that finds and fixes digital manipulation and legal violations. Our AI scans sites, apps, and social media to find and fix:
- dark patterns
- privacy violations
- consent manipulation
- addictive design
- predatory and exploitative pricing
- …

We’re defining a new category of B2B Human Safety Tech.

Website
https://fairpatterns.ai
Industry
Technology, Information and Internet
Company size
11-50 employees
Headquarters
Paris, Île-de-France
Type
Civil company/Commercial company/Other company types
Founded
2023
Specialties
darkpatterns, design, legaldesign, fairpatterns, Deceptive Design, Deceptive patterns, Compliance, Digital rights, Automated Detection, UX, Digital Service Act, Innovation, Design Thinking, CX, AI, LLM, Computer vision and AI Safety

News

  • For years, navigation apps have been optimised for one thing: time. You could see why a route was recommended. You could verify it. GenAI changes that. Navigation is becoming conversational. You ask for something scenic, something calm, something lively. The app interprets it. And you have no way to check whether its interpretation matches reality, or whether something else influenced it. Researchers just mapped out what can go wrong.

    👉 A route described as "lively" that runs through a paid partner zone.
    👉 A calm, reassuring persona suggesting a route based on outdated safety data.
    👉 Personal data bundled into features that never needed it.

    Some of these are intentional. Some are not. What they share is that a more human-sounding interface makes them harder to question. The paper argues for the opposite of seamless design: disclose when a route is commercially influenced, hedge when the data is uncertain, and make the privacy-respecting option easy to find. The more an interface sounds like a trusted friend, the more important it is that it is actually honest.

    Swipe through for the full breakdown 👉

  • YouTube technically offers a non-profiling recommender system. The DSA requires it. What the DSA did not anticipate was that YouTube would make it take 7 steps to find, greet users with a blank homepage when they get there, and then display a permanent prompt asking them to turn profiling back on every single time they open the app. EDRi has just filed a formal complaint with the Belgian Digital Services Coordinator documenting this issue.

    👉 Opting out of profiling forces you to delete your entire watch history.
    👉 The alternative system is never mentioned in any menu.
    👉 Re-enabling profiling takes two clicks. Disabling it takes seven steps.
    👉 The language frames profiling positively and opting out as a loss.

    Addictive design is not always about content; sometimes it is about making the privacy-respecting option so inconvenient that almost nobody uses it.

    Swipe through for the full breakdown 👉
    🔗 Read the case: https://lnkd.in/eQdKE7DB

  • Meta and YouTube knew exactly what they were doing when they built addictive products. They documented it internally, raised concerns internally, and kept going anyway. A California jury just awarded $6M against both platforms after their own internal documents were read aloud in court.

    👉 A strategy memo targeting tweens before they were old enough to sign up.
    👉 Retention data on 11-year-olds that nobody was supposed to talk about.
    👉 18 experts warning against beauty filters. Shipped anyway.
    👉 Leadership in the room. Kids on the other side of the screen.

    The jury deliberated 44 hours across nine days. Negligence. Malice. Foreseeable harm.

    Swipe through to see what the documents actually said 👉
    Read the full story: https://lnkd.in/eDuVkR6y

  • The verdict came in yesterday. Meta and YouTube were found liable for how the platforms were built to keep a teenage girl watching.

    👉 Infinite scroll.
    👉 Autoplay.
    👉 Algorithmic recommendations.
    👉 Constant notifications.

    The four mechanics that kept a 6-year-old glued to a screen until she was a teenager in crisis. We have been saying for years that addictive design causes real harm. Now a courtroom has confirmed it.

    Swipe through 👉
    🔗 Read the full story: https://lnkd.in/eDuVkR6y

  • 🚨 Meta’s potential fine just doubled to €10M, and the reason is a classic dark pattern. Users on Facebook and Instagram were given a choice between algorithmic and chronological feeds. But that choice didn’t persist. The platform reset preferences during navigation or after reopening the app, effectively pushing users back to profiling-based recommendations. The Amsterdam Court of Appeals made it clear:

    👉 A choice that doesn’t stick is not a real choice.

    Under the Digital Services Act:
    🔹 Article 38 requires a genuine non-profiling option.
    🔹 Article 25 prohibits nudging users without consent.

    This case shows that UX decisions like default settings and persistence can be legal risks.
    🔗 Read the case: https://lnkd.in/eRAnwRHw
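    From an engineering standpoint, the fix the court is pointing at is unglamorous: a user's feed choice must be written to durable storage and read back on every session, never silently reset. A minimal Python sketch of that idea follows; the class name, file-based store, and `"chronological"` default are hypothetical illustrations, not anything from the case or the platforms involved.

    ```python
    import json
    import os


    class FeedPreferenceStore:
        """Sketch: persist a user's feed choice so it survives app
        restarts instead of resetting to the profiled feed.
        A JSON file stands in for a real preferences backend."""

        def __init__(self, path: str):
            self.path = path

        def set_feed(self, user_id: str, feed: str) -> None:
            # feed is "chronological" or "algorithmic"
            prefs = self._load()
            prefs[user_id] = feed
            with open(self.path, "w") as f:
                json.dump(prefs, f)

        def get_feed(self, user_id: str) -> str:
            # Default to the non-profiling option when no explicit
            # choice has been recorded.
            return self._load().get(user_id, "chronological")

        def _load(self) -> dict:
            if not os.path.exists(self.path):
                return {}
            with open(self.path) as f:
                return json.load(f)
    ```

    The point of the sketch is the round trip: a fresh instance (simulating the app being reopened) reads the same choice back, which is exactly the property the Court found missing.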

  • 📑 On 23 February 2026, 61 regulators from across the globe published a Joint Statement on AI-Generated Imagery and the Protection of Privacy through the Global Privacy Assembly. Their concern is specific: AI systems, particularly those integrated into social media platforms, now enable the creation of realistic images and videos of real individuals without their knowledge or consent. The statement highlights non-consensual intimate imagery, defamatory depictions, and the exploitation of children and vulnerable groups as areas of urgent regulatory focus.

    ✒️ The expectations are clear. Organisations developing or deploying AI content generation systems must implement robust safeguards against the misuse of personal information. They must be transparent about system capabilities, acceptable uses, and the consequences of misuse. They must provide effective and accessible removal mechanisms for harmful content. And they must address specific risks to children through enhanced protections and age-appropriate information for children, parents, guardians, and educators.

    The co-signatories also remind organisations that generating non-consensual intimate imagery can constitute a criminal offence in many jurisdictions. AI-generated imagery is already causing real harm to real people, and regulators are no longer treating it as hypothetical. For companies building or deploying generative AI, the message is unmistakable: privacy, dignity, and safety are not features to be added later.

    🌐 Learn more on: https://lnkd.in/d-eSY9zJ
    🔔 Follow us on Instagram: https://lnkd.in/d5fAwAzA
    📃 Read the full joint statement: https://lnkd.in/eYNQAT79

  • 👉 In 2022, US children aged 0 to 17 generated $11 billion in advertising revenue for major social media platforms. That was four years ago. With the rapid pace of technological development and the explosion of AI-powered content algorithms since then, imagine what that number could be in 2026.

    💬 At FairPatterns, we prioritize building a safer internet environment for children. That means following the latest scientific research on how digital platforms affect young users and translating those findings into actionable insights for the tech industry, regulators, and parents.

    A 2025 review article published in Cureus shows that social media has become a global phenomenon driven by rapid expansion across Facebook, Instagram, YouTube, Snapchat, and TikTok. In 2024, the number of active social media users worldwide surpassed 5 billion and is projected to reach over 6 billion by 2028. Between 93 and 97 percent of teenagers aged 13 to 17 use at least one social media platform, and adolescent girls aged 16 to 24 spend more than three hours daily scrolling.

    Beyond addiction, social media algorithms that incorporate artificial intelligence technology raise significant ethical concerns, particularly for teenagers. These platforms are fully committed to maximizing profits by serving advertising companies that target specific demographics through continuous feeds designed to keep users on their platforms for as long as possible.

    Innovation should not come at the cost of a child's well-being. When platforms deliberately design for addiction, they are not pushing boundaries. They are crossing them. And increasingly, regulators agree.

    FairPatterns uses multimodal AI to detect dark patterns, addictive design, and consent manipulation across digital platforms. Protecting children online starts with identifying the design patterns that put them at risk.

    🌐 Learn more on: https://lnkd.in/d-eSY9zJ
    🔔 Follow us on Instagram: https://lnkd.in/d5fAwAzA
    📚 Source: De D, El Jamal M, Aydemir E, Khera A. (2025). Social Media Algorithms and Teen Addiction: Neurophysiological Impact and Ethical Considerations. Cureus, 17(1), e77145. https://lnkd.in/dDXhqBY9

  • 📃 Australia is drawing a line against dark patterns. The Australian Government has released the Competition and Consumer Amendment (Unfair Trading Practices) Bill 2026, a draft bill proposing sweeping reforms to the Australian Consumer Law. If passed, these changes would take effect on 1 July 2027.

    At the heart of the proposal is a general prohibition on unfair trading practices, built around a two-limb test. Conduct is prohibited if it unreasonably manipulates a consumer or distorts their decision-making environment, AND causes or is likely to cause detriment, whether financial or otherwise. The Explanatory Memorandum makes the intent clear. This targets conduct that exploits cognitive and behavioural biases, including obstructive design, false urgency cues, and confirm-shaming tactics that pressure consumers into decisions they would not otherwise make.

    The draft bill also introduces strict rules for subscription contracts. Suppliers must disclose subscription type, cancellation terms, and renewal conditions before sign-up. Ongoing reminders are required throughout the contract, and cancellation must be easy to find and straightforward to complete. If a consumer signed up online, they must be able to cancel online.

    On drip pricing, the bill mandates immediate disclosure of all mandatory transaction-based charges. Fees must be legible, prominent, unambiguous, and displayed in close proximity to the base price.

    The enforcement framework matches the ambition. Corporate penalties can reach up to $50 million, three times the benefit obtained, or 30% of the company's adjusted turnover during the breach period, whichever is greater. Individuals face penalties of up to $2.5 million.

    Businesses that invest in ethical design now will be ahead of the curve. Those relying on dark patterns are building on borrowed time.

  • As part of the Building Trust in AI event (March 2–12), held by DataCamp, our CEO Marie Potel-Saville joined Sara Vienna from Metalab for a session on trustworthy AI product design, from dark patterns in conversational interfaces to privacy-by-design that users actually notice. We talked about badly designed AI products and the real harm they cause.

    As Marie put it: "The real question is not whether users should trust AI products. The endgame is ensuring people actually understand what these tools do, understand the mechanics behind them, understand the two-sided markets where the free side means you end up paying with your personal data. AI products should enhance critical thinking in humans, ideally preserve it, then enhance it as well."

    This is what we fight for at FairPatterns. We empower users to make their own free and informed choices.

    Read the blog article to learn more: https://lnkd.in/dj4Fx89K

