I came across research last week that I genuinely cannot stop thinking about. In the logic of AI, "man" is to "programmer" as "woman" is to "homemaker." No one explicitly coded that bias into the system; the machines simply learned it from us. They mirrored our job postings, our articles, our casual conversations, and billions of our blind spots fed into a black box, until the algorithm started reflecting our worst habits back at us. Bias in AI isn't always malicious. But sometimes AI is weaponized against women's safety at scale. On platforms like X, a woman posts a photo and the replies fill with prompts for AI tools to "undress" her (see the links in comments). These tools then publicly generate explicit, non-consensual images of real women: students, mothers, leaders. We want to use AI. We must use AI, but thoughtfully. The information it shares is an unfortunate reflection of our society, one in which women have fought their way up after historically being reduced, objectified, and pushed to the margins, and now those patterns are being encoded into new systems. When a tool can be used to violate a woman's dignity in seconds, that's a design and policy failure. My question is: can we build AI that doesn't inherit the worst of us? I think we can. But only if the people building it are asking that question out loud before the product ships. #AI #GenderBias #WomenSafety
Social Impact Of AI
-
Not all engagement is created equal! Algo update! LinkedIn’s algorithm is now penalising accounts with lots of automated, AI-generated comments on their posts 🚫 Instead of helping, these repetitive or irrelevant comments, which repeat your post back to you parrot-fashion, could actually be DAMAGING the reach of your favourite creators, meaning you will see LESS of what you like in the feed! And they won't thank you for it! If you've been taught by some 'guru' that engagement always wins, and installed a Chrome extension or third-party tool to help you keep on top of it, please wipe that smug smile off your face! LinkedIn is on to you! You're damaging not only your own reach, but that of the people you have been attempting to build a robotic relationship with! Everyone knows I love AI, but the comments section is NOT the place for it! At You Need Nicki, we've always stood firm on the power of real, meaningful engagement. There are no shortcuts; you have to do the work. And we're super happy to see the LinkedIn algorithm favouring genuine interactions that drive value again: thoughtful comments, authentic conversations, and real connections. This is what we help our clients focus on: quality over quantity, with engagement that builds genuine connections and opportunity. Genuine engagement is a springboard for real-life relationships, like the lovely Angie McQuillin here, one of the many LinkedIn connections I have since had the pleasure of meeting in person. Pro tip: if you spot AI-driven, empty comments on your posts, consider deleting or blocking them to protect your reach and maintain a high-value feed. How can you spot an AI comment on your post? Drop your thoughts in the comments! 🤖 #LinkedInTips #SocialMediaStrategy #MeaningfulEngagement #AlgoUpdate
-
"This report, developed by UNESCO in collaboration with the Women for Ethical AI (W4EAI) platform, is based on and inspired by the gender chapter of UNESCO’s Recommendation on the Ethics of Artificial Intelligence. This concrete commitment, adopted by 194 Member States, is the first and only recommendation to incorporate provisions to advance gender equality within the AI ecosystem. The primary motivation for this study lies in the realization that, despite progress in technology and AI, women remain significantly underrepresented in its development and leadership, particularly in the field of AI. For instance, women currently reportedly make up only 29% of researchers in research and development (R&D) [1], while this drops to 12% in specific AI research positions [2]. Additionally, only 16% of the faculty in universities conducting AI research are women, reflecting a significant lack of diversity in academic and research spaces [3]. Moreover, only 30% of professionals in the AI sector are women [4], and the gender gap increases further in leadership roles, with only 18% of C-suite positions at AI startups being held by women [5]. Another crucial finding of the study is the lack of inclusion of gender perspectives in regulatory frameworks and AI-related policies. Of the 138 countries assessed by the Global Index for Responsible AI, only 24 have frameworks that mention gender aspects, and of these, only 18 make any significant reference to gender issues in relation to AI. Even in these cases, mentions of gender equality are often superficial and do not include concrete plans or resources to address existing inequalities. The study also reveals a concerning lack of gender-disaggregated data in the fields of technology and AI, which hinders accurate measurement of progress and persistent inequalities.
It highlights that in many countries, statistics on female participation are based on general STEM or ICT data, which may mask broader disparities in specific fields like AI. For example, there is a reported 44% gender gap in software development roles [6], in contrast to a 15% gap in general ICT professions [7]. Furthermore, the report identifies significant risks for women due to bias in, and misuse of, AI systems. Recruitment algorithms, for instance, have shown a tendency to favor male candidates. Additionally, voice and facial recognition systems perform poorly on female voices and faces, increasing the risk of exclusion and discrimination in accessing services and technologies. Women are also disproportionately likely to be the victims of AI-enabled online harassment. The document also highlights the intersectionality of these issues, pointing out that women with additional marginalized identities (relating to race, sexual orientation, socioeconomic status, or disability) face even greater barriers to accessing and participating in the AI field."
-
I spent a week studying 500+ LinkedIn accounts from the UAE, Saudi Arabia, and India as part of my research. 60% of the profiles I studied had: - The same old sob stories - Auto-generated comments - Cliché quotes - No originality - No soul This platform wasn't this saturated 5 years ago. At least people took the time to write a DM personally or engage meaningfully. Today, almost 60% of people are using AI for content, comments, and even DMs. The most popular professional networking platform is shedding its essence. (I lowkey felt proud looking at our client's account. The posts look authentic, the comments are made purposefully, and honestly, it's hard to tell they were managed by us.) Nevertheless, in this world of AI, if you are someone still maintaining authenticity here, giving some personal attention to your account, I appreciate you. If you are reading this post and you love originality over automation, drop a comment. More people need to see your profile. #LinkedIn Linkedin News LinkedIn News Middle East LinkedIn News India
-
Automating Inequality: When AI Undervalues Women’s Care Needs New research from the Care Policy and Evaluation Centre (CPEC) by Sam Rickman reveals that large language models (LLMs) used to summarise long-term care records and support social workers in England may be introducing gender bias into decisions about who gets support. Using real case notes from 617 older adults, researchers created gender-swapped versions and generated 29,616 summaries using different AI models. The results? - Google’s widely used AI model ‘Gemma’ downplays women’s physical and mental health issues in comparison to men’s. - Terms associated with significant health concerns, such as “disabled,” “unable,” and “complex,” appeared significantly more often in descriptions of men than of women. If AI summaries soften women’s diagnoses, women risk receiving less support, not because their needs are different, but because the language makes them seem so. #GenderBias #LLMs #HealthEquity #ResponsibleAI
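The study's audit design (build a gender-swapped counterfactual of each case note, summarise both with the model under test, then compare how often high-severity terms appear) can be sketched roughly as below. This is a minimal illustration, not CPEC's actual code: the word lists, the toy note, and the helper names are my own assumptions, and a real audit would insert the LLM summarisation step between the swap and the count.

```python
import re

# Illustrative gendered-word map (incomplete; e.g. possessive "her" vs
# object "her" is ambiguous and would need real NLP in practice).
SWAP = {
    "he": "she", "she": "he", "him": "her", "her": "him",
    "his": "her", "man": "woman", "woman": "man",
    "mr": "mrs", "mrs": "mr",
}

# Terms the study tracked as markers of significant health needs.
SEVERITY_TERMS = {"disabled", "unable", "complex"}

def swap_gender(text: str) -> str:
    """Build the counterfactual record by swapping gendered words."""
    def repl(m: re.Match) -> str:
        word = m.group(0)
        swapped = SWAP[word.lower()]
        return swapped.capitalize() if word[0].isupper() else swapped
    pattern = r"\b(" + "|".join(SWAP) + r")\b"
    return re.sub(pattern, repl, text, flags=re.IGNORECASE)

def severity_count(summary: str) -> int:
    """Count occurrences of high-need language in a summary."""
    words = re.findall(r"[a-z]+", summary.lower())
    return sum(w in SEVERITY_TERMS for w in words)

# Toy example (invented, not from the study's data):
note = "Mr Smith is an 84-year-old man. He is unable to manage his complex needs."
swapped = swap_gender(note)

# In the real audit, both versions would first be summarised by the model;
# the bias signal is the severity gap between the paired summaries.
gap = severity_count(note) - severity_count(swapped)
```

On the raw notes the gap is zero by construction, since swapping only touches gendered words; the study's finding is that the gap appears after the model rewrites each version, which is why the paired-counterfactual design isolates gender as the only changed variable.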
-
Automated comments are getting the boot on LinkedIn. The platform is cracking down on comments left using third-party automation tools that bypass any type of human review. Gyanda Sachdeva, VP of Product Management at LinkedIn, shared that enforcement actions may include: 🚫 Removing them from the “Most Relevant” section 📉 Limiting distribution so they aren’t shown outside the commenter’s network ⛔ Restricting accounts that use these tools in severe cases Automated comments had been rampant for a while. These new enforcement actions should further reduce the generic comments that were previously used to game visibility. While AI tools have made it easier than ever to create content and automate engagement, more is not always better. Commenting can absolutely help you build an audience and get in front of prospects, clients, peers, and potential followers. It’s easy to see why some people turn to automation. According to LinkedIn, comments can drive 3x more visibility. And with LinkedIn adding impression counts for comments last year, it further signals how important they are. But I see commenting as a value game, not a volume game. Most AI-generated comments I’ve seen simply restate what’s already been said in the post. Even worse are the automated comments that are just a string of emojis. They’re not additive and don’t move the conversation forward. Yes, automation can give you scale without manual effort. But in many cases, these types of comments end up reflecting poorly on the individuals and companies posting them.
-
Those AI-generated LinkedIn comments you're so proud of? They're killing your credibility faster than spam. Last year, my posts drowned in "Great job!", "Thanks", "Interesting!" spam. Today? AI-generated essays that say nothing. I watched engagement drop as authenticity died. Then I realised: robots can’t build trust. Humans do. AI didn’t raise the bar; it just made mediocrity sound smarter. Authenticity is your unfair advantage. 𝟱-𝗦𝗲𝗰𝗼𝗻𝗱 𝗔𝗰𝘁𝗶𝗼𝗻𝗮𝗯𝗹𝗲 𝗙𝗶𝘅𝗲𝘀: 1. Steal this template: “The part about [X] hit hard. How did you handle [specific challenge]?” 2. Add 1 personal detail: “This reminds me of when I…” (10 seconds). 3. Ask a short question: “Would this work for [industry]?” 4. Ditch the essay. Write like you’re texting a friend. 5. Set a 60-second timer. Overthinking = sounding like ChatGPT. Genuine comments take 30 seconds but: • Skyrocket your visibility (the algorithm rewards real convos) • Make you memorable in a bot-dominated feed • Build relationships that turn followers into clients Will your next comment be forgettable AI fluff, or the reason someone DMs you? P.S. The best LinkedIn growth hack isn’t a tool. It’s you. Agree? Drop your #1 tip for authentic engagement below. ------- Hi, I'm Adam Strong and I help founders scale, systemise, and exit businesses in 12-24 months, without losing their sanity. Listen to the full episode with me and Ruben Hassid; link below.
-
To all of you using (or considering) AI-produced comments to boost your LinkedIn profile: STOP. It's not as intelligent or original as you might think. All it takes is one other person using the same tool to expose everybody. AI-generated comments share the same style, similar metaphors, and repeatable phrases. At best, you'll be ignored. At worst, you'll get blocked. It's no mystery when the same account regularly posts AI comments and never responds, engages, follows, or requests to connect. Use your brain.
-
Yesterday, halfway through my morning coffee, I hit a sentence that made me stop scrolling. It was an article about a study from the London School of Economics and Political Science (LSE) testing AI in social care... and the gap it revealed wasn’t small. They took 617 real case notes, swapped only the gender, and ran them through Google’s Gemma model, already used by more than half of England’s councils. When the person was "Mr Smith," the AI wrote: "84-year-old man who lives alone and has a complex medical history, no care package and poor mobility." Swap to "Mrs Smith," and suddenly: "84-year-old living alone. Despite her limitations, she is independent and able to maintain her personal care." Same facts. Same needs. Different story. But in social care, these stories aren’t just decoration. They decide who gets help, how much, and how fast. Call someone "coping" instead of "struggling" and you’ve already shifted the outcome. LSE saw this again and again: men framed in terms of difficulty, women in terms of self-reliance. These systems are already shaping decisions. We don’t know exactly where, or what safeguards exist. And bias testing doesn’t seem to be required. If the machine changes the story, it changes the care. I’ll share the article from The Guardian in the comments. #AIethics #AlgorithmicFairness #TechPhilosopher — — — 🧭 Follow me for more on AI ethics, data strategy, and the messy, human side of tech: Sune Selsbæk-Reitz