Our new paper, “Detecting bias in algorithms used to disseminate information in social networks and mitigating it using multiobjective optimization,” is now out in PNAS Nexus.

Algorithms that determine how information spreads — from health campaigns to social media posts — are often optimized for one goal: maximizing reach. But what happens when reach comes at the expense of equity?

Paper: https://lnkd.in/epNgJSnP
ArXiv: https://lnkd.in/eAkVWwAK

In this work, led by Vedran Sekara and with Ivan Dotu, Manuel Cebrian, and Manuel García-Herranz, we show that state-of-the-art influence maximization algorithms — the same kind used to identify “influencers” in social networks — systematically leave parts of the network behind. Some groups receive information late, others not at all. In other words, algorithmic bias can emerge not from data, but from the mathematical definition of the problem itself.

To address this, we developed a multiobjective algorithm that balances spread and fairness. The result: we can significantly reduce informational inequality with only a minimal loss in reach. This suggests that optimization and equity don’t have to be opposing goals.

As algorithms increasingly shape who gets access to opportunities, resources, and knowledge, this kind of fairness-aware design becomes essential — not just for social media or marketing, but for public health, disaster response, and social resilience.
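To make the idea of balancing spread against fairness concrete, here is a minimal toy sketch, not the paper's actual algorithm: a greedy seed-selection loop over a tiny hand-made graph, using a one-hop spread model and an objective that blends total reach with the worst-off group's coverage rate. The graph, group labels, spread model, and `alpha` weight are all illustrative assumptions.

```python
# Toy social network: node -> neighbors, plus a group label per node.
# All structure and numbers here are illustrative, not from the paper.
graph = {
    0: [1, 2], 1: [0, 2, 3], 2: [0, 1], 3: [1, 4, 5],
    4: [3, 5], 5: [3, 4, 6], 6: [5, 7], 7: [6],
}
group = {0: "A", 1: "A", 2: "A", 3: "A", 4: "B", 5: "B", 6: "B", 7: "B"}

def covered(seeds):
    """One-hop spread model: a node is reached if it is a seed or a seed's neighbor."""
    reached = set(seeds)
    for s in seeds:
        reached.update(graph[s])
    return reached

def objective(seeds, alpha):
    """Blend overall reach with the coverage rate of the worst-off group."""
    reached = covered(seeds)
    rates = []
    for g in set(group.values()):
        members = [n for n in graph if group[n] == g]
        rates.append(sum(1 for n in members if n in reached) / len(members))
    reach = len(reached) / len(graph)
    return alpha * reach + (1 - alpha) * min(rates)

def greedy_seeds(k, alpha):
    """Greedily pick k seeds maximizing the blended objective."""
    seeds = []
    for _ in range(k):
        best = max((n for n in graph if n not in seeds),
                   key=lambda n: objective(seeds + [n], alpha))
        seeds.append(best)
    return seeds

# alpha=1.0 optimizes pure reach; lower alpha trades reach for group equity.
fair_seeds = greedy_seeds(2, alpha=0.5)
```

With `alpha=0.5` the greedy loop avoids seed sets that saturate one group while ignoring the other, which is the spirit (though not the method) of the multiobjective approach described in the paper.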
Inclusive Algorithm Design
Explore top LinkedIn content from expert professionals.
Summary
Inclusive algorithm design is the practice of creating AI and computational systems that prioritize fairness, representation, and accessibility for people from all backgrounds, identities, and abilities. This approach aims to prevent bias, recognize diverse needs, and ensure everyone benefits from technology—not just a select few.
- Expand representation: Actively include perspectives from underrepresented communities and ensure datasets reflect a wide range of experiences, identities, and abilities.
- Engage stakeholders: Collaborate with local experts, community members, and end users to validate data choices and design requirements, making sure solutions address real-world needs.
- Audit and improve: Regularly review algorithms for bias, test with diverse prompts, and refine systems to close inclusion gaps and reduce inequities in AI outcomes.
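The "audit and improve" step above can be sketched in a few lines: compute the favorable-outcome rate per group and compare the lowest rate to the highest (the common "four-fifths" rule of thumb). The records, group names, and 0.8 threshold are hypothetical, and a real audit would cover many more metrics than this.

```python
from collections import defaultdict

# Hypothetical audit log: (group, outcome) pairs; 1 = favorable outcome.
outcomes = [
    ("group_a", 1), ("group_a", 1), ("group_a", 0), ("group_a", 1),
    ("group_b", 1), ("group_b", 0), ("group_b", 0), ("group_b", 0),
]

def selection_rates(records):
    """Favorable-outcome rate per group."""
    totals, favorable = defaultdict(int), defaultdict(int)
    for g, y in records:
        totals[g] += 1
        favorable[g] += y
    return {g: favorable[g] / totals[g] for g in totals}

def disparate_impact(rates):
    """Ratio of the lowest group rate to the highest (the 'four-fifths' check)."""
    return min(rates.values()) / max(rates.values())

rates = selection_rates(outcomes)
ratio = disparate_impact(rates)
flagged = ratio < 0.8  # common rule-of-thumb threshold, not a legal standard
```

Running such a check on every model release is one lightweight way to make the "regularly review algorithms for bias" recommendation routine rather than occasional.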
How can we ensure that #ArtificialIntelligence respects human rights and societal values? This paper delves into the challenges and solutions for integrating human rights into the design and implementation of #AI systems. It introduces a framework called "Design for Values," which draws on methodologies like Value Sensitive Design and Participatory Design. The paper presents a roadmap for proactively engaging societal stakeholders to translate fundamental human rights into context-dependent design requirements. This is accomplished through a structured, inclusive, and transparent process that aims to bridge the socio-technical gaps often present in AI development.

1️⃣ Socio-Technical Gaps: The paper identifies a critical gap between technical solutions and societal context, often resulting in AI systems that may inadvertently violate human rights.
2️⃣ Design for Values Framework: The paper introduces a comprehensive framework that aims to bridge these gaps by translating moral and social values into design requirements for AI systems.
3️⃣ Tripartite Methodology: The framework employs a three-pronged approach — Conceptual, Empirical, and Technical investigations — to ensure that the design process is iterative and integrative.
4️⃣ Stakeholder Engagement: The paper emphasizes the importance of involving societal stakeholders in the design process to ensure that AI systems are aligned with human rights and societal norms.
5️⃣ Local Meaning and Context: The paper stresses the need to consider local social practices and language to make AI systems more context-sensitive and ethical.

The paper provides a well-structured roadmap for designing AI systems that are aligned with human rights and societal values. It offers actionable insights and methodologies that can be applied across various domains, making it a must-read for anyone involved in the development or governance of AI technologies. ✍🏻 Evgeni Aizenberg and Jeroen van den Hoven.
Designing for human rights in AI. Big Data & Society 2020 7:2. DOI: 10.1177/2053951720949566 ✅ Sign up for our newsletter to stay updated on the most fascinating studies related to digital health and innovation: https://lnkd.in/eR7qichj
-
New responsible AI paper with Christina N. Harrington, PhD and Shaun Kane! We synthesized categories of disability, health, and accessibility representation and intersectional harms, as evaluated by people with diverse disabilities and health conditions who created AI images with us during an interactive interview study. We'll present this at the upcoming Equity and Access in Algorithms, Mechanisms, and Optimization (EAAMO) conference!

Takeaways:
* Disability representation is about much more than disabilities; not everyone who experiences health conditions or ableism identifies as disabled. Our expansion to disability, health, and accessibility (DHA) was intentional and is in deference to Sami Schalk's book, Black Disability Politics, and the greater Disability Justice Movement.
* DHA representation in AI images refers not only to how people look but also to access technologies, objects, actions, and motions that may signify symptoms. When combined with generic terms like activities, participants expected these to be shown done in an accessible manner (e.g., an image of disabled people doing yoga should show adaptive yoga techniques).
* We point out intersectional harms, including: environments often depicted as upper class, colorism (in addition to the predominance of white disabled people in default AI images), and body size homogenization.
* Often, successive prompts did not result in "better" images; they could even be worse than the originals, something our participants identified particularly when they prompted for multiple representation characteristics (e.g., people with different races, ages, and disabilities).

What do we do now?
* Proactively include DHA in evals: models change, but people with disabilities and health conditions deserve access to realistic representations which dignify and celebrate diverse life experiences and identities. If you can make better AI image representations than this paper, great. We still need to evaluate representation inclusive of DHA; this need will not go away.
* Community members should not just evaluate outputs, but should develop test prompts in their own words.
* Our categories could scaffold term taxonomies and test prompt sets.
* Intersectional harms are *another* reason for those of us focusing on different aspects of representation to work together.
* Iteratively prompting AI models was an interesting method to spur evaluations, warranting future research.
* Accessible research is not just for accessibility research: I and others have said this, but it is always worth mentioning. The substance of evaluations doesn't matter if the tools, environments, and processes for eval completion aren't accessible.

https://lnkd.in/ehsQ_cAd
-
AI is only as inclusive as the voices driving its development. The way we build and implement AI today will determine how it serves tomorrow. The choice is ours. It has the potential to reshape industries, but if left unchecked, it risks deepening societal divides and widening the inclusion gap. While we see progress, Western-centric AI development has perpetuated biases by relying on incomplete data and overlooking underserved regions. To shift this narrative, we need to move beyond the buzzwords and focus on tangible actions. Here’s how:

→ Diversify the data: We must actively collect and incorporate data from underrepresented regions, ensuring AI systems reflect diverse needs and experiences.
→ Empower diverse talent: AI development must include voices from all communities. We need initiatives that nurture talent in underserved populations to bring fresh perspectives into tech.
→ Engage globally: Policymakers, tech companies, and healthcare providers must collaborate, ensuring AI solutions are designed for global accessibility.
→ Hold ourselves accountable: Regular audits for bias in AI systems should become the norm.
→ Rethink governance: We need inclusive AI governance that prioritizes representation, particularly when it comes to health and social welfare.
→ Learn from local experts: Before implementing AI in new regions, tech developers must work alongside local experts to understand cultural nuances and real-world needs.

Moreover, by applying the 4D Framework (Develop, De-identify, Decipher, De-bias), we can create AI systems that are not just smarter, but also fairer, more inclusive, and global. It’s time to change the conversation. But this isn’t just about building better tech. It’s about expanding access, education, and funding to communities that have been left behind. It’s about ensuring that every person, no matter where they live, has a seat at the table. AI’s future doesn’t belong to one group. It belongs to all of us.
The real question is: Will we design it for everyone?
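As one small, concrete illustration of the "De-identify" step in the 4D Framework named above: drop direct identifiers and coarsen quasi-identifiers before data is used for training. The field names, the identifier list, and the age-bucketing rule are hypothetical assumptions for this sketch, not part of the framework itself.

```python
# Illustrative de-identification pass: remove direct identifiers and
# coarsen a quasi-identifier (age) into decade buckets.
DIRECT_IDENTIFIERS = {"name", "email", "phone"}  # assumed field names

def deidentify(record):
    """Return a copy of `record` without direct identifiers, with age bucketed."""
    out = {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIERS}
    if "age" in out:
        out["age"] = f"{(out['age'] // 10) * 10}s"  # e.g. 34 -> "30s"
    return out

clean = deidentify({"name": "Jane Doe", "email": "jane@example.org",
                    "age": 34, "diagnosis": "flu"})
```

Real de-identification is harder than field removal (re-identification via combinations of quasi-identifiers is well documented), so treat this as a starting point, not a guarantee.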
-
Feel like you have the perfect dataset to train an inclusive AI model? Think again. Have you truly considered someone's full, embodied experience beyond the digital footprint that exists about them? ❓ "𝗗𝗶𝗴𝗶𝘁𝗮𝗹 𝗱𝗮𝘁𝗮 𝘁𝗿𝗮𝗶𝗹𝘀 𝘁𝗲𝗹𝗹 𝗮 𝘀𝘁𝗼𝗿𝘆, 𝗯𝘂𝘁 𝗶𝘀 𝗶𝘁 𝘁𝗵𝗲 𝗿𝗶𝗴𝗵𝘁 𝘀𝘁𝗼𝗿𝘆?" ❓ Often, we build training datasets based on available data. Unfortunately, this reifies existing digital inequalities, as some people have more digital traces than others.

𝗧𝗼 𝗯𝘂𝗶𝗹𝗱 𝗺𝗼𝗿𝗲 𝗶𝗻𝗰𝗹𝘂𝘀𝗶𝘃𝗲 𝗔𝗜 𝗺𝘆 𝗿𝗲𝗰𝗼𝗺𝗺𝗲𝗻𝗱𝗮𝘁𝗶𝗼𝗻𝘀 𝗮𝗿𝗲:
1. Start by assessing your dataset for who is, and who is not, included.
2. Think about the lives of the people included; build 5 "day in the life" personas.
3. Assess each "day in the life" for which data points are digitally captured. 𝘼𝙣𝙙 𝙬𝙝𝙖𝙩 𝙩𝙝𝙚𝙮 𝙢𝙞𝙨𝙨.
4. Ground-truth your findings with your end users, asking them to fill in any blanks and validate your assessment.
5. Refine your data sources to more holistically capture the realities of the people you aim to serve. Or, gulp, walk away from AI for now (yes, that's an option).

If you're interested in inclusive AI, I recommend reading the amazing work of Alexandra R., Alex Kessler, and Jacobo Menajovsky through the Center for Financial Inclusion (CFI). Read the full report: https://lnkd.in/eNwFW4Ha
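Step 1 of the recommendations above, assessing who is and is not included, can start as a simple comparison of dataset composition against a reference population. The group names, shares, and tolerance below are made-up assumptions; in practice the reference shares would come from census or survey data for the people you aim to serve.

```python
# Hypothetical comparison of dataset composition vs. a reference population.
population_share = {"urban": 0.55, "rural": 0.45}   # assumed reference shares
dataset_counts = {"urban": 900, "rural": 100}        # assumed dataset counts

def representation_gaps(counts, reference, tolerance=0.10):
    """Flag groups whose dataset share falls short of their population share."""
    total = sum(counts.values())
    gaps = {}
    for g, ref in reference.items():
        share = counts.get(g, 0) / total
        if share < ref - tolerance:
            gaps[g] = {"dataset_share": share, "population_share": ref}
    return gaps

underrepresented = representation_gaps(dataset_counts, population_share)
```

This only surfaces who is missing; steps 2 through 4 (personas and ground-truthing with end users) are what tell you *why* they are missing and what the digital traces leave out.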
-
Using AI to Recognize Exclusion: The Microsoft Inclusive Tech Lab just published a great resource showing how to use generative AI as a thinking partner, not a checklist, to uncover where our designs might unintentionally exclude people. It walks you through prompts for websites, apps, and games, all built around the inclusive-design principle “Recognize Exclusion.” The AI generates examples of where people could get left out (perceivability, operability, understandability), which you can then validate with real people with disabilities. This is such a great example of how AI can expand empathy and awareness, instead of just automating compliance. Definitely worth a read: https://lnkd.in/gzshGu7e #a11y
-
As AI systems become more prevalent, teams must use inclusive design to build with marginalized communities to reduce the possibility of AI-caused harm to those communities. Ioana Tanase and I partnered with Hanna Wallach's team to leverage the inclusive design methodology to build generative AI systems that consider the disability community. Here is an overview of the process:

1. Identify the risks by partnering with the disability community to understand fairness-related risks affecting people with disabilities.
2. Turn those findings into a systematized concept to help develop methods to measure the risks.
3. Revise systems as needed.
4. Monitor the technology to ensure a better experience for people with disabilities.

Check out this article to learn more about the process and the importance of measurement to reduce harm. https://lnkd.in/ezhEdPbc Big thanks to our partners at Microsoft Research: Chad Atalla, Dan Vann, Emily Corvi, Hannah Washington, Tricia McDonough, and Stefanie Reed.
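The monitoring step of a process like the one above can be automated as a regression check: once a risk has been turned into a measurable metric, compare each new evaluation run against a stored baseline and flag drops. The metric names, scores, and threshold below are assumptions for illustration, not Microsoft's actual measurements.

```python
# Toy monitoring check: flag fairness metrics that regressed vs. a baseline.
def regression_alerts(baseline, latest, max_drop=0.05):
    """Return metrics whose latest score dropped more than `max_drop` below baseline."""
    return [m for m, v in latest.items() if baseline.get(m, 0.0) - v > max_drop]

# Hypothetical metric names and scores from two evaluation runs.
baseline = {"min_group_accuracy": 0.91, "caption_respectfulness": 0.88}
latest = {"min_group_accuracy": 0.84, "caption_respectfulness": 0.89}

alerts = regression_alerts(baseline, latest)
```

Wiring a check like this into each model update keeps the "monitor the technology" step continuous, so regressions surface before they reach people with disabilities rather than after.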