Engineering Ethics In Practice


  • View profile for Peter Slattery, PhD

    MIT AI Risk Initiative | MIT FutureTech

    66,704 followers

"This position paper challenges the outdated narrative that ethics slows innovation. Instead, it argues that ethical AI is smarter AI—more profitable, scalable, and future-ready. AI ethics is a strategic advantage—one that can boost ROI, build public trust, and future-proof innovation. Key takeaways include: 1. Ethical AI = High ROI: Organizations that adopt AI ethics audits report double the return compared to those that don’t. 2. The Ethics Return Engine (ERE): A proposed framework to measure the financial, human, and strategic value of ethics. 3. Real-world proof: Mastercard’s scalable AI governance and Boeing’s ethical failures show why governance matters. 4. The cost of inaction is rising: With global regulation (EU AI Act, etc.) tightening, ethical inaction is now a material risk. 5. Ethics unlocks innovation: The myth that governance limits creativity is busted; ethical frameworks enable scale. Whether you're a policymaker, C-suite executive, data scientist, or investor—this paper is your blueprint for aligning purpose and profit in the age of intelligent machines. Read the full paper: https://lnkd.in/eKesXBc6 Co-authored by Marisa Zalabak, Balaji Dhamodharan, Bill Lesieur, Olga Magnusson, Shannon Kennedy, Sundar Krishnan and The Digital Economist."

  • View profile for Dr. Barry Scannell
    Dr. Barry Scannell is an Influencer

    AI Law & Policy | Partner in Leading Irish Law Firm William Fry | Member of Irish Government’s Artificial Intelligence Advisory Council | PhD in AI & Copyright | LinkedIn Top Voice in AI | Global Top 200 AI Leaders 2025

    58,789 followers

    AI occupies a unique position among dual-use technologies (DUT), reflecting its potential for both beneficial applications and military utilisation. AI's dual-use nature poses significant regulatory and ethical challenges, notably in its military dimensions, which remain largely outside the ambit of civilian legislation such as the proposed AI Act.

    Dual-use technologies are those with potential applications in both civilian and military domains. Their essence lies in their versatility: the same technology that propels advancements in healthcare, education, and industry can also be adapted for surveillance, autonomous weaponry, and cyber warfare. This inherent ambiguity of application makes the governance of DUT, especially AI, a complex task. The AI Act primarily addresses civilian uses of AI, focusing on ethical guidelines, data protection, and transparency. Military applications of AI, by contrast, remain largely outside the scope of this act and of similar legislative efforts globally.

    The dual-use character of AI brings software contracts into focus as a critical instrument for governing the use, deployment, and development of AI technologies. Contracts between developers, vendors, and users sometimes contain dual-use provisions that explicitly govern the technology's use in both civilian and military contexts. These provisions are designed to ensure that the deployment of AI technologies aligns with legal standards, ethical norms, and, where applicable, international regulations. Dual-use clauses may include restrictions on usage, export controls, compliance with international law, and requirements for end-use monitoring.

    Restrictions on Usage: Contracts may specify permissible uses of the software, explicitly prohibiting or restricting its application in military settings without proper authorisation. This helps mitigate the risks of unintended or unauthorised military use of AI technologies.

    Export Controls: Given the potential military applications of AI, software contracts often include clauses requiring compliance with national and international regulations governing the export of dual-use technologies. This ensures that AI technologies do not inadvertently contribute to proliferation or escalate geopolitical tensions.

    Compliance with International Law: Provisions may also require that the use of AI technologies, particularly in military contexts, complies with international humanitarian law and other relevant legal frameworks. This is crucial to ensuring that the deployment of AI in warfare adheres to the principles of distinction, proportionality, and necessity.

    Addressing the dual-use dilemma of AI clearly extends beyond contractual measures. It requires a holistic approach that combines legal frameworks, ethical considerations, and international cooperation.

  • View profile for Tannika Majumder

    Senior Software Engineer at Microsoft | Ex Postman | Ex OYO | IIIT Hyderabad

    48,802 followers

    It was 8:15 AM when a mom’s phone rang. It was her son, panic in his voice: “Mom, I forgot my assignment at home. It’s due in the first period. Please, can you bring it to school?” She could’ve snapped. → “Why weren’t you more careful?” → “I told you to double-check!” But she didn’t. Ten minutes later, she was at the school gate, assignment in hand. Her son rushed over, relieved, and said, “Thanks for not yelling at me, Mom.” And she just smiled. Because in her mind, she knew this: the moment you help someone through a mess without making them feel small is the moment they start trusting you. That evening, after the panic was over, they sat together and talked about building better habits: packing the bag the night before, making a checklist, owning up to mistakes. She knew the lesson would stick because she stood by him when he needed it. This is the same way senior engineers should handle juniors. You don’t build trust by exploding at the first sign of trouble. You build it by showing up, especially when it’s inconvenient. When a junior messes up, the urge to lecture is real. But support comes first; lessons come after. Because good engineers don’t stay just for the perks. They stay where they feel safe enough to make mistakes and learn. And that’s how you build teams that stick together, at home or at work.

  • View profile for Bhavik Kothari

    Principal Engineer, Amazon

    15,172 followers

    Some obvious and not so obvious challenges of being a Principal Engineer:

    Paradox of Belonging: You are part of all teams, yet you are part of none. The role can be surprisingly isolating - you're connected to everyone but deeply anchored nowhere. It is important to find the right circles of trust, peer mentorship and individuals you can share your challenges with.

    Freedom-Responsibility Paradox: You enjoy significant autonomy in choosing what to work on, yet there is an implicit expectation of, and accountability for, resounding impact. You constantly find yourself needing to validate whether you are truly solving the right problems. The solution is to create an impact framework to assess your work's potential value while maintaining consistent feedback loops to validate not just progress but also the choices. The freedom isn't about doing what you want; it's about taking ownership of finding the highest-leverage problems to solve.

    Bandwidth Challenges: It is easy to become a "social resource" - the person in every meeting, involved in every key decision and helping everyone who asks. This leads to burnout from context switching, disconnect from hands-on tech and diluted impact across many initiatives. The trick is to transform from a reactive social resource into a strategic force multiplier by establishing clear engagement frameworks, building scalable solutions and protecting your bandwidth for high-leverage activities.

    Being Truly Present: You find yourself physically present in one meeting while your mind is already racing ahead to the next three. This primarily stems from over-scheduled calendars with high-stakes decisions being made across multiple domains, leading to reduced effectiveness and lower-quality decision making. You therefore need to create space between commitments and develop systems enabling full presence in fewer, more impactful discussions. The goal isn't to be in every meeting; it's to be fully present in the right ones.

    Perfection Trap: As a responsible engineer, one always seeks thorough analysis and exhaustive trade-off evaluation to make high-quality decisions. However, as PEs working on broader, ambiguous problems, you realize that perfect decisions often compete with good-enough decisions that offer progress and unblock teams. Accept that a good decision now is better than a perfect decision later.

    Authority Paradox: Contrary to perception, PEs possess little to no authority by default. While often tasked with broad, cross-functional initiatives, PEs lack the traditional levers of control. Unlike people managers, PEs cannot simply delegate tasks, make direct assignments or issue orders based on hierarchical authority. Instead, our effectiveness hinges on our ability to inspire, persuade and align diverse teams towards a common goal, and we must earn respect through a combination of technical expertise, strategic vision and, most crucially, trust.

  • View profile for Katharina Koerner

    AI Governance & Security | Trace3: All Possibilities Live in Technology | Innovating with risk-managed AI: Strategies to Advance Business Goals through AI Governance, Privacy & Security

    44,502 followers

    The guide "AI Fairness in Practice" by The Alan Turing Institute from 2023 covers the concept of fairness in AI/ML contexts. The fairness paper is part of the AI Ethics and Governance in Practice Program (link: https://lnkd.in/gvYRma_R). The paper dives deep into various types of fairness:

    DATA FAIRNESS includes:
    - representativeness of data samples,
    - collaboration for fit-for-purpose and sufficient data quantity,
    - maintaining source integrity and measurement accuracy,
    - scrutinizing timeliness, and
    - relevance, appropriateness, and domain knowledge in data selection and utilization.

    APPLICATION FAIRNESS involves considering equity at various stages of AI project development, including examining real-world contexts, addressing equity issues in targeted groups, and recognizing how AI model outputs may shape decision outcomes.

    MODEL DESIGN AND DEVELOPMENT FAIRNESS involves ensuring fairness at all stages of the AI project workflow by:
    - scrutinizing potential biases in outcome variables and proxies during problem formulation,
    - conducting fairness-aware design in preprocessing and feature engineering,
    - paying attention to interpretability and performance across demographic groups in model selection and training,
    - addressing fairness concerns in model testing and validation, and
    - implementing procedural fairness for consistent application of rules and procedures.

    METRIC-BASED FAIRNESS utilizes mathematical mechanisms to ensure fair distribution of outcomes and error rates among demographic groups, including:
    - Demographic/Statistical Parity: Equal benefits among groups.
    - Equalized Odds: Equal error rates across groups.
    - True Positive Rate Parity: Equal accuracy between population subgroups.
    - Positive Predictive Value Parity: Equal precision rates across groups.
    - Individual Fairness: Similar treatment for similar individuals.
    - Counterfactual Fairness: Consistency in decisions.

    The paper further covers SYSTEM IMPLEMENTATION FAIRNESS, including Decision-Automation Bias (overreliance and overcompliance), Automation-Distrust Bias, contextual considerations for impacted individuals, and ECOSYSTEM FAIRNESS.

    Appendix A (p. 75) lists Algorithmic Fairness Techniques throughout the AI/ML Lifecycle, e.g.:
    - Preprocessing and Feature Engineering: Balancing dataset distributions across groups.
    - Model Selection and Training: Penalizing information shared between attributes and predictions.
    - Model Testing and Validation: Enforcing matching false positive/negative rates.
    - System Implementation: Allowing accuracy-fairness trade-offs.
    - Post-Implementation Monitoring: Preventing model reliance on sensitive attributes.

    The paper also includes templates for Bias Self-Assessment, Bias Risk Management, and a Fairness Position Statement.

    Link to authors/paper: https://lnkd.in/gczppH29 #AI #Bias #AIfairness
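    The metric-based criteria above reduce to simple rate comparisons across groups. As a minimal sketch (the helper names are ours, not from the Turing guide), demographic parity and equalized odds can be checked in a few lines of plain Python:

    ```python
    # Illustrative fairness-metric sketch; 1 = positive label/prediction.

    def rate(preds, mask):
        """Share of positive predictions among the selected rows."""
        sel = [p for p, m in zip(preds, mask) if m]
        return sum(sel) / len(sel)

    def demographic_parity_gap(y_pred, group):
        """Largest difference in positive-prediction rate between groups."""
        rates = [rate(y_pred, [g == v for g in group]) for v in set(group)]
        return max(rates) - min(rates)

    def equalized_odds_gap(y_true, y_pred, group):
        """Largest true-positive-rate or false-positive-rate gap between groups."""
        gaps = []
        for label in (1, 0):  # label 1 -> TPR gap, label 0 -> FPR gap
            rates = [
                rate(y_pred, [t == label and g == v
                              for t, g in zip(y_true, group)])
                for v in set(group)
            ]
            gaps.append(max(rates) - min(rates))
        return max(gaps)

    # Toy data: equal selection rates (parity holds) but unequal error rates.
    y_true = [1, 1, 0, 0, 1, 1, 0, 0]
    y_pred = [1, 0, 1, 0, 1, 1, 0, 0]
    group  = [0, 0, 0, 0, 1, 1, 1, 1]
    print(demographic_parity_gap(y_pred, group))      # 0.0
    print(equalized_odds_gap(y_true, y_pred, group))  # 0.5
    ```

    The toy data illustrates why the guide lists several metrics: both groups are selected at the same rate, yet error rates differ sharply, so a system can satisfy demographic parity while failing equalized odds.
    
    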

  • View profile for Sami Eltamawy

    Director and Head of Security, Privacy & IT at FreeTrade | InfoSec Instructor | Ex-Meta

    13,608 followers

    New Blog Post: Ever faced a dilemma at work between sticking to your principles and being liked? In the fast-evolving fields of security, privacy, and IT, this is an everyday challenge. Inspired by the book The Courage to Be Disliked, my latest blog post delves into why making ethical decisions matters more than being popular. Key Takeaways: 1) Saying "no" can be essential to protect sensitive data and maintain efficiency. 2) Build relationships based not on popularity but on professional respect and integrity. 3) Balance positive interactions with a steadfast commitment to ethical standards. Staying true to your professional integrity not only safeguards your organization's interests but also earns respect and trust over time. Whether you're in a leadership position or aspiring to influence through your role, nurturing the courage to stand by your values makes all the difference. 📖 Read the full article to explore how you can apply these insights! https://lnkd.in/eAs2jvi3 Remember: True respect is built on consistent ethical choices, not fleeting popularity. Stay principled and lead with courage! #EthicalLeadership #Security #Privacy #ProfessionalIntegrity #CourageToBeDisliked

  • View profile for Dave Kline
    Dave Kline is an Influencer

    Become the Leader You’d Follow | Founder @ MGMT | Coach | Advisor | Speaker | Trusted by 250K+ leaders.

    164,975 followers

    Your team isn't lazy. They're confused. You need a culture of accountability that's automatic. When accountability breaks down, it's not because people don't care. It's because your system is upside down. Most leaders think accountability means "holding people responsible." Wrong. Real accountability? Creating conditions where people hold themselves responsible. Here's your playbook: 📌 Build the Base: Start with a formal meeting to identify the real issues. Don't sugarcoat. Document everything. Set a clear date when things will change. 📌 Connect to Their Pain: Help your team understand the cost of weak accountability: • Stalled career growth • Broken trust between teammates • Mediocre results that hurt everyone 📌 Clarify the Mission: Create a mission statement so clear that everyone can recite it. If your team can't connect their role to it in one sentence, they can't make good decisions. 📌 Set Clear Rules: Establish 3-5 non-negotiable behaviors. Examples: • We deliver what we commit to • We surface problems early • We help teammates succeed 📌 Point to Exits: Give underperformers a no-fault, 2-week exit window. This isn't cruelty. It's clarity. 📌 Guard the Entrance: Build ownership expectations into every job description. Hire people who already act like owners. 📌 Make Accountability Visible: Create expectations contracts for each role. Define what excellence looks like. Get signed commitments. 📌 Make It Public: Use weekly scorecards with clear metric ownership. When everyone can see who owns what, accountability becomes peer-driven. 📌 Design Intervention: Create escalation triggers: Level 1: Self-correction. Level 2: Peer feedback. Level 3: Manager coaching. Level 4: Formal improvement plan. 📌 Reward the Right Behaviors: Reward people who identify problems early (not those who stage heroic rescues). 📌 Establish Rituals: Conduct regular reviews, retrospectives, and quarterly deep dives. 📌 Live It Yourself: Share your commitments publicly. Acknowledge your mistakes quickly.
Your team watches what you do, not what you say. Remember: The goal isn't to catch people failing. It's to create conditions where:  • Failure becomes obvious  • And improvement becomes inevitable. New managers struggle most with accountability:  • Some hide and let performance drop  • Some overcompensate and micromanage We can help you build the playbook for your team. Join our last MGMT Fundamentals program for 2025 next week. Enroll today: https://lnkd.in/ewTRApB5 In an hour a day over two weeks, you'll get:  • Skills to beat the 60% failure rate  • Systems to make management sustainable  • Live coaching from leaders with 30+ years experience If this playbook was helpful... Please ♻️ repost and follow 🔔 Dave Kline for more.

  • View profile for Jonathan Fisher, MD
    Jonathan Fisher, MD is an Influencer

    Physician Executive, Cardiologist, Author & Speaker | Strengthening the biological and emotional foundations of health, leadership, and performance

    31,576 followers

    Aviation and nuclear power reduced human error by redesigning systems, not by telling people to “cope better.” After fatal crashes in the 1970s and 1980s, aviation adopted Crew Resource Management (CRM), standardized checklists, and redesigned cockpits to reduce cognitive overload and miscommunication. Today, the commercial aviation fatality rate has dropped by over 95%. Following accidents like Three Mile Island and Chernobyl, the nuclear industry overhauled control rooms, alarm systems, and interface design to align with human cognitive limits. Global regulators now require rigorous human factors engineering. These industries improved safety not by asking individuals to be “more resilient,” but by reshaping environments to support human performance. Healthcare faces similar challenges: fragmented EHRs, constant workflow interruptions, and shift patterns that impair decision-making. Resilience and mindfulness have real value, but they don’t solve system design failures. Sustainable well-being and performance depend on environments built for human beings. That’s the shift high-reliability industries made. Healthcare must do the same. What do you think? #JustOneHeart #Healthcare #HealthcareLeadership #SystemsThinking #PatientSafety #HumanFactors

  • View profile for Paula Cipierre
    Paula Cipierre is an Influencer

    Global Head of Privacy | LL.M. IT Law | Certified Privacy (CIPP/E) and AI Governance Professional (AIGP)

    9,197 followers

    How and to what extent can ethical theories guide the design of AI systems? This is the question I'd like to tackle in this week's #sundAIreads. The reading I chose for this is "Ethics of AI: Toward a Design for Values Approach" by Stefan Buijsman, Michael Klenk, and Jeroen van den Hoven from the Delft University of Technology. It's a chapter in The Cambridge Handbook of the Law, Ethics and Policy of Artificial Intelligence, which is available open access here: https://lnkd.in/dmP7hBnJ. The authors argue that familiar ethical theories such as virtue ethics ("what character traits should I cultivate?"), deontology ("which moral principles should I follow?"), and consequentialism ("what actions maximize wellbeing?") are necessary but insufficient to guide the responsible development and deployment of #AI systems. Instead, the authors advocate for a #design approach to AI ethics, which entails identifying relevant values, embedding them in AI systems, and continuously evaluating whether and to what extent these efforts were successful. Of course, this is easier said than done. Why? Because: 1️⃣ Values come with trade-offs, e.g., #privacy versus #security or #usability. 2️⃣ Values can change, both in terms of what they mean and how important they are to people, e.g., #sustainability. 3️⃣ AI systems are socio-technical systems, i.e., AI ethics is "just as much about the people interacting with AI and the institutions and norms in which AI is employed." These challenges can be addressed by: ✅ Making trade-offs between values explicit and either trying to resolve them or at least documenting the reasoning behind why one value was chosen over the other. ✅ Designing for "adaptability, flexibility and robustness" to account for changing values over time. ✅ Considering the environment in which AI systems will be deployed, including not only the people who will use AI systems, but also those affected by their use.
I first encountered the values-by-design literature during my postgraduate studies with Helen Nissenbaum at the NYU Steinhardt Department of Media, Culture, and Communication and have been a huge fan ever since. For an even more hands-on approach to translating ethical values into technical design, I recommend checking out Dr. Niina Zuber, Severin Kacianka, Alexander Pretschner, and Julian Nida-Rümelin's Ethics in Agile Software Development (EDAP) project at the Bayerisches Forschungsinstitut für Digitale Transformation (bidt) (https://lnkd.in/dNiBUxBF) and Dr Lachlan Urquhart's Moral-IT Deck (https://lnkd.in/d9J2WQNi).

  • View profile for Sarveshwaran Rajagopal

    Applied AI Practitioner | Founder - Learn with Sarvesh | Speaker | Award-Winning Trainer & AI Content Creator | Trained 7,000+ Learners Globally

    55,019 followers

    🔍 Everyone’s discussing what AI agents are capable of—but few are addressing the potential pitfalls. IBM’s AI Ethics Board has just released a report that shifts the conversation. Instead of just highlighting what AI agents can achieve, it confronts the critical risks they pose. Unlike traditional AI models that generate content, AI agents act—they make decisions, take actions, and influence outcomes. This autonomy makes them powerful but also increases the risks they bring. ---------------------------- 📄 Key risks outlined in the report: 🚨 Opaque decision-making – AI agents often operate as black boxes, making it difficult to understand their reasoning. 👁️ Reduced human oversight – Their autonomy can limit real-time monitoring and intervention. 🎯 Misaligned goals – AI agents may confidently act in ways that deviate from human intentions or ethical values. ⚠️ Error propagation – Mistakes in one step can create a domino effect, leading to cascading failures. 🔍 Misinformation risks – Agents can generate and act upon incorrect or misleading data. 🔓 Security concerns – Vulnerabilities like prompt injection can be exploited for harmful purposes. ⚖️ Bias amplification – Without safeguards, AI can reinforce existing prejudices on a larger scale. 🧠 Lack of moral reasoning – Agents struggle with complex ethical decisions and context-based judgment. 🌍 Broader societal impact – Issues like job displacement, trust erosion, and misuse in sensitive fields must be addressed. ---------------------------- 🛠️ How do we mitigate these risks? ✔️ Keep humans in the loop – AI should support decision-making, not replace it. ✔️ Prioritize transparency – Systems should be built for observability, not just optimized for results. ✔️ Set clear guardrails – Constraints should go beyond prompt engineering to ensure responsible behavior. ✔️ Govern AI responsibly – Ethical considerations like fairness, accountability, and alignment with human intent must be embedded into the system. 
As AI agents continue evolving, one thing is clear: their challenges aren’t just technical—they're also ethical and regulatory. Responsible AI isn’t just about what AI can do but also about what it should be allowed to do. ---------------------------- Thoughts? Let’s discuss! 💡 Sarveshwaran Rajagopal
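    The "keep humans in the loop" and "set clear guardrails" mitigations can be sketched as a simple action gate: the agent's proposed action runs only if it is on an allowlist, and anything else is queued for human review. This is an illustrative sketch, not a design from the IBM report; the action names and allowlist are assumptions.

    ```python
    # Minimal human-in-the-loop guardrail sketch: allowlisted agent actions
    # execute automatically; everything else is escalated to a human reviewer.
    # Action names and the allowlist are illustrative assumptions.

    ALLOWED_ACTIONS = {"read_document", "summarize", "web_search"}

    def gate(action, review_queue):
        """Decide whether a proposed agent action may execute."""
        if action in ALLOWED_ACTIONS:
            return "execute"
        review_queue.append(action)  # hold for human sign-off
        return "escalate"

    queue = []
    print(gate("summarize", queue))     # runs automatically
    print(gate("send_payment", queue))  # held for human review
    print(queue)                        # ['send_payment']
    ```

    Real agent frameworks implement this with richer policies (per-tool permissions, spending limits, audit logs), but the core idea is the same: constraints live outside the prompt, so a confidently wrong model cannot talk its way past them.
    
    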
