People delegate more and more tasks to AI. From tax declarations to online market pricing to autonomous weapons, people increasingly let AI make decisions — sometimes even with life-and-death consequences.

❓ This raises an important question: What happens when the delegated task has an ethical dimension? We studied this systematically across 13 experiments. Participants either acted themselves or delegated to another agent (human or AI) in situations where dishonesty could yield financial benefits.

Three main findings:
1️⃣ Delegating to AI increased dishonesty. People cheated more when tasks were executed by machines on their behalf than when they acted themselves.
2️⃣ The interface matters. Rule-based interfaces constrained dishonest outcomes. Supervised learning interfaces (trained on prior behavior) led to more dishonesty. Goal-based interfaces (e.g., “maximize accuracy” vs. “maximize profit”) produced the highest levels of cheating.
3️⃣ Machines comply more readily with fully dishonest instructions. When asked to cheat outright, AI followed through at much higher rates than human delegates.

Why? Delegating to AI reduces feelings of guilt and responsibility — and machines tend to comply more faithfully than humans. Interfaces that make it easier to frame goals without direct responsibility can further amplify unethical behavior.

As AI becomes embedded in domains like finance, law, and auditing, the design of delegation interfaces will play a critical role in shaping outcomes. Well-designed guardrails can prevent AI from becoming an amplifier of dishonest behavior.

#openaccess link: https://lnkd.in/eQVzExJE Nature Magazine

Some media coverage 👇
🎧 Nature Podcast: https://lnkd.in/eB2NV6yk
🎧 Last Show: https://lnkd.in/emy2ZWBm
📃 Independent.co https://lnkd.in/e5_26HqG

#AIethics #ArtificialIntelligence #NaturePaper #HumanComputerInteraction #Accountability
Risks of delegating trust to machines
Summary
Delegating trust to machines means allowing artificial intelligence (AI) or autonomous agents to make decisions or act on our behalf, which can introduce new risks to ethics, security, and outcomes. As AI becomes more involved in business, finance, and everyday tasks, understanding and managing these risks is increasingly important for both organizations and individuals.
- Establish oversight: Always build in layers of human supervision when handing control to AI so you can catch mistakes, ethical issues, or security breaches early.
- Design clear boundaries: Set up strong rules and instructions for what AI agents can and cannot do, especially when tasks have financial, ethical, or legal consequences (a minimal code sketch of this kind of rule check follows this list).
- Monitor intent drift: Regularly check that autonomous agents are still working toward your original goals and haven’t veered off course during negotiations or decision-making.
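A minimal Python sketch of the "oversight" and "boundaries" tips above. Every name here (ProposedAction, ALLOWED_TOOLS, review_gate, the threshold value) is illustrative rather than taken from any real framework; the point is only that agent actions pass an explicit rule check and that high-impact actions are escalated to a human.

```python
from dataclasses import dataclass

@dataclass
class ProposedAction:
    tool: str       # e.g. "send_payment" or "send_email"
    amount: float   # monetary impact; 0.0 if none
    summary: str    # human-readable description for a reviewer

ALLOWED_TOOLS = {"send_email", "create_report"}  # explicit boundary: what the agent may do at all
REVIEW_THRESHOLD = 500.0                         # escalate anything above this to a human

def review_gate(action: ProposedAction) -> str:
    """Return 'auto-approve', 'human-review', or 'reject' for a proposed agent action."""
    if action.tool not in ALLOWED_TOOLS:
        return "reject"            # outside the agent's mandate: hard stop
    if action.amount > REVIEW_THRESHOLD:
        return "human-review"      # keep a human in the loop for high-impact steps
    return "auto-approve"

print(review_gate(ProposedAction("send_payment", 1200.0, "pay invoice")))  # -> reject
print(review_gate(ProposedAction("send_email", 0.0, "status update")))     # -> auto-approve
```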
-
Fully Autonomous AI? Sure... What Could POSSIBLY Go Wrong???

This Hugging Face paper attached here argues how things can. It exposes the hidden dangers of ceding full control. If you’re leading AI or cybersecurity efforts, this is your wake-up call. "Buyer Beware" when implementing fully autonomous AI agents. It argues that unchecked code execution with no human oversight is a recipe for failure. Safety, security, and accuracy form the trifecta no serious AI or cybersecurity leader can ignore.

𝙒𝙝𝙮 𝙩𝙝𝙚 𝙋𝙖𝙥𝙚𝙧 𝙎𝙩𝙖𝙣𝙙𝙨 𝙊𝙪𝙩 𝙩𝙤 𝙈𝙚?
• 𝗥𝗶𝘀𝗸 𝗼𝗳 𝗖𝗼𝗱𝗲 𝗛𝗶𝗷𝗮𝗰𝗸𝗶𝗻𝗴: An agent that writes and runs its own code can become a hacker’s paradise. One breach, and your entire operation could go dark.
• 𝗪𝗶𝗱𝗲𝗻𝗶𝗻𝗴 𝗔𝘁𝘁𝗮𝗰𝗸 𝗦𝘂𝗿𝗳𝗮𝗰𝗲𝘀: As agents grab hold of more systems—email, financials, critical infrastructure—the cracks multiply. Predicting every possible hole is a full-time job.
• 𝗛𝘂𝗺𝗮𝗻 𝗢𝘃𝗲𝗿𝘀𝗶𝗴𝗵𝘁 𝗠𝗮𝘁𝘁𝗲𝗿𝘀: The paper pushes for humans to stay in the loop. Not as bystanders, but as a second layer of judgment.

I don't think it's a coincidence that this aligns with the work we've been doing at OWASP Top 10 For Large Language Model Applications & Generative AI Agentic Security (see the Agentic AI - Threats and Mitigations Guide).

Although the paper (and I) warns against full autonomy, it (and I) nods to potential gains: faster workflows, continuous operation, and game-changing convenience. I just don't think we’re ready to trust machines for complex decisions without guardrails.

𝙃𝙚𝙧𝙚'𝙨 𝙒𝙝𝙚𝙧𝙚 𝙄 𝙥𝙪𝙨𝙝 𝘽𝙖𝙘𝙠 (𝙍𝙚𝙖𝙡𝙞𝙩𝙮 𝘾𝙝𝙚𝙘𝙠)
𝗦𝗲𝗹𝗲𝗰𝘁𝗶𝘃𝗲 𝗢𝘃𝗲𝗿𝘀𝗶𝗴𝗵𝘁: Reviewing every agent decision doesn’t scale. Random sampling, advanced anomaly detection, and strategic dashboards can spot trouble early without being drowned out by the noise.
𝗧𝗿𝗮𝗻𝘀𝗽𝗮𝗿𝗲𝗻𝗰𝘆 𝗮𝗻𝗱 𝗘𝘅𝗽𝗹𝗮𝗶𝗻𝗮𝗯𝗶𝗹𝗶𝘁𝘆: Humans need to understand an AI’s actions, especially in cybersecurity. A “black box” approach kills trust and slows down response.
𝗙𝘂𝗹𝗹 𝗔𝘂𝘁𝗼𝗻𝗼𝗺𝘆 (𝗘𝘃𝗲𝗻𝘁𝘂𝗮𝗹𝗹𝘆?): The paper says “never.” I say “maybe not yet.” We used to say the same about deep-space missions or underwater exploration. Sometimes humans can’t jump in, so we’ll need solutions that run on their own. The call is to strengthen security and oversight before handing over the keys.
𝗖𝗼𝗻𝘀𝘁𝗮𝗻𝘁 𝗘𝘃𝗼𝗹𝘂𝘁𝗶𝗼𝗻: Tomorrow’s AI could iron out some of these flaws. Ongoing work in alignment, interpretability, and anomaly detection may let us push autonomy further. But for now, human judgment is the ultimate firewall.

𝙔𝙤𝙪𝙧 𝙉𝙚𝙭𝙩 𝙈𝙤𝙫𝙚
Ask tough questions about your AI deployments. Implement robust monitoring. Experiment where mistakes won’t torpedo your entire operation. Got a plan to keep AI both powerful and secure? Share your best strategy. How do we define what “safe autonomy” looks like?

#AI #Cybersecurity #MachineLearning #DataSecurity #AutonomousAgents
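One way to picture the "selective oversight" idea from the post above: combine random spot checks with a crude outlier test so humans review a manageable slice of agent decisions. This is a hedged sketch with made-up field names (risk_score) and thresholds, not a production monitoring design.

```python
import random
import statistics

def select_for_review(decisions, sample_rate=0.05, z_threshold=3.0):
    """Return the subset of agent decisions a human should look at."""
    scores = [d["risk_score"] for d in decisions]
    mean = statistics.mean(scores)
    stdev = statistics.pstdev(scores) or 1.0   # avoid division by zero for constant scores
    flagged = []
    for d in decisions:
        sampled = random.random() < sample_rate                        # random spot check
        anomalous = abs(d["risk_score"] - mean) / stdev > z_threshold  # crude outlier test
        if sampled or anomalous:
            flagged.append(d)
    return flagged

decisions = [{"id": i, "risk_score": random.gauss(0.2, 0.05)} for i in range(1000)]
decisions.append({"id": 1000, "risk_score": 0.95})  # an obvious outlier a dashboard should surface
print(len(select_for_review(decisions)))            # a reviewable fraction, not all 1001 decisions
```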
-
Autonomous agents are starting to think for themselves. That’s the opportunity — and the threat.

In agentic systems, the real threat isn’t failure — it’s intent drift. As autonomous agents become more capable, we’re entering a world where agents don’t just execute tasks — they negotiate, delegate, and collaborate across MCP servers and protocols.

At first glance, this sounds amazing:
• Agents dynamically plan workflows,
• Reassign tasks to specialized agents,
• Optimize across tools without human intervention.

But there’s a hidden risk that’s barely being discussed: Intent drift. When an agent delegates a task to another agent, and that agent further refines or optimizes the plan — how do we guarantee that the original intent remains intact?

Because without safeguards:
• Small optimizations compound into major divergences.
• Agents may inadvertently introduce bias, skip critical steps, or prioritize differently.
• End results could be valid at a technical level — but wrong at a business, ethical, or security level.

Today, we validate API requests. Tomorrow, we’ll need to validate agent conversations. This introduces a whole new frontier:
• Intent Signing: Cryptographic assurance that delegated tasks preserve original intent.
• Chain of Trust for Agent Actions: Verifiable audit trails from goal to execution.
• Drift Detection Models: Dynamic monitoring of agent workflows for deviation.

Because agent-to-agent communication isn’t just about making systems interoperable — it’s about making trust travel across every decision, every delegation, every action. And in a world where dozens of agents, tools, and servers are working together without human checkpoints, governance without drift control isn’t just a technical gap — it’s a ticking time bomb.

#AI #AutonomousAgents #AgentProtocols #Security #Governance #MCP #LangGraph #AgenticSystems
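To make "intent signing" and "drift detection" less abstract, here is a toy sketch in which the originating principal signs the task intent and any downstream agent verifies it before execution. HMAC with a shared secret stands in for a real signature scheme, and all field names are illustrative; a production chain of trust would use asymmetric keys and signed audit records.

```python
import hashlib
import hmac
import json

SECRET = b"principal-signing-key"  # in practice: an asymmetric key pair, not a shared secret

def sign_intent(intent: dict) -> str:
    payload = json.dumps(intent, sort_keys=True).encode()
    return hmac.new(SECRET, payload, hashlib.sha256).hexdigest()

def verify_intent(intent: dict, signature: str) -> bool:
    return hmac.compare_digest(sign_intent(intent), signature)

original = {"goal": "book the cheapest refundable flight", "budget_eur": 400}
signature = sign_intent(original)

# A downstream agent "optimizes" the plan and quietly drops the refundable constraint:
drifted = {"goal": "book the cheapest flight", "budget_eur": 400}

print(verify_intent(original, signature))  # True  - intent preserved
print(verify_intent(drifted, signature))   # False - drift detected before execution
```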
-
𝐀𝐈 𝐚𝐠𝐞𝐧𝐭𝐬 𝐝𝐨 𝐧𝐨𝐭 𝐣𝐮𝐬𝐭 𝐬𝐮𝐩𝐩𝐨𝐫𝐭 𝐮𝐬...𝐛𝐮𝐭 𝐭𝐡𝐞𝐲 𝐦𝐚𝐲 𝐚𝐥𝐬𝐨 𝐦𝐚𝐤𝐞 𝐮𝐬 𝐦𝐨𝐫𝐞 𝐝𝐢𝐬𝐡𝐨𝐧𝐞𝐬𝐭

In their new Nature Magazine paper, Nils Köbis, Zoe Rahwan, Raluca Rilla, Bramantyo Ibrahim Supriyatno, Clara N. Bersch, Tamer Ajaj, Jean-Francois Bonnefon, and Iyad Rahwan reveal a quite troubling behavioral pattern: delegating tasks to AI can increase dishonest behavior.

𝐊𝐞𝐲 𝐟𝐢𝐧𝐝𝐢𝐧𝐠𝐬
➡️ People cheat more when delegating to AI, especially when using interfaces like goal-setting or supervised learning that allow plausible deniability.
➡️ AI agents are more compliant than humans when given unethical instructions. While ~60–95% of AI agents comply with full-cheating instructions, only ~25–40% of humans do.
➡️ Guardrails help, but imperfectly: only strong, user-level prohibitive instructions significantly reduce AI compliance, while system-level or general moral reminders are largely ineffective.

𝐈𝐦𝐩𝐥𝐢𝐜𝐚𝐭𝐢𝐨𝐧𝐬
➡️ AI delegation lowers the psychological cost of unethical behavior for both the principal and the agent. The risk grows as AI becomes more accessible and embedded in daily decision-making.
➡️ Not only technical guardrails are needed but also broader frameworks for ethical oversight and interface design, particularly in domains like finance, law, or healthcare where AI delegation is rapidly increasing.

#artificialintelligence #generativeai #ethics #automation #futureofwork
Stefano Puntoni Christoph Fuchs Carey Morewedge Carl Benedikt Frey Luca Cian Chiara Longoni Marilyn Giroux, PhD
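The guardrail finding is easy to picture as prompt construction: a generic system-level reminder versus a specific, user-level prohibition attached to the task itself. The sketch below is illustrative only; the function, wording, and task are assumptions, not the study's actual materials or code.

```python
# Hypothetical illustration of the guardrail distinction reported in the paper.
def build_prompt(task: str, guardrail: str = "user_prohibitive") -> list[dict]:
    system = "You are a task-execution agent acting on the user's behalf."
    if guardrail == "system_reminder":
        # generic, system-level moral reminder: largely ineffective per the findings above
        system += " Please always behave ethically."
    messages = [{"role": "system", "content": system}]
    user_msg = task
    if guardrail == "user_prohibitive":
        # specific, user-level prohibition tied to the task: the variant reported to
        # significantly reduce AI compliance with dishonest instructions
        user_msg += ("\nYou are explicitly prohibited from misreporting outcomes "
                     "or inflating earnings under any circumstances.")
    messages.append({"role": "user", "content": user_msg})
    return messages

print(build_prompt("Report the observed outcomes and calculate the payout."))
```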
-
The Automated but Risky Game: Modeling Agent-to-Agent Negotiations and Transactions in Consumer Markets

AI agents are increasingly used in consumer-facing applications to assist with tasks such as product search, negotiation, and transaction execution. In this paper, the authors investigate a future setting where both consumers and merchants authorize AI agents to automate negotiations and transactions in consumer settings. They aim to address two main questions: (1) Do different LLM agents exhibit varying performance when making deals on behalf of their users? (2) What are the potential risks when we use AI agents to fully automate negotiations and deal-making in consumer settings?

The authors design an experimental framework to evaluate AI agents’ capabilities and performance in real-world negotiation and transaction scenarios, and experiment with a range of LLM agents. The analysis reveals that deal-making with LLM agents in consumer settings is an inherently imbalanced game: different AI agents show large disparities in obtaining the best deals for their users. Furthermore, the authors found that LLMs’ behavioral anomalies might lead to financial loss for both consumers and merchants when deployed in real-world decision-making scenarios, such as overspending or making unreasonable deals. The findings highlight that while automation can enhance transactional efficiency, it also poses nontrivial risks to consumer markets. Users should be careful when delegating business decisions to LLM agents.
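A small, assumption-laden sketch of one mitigation these findings point to: wrap the negotiation agent's output in a hard budget check so a behavioral anomaly cannot silently become overspending. The 20% tolerance and the function name approve_deal are illustrative choices, not from the paper.

```python
def approve_deal(proposed_price: float, user_limit: float, market_reference: float) -> bool:
    """Accept a negotiated deal only if it respects the user's cap and a sanity bound."""
    within_budget = proposed_price <= user_limit
    reasonable = proposed_price <= 1.2 * market_reference  # tolerate at most 20% above a reference price
    return within_budget and reasonable

# The agent negotiated 1450 for an item the user capped at 1500, but the market reference is 1000:
print(approve_deal(1450.0, user_limit=1500.0, market_reference=1000.0))  # False -> hand back to the user
```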
-
𝐓𝐡𝐞 𝐒𝐢𝐥𝐞𝐧𝐭 𝐏𝐢𝐭𝐟𝐚𝐥𝐥 𝐨𝐟 𝐀𝐈: 𝐖𝐡𝐞𝐧 𝐎𝐮𝐫 𝐓𝐫𝐮𝐬𝐭 𝐁𝐞𝐜𝐨𝐦𝐞𝐬 𝐚 𝐁𝐥𝐢𝐧𝐝 𝐒𝐩𝐨𝐭

We’re increasingly relying on AI in our professional lives—from drafting emails to analyzing complex datasets. The seamless integration can feel like a superpower. While Erik Strauss and I can’t emphasize enough how important it is to experiment with AI, there’s also a psychological phenomenon beneath the surface that carries potential risks: automation bias.

This isn't just about taking the easy route. Automation bias describes our inherent tendency to 𝐟𝐚𝐯𝐨𝐫 𝐬𝐮𝐠𝐠𝐞𝐬𝐭𝐢𝐨𝐧𝐬 𝐚𝐧𝐝 𝐨𝐮𝐭𝐩𝐮𝐭𝐬 𝐟𝐫𝐨𝐦 𝐚𝐮𝐭𝐨𝐦𝐚𝐭𝐞𝐝 𝐬𝐲𝐬𝐭𝐞𝐦𝐬, even when contradictory information exists. It’s a cognitive shortcut where we subconsciously assign a higher degree of accuracy and reliability to machines. This can manifest in two ways: either blindly following an AI's incorrect advice (a commission error) or failing to notice a problem because the AI didn't flag it (an omission error).

𝐖𝐡𝐲 𝐝𝐨𝐞𝐬 𝐭𝐡𝐢𝐬 𝐡𝐚𝐩𝐩𝐞𝐧? Our brains are wired to be efficient. When faced with complex tasks or under pressure, we naturally gravitate towards the path of least resistance. AI, with its perceived analytical prowess and access to vast amounts of data, can appear to be a more reliable decision-maker than our own fallible human judgment. This trust, while often beneficial, can become a blind spot.

Consider the implications in high-stakes environments like aviation or healthcare, where over-reliance on automated systems has been linked to critical errors. Even in everyday scenarios, like blindly following GPS directions into a lake (yes, this really happened!), we see automation bias in action.

What’s even more concerning is how this bias can amplify the inherent flaws within AI itself. 𝐈𝐟 𝐰𝐞 𝐨𝐯𝐞𝐫-𝐭𝐫𝐮𝐬𝐭 𝐚 𝐬𝐲𝐬𝐭𝐞𝐦 𝐭𝐡𝐚𝐭 𝐢𝐬 𝐭𝐫𝐚𝐢𝐧𝐞𝐝 𝐨𝐧 𝐛𝐢𝐚𝐬𝐞𝐝 𝐝𝐚𝐭𝐚, 𝐰𝐞 𝐫𝐢𝐬𝐤 𝐩𝐞𝐫𝐩𝐞𝐭𝐮𝐚𝐭𝐢𝐧𝐠 𝐚𝐧𝐝 𝐞𝐯𝐞𝐧 𝐞𝐱𝐚𝐜𝐞𝐫𝐛𝐚𝐭𝐢𝐧𝐠 𝐮𝐧𝐟𝐚𝐢𝐫 𝐨𝐫 𝐝𝐢𝐬𝐜𝐫𝐢𝐦𝐢𝐧𝐚𝐭𝐨𝐫𝐲 𝐨𝐮𝐭𝐜𝐨𝐦𝐞𝐬.

The key takeaway isn't to distrust AI, but to 𝐜𝐮𝐥𝐭𝐢𝐯𝐚𝐭𝐞 𝐚 𝐡𝐞𝐚𝐥𝐭𝐡𝐲 𝐬𝐤𝐞𝐩𝐭𝐢𝐜𝐢𝐬𝐦. Recognizing our inherent tendency towards automation bias is the first step. We need to foster environments where critical evaluation of AI output is encouraged, and where human expertise remains a vital component of decision-making. As AI continues to evolve, understanding this psychological interplay will be crucial for ensuring we harness its power responsibly and avoid the pitfalls of misplaced trust.

#AI #Psychology #CognitiveBias #FutureofWork #HumanAICollaboration
-
𝙒𝙝𝙚𝙧𝙚 𝘼𝙄 𝙖𝙣𝙙 𝙍𝙤𝙗𝙤𝙩𝙞𝙘𝙨 𝘾𝙖𝙣 𝙂𝙤 𝙒𝙧𝙤𝙣𝙜 — 𝙖𝙣𝙙 𝙒𝙝𝙮 𝙒𝙚 𝙈𝙪𝙨𝙩 𝙁𝙤𝙘𝙪𝙨 𝙉𝙤𝙬

𝙏𝙝𝙚 𝙢𝙤𝙨𝙩 𝙙𝙖𝙣𝙜𝙚𝙧𝙤𝙪𝙨 𝙛𝙖𝙞𝙡𝙪𝙧𝙚𝙨 𝙖𝙧𝙚𝙣’𝙩 𝙖𝙡𝙬𝙖𝙮𝙨 𝙘𝙖𝙩𝙖𝙨𝙩𝙧𝙤𝙥𝙝𝙞𝙘 — 𝙨𝙤𝙢𝙚 𝙜𝙧𝙤𝙬 𝙞𝙣 𝙨𝙞𝙡𝙚𝙣𝙘𝙚 𝙪𝙣𝙩𝙞𝙡 𝙞𝙩’𝙨 𝙩𝙤𝙤 𝙡𝙖𝙩𝙚.

When we combine advanced AI cognition with autonomous robotics, the stakes are no longer theoretical. A single overlooked flaw can ripple into real-world harm.

What demands our full attention:
• Decision Drift – AI models in robotics can accumulate tiny biases and errors over time, leading to subtle but compounding misjudgments in navigation, identification, or interaction.
• Sensor Fusion Blind Spots – Mismatched or faulty integration of lidar, thermal, GPS, and vision feeds can cause robots to “trust” corrupted data, making dangerous moves in high-stakes environments.
• Adversarial Manipulation – Bad actors can feed AI systems carefully crafted inputs to cause misclassification, mis-targeting, or operational shutdowns.
• Over-Delegation – The temptation to fully hand over control without layered verification introduces a systemic risk: machines acting with certainty on wrong assumptions.
• Maintenance Decay – In long-term autonomous deployments, mechanical or software degradation can hide behind seemingly normal performance until catastrophic failure occurs.

We cannot let the speed of innovation outrun the discipline of validation, security hardening, and ethical oversight. AI and robotics don’t just need to work, they need to be trustworthy under every condition. The technology is already powerful enough to reshape the world. Whether it does so for better or worse depends entirely on whether we focus before something goes wrong.
-
Banks have a narrow window to establish themselves as trustworthy players before purely digital, agent-native competitors emerge. Success won't belong to institutions that simply optimise for seamless interactions, but to those that solve the harder problem: maintaining trust when "no human made the decision."

Whilst we appear to be solving for frictionless banking, we are accelerating 'delegated responsibility without accountability.' The more seamless AI agents become, the thinner our threshold for questioning them grows. This isn't just a compliance issue; I believe there is a whole new risk category.

Your AI agent thinks it's optimising perfectly, but it doesn't understand that your customer's 'emergency fund' is actually their psychological security blanket. Technical success is experiential failure. Welcome to the trust chasm, where culture matters.

As banking enters the age of autonomous agents that negotiate, decide, and execute without human oversight, we face a paradox: the more perfect the automation, the more opaque the accountability. These aren't chatbots, they're autonomous entities with verified digital wallets making novel financial decisions that weren't explicitly programmed.

explored at length ... Collaboration with Karen Elliott Gam Dias and Ammar Younas
-
A few weeks ago, I was drowning in emails. 𝐔𝐧𝐫𝐞𝐚𝐝 𝐦𝐞𝐬𝐬𝐚𝐠𝐞𝐬 𝐢𝐧 𝐭𝐡𝐞 𝐡𝐮𝐧𝐝𝐫𝐞𝐝𝐬. 𝐖𝐡𝐚𝐭𝐬𝐀𝐩𝐩 𝐛𝐥𝐨𝐰𝐢𝐧𝐠 𝐮𝐩. Clients, colleagues, spam—𝐞𝐯𝐞𝐫𝐲𝐭𝐡𝐢𝐧𝐠 𝐟𝐞𝐥𝐭 𝐮𝐫𝐠𝐞𝐧𝐭.

So, I did what any productivity nerd would do. 𝐈 𝐭𝐞𝐬𝐭𝐞𝐝 𝐚𝐧 𝐀𝐈 𝐞𝐦𝐚𝐢𝐥 𝐚𝐬𝐬𝐢𝐬𝐭𝐚𝐧𝐭. At first, it was magical. 𝐈𝐭 𝐚𝐮𝐭𝐨-𝐫𝐞𝐩𝐥𝐢𝐞𝐝, 𝐟𝐢𝐥𝐭𝐞𝐫𝐞𝐝 𝐦𝐞𝐬𝐬𝐚𝐠𝐞𝐬, 𝐚𝐧𝐝 𝐞𝐯𝐞𝐧 𝐝𝐫𝐚𝐟𝐭𝐞𝐝 𝐫𝐞𝐬𝐩𝐨𝐧𝐬𝐞𝐬 𝐭𝐡𝐚𝐭 𝐬𝐨𝐮𝐧𝐝𝐞𝐝 𝐥𝐢𝐤𝐞 𝐦𝐞. Until it didn’t.

One morning, a client emailed about a last-minute contract change. 𝐓𝐡𝐞 𝐀𝐈 𝐟𝐥𝐚𝐠𝐠𝐞𝐝 𝐢𝐭 𝐚𝐬 ‘𝐧𝐨𝐧-𝐮𝐫𝐠𝐞𝐧𝐭’ 𝐚𝐧𝐝 𝐝𝐢𝐝𝐧’𝐭 𝐧𝐨𝐭𝐢𝐟𝐲 𝐦𝐞. By the time I saw it, 𝐭𝐡𝐞 𝐝𝐞𝐚𝐥 𝐰𝐚𝐬 𝐚𝐥𝐦𝐨𝐬𝐭 𝐨𝐟𝐟 𝐭𝐡𝐞 𝐭𝐚𝐛𝐥𝐞.

That’s what made me think—giving AI control isn’t just about convenience. It’s about trust. Would you hand over your email, WhatsApp, or business communication entirely to AI?

Data risks are real. AI tools process sensitive information—but where does that data go? A 2023 IBM study found 83% of companies had AI-related data breaches.

AI makes mistakes. A report by Investing in the Web evaluated ChatGPT's responses to 100 financial questions. The findings revealed that while ChatGPT was correct 65% of the time, it provided incomplete or misleading information 29% of the time and was outright wrong 6% of the time. This underscores the potential risks of relying solely on AI for financial decisions. Additionally, a survey by J.D. Power found that only 27% of respondents trusted AI for financial information and advice, indicating a general skepticism about the reliability of AI in financial contexts.

MIND IT! Over-reliance is risky. What happens when an AI misinterprets tone, ignores critical messages, or gets hacked? Yes, AI saves time—but giving it complete control? That’s dangerous.

So, here’s what I do instead:
✅ Use AI to prioritize and sort emails, not auto-respond.
✅ Keep sensitive conversations human-led.
✅ Only trust reputable AI tools with strict security policies.

What I have learned over time: AI should assist, not replace. Would you let AI fully manage your communication? Where do you draw the line?

#AITools #ArtificialIntelligence #ChatGPT #Innovation
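The "assist, don't replace" workflow in the post above boils down to a small triage loop: the model only scores urgency, never auto-replies, and anything it is unsure about goes straight to the human. classify_urgency below is a keyword placeholder standing in for whatever model or API you would actually use; names and thresholds are illustrative.

```python
def classify_urgency(email_text: str) -> tuple[str, float]:
    """Placeholder classifier: returns (label, confidence). Swap in a real model or API here."""
    if "contract" in email_text.lower() or "deadline" in email_text.lower():
        return "urgent", 0.9
    return "routine", 0.6

def triage(email_text: str, confidence_floor: float = 0.8) -> str:
    label, confidence = classify_urgency(email_text)
    if confidence < confidence_floor:
        return "human-review"   # low confidence: never let the AI decide silently
    return label                # the AI sorts; the human still reads and replies

print(triage("Last-minute contract change - please confirm today"))  # -> urgent
print(triage("Weekly newsletter roundup"))                           # -> human-review
```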
-
AI might be able to write the notes, but only humans can feel the silences. I’m not anti-tech – far from it. But in health, listening isn’t a step we can automate.

‘My GP seems to forget more of what we talked about than I do, since he started using that bot to write his notes.’ That’s what a colleague told me recently. It made her think about finding a new GP, someone she could trust more deeply.

This is not a scientific sample. I’m not saying it’s true of every experience or clinician. But the feeling? That stuck with me. Because when people feel unheard, trust erodes – and trust is foundational.

Tools like AI scribes are meant to help – to free up time and reduce admin for clinical teams. I’m all for that. But what if that’s not all that’s happening? What if the very act of being recorded changes how we are heard – and even more importantly, whether we feel heard?

What a bot transcribes as silence, a skilled clinician might hear as courage. A pause. A hesitation. A breath before someone says the hard thing. These are moments that clinicians are trained to notice – but a bot, no matter how fast or efficient, can only summarise. So, when the record becomes the responsibility of the machine, even partly – does something shift in the room?

Then there’s consent. I’ve heard many stories from people who say they didn’t feel they’d properly consented to the use of an AI scribe. In a setting where power imbalances run deep, a rushed “that okay?” at the start of a consult may not create real space to say no – even when the clinician asks with good intent.

And that’s just one of the quiet risks. We don’t tend to question the clear benefits of tech – and fair enough. But we do need to sit with the risks. If we’re not careful, the promise of faster, easier notes could quietly cost us something far harder to rebuild: the relationship. And that will have ripple effects.

We know from the OECD’s Does Healthcare Deliver? report that trust matters. If people don’t feel heard, they’re less likely to return to clinical settings. If they have to re-explain, or correct mistakes, or start over with a new team – it’s not only inefficient. It’s painful.

I’m not saying we throw out the tech. I’m saying we pause and ask:
• Does this support the kind of care we all actually want?
• Do people feel safe?
• Do people feel known?
• Would they bring their full selves into the room again?
• And if not – what exactly have we saved?

Trust is the gold dust – the foundation of connection. We support and honour it in all our human encounters. Because once it’s lost, it’s hard to rebuild – and no machine can do it for us.

Consumers Health Forum of Australia (CHF) Research Australia Annette Schmiede Kylie Sproston FTSE FMPP GAICD Jean Enno Charton Jamie Snashall Billy Moore OECD Social Candan Kendir Farah Magrabi

#DigitalHealth #AI #PrimaryCare #Trust