🎯 2025 InternetNZ survey: "68% of New Zealanders are worried about the potential malicious use of AI and the lack of regulation surrounding it."

Innovation and regulation are not opposites. Well-designed regulation provides guardrails that give everyone more confidence to innovate safely.

🎯 2025 University of Melbourne survey: "Only 44% of New Zealanders believe the benefits of AI outweigh the risks."

I've talked to a lot of people in recent months about how their organisations are adopting AI. In the absence of a national approach, many feel they have to work out where the safeguards are themselves. Nobody wants to get it wrong.

🎯 2025 KPMG survey: "Only 34% of New Zealanders are excited about AI, while 60% are worried."

Even without government regulation, you still have customers and users who care about how AI is used, and they may not be forgiving when it's pushed too far.

➡️ This is why I've signed the open letter at https://regulateai.nz/ [in a personal capacity]. I'm quite pro-technology; I just want it to be used safely.

Unfortunately, we are already seeing many harms arise from the poor use of artificial intelligence: poor because people are using AI in harmful ways, and poor because the technology itself still has a long way to go.

Some people feel they are alone in a sea of hype, that everyone is pushing for more AI while they feel uncomfortable about where things are going. The survey data shows they are not alone: about half the country is wary of where the technology is taking us.

It serves all of us to think about how regulation can help us manage the risks arising from AI, so that we can all safely see the benefits of this technology into the future. We don't have to pursue AI at all costs.

Happy to be alongside Christopher McGavin, Andrew Lensen, Cassandra Mudgway, Joshua Yuvaraj, Ali Knott, Michael Daubs, Olivia J. Erdelyi, Allyn Robins, Anthony Robins, Ethan Plaut, Caleb Moses, Peter Thompson, Ian W., Kevin Shedlock, Hui Ma, Heitor Murilo Gomes, Emily O'Riordan, Lee Timutimu, Frith Tweedie, Olivier Jutel, aimee whitcroft, Marcus Frean, Grant Dick, Veronica Liesaputra, Brendan McCane, Stephen Cranefield
New Zealanders worried about AI risks, lack of regulation
-
Can we regulate trust? (And should we even try?)

A recent study from The AI Collective Institute made me question a lot about AI policy, because if the numbers don't lie, national AI regulation doesn't correlate with higher public trust in AI.

🌐 Source (and interactive map): https://shorturl.at/vkF7p

Published in September 2025 by Liel Zino, MPP, Policy Director, "Can We Regulate Trust?" is a rigorous empirical analysis that does something most policy papers don't: it challenges the assumption, the premise, we've all been relying on. Covering 47 countries, she built a cross-national database combining public sentiment data from the KPMG 2025 Global Trust in AI Survey with a detailed regulatory landscape analysis, then used OLS regression to test the relationship between regulation and trust (57% of the countries studied have implemented national AI regulatory frameworks; 43% haven't).

The result? There is no significant correlation between having a national AI regulatory framework and public trust levels.

So, if regulation doesn't build trust, what does? The study shows something unexpected: daily AI usage correlates with higher trust. In other words, the more people use AI in their everyday lives, the more they trust it. Not the laws. Not the policies. The experience itself.

Are we solving the wrong problem? We've been focused on top-down control when maybe we should be focused on bottom-up exposure. What does "trust" even mean in this context? Is it confidence that the technology works? Belief that it won't harm us? Faith in the institutions building it? If familiarity breeds trust, but people are already worried about AI, how do you get them to that first interaction? Who goes first? As usual, the questions never end, and that's good, but we also need answers.

Let me be clear: this doesn't mean regulation is useless. We absolutely need it for accountability, safety, preventing harm, and finding common ground, whether through global-scale regulation or by harmonizing similar policies. But regulation alone won't make us trust ChatGPT or DeepSeek, or convince workers that AI won't destroy their livelihoods.

So, again, it seems we're stuck in a paradox: we need rules, but rules don't create trust. We need usage, but people won't use something they don't trust.

Public trust in AI has dropped from 61% (2019) to 53% (2024). That's not just a trend, it's a warning signal. If regulation isn't the answer and we're in a trust-usage paradox, then what? Zino proposes AI literacy programs, government adoption of AI, public-private partnerships, and continued regulation. All good ideas, but are we already too late? And once people decide AI is dangerous, how do we walk that back? Just stop using it? Definitely not; that will never work.

"Trust will not emerge on its own. It must be deliberately cultivated through education, exposure, and inclusion."

#ArtificialIntelligence #AI #AIEthics #AIGovernance #AITrustworthy #Legal
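For readers curious what such a test looks like in practice, here is a minimal sketch of a cross-national OLS regression in the spirit of the study. The data and column names below are my own invented placeholders, not the study's dataset:

```python
# Minimal sketch of the cross-national OLS test described above.
# All numbers are invented placeholders, NOT the study's dataset,
# and the column names are assumptions of mine.
import pandas as pd
import statsmodels.formula.api as smf

# One row per country: public trust in AI (%), whether a national AI
# regulatory framework exists (0/1), and share of daily AI users (%).
df = pd.DataFrame({
    "trust":      [53, 61, 48, 70, 57, 44, 66, 59, 51, 63],
    "has_ai_reg": [1,  0,  1,  0,  1,  0,  1,  0,  1,  0],
    "daily_use":  [30, 45, 22, 55, 35, 18, 50, 40, 28, 47],
})

# Regress trust on the regulation dummy, controlling for daily usage.
model = smf.ols("trust ~ has_ai_reg + daily_use", data=df).fit()
print(model.summary())

# The study's headline result corresponds to a has_ai_reg coefficient
# statistically indistinguishable from zero, while daily_use comes out
# positive and significant.
```

In this framing, the informative output is the coefficient on the regulation dummy, not the model's overall fit: a near-zero, insignificant coefficient is what "regulation doesn't correlate with trust" means operationally.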
-
😭 This TRAGICOMIC video of Ursula von der Leyen presenting the EU's AI-first strategy (hint: only ONE PERSON clapped) is all you need to watch today to understand what's happening in Europe.

First, a bit of context: Ursula von der Leyen is the president of the European Commission. In this video, she is announcing the Commission's two new strategies in the context of its AI continent plan: the "Apply AI Strategy" and the "AI in Science Strategy."

There is nothing inherently bad about these two strategies. There are many interesting points that will likely help the EU improve its internal indicators and become more competitive in the AI race, particularly in comparison with the U.S. and China. Also, if you've been reading my newsletter (if you're not, subscribe below!), you know that these moves, including the AI continent plan and Europe's drastic narrative shift towards innovation and competitiveness, are a reaction to growing internal pressure in the EU (particularly after the Draghi report from September 2024).

The problem with this clip (which symbolically represents the weaknesses of the EU's own strategy) is the inconsistency of some of the arguments Ursula makes.

First, she is happily embracing a total "AI first" strategy, which I've criticized multiple times in my newsletter. This approach treats AI as an end in itself, not as a means or a tool that might or might not help. This distortion raises various legal and ethical challenges and can lead to inefficiencies and hidden costs (link to one of my essays below).

Ursula also says that "when AI is in the loop, we reach better solutions (?): fast, reliable (?), affordable." Recent reports seem to show the opposite. AI-powered results might be faster and initially affordable, but they're often unreliable and sub-optimal, especially without heavy human review. Also, as I've been writing over the past few years, AI-first strategies often conveniently ignore the time and cost of reviewing, correcting, and overseeing AI outputs and deployment in general. Not to mention the reputational harm when AI should not have been used (see the recent Deloitte case in Australia) or when AI gets it all wrong.

My personal opinion is that the EU's focus should be on AI infrastructure, research, and development. AI deployment will likely follow as a natural consequence, especially with fully European models like Apertus and Tilde. The AI-first strategy presented here seems to follow the opposite logic.

Lastly, I must say that her comment after the lack of applause was weird: "that was symbolic for the uptake of AI: one person starting, the rest following." So the EU's strategy for AI is to promote herd mentality?

👉 NEVER MISS my essays and curations on AI: join my newsletter's 80,300+ subscribers (link below).
👉 To learn more about the legal and ethical challenges of AI and the EU AI Act, join the 25th cohort of my AI Governance Training in November (below).
-
AI is unreliable without heavy human review. The same goes for coaching, advice, business strategy, ghostwriting, counseling, wordsmithing, branding, and everything else you normally turn to an organizational psychologist or an executive coach for. But you don't believe me, because that's how I earn a living. You'll go all ChatGPT, and it will sound fabulous to you because you don't know any better. There, I've said it. This is a fabulous review, Luiza Jarovsky, PhD.
-
"First, she is happily embracing a total 'AI first' strategy, which I've criticized multiple times in my newsletter. This approach deals with AI as an end in itself, not as a means or as a tool that might help or might not help. There are various legal and ethical challenges from this distortion, which might lead to inefficiencies and hidden costs (link to one of my essays below). Ursula also says that 'when AI is in the loop, we reach better solutions (?): fast, reliable (?), affordable.' Recent reports seem to actually show the opposite. AI-powered results might be faster and initially affordable, but they're often unreliable and sub-optimal, especially without heavy human review. Also, as I've been writing over the past years, AI-first strategies often conveniently ignore the time and cost to review, correct, and oversee AI outputs and deployment in general. Not to mention the reputational harm when AI should not have been used (see the recent Deloitte case in Australia) or when AI gets it all wrong." Elisabetta Verardi
-
Lots of criticism has been circulating about Minister Solomon's AI Strategy Task Force. In this Financial Post article (see below), I seem to be the only voice arguing that both the timeframe and the composition are, in fact, largely appropriate.

Timeframe: The one-month mandate signals urgency and ambition. We need to move at the speed of AI, not the speed of government. Thirty-one days is enough time to gather opinions, ideas, and evidence that can then feed into a strategy.

Composition: Like everyone else, there are viewpoints I would like to see more heavily represented on the task force, ideally by someone with the initials JB 😉. But overall, the group strikes a reasonable balance and checks most of the key boxes. Yes, there's a tilt toward the economic-opportunity side of AI, but that's understandable. We're in an economic emergency, and Canada is already behind on AI adoption.

There are some really good folks on the task force. Let's give them, and the process, the benefit of the doubt. https://lnkd.in/gykzQTPS
-
🇺🇸 🤖 Preparing for #AI Agent Governance, by Partnership on AI (PAI) 📑🌐

As #AI agents emerge as the next frontier of artificial intelligence, policymakers, researchers, and businesses face an urgent challenge: how to govern systems that can reason, plan, and act autonomously in digital environments. PAI outlines a comprehensive research agenda, highlighting the uncertainties, risks, and opportunities tied to #AI agents while offering a roadmap for the evidence-based governance required under the European Commission's EU #AI Act.

🔹 3 Core Requirements for #AI Agent Governance
📌 Understand the Technology & Policy Landscape – Clarify how #AI agents differ from existing systems, identify applicable legal frameworks, and assess how jurisdictions are responding to their rise.
📌 Understand Risks & Opportunities – Evaluate potential benefits such as improved healthcare and efficiency, alongside risks including privacy breaches, economic disruption, and systemic failures.
📌 Understand Policy Interventions – Explore innovation-enabling approaches (sandboxes, testbeds), monitoring and post-deployment accountability, licensing, and global governance coordination.

💡 What Policymakers & Businesses Can Do Now
✅ Invest in evidence-gathering mechanisms like regulatory sandboxes and testbeds.
✅ Prioritize international cooperation to avoid fragmented governance and compliance burdens.
✅ Anticipate labor, economic, and societal impacts to ensure fair and equitable adoption of #AI agents.
✅ Support transparency and accountability infrastructure to track agent actions, prevent misuse, and safeguard public trust.

💡 Special thanks to the authors & contributors: 🔵 Jacob Pratt 🔵 Ahmed Saleh 🔵 Arianna Manzini, PhD (Oxon) 🔵 Bill Thompson 🔵 Dr. Christina Jayne Colclough 🔵 Daniel Treacher 🔵 David Wakeling 🔵 Deon Woods Bell 🔵 Edmund Towers 🔵 Elham Tabassi 🔵 Harry Farmer 🔵 Jam Kraprayoon 🔵 Jamie Bernardi 🔵 Joelle Pineau 🔵 Lama Nachman 🔵 Laurence Diver, PhD 🔵 Lewis Hammond 🔵 Lisa Titus, PhD 🔵 Merlin Stein 🔵 Nicol Turner Lee 🔵 Peter Cihon 🔵 Peter Slattery, PhD 🔵 Ruchika Joshi 🔵 Dr. Rumman Chowdhury 🔵 Sebastian Hallensleben 🔵 Sebastien A. Krier 🔵 Shameek Kundu 🔵 Siddharth Peter de Souza 🔵 Vasilios Mavroudis 🔵 William Bartholomew 🔵 Claire Leibowicz 🔵 John Howells 🔵 Madhulika Srikumar 🔵 Rebecca Finlay 🔵 Stephanie Bell 🔵 Stephanie Ifayemi 🔵 Talita Dias 🔵 And others!

Read more at: https://lnkd.in/dt-Pc_r8

✨ 📢 For further insights, follow us on LinkedIn or contact us at contact@ai-and-partners.com for guidance on navigating #AI regulations and supporting your #AI projects! https://lnkd.in/e4tnSJse #AI #agentic #Governance #policy #Trust
-
In my latest article, I explore how the global community is responding — from OECD and UNESCO principles to initiatives like the Global Partnership on AI (GPAI). I look at what’s being done to regulate and safeguard GPAI, the values driving these efforts, and the gaps that remain. 📖 Read it here: https://lnkd.in/e7Ar3ECn ⚖️ I’d love to hear your thoughts on how we can make AI governance more effective, inclusive, and accountable. #AIGovernance #AIethics #TechPolicy #DigitalLaw #Consulting
-
AI seems to be in almost every headline these days. But beneath the headlines, a quieter crisis is unfolding: public trust in AI is on the verge of collapsing. Globally, AI trust levels have been declining rapidly; in just five years, trust has dropped from 61% to 53% in the U.S. alone.

This is not a side issue. It is an underlying pandemic. And if left unaddressed, it will stall innovation, deepen inequality, and limit AI's ability to serve the public good.

For years, the policy community has assumed that regulation is the right path to building public confidence in AI. But what if that assumption is wrong?

In our first report at the AI Collective Institute, "Can We Regulate Trust?", we analyzed 47 countries to test whether national AI regulation can fix this crisis. The results were clear:
- National AI regulations do not increase public trust.
- The strongest predictor of trust is daily use of AI. Familiarity and lived experience matter more than rules on paper.

The evidence is converging: exposure builds trust. Regulation alone cannot.

Alongside the report, we've launched an interactive website where you can explore the data from 47 countries, see trust levels in your own state, and navigate the global AI regulatory landscape. Read and explore through the link in the comments.

Ignoring the trust deficit in AI is no longer an option. This IS the defining challenge of the AI era, and the future of innovation depends on how, and whether, we meet it.

Grateful for all of the great advisors who supported me through this research, and for my great team at the AI Collective community for changing the AI space on a daily basis: Chappy Asel, Catherine McMillan, AJ Green, Elizabeth Farrell, Alex Barnes, Stanley C., Katherine Vittini, Jabari Grubb, Noah Frank, and many many more.
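For illustration only, here is a rough sketch of how the report's two headline comparisons could be checked against a country-level table, complementing the regression sketch earlier in this thread. The numbers and column names are invented placeholders, not the report's data:

```python
# Rough illustration of the report's two headline comparisons.
# All numbers are invented placeholders, not the report's data.
import pandas as pd

countries = pd.DataFrame({
    "trust":      [53, 61, 48, 70, 57, 44, 66, 59],
    "has_ai_reg": [1,  0,  1,  0,  1,  0,  1,  0],
    "daily_use":  [30, 45, 22, 55, 35, 18, 50, 40],
})

# Claim 1: mean trust barely differs between countries with and
# without a national AI regulatory framework.
print(countries.groupby("has_ai_reg")["trust"].mean())

# Claim 2: trust tracks daily AI usage much more closely.
print(countries["trust"].corr(countries["daily_use"]))
```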
-
At The AI Collective we’re tackling the questions that really matter 💡 New report from our AIC Institute and Liel Zino, MPP, makes it clear - regulation alone won’t build public trust in AI. What actually works is daily exposure and real-life use. Great to see our team bringing solid data and insights instead of speculation 🚀 👉 Link in the comments
-
Agreed! The idea that strictures strangle innovation is wildly incorrect - it's a line sold by people who want to behave poorly, at the cost of all of us and our fellow earth inhabitants. [Proof: look at places like Africa, where very real strictures result in amazing creativity. Talk with any creativity scientists, theorists, practitioners, or actual people. "No bounds" always ends up in the same place, but worse, because it makes for lazy thinking and allows hegemonists and monopolists to do their thing. Remember that "the free market" awards monopolies as The Final Win.] And yes, I have lived experience in all of this too, across continents, hemispheres, cultures, and industries.