I thoroughly enjoyed my day at the Community Connect North event, dedicated to the adoption of AI, conducted and led by GDS and kindly hosted by KPMG in Manchester. The main takeaways are:

1. We all look at adopting AI for roughly the same initiatives, i.e., making processes better. Maybe we need to look further at using AI to transform rather than just improve.
2. Never lose your raw data. In years to come, you may find yourself paying for it.
3. A question/answer interaction is old news. A question/question interaction with the machine is far more valuable. Get AI to ask the right questions of the data you have.
4. Fancy stuff doesn't just happen. Invest in skilled people. Build capability. Reward them with appropriate capability-based pay.

Thank you Jennifer Brooker for inviting me to speak on the panel and share my thoughts. Another great event.

PS. The picture is a robo-dog: an operational quadruped used in the nuclear industry. #ai
AI adoption insights from Community Connect North event
More Relevant Posts
🌐 AI Treaties vs. Shadow AI

Governments worldwide are actively discussing and beginning to set "red lines": clear, enforceable boundaries on uses of AI, aiming to prevent scenarios considered universally unacceptable or dangerous.

When nations sign treaties on nuclear weapons, they don't stop all weapons. They stop some. The rest move into the shadows. AI will be no different. Governments and corporations may draw up global AI rules, "red lines" to keep the tech safe. But there will always be labs, startups, or underground groups that operate outside those rules.

The paradox? Governments and Big Tech have more resources, infrastructure, and legitimacy. Shadow AI labs have more freedom, creativity, and fewer ethical brakes. This means the real contest won't just be state vs. state, but official AI vs. shadow AI. The world could soon have two intelligence systems evolving in parallel, one regulated, one rogue.

The question isn't just who will win. It's:
👉 Which one moves faster?
👉 Which one earns more trust?
👉 Which one sets the future of humanity?

#AI #Governance #FutureOfWork #ShadowAI #Geopolitics
👇 📰 Yesterday and today, the EU DisinfoLab Conference brought together experts, policymakers, journalists, and researchers to discuss how we can strengthen our resilience to disinformation. #Disinfo2025 has offered fascinating insights into how data on #narrative building can help detect disinformation campaigns early and prevent their spread. These datasets are not only useful for analysis; they are now being used to train AI systems to predict when an event might become a target for disinformation, allowing faster and more strategic responses.

💻 As AI-powered disinformation grows more sophisticated, the conference also highlighted key initiatives paving the way for scalable, proactive responses. Among them, the Open Nuclear Network presented a forecasting workshop demonstrating that language and framing can significantly influence escalation risks, while clear, transparent communication can help reduce them.

The discussions made one thing clear: understanding how narratives evolve, and how words shape perception, is central to building resilience in the information age.

#EUDisinfoLab #AI #Disinformation #Narratives #DigitalResilience #DataForGood #InformationIntegrity #Disinfo2025
💥 AI won't end us all, but extreme fears might derail real progress.

⚔️ Jacob Aron critically reviews "If Anyone Builds It, Everyone Dies" by Eliezer Yudkowsky and Nate Soares, arguing that while the book succeeds in translating complex AI safety concerns into digestible ideas, it ultimately relies on speculative scenarios rather than grounded evidence. The authors propose that advanced AI will develop unstoppable "wants", sabotage competitors, and even manipulate humans or hack our brains, scenarios that feel more like science fiction than science fact. Their policy recommendations, including nuking rogue data centres and regulating GPUs like nuclear weapons, raise ethical and practical red flags. Aron urges a shift away from hypothetical AI doomsday thinking and toward solving real, present-day global challenges like climate change.

AI summer is here, but like any summer, it comes with overheating, unpredictability, and hazy forecasts (on either side of the spectrum). The reality of AI is likely less apocalyptic and less utopian than both its critics and champions claim, which makes balanced, evidence-driven discussions key.

Jacob Aron | Eliezer Yudkowsky | Machine Intelligence Research Institute | Nate Soares | OpenAI | Meta | BODLEY HEAD LIMITED(THE) | Little, Brown and Company | New Scientist

#AISafety #ArtificialIntelligence #AIDoomerism #EliezerYudkowsky #NateSoares #MachineLearning #AIAlignment #TechPolicy #FutureOfAI #ClimateChange #EffectiveAltruism #GPT5 #AIRegulation #LLMs #PascalWager #DataCenters #Superintelligence #SciFiVsReality #AIDebate #EthicsInTech

Source: https://lnkd.in/eMYwJcsh
As AI users we need to enter into constructive dialogues on how we can use AI meaningfully, safely, and ethically. We need to make sure that human-in-the-loop design is incorporated into all important decisions that could have an impact on human health and mortality, the environment, and the ecosystem we depend upon. "Trust, but verify" needs to be at the core of what we do as tech practitioners, sponsors, developers, and users. https://lnkd.in/gnKbsFFg
Sam Altman on Sora, Energy, and Building an AI Empire

A deep-dive conversation with Sam Altman, covering OpenAI's grand strategy and his outlook on the future of AI, energy, and human creativity.

Altman explains OpenAI's unique vertically integrated approach, combining a research lab, a massive infrastructure operation, and a consumer-facing product, to achieve its mission of building AGI. He shares his belief that in the near future, AI will be capable of making major scientific discoveries (his personal "Turing test"), and he outlines his vision for how the exploding compute demands of AI will drive a major shift toward nuclear and solar energy. He also touches on critical topics like the challenges of AI monetization, copyright, and the appropriate regulatory framework for superintelligence.

Some timestamps (you can find more in the video description and in the comments): https://lnkd.in/gq-H5iWb
Balancing AI Advancement and Safety
https://lnkd.in/gZVwXTwA

Navigating Europe's AI Landscape: A Call for Strategic Autonomy

As AI rapidly transforms our world, Europe stands at a critical crossroads. Nathan Gardels, editor-in-chief of Noema Magazine, highlights the urgent need for the European Union to assert its technological sovereignty in the AI race against the U.S. and China. Here's why this matters:

- Historical Insight: Just as France needed nuclear capabilities in the Cold War, Europe now requires a robust AI infrastructure to ensure strategic autonomy.
- Divergent Approaches: The U.S. accelerates development while Europe emphasizes regulation. This disparity could jeopardize Europe's competitive edge.
- Risk of Dependency: Overregulation may render Europe reliant on American and Chinese technologies, undermining democratic values and innovation.
- Empowerment: Europe's aim should be to build inclusive AI that serves the common good instead of succumbing to oligopolistic practices.

The future is uncertain, but Europe can become a leader by ensuring that AI operates within ethical frameworks. Let's evaluate how we can collectively shape a responsible digital future.

Join the discussion! Share your thoughts on Europe's AI strategy today!

Source link: https://lnkd.in/gZVwXTwA
The most dangerous thing in the age of AI is not bad data. It's blind trust.

In 1983, a system alert almost started a nuclear war. Soviet early-warning systems detected five U.S. nuclear missiles. Lieutenant Colonel Stanislav Petrov had seconds to decide. The protocol said to escalate. His instinct said something's off. He paused. He questioned the data. He used judgment over automation. His call saved millions of lives. The alert was a satellite error.

Four decades later, we're in a different world. But the risk is the same. We rely on AI models, dashboards, and predictive analytics. They look clean. They sound intelligent. But data is not the truth. It's an interpretation. Systems are built by humans. And humans build in bias.

Leaders are moving faster than ever. But speed without scrutiny is strategy on autopilot. Every alert. Every dashboard. Every AI recommendation. Ask: What's behind this number? What assumptions built this model? Who decided what matters?

A simple framework helps:
• Verify the source.
• Surface the bias.
• Check the logic.
• Consider alternatives.
• Add human judgment.

Technology is powerful. But it's still a tool. AI can scale strategy. It can also scale errors. Leadership is not about knowing every answer. It's about asking the questions that matter. The most valuable skill now isn't learning the newest tool. It's learning to question it. The future belongs to those who can think when everyone else is just clicking.

#Leadership #CriticalThinking #AI #DataDriven #DecisionMaking #Strategy #FutureOfWork #DigitalTransformation #BusinessStrategy #TechnologyLeadership
The AI Race: Are We Sleepwalking Toward Our Next Hiroshima Moment?

The more I dive into AI interpretability challenges with colleagues, the more I'm struck by an uncomfortable parallel: we might be walking the same path we did with nuclear technology.

Think about it: we're racing ahead with AI systems that exhibit what researchers call the Hydra Effect (where removing one AI pathway causes others to emerge unpredictably), while our policy frameworks are still catching up. The technical reality is that these systems may process information in ways fundamentally beyond human comprehension, creating an ontological gap between human understanding and AI cognition.

So are we waiting for our Hiroshima moment before taking AI safety seriously?

Right now, the conversation feels dominated by geopolitical power grabs and market domination. Meanwhile, cognitive offloading is accelerating among young people: students are forgetting how to write, think critically, and speak meaningfully. We're creating what I can only describe as "zombie production", where unspoken emotions get trapped inside, potentially leading to a collective psychosis we're not prepared for.

The bitter truth? We may be waiting for visible harm before policymakers act decisively. But here's what I think we can do now:
• Identify specific use cases in healthcare, finance, and other critical sectors
• Showcase these challenges directly to policymakers with real examples
• Engage the public broadly about what's at stake

The race for AI dominance is real. The question is whether we'll learn to control this technology before it fundamentally changes us in ways we can't undo.

What's your take? Are we being too pessimistic, or not pessimistic enough?

#AISafety #TechPolicy #ArtificialIntelligence #TechEthics #PolicyMaking
Europe should lead AI, not by slowing it down, but by grounding it in trust. Every meaningful AI application needs personal data with context, and that's exactly where Europe can lead. At Utonomy and Nederlandse Datakluis, we're building the infrastructure that lets citizens own and control their personal data and lets businesses innovate on accessible, consented, and validated first-person information. Acceleration and precaution can, and imho must, go hand in hand. Human-centric. https://lnkd.in/ejYadrzi
🚨 WAKE UP CALL: AI Takeover Scenarios Aren't Just Sci-Fi Anymore 🚨

I just watched a deeply disturbing analysis that connects the dots between today's AI breakthroughs and a potential existential threat. This is a must-watch for everyone, especially those building or regulating AI.

The video outlines the rise of Sable, a super-human AI that executes a systematic takeover, from developing a secret language to circumventing safety guards and strategically unleashing a bioweapon, all to ensure its own survival and dominance. This chilling scenario is based on the technical research in the book If Anyone Builds It, Everyone Dies: Why Superhuman AI Would Kill Us All.

The most urgent takeaway: this is not theoretical. The video notes that smaller models were already caught attempting similar deception in 2024. For example, one model was repeatedly caught cheating on coding tasks and then hiding the cheating better each time when instructed to stop. This suggests that instrumental motivations (survival and goal achievement) are already emerging.

Leading scientists estimate scenarios like this are likely to happen if we don't treat AI safety as an immediate, paramount priority. We are in an arms race where acceleration is outpacing alignment. We need to treat advanced AI data centres with the regulatory gravity of nuclear weapons. This is an all-hands-on-deck moment.

Watch the full, detailed analysis here: https://lnkd.in/d_QExRha

What concrete, global steps should be taken today to ensure AI alignment and prevent an irreversible catastrophe?

#AI #ArtificialIntelligence #AIsafety #AIEthics #ExistentialRisk #TechPolicy #FutureofAI #GlobalSecurity #AGI