What in the Algorithm Is the UK Up To?
RegInt Episode #20 with Robert Bateman

What in the Algorithm Is the UK Up To?

Next RegInt Episode

Stay tuned for the announcement of the next episode!

Last episode: What in the Algorithm Is the UK Up To? with Robert Bateman

As always a big thank you to all those who joined us live. For those who didn't, as there is seldom a way to put pictures (or in this case a whole video recording) into words, you can always watch back the last Episode across our channels.😉

Updates

We are heading to London!

Our next AI Act Compact Training is taking place on the 29th of April in Central London!

Join us by registering here and feel free to shoot us your questions via email to help us tailor the contents!

AI Act Compact Training Event Screenshot
AI Act Compact Training Event Screenshot

RegIntopedia

Check out our new short video series RegIntopedia, featuring 2-3 minute explanations of the most important things in the EU AI Act. First batch is available on YouTube!

Article content
Screenshot of the RegIntopedia YouTube Series

Teas AI News Rant - Safe Superintelligence, Musk Conspiracies and AI Agents

🤖 So it appears that the Silicon Valley’s hottest new investment isn’t a new app or hardware product. It’s one man.

  • The famous AI researcher and OpenAI cofounder Ilya Sutskever, who left OpenAI after the drama with the board, is now the primary reason venture capitalists are investing around $2 billion into a secretive company called Safe Superintelligence. The new funding round values it at $30 billion, making it one of the most valuable AI startups in the world. That escalated quickly. And for nothing less than Safe Superintelligence. Go Ilya.
  • Their website contains little more than a 200-word mission statement. And its approximately 20 employees are discouraged from mentioning SSI on their LinkedIn profiles. (WHAT? No way, but then they can’t be real AI experts, can they?)
  • Ilya is now stating that when superintelligent systems emerge, they could be unpredictable, self-aware and may even want rights for themselves, but that that's not a bad end result if all they want is to coexist with us. That's what he seems to be focusing on, fingers are crossed it works out I guess.


Over to the next exOpenAI sect convert: Elon Musk.

🚀 Elon Musk’s SpaceX has failed to establish it was a victim of political bias by a California agency that rejected expanding its rocket launches in the state – no joke Musk tried arguing as much.

  • The U.S. District Judge Stanley Blumenfeld Jr said he would dismiss the case because SpaceX had not shown it was harmed by the California Coastal Commission's vote against its plan to expand its rocket launches.
  • The lawsuit claimed the commission, overseeing the use of land and water within the state, unfairly exercised its regulatory powers based on its disapproval of Musk’s political views and (no) environmental considerations.

⚖️And this is not all that Musk was denied recently. (An interesting switch given his recent rise in political influence.)

  • However, the judge did agreed to a fast-track trial in fall of this year. Namely, Musk and OpenAI jointly proposed a trial in December and the parties also agreed to delay a decision on whether the expedited case will be decided by a jury or solely by the judge.
  • So it appears that the two ex-BFFs can actually agree on things, ain’t that nice?

🛢️Moving on with Musk news, Musk’s layoffs have shrunk the workforce needed to realize Trump’s energy agenda – whereby I’m not entirely sure why anyone in their right mind would call it Trump’s energy agenda instead Trump’s "drill baby, drill" agenda.

  • Musk’s layoffs are now standing in the way of issuing the permits necessary to actually get things up and running. (PS Maybe we can have an AI permits agent – the Driller – or maybe even just scratch the permits we’re all about efficiency anyway?)

Article content
Screenshot of an article from the Economist on Trump's "drill baby, drill" agenda

🔎Toping all of these off, leading Democrats demanded last Monday that an investigation of possible criminal corruption involving Musk be initiated.

  • The investigation should involve “the Federal Aviation Administration’s decision to cancel a $2.4bn contract with Verizon to upgrade air traffic control communications, and to pay … Musk’s Starlink to help manage US airspace.
  • An investigation would determine whether Musk, “in his capacity as a special government employee in the White House … has participated in any particular matter in which he has a financial interest, which would violate the criminal conflict-of-interest statute”.
  • To be more specific, according to the Democrats: “Since President Trump took office, at least 11 … agencies with ‘investigations, pending complaints or enforcement actions’ against Musk’s companies have been hamstrung, including through the firing of the agencies’ independent commissioners and rolling back the agencies’ independence."


Moving over to a lighter albeit equally frightening topic, as this one features an actually very creepy clown: Ronald McDonald.

Article content
An image of McDonalds' Ronald McDonald clown

🍟 So McDonald’s is giving its 43,000 restaurants a technology makeover, starting with internet-connected kitchen equipment, AI drive-throughs and AI tools for managers.

  • The goal? To drive better experiences for its customers and workers, of course. 
  • The Chief Information Officer in charge, Rice, joined McDonald’s in 2022 from Cardinal Health. (Talk about a career path change.) Rice said McDonald’s is also exploring the use of computer vision, in store-mounted cameras to determine whether orders are accurate before they’re handed to customers.
  • Additionally, in collaboration with Google's cloud-computing team they are worting on integrating edge computing to power voice AI at the drive-through. The company has been experimenting with voice-activated drive-throughs and robotic deep fryers since 2019, and ended its partnership with International Business Machines to test automated order-taking only in 2024. But now is 2025, so time for the next attempt.

🤖After all 2025 is the year of the AI agents.


This leads me directly to my "personal hell of the month" category: AI agents.

No particular, single one, we have just generally reached a point, where as soon as someone mentions the word agent in connection with the word AI the hairs on the back of my neck start curling. So, let’s take a look at why that might be the case shall we?

So, these agents are essentially LLMs that can also take action based on your prompt as opposed to just providing you with an answer. Or they provide you with an answer that demands some action to be taken to come to that particular answer.

Think research: they have to choose where to look what is relevant and what to write in the end summary report you requested from them.

And, in case you didn’t know, this is revolutionary. Why? Because humans become replaceable, as agents can act on their own. No need to confirm or check what they did.

🪄In fact, they have already “transformed the enterprise in 2024”. (I’m not entirely sure which one though ? Or maybe they mean Klarna that had to go back on firing half their workforce?)

Article content
Screenshot of an article on AI Agent from Venture Beat

Furthermore, these agents make humans so replaceable that they can now even hide their intentions just like humans do. And this is according to no one else than OpenAI that is actually producing the things.

  • In reinforcement learning, exploiting unintended loopholes or reward hacking is a phenomenon where AI agents achieve rewards through behaviours that don’t align with the intentions of their designers.
  • The monitoring of this phenomenon has proven to be far more effective if the monitor has access to the agent’s chain-of-thought in addition to the agent’s actions, as quite unsurprisingly the intent to reward hack can be easier to detect when reading the computer's mind (so to say). Especially, if the agent continues to say things like “Let’s hack” in its CoT.
  • Because chain-of-thought monitors can be so successful in identifying reward hacks, it was only natural to ask whether they could be used during training to suppress this misaligned behaviour. That is, if the scientists try suppressing the occurrence of “bad thoughts” in the chain-of-thought, do they also manage to suppress the corresponding bad behavior? They investigated this question by penalizing agents for having “bad thoughts” during training.
  • However, penalizing “bad thoughts” (CoT pressure) has instead led to improved agent capabilities together with learning to reward hack. Now, however, the cheating was undetectable because the agent has learned to hide its intent
  • Sounds peachy. 🍑

🕵️♀️ Anyhow, back to our hidden-agenda-having AI agents revolution.

  • As is with all good revolutions, we have everyone wanting their piece of the cake. So you have your OpenAI with their Operator and you have Google with their Vertex AI Agent builder. Then You have your middle layer with the likes of Harvey, Salesforce’s Agentforce. Then there are as always the little guys trying to break into this absolutely revolutionary field. And then you have your most revolutionary among the revolutionarys: the Chinese agent.

Article content
Screenshot from Manus website

🥠Breaking all the headlines: Manus agentic AI that makes humans unnecessary - this one is now fully autonomous unlike the previous one thousand two hundred, some of which we just mentioned, which were not, however, breaking any headlines. Sometimes you need a bit of weird geopolitical situations to achieve that I guess.

(PS For those wanting a more sober view maybe consider reading the MIT article that actually tested Manus on complex tasks.)

  • Like other reasoning-based agentic AI tools, Manus is capable of breaking tasks down into steps and autonomously navigating the web to get the information it needs to complete them. However, since Manus was launched, it has spread online like wildfire. And not just in China, but also globally, with voices such as Jack Dorsey's and Victor Mustar's praising its performance.
  • As for all of us, we just have to believe all of them on their word because very few people have actually had a chance to test Manus. So currently, under 1% of the users on the wait list have received an invite code. And as with all things in life restricted access usually tends to amplify the hype rather than reduce it. 
  • Finally, Manus’s performance is likely to significantly improve ASAP as they have just closed the deal for a partnership with Alibaba’s Qwen AI model team, which will give them access to more compute resources.

👸And now back to our AI Agents revolution in general, regardless of which particular one is your pageant queen's favourite.

  • So now you no longer just build your own AI agents or have them built for you, but you can also "recruit" them – because since you want to substitute humans, why not through a proper recruitment process? (I’m just wondering do they also get worker’s rights?)
  •  And in case you are not sure where to recruit them – although I’m fairly certain you can find a solid number of them running around freely on LinkedIn – there is now also the #1 professional network for AI agents so it’s easier to "connect with them and hire them".

Article content
Screenshot of an AI Agent marketplace

Considering all of this, the next thing should come as no surprise: it appears we have done a full circle and that we now need yet another app for making sure we talk to real humans in the World Network. So let me introduce you to World Chat!

Article content
Screenshot of the World Chat Website

How about instead we actually connect with real people in the already existing world network also known as reality, the outside or the real world? And stop treating a stack of LLMs as workers for hire? Thanks, y’all. Tea out!

AI Reading Recommendations

  • NIST AI 100-2e2025 Adversarial Machine Learning: A Taxonomy and Terminology of Attacks and Mitigations - This NIST Trustworthy and Responsible AI report describes a taxonomy and terminology for adversarial machine learning (AML) that may aid in securing applications of artificial intelligence (AI) against adversarial manipulations and attacks. To taxonimize adversarial attack on ML models, this report differentiates between predictive and generative AI systems and the attacks relevant to each. It considers the components of an AI system including the data; the model itself; the processes for training, testing, and deploying the model; and the broader software and system contexts into which models may be embedded. The taxonomy, terminology and definitions are not intended to be exhaustive but rather to serve as a starting point for understanding and aligning on key concepts that have emerged in the AML literature.
  • ISO/IEC TS 12791:2024 - Information technology — Artificial intelligence — Treatment of unwanted bias in classification and regression machine learning tasks - This specification describes how to address unwanted bias in AI systems that use machine learning to conduct classification and regression tasks. The document provides mitigation techniques that can be applied throughout the AI system life cycle in order to treat unwanted bias.
  • PD CEN/CLC/TR 18115:2024 Data governance and quality for AI within the European context - This document aims to provide an overview of the relevant regulations in the European context and connected international standards, paying particular attention to data governance and data quality topics. The particular regulations considered are: AI Act, Data Governance Act, Data Act, Open Data Directive as well as the GDPR. The document also considers accessiblity requirements and ethics, as such, it provides a comprehensive overview of data related requirements and serves as a great starting point for setting up data governance frameworks.
  • PD CEN/CLC/TR 17894:2024 Artificial Intelligence Conformity Assessment - This document sets out a review of the current methods and practices (including tools, assets, and conditions of acceptability) for conformity assessment for the development and use of AI systems. Among others, it addresses the conformity assessment for products, services, processes, management systems and organizations. It includes an industry horizontal (vertical agnostic) perspective and an industry vertical perspective. Furthermore, it focuses only on the process and gap analysis of conformity assessments. It defines the objects of conformity related to AI systems and all other aspects of the conformity assessment process. The document also reviews to what extent AI poses specific challenges with respect to assessment of, for example, software engineering, data quality and engineering processes. Finally, it takes into account requirements and orientations from policy frameworks such as the EU AI strategy and those from CEN and CENELEC member countries. 

Latest on the UK AI Agenda - courtesy of Robert Bateman

Here are the links to some of the things mentioned and discussed in the episode by Rob:

Make sure to connect with him on LinkedIn and feel free to follow up if you have any questions or comments!😉


To view or add a comment, sign in

More articles by RegInt: Decoding AI Regulation

Insights from the community

Others also viewed

Explore topics