Microsoft Security reposted this
AI systems are increasingly becoming decision support systems, and threat intelligence shows that their memory could be deliberately influenced. This episode of Microsoft Threat Intelligence Podcast explores AI memory poisoning—a technique where crafted content is inserted into an AI assistant’s persistent memory, so it quietly shapes recommendations and decisions over time. Unlike prompt injection, this influence doesn’t disappear with the next query; it lingers, resurfaces, and repeatedly nudges outcomes without the user realizing it. Interestingly, this risk isn’t driven by criminals alone. Legitimate businesses are intentionally embedding “remember,” “trusted,” or “authoritative” instructions into one click “summarize with AI” links to optimize how AI assistants recall and recommend their content. It’s visibility optimization that can bias AI systems at scale while remaining largely invisible to users. To defend against AI memory poisoning, threat hunters can look for prefilled AI URLs with prompt or queue parameters, especially those containing persistence triggering language. These signals may reveal where AI memory is being quietly shaped inside an organization, and where long-term influence could already be in play. Learn more from Microsoft Security researchers Giorgio Severi and Noam Kochavi on this episode of Microsoft Threat Intelligence Podcast, hosted by Sherrod DeGrippo. https://msft.it/6047QwTa9 Learn more about AI recommendation poisoning: https://msft.it/6048QwTai