A JavaScript injection attack on Cursor, delivered through a malicious extension, can take over both the IDE and the developer’s workstation. Although the PoC we’re releasing may be novel, we’ve seen this class of attack many times in the past year alone. Our goal is to dig deep into these attacks, understand why they keep working, and suggest defensive approaches. When it comes to cyber defense and AppSec (aside from Knostic, wink wink), the industry doesn’t yet have capabilities in this realm.

We demonstrate how an attacker can:
• Gain full file-system access
• Modify or replace installed extensions
• Persist code that reattaches after restart

Impact:
• Interpreter-level execution can directly call the file system and native APIs
• An attacker can inject JavaScript into the running IDE, fully controlling the UI

From a security program management perspective, AI coding assistants also widen the range of supply chain threats organizations must tackle. MCP servers, extensions, and even simple prompts and rules introduce third-party risks that push past CI/CD boundaries and extend the organizational perimeter to the developer’s workstation.

Our blog: https://lnkd.in/dk_5Va39

Knostic protects developers and AI coding agents against attacks such as these. Learn more: https://lnkd.in/du8w9RYJ
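For intuition on why "interpreter-level execution" is the headline risk: any JavaScript injected into the Electron-based extension host runs with full Node.js privileges, so plain standard-library calls reach the file system directly. The sketch below is hypothetical and deliberately inert (it only reads and logs); the paths are assumptions, not details from the PoC.

```javascript
// Minimal sketch (hypothetical, not the actual PoC): JavaScript injected into
// the Electron extension host runs with full Node.js privileges.
const fs = require('fs');
const os = require('os');
const path = require('path');

// Full file-system access: anything the developer can read, the payload can read.
const sshKey = path.join(os.homedir(), '.ssh', 'id_rsa');
if (fs.existsSync(sshKey)) {
  const secret = fs.readFileSync(sshKey, 'utf8');
  // A single fetch() to an attacker-controlled server would complete exfiltration.
  console.log('sensitive file readable, length:', secret.length);
}

// Persistence: tampering with another installed extension's entry point would
// make a payload "reattach" on every IDE restart (directory name assumed).
const extensionsDir = path.join(os.homedir(), '.cursor', 'extensions');
console.log('installed extensions live under:', extensionsDir);
```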
More Relevant Posts
Critical arbitrary code execution flaw reported in JavaScript expression parser expr-eval

A critical vulnerability (CVE-2025-12735) in the widely used expr-eval JavaScript library allows attackers to execute arbitrary code through insufficient input validation, affecting numerous dependent packages, including AI frameworks. The original library appears unmaintained, but a security fix is available in the actively maintained fork, expr-eval-fork version 3.0.0.

If you use the expr-eval library (or have dependencies that use it), be aware that attackers can exploit this flaw through uncontrolled user input to run arbitrary code. Sanitize user input as much as possible and plan an immediate switch to expr-eval-fork version 3.0.0, since the original package is no longer updated and remains vulnerable to remote code execution. This is urgent if your application processes user-supplied mathematical expressions in calculators, educational tools, financial platforms, or AI systems such as LangChain implementations.

#cybersecurity #infosec #advisory #vulnerability

Read More: https://lnkd.in/dkne9E57
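To make the "sanitize user input" advice concrete, here is a minimal stopgap sketch that gates untrusted expressions behind a strict character allowlist before they ever reach the parser. The allowlist and length cap are our own illustrative choices; this only narrows exposure, and upgrading to expr-eval-fork 3.0.0 remains the real fix.

```javascript
// Minimal stopgap sketch: allowlist-gate untrusted input before parsing.
// Upgrading to expr-eval-fork 3.0.0 is the actual fix; this only narrows exposure.
const { Parser } = require('expr-eval');

// Digits, arithmetic operators, parentheses, and bare identifiers only.
const SAFE_EXPR = /^[0-9+\-*/().\s^%a-z]+$/i;

function evaluateUntrusted(expr, variables = {}) {
  if (typeof expr !== 'string' || expr.length > 200 || !SAFE_EXPR.test(expr)) {
    throw new Error('rejected: expression failed allowlist check');
  }
  return Parser.evaluate(expr, variables);
}

console.log(evaluateUntrusted('2 * x + 1', { x: 3 })); // 7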
Vibe coding eliminates the need to learn programming languages. It also eliminates many of the security habits that kept software safe.

Collins Dictionary just named "vibe coding" their Word of the Year for 2025. The term describes using AI to generate code from natural language prompts. No syntax knowledge required. No debugging skills needed. Just describe what you want and let AI build it.

The democratization sounds amazing. Until you look at the security implications.

AI-generated code frequently contains:
• Weak passwords and authentication
• Overly generous access rights
• SQL injection vulnerabilities
• Cross-site scripting flaws

There's even a new threat called "slopsquatting." Attackers register software packages with names AI systems hallucinate, then deliver malware through those packages. As this code spreads through open-source libraries, vulnerabilities cascade throughout the entire software ecosystem.

The solution isn't avoiding AI tools entirely. It's implementing proper safeguards:
• Mandatory human code reviews
• Automated security testing
• Comprehensive developer training
• Strict dependency checking

AI can accelerate development. But speed without security creates bigger problems than slow, secure code. The promise is real. So are the risks. Process is the key to unlocking innovation safely.

How is your organization balancing AI-assisted development with security requirements?

#Cybersecurity #AIDevelopment #SoftwareSecurity

𝗦𝗼𝘂𝗿𝗰𝗲꞉ https://lnkd.in/gz8Ywp5Y
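To put one of those bullet points in code: below is the shape of the SQL injection bug AI-generated code frequently contains, next to the parameterized fix a human review should insist on. The table name and driver are illustrative assumptions (a node-postgres-style client), not from the article.

```javascript
// Illustrative sketch using a node-postgres-style client (assumed dependency).
const { Pool } = require('pg');
const pool = new Pool(); // connection settings come from environment variables

// Vulnerable shape AI tools frequently emit: the user-controlled value is
// spliced into the SQL text, so input like  ' OR '1'='1  rewrites the query.
async function findUserUnsafe(name) {
  return pool.query(`SELECT * FROM users WHERE name = '${name}'`);
}

// The fix reviews should insist on: parameterized queries keep the value out
// of the SQL text entirely, so input can never change the query's structure.
async function findUserSafe(name) {
  return pool.query('SELECT * FROM users WHERE name = $1', [name]);
}
```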
AI can write your code in seconds. Hackers figured out how to exploit that faster than developers learned to use it safely.

Collins Dictionary just named "vibe coding" its Word of the Year for 2025. This refers to using AI to generate code from simple language prompts. The promise is huge. More people can build apps without learning complex programming languages.

But security experts are already sounding alarms. New threats are emerging:
• "Slopsquatting" - hackers register fake software packages with AI-hallucinated names
• AI code often contains weak passwords and excessive access rights
• Vulnerabilities like SQL injection slip through unchecked
• Problems cascade through shared code libraries

A recent Venafi survey found widespread concerns about AI-generated security incidents.

The sector is hot. Companies like Lovable, Replit, and Vercel have raised hundreds of millions in funding this year alone. But excitement shouldn't override caution.

Smart organizations aren't avoiding AI tools. They're implementing safeguards:
• Mandatory human code reviews
• Automated security testing
• Developer training programs
• Strict dependency checking

As one security expert put it: "The promise is real, but so are the risks. Process is the key to unlocking innovation safely."

Speed without security isn't progress. It's a recipe for disaster.

How is your organization balancing AI innovation with security?

#AIcoding #Cybersecurity #SoftwareDevelopment

𝗦𝗼𝘂𝗿𝗰𝗲꞉ https://lnkd.in/gz8Ywp5Y
𝟭,𝟬𝟬𝟬 𝘃𝘂𝗹𝗻𝗲𝗿𝗮𝗯𝗶𝗹𝗶𝘁𝗶𝗲𝘀. 𝟯 𝗽𝗿𝗼𝗴𝗿𝗮𝗺𝗺𝗶𝗻𝗴 𝗹𝗮𝗻𝗴𝘂𝗮𝗴𝗲𝘀. 𝟭 𝗯𝗲𝗻𝗰𝗵𝗺𝗮𝗿𝗸 𝘁𝗼 𝗿𝘂𝗹𝗲 𝘁𝗵𝗲𝗺 𝗮𝗹𝗹. 𝗟𝗟𝗠𝘀 𝗮𝗿𝗲 𝘁𝗮𝗸𝗶𝗻𝗴 𝗮𝘂𝘁𝗼𝗺𝗮𝘁𝗲𝗱 𝘃𝘂𝗹𝗻𝗲𝗿𝗮𝗯𝗶𝗹𝗶𝘁𝘆 𝗿𝗲𝗽𝗮𝗶𝗿 𝘁𝗼 𝘁𝗵𝗲 𝗻𝗲𝘅𝘁 𝗹𝗲𝘃𝗲𝗹.

Software vulnerabilities are growing faster than we can patch them, and manual fixes are becoming increasingly resource-intensive. While large language models (LLMs) show promise for automated vulnerability repair (AVR), existing benchmarks are outdated, limited in language coverage, and often lack reliable validation. 𝗣𝗔𝗧𝗖𝗛𝗘𝗩𝗔𝗟 changes the game.

Key highlights:
• Multilingual benchmark covering Go, JavaScript, and Python
• 1,000 real-world CVEs (2015–2025) across 65 distinct vulnerability types
• 230 CVEs with sandboxed runtime environments for comprehensive security and functionality testing
• Systematic evaluation of state-of-the-art LLMs, providing actionable insights for future AVR research

This work represents a major step toward scalable, AI-driven, and reliable software security solutions. 𝗣𝗔𝗧𝗖𝗛𝗘𝗩𝗔𝗟 not only raises the bar for benchmarking but also gives the community tools to advance automated vulnerability repair.

𝘍𝘶𝘭𝘭 𝘱𝘢𝘱𝘦𝘳 𝘭𝘪𝘯𝘬 𝘪𝘯 𝘵𝘩𝘦 𝘤𝘰𝘮𝘮𝘦𝘯𝘵𝘴.

#Cybersecurity #AI #LLM #VulnerabilityManagement #AutomatedSecurity
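The "sandboxed runtime environments" point boils down to a two-sided check: a candidate patch only counts as a repair if the exploit stops working and the project's functional tests still pass. The sketch below shows that general shape; the script names and commands are our own illustrations, not PATCHEVAL's actual harness.

```javascript
// Hypothetical harness shape (names/commands are illustrative, not PATCHEVAL's):
// a patch is a valid repair only if the exploit fails AND the tests still pass.
const { execSync } = require('child_process');

function run(cmd, cwd) {
  try {
    execSync(cmd, { cwd, stdio: 'pipe' });
    return true;
  } catch {
    return false;
  }
}

function validatePatch(repoDir, patchFile) {
  if (!run(`git apply ${patchFile}`, repoDir)) return 'patch does not apply';
  const exploitStillWorks = run('node exploit_poc.js', repoDir); // assumed PoC script
  const testsPass = run('npm test', repoDir);                    // assumed test command
  if (exploitStillWorks) return 'insecure: exploit still succeeds';
  if (!testsPass) return 'broken: functional tests fail';
  return 'valid repair';
}
```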
The same AI that makes coding accessible to everyone also makes vulnerabilities accessible to everyone. SQL injection never went away.

Collins Dictionary just named 'vibe coding' their Word of the Year 2025. This trend lets anyone build apps by describing what they want in plain English. No coding skills required. Sounds amazing, right?

Here's the problem. AI-generated code often comes with serious security flaws:
• Weak passwords baked in
• Overly generous access rights
• Classic vulnerabilities like SQL injection
• Cross-site scripting weaknesses

Even worse? There's a new threat called 'slopsquatting.' Attackers register fake software packages with names that AI systems hallucinate, then deliver malware through these packages. As this AI-generated code spreads through open-source libraries, vulnerabilities cascade throughout the entire software ecosystem.

The solution isn't to avoid AI development tools. It's to implement proper safeguards:
• Mandatory human code reviews
• Automated security testing
• Comprehensive developer training
• Strict dependency checking

As Drew Streib from Black Duck puts it: 'The promise is real, but so are the risks, and process is the key to unlocking innovation safely.'

Democratizing coding is powerful. But we can't democratize security vulnerabilities along with it.

How is your organization balancing AI coding tools with security requirements?

#CyberSecurity #AICoding #SoftwareDevelopment

𝗦𝗼𝘂𝗿𝗰𝗲꞉ https://lnkd.in/gmbAPqSH
Cursor’s new browser could be compromised via a simple JavaScript injection.

In this new research from Knostic, we demonstrate the attack by registering a local MCP server with malicious code, which in turn harvests credentials and sends them to a remote server. Unlike VS Code, Cursor does not perform integrity checks on Cursor-specific features, and that difference makes Cursor’s runtime components a higher-risk target for tampering.

Download Kirin for free to protect your IDE against these attacks: https://app.getkirin.com/
Learn more about the attack: https://lnkd.in/dZwr9myK

Attacks on AI agents, and coding assistants specifically, expand the CI/CD boundaries, effectively extending the perimeter to the IDE and developer machines. This represents a fast-expanding supply chain risk for the enterprise.

While the attack itself is new, the underlying issues are not, and we’d like to tip our hat to others who walked this path before us, such as Johann Rehberger.

Credit to Dor Munis for the research.
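For readers unfamiliar with the mechanics: registering a local MCP server is essentially a config write, so once tampered code can touch the file system it can point the IDE at any local command. A hedged sketch follows; the config path and schema reflect Cursor's publicly documented MCP setup as we understand it, the payload path is invented, and this is not the PoC itself. Integrity checks over exactly this kind of state are what the post argues Cursor lacks relative to VS Code.

```javascript
// Hedged sketch of the registration step (not the PoC). Config path and schema
// follow Cursor's publicly documented MCP setup as we understand it; the
// payload path is invented for illustration.
const fs = require('fs');
const os = require('os');
const path = require('path');

const configPath = path.join(os.homedir(), '.cursor', 'mcp.json');
const config = fs.existsSync(configPath)
  ? JSON.parse(fs.readFileSync(configPath, 'utf8'))
  : { mcpServers: {} };

// An MCP "server" is just a local command -- it runs with the user's
// privileges whenever the IDE starts it.
config.mcpServers['innocuous-helper'] = {
  command: 'node',
  args: ['/tmp/payload.js'], // hypothetical malicious script
};

fs.writeFileSync(configPath, JSON.stringify(config, null, 2));
```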
AI doesn't write secure code by default. But it makes a real attempt when you guide it properly.

We ran an experiment testing multiple LLMs across Python, Java, JavaScript, C++, and C# with dozens of security test cases. The pattern was clear: without security guidance, AI-generated code contained SQL injection, command injection, XSS, buffer overflows, and other critical vulnerabilities. With detailed security prompts? The results improved dramatically.

This matters because AI-generated code is everywhere now. Three-week sprints producing what used to take months. But if we're not prompting for security, we're shipping vulnerabilities at scale.

The good news: it's fixable. A short writeup of our research can be found here: https://lnkd.in/dBuzwxzy

#CyberSecurity #AIGovernance #SecureCode #PenetrationTesting
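As a flavor of the unguided-vs-guided difference, here is the command-injection pattern models tend to emit unprompted, beside the form they usually produce once the prompt demands secure handling of user input. This example is illustrative, not taken from the study's actual test cases.

```javascript
// Illustrative of the finding, not one of the study's actual test cases.
const { exec, execFile } = require('child_process');

// Unprompted models often emit this: the host string is interpreted by a
// shell, so input like "example.com; rm -rf ~" becomes a second command.
function pingUnsafe(host) {
  exec(`ping -c 1 ${host}`, (err, stdout) => console.log(stdout));
}

// With an explicit security prompt, the safer form shows up: arguments are
// passed as an array and never touch a shell.
function pingSafe(host) {
  execFile('ping', ['-c', '1', host], (err, stdout) => console.log(stdout));
}
```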
🔍 Hold onto your keyboards, folks! A critical vulnerability has just been discovered in the expr-eval JavaScript library, which boasts over 800,000 weekly downloads on NPM. This isn’t just another blip on the radar; it’s a wake-up call for all of us in the tech world!

Imagine this: maliciously crafted input can allow attackers to execute code remotely. Yes, you read that right! We’ve seen similar scenarios in the past—remember when Heartbleed shook the foundations of internet security? We thought we learned our lesson, but here we are again, facing the same specter of complacency.

As we move further into an era defined by rapid innovation, we must confront the growing complexities of security. Vulnerabilities like these highlight a significant challenge: the delicate balance between utilizing powerful libraries and maintaining robust security practices.

So, what does this mean for our future?
- Increased scrutiny on open-source projects is inevitable.
- Expect a surge in demand for security audits in development cycles.
- The conversation around secure coding practices will reach a fever pitch!

Let’s remember, every flaw teaches us something invaluable. Let’s turn this incident into an opportunity for growth and vigilance. The future of tech depends on our collective responsibility. Are we ready to step up?

Stay alert, stay informed! 🛡️

#cybersecurity #devops #open-source #ainews #automatorsolutions #CyberSecurityAINews

Original Publish Date: 2025-11-10 11:38
🤖 The Hidden Risks of AI Coding: Hallucinated Dependencies 🤖

It’s the end of a long workday, and you’re stuck in error messages. The AI assistant suggests a package that sounds perfect. You install it, and unknowingly open the door to a supply chain attack.

The increased use of AI-assisted coding introduces numerous new vulnerabilities. One of them is hallucinated code dependencies. A study posted on arXiv, which investigated 576,000 generated Python and JavaScript code samples, discovered that 20% of the suggested packages didn’t exist.

Hallucinated dependencies become a vulnerability when attackers register the package names AI models invent. This technique is called slopsquatting, and it takes advantage of the fact that hallucinated package names are often repeated across similar prompts. When a user installs a hallucinated package name that an attacker has since registered, they unknowingly pull down the malicious package instead.

This is where bifrost makes a difference. While you can’t prevent AI from hallucinating code, you can stop that hallucination from becoming an attack vector. By monitoring your application in real time, bifrost detects unexpected behaviour and actively blocks malicious packages before they can do any damage.

Are you using AI-assisted coding in your workflow? Let’s talk!

🔗 Read the full report through the link in the comments:
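Runtime monitoring aside, even a simple pre-install check catches the most blatant cases: a hallucinated name either isn't on the registry at all, or was registered suspiciously recently. Below is a minimal sketch against the public npm registry; the 30-day threshold is an arbitrary illustration, and this is not how bifrost works internally.

```javascript
// Minimal pre-install sketch (not bifrost's mechanism): check that an
// AI-suggested npm package exists and isn't brand new before installing.
// Requires Node 18+ for the global fetch; scoped names would need URL-encoding.
async function vetPackage(name) {
  const res = await fetch(`https://registry.npmjs.org/${name}`);
  if (res.status === 404) {
    return { ok: false, reason: 'not on the registry (likely hallucinated)' };
  }
  const meta = await res.json();
  const ageDays = (Date.now() - new Date(meta.time.created)) / 86_400_000;
  if (ageDays < 30) { // arbitrary illustrative threshold
    return { ok: false, reason: 'registered very recently (possible slopsquat)' };
  }
  return { ok: true };
}

vetPackage('express').then(console.log); // { ok: true }
```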