TNS
VOXPOP
As a JavaScript developer, what non-React tools do you use most often?
Angular
0%
Astro
0%
Svelte
0%
Vue.js
0%
Other
0%
I only use React
0%
I don't use JavaScript
0%
NEW! Try Stackie AI
AI Agents / Security / Software Development

Security Firm Snyk Tackles AI Coding’s Perfect Storm

As developers increasingly rely on AI coding tools and build AI-native applications, Snyk launches its most ambitious security platform yet, helping organizations move fast without compromising safety.
May 28th, 2025 10:06am by
Featued image for: Security Firm Snyk Tackles AI Coding’s Perfect Storm
Featured image via Unsplash+.

Danny Allan sees a “perfect storm” brewing when it comes to AI and software development.

“You have more code coming at a faster velocity than ever before,” Allan, the CTO of developer security specialist Snyk, told The New Stack. “You have more complex code because it’s AI-native applications and bringing in new attack surfaces, and, ironically, on the security side, you have attackers also using AI in a more dangerous way.”

Snyk has been working to make the business of developing code safer for programmers and their organizations since its founding 10 years ago. The company offers an eponymous developer security platform that encompasses nearly every aspect of the software development lifecycle (SDLC), identifying and resolving security issues in code, open source dependencies, container images, and Infrastructure as Code (IaC) configurations.

“We have always been big believers that you need to build security in from the very beginning, whether it be traditional software that you’re building or your AI-native applications,” he said.

Given its history in AppSec, it’s not surprising that Snyk saw the perfect storm and decided to fly into the heart of it. The company today is rolling out its Snyk AI Trust Platform, a tightly integrated collection of AI-native capabilities that leverage the testing tools it’s been known for.

Snyk AI Trust Platform

“This is the biggest launch in company history by far and it’s not even close,” Allan said. “One of the core aspects of it is that it is a platform for us. … When you’re buying into the platform, you are, by natural extension, getting all of the capabilities that are included within it. If I go back historically, we’ve had Snyk Open Source and Snyk Code and Snyk IaC and Snyk Container and all of these ways to secure traditional applications that have evolved over the last 10 years. However, there are some very new and innovative components of this that are very much tuned for the AI world.”

A Secure Development Environment

The new AI platform has similar features:

  • Snyk Assist: An AI-powered chatbot that includes just-in-time insights into the vendor’s features, next-step recommendations, and security intelligence.
  • Snyk Agent (below): Security AI agents that drive automated actions and fixes based on the company’s testing engines.
  • Snyk Guard: Provides AI governance through guardrails that take in changing risk factors and automatically assess, enforce, and adapt real-time security policies.
  • Snyk AI Readiness Framework: Gives organizations a working model for building a strategy for secure AI-based software development.

Snyk Agent

The Labs and Studio

Along with the AI Trust Platform, the company is launching Snyk Labs and Snyk Studio. Snyk Labs will be a central place for researching and experimenting with generative AI security solutions and also will serve as an incubation environment for related technologies. The first research efforts will look at AI security posture management, including an AI bill of materials (AI BoM) analysis to show AI models embedded in software.

Snyk is also building a GenAI model risk registry to address such risks as jailbreaking, which occurs when bad actors create prompts that enable them to slip past or override safeguards in large language models (LLMs).

Snyk Studio gives developers and tech vendors a place to work with Snyk security specialists to embed security context and controls into their AI-based code and workflows.

Snyk’s platform echoes efforts by other tech vendors — such as Dell Technologies, with its year-old Dell AI Factory — that are looking to give developers and other users of AI tools a centralized place where they can find the infrastructure and software they need to more easily run build, deploy, and manage their AI initiatives.

The Storm Approaches

That will be important as the AI perfect storm sets in, Snyk’s Allan said. The first element of the storm — broad use of AI tools — is already “wholly and fully upon us,” he said. Most developers are using AI in their work to one degree or another, with Allan saying that within Snyk’s customer base, 85% to 90% have adopted AI-based coding assistants. Developer skill firm HackerRank in a report in March, said that 97% of developers — from deep adopters to casual users — are leveraging the technology.

According to market research firm Statista, creating code is the most popular use of AI by developers, with 82% saying they’re doing it already and another 9.2% saying they’re interested. That’s followed by searching for answers, with 67.5% of programmers using the technology for tasks such as debugging and getting help (56.7%), documenting code (40.1%), and creating content (34.8%).

The second element — developing AI-native applications — is in its earlier stages, with Allan saying that somewhere between 20% and 50% of organizations are building such software.

“They’re dabbling with it, almost all of them, but if you’re talking fully in production, applications that are backed by LLMs, we’re still — and thankfully — earlier than we might be,” he said. “It’s not 100% of applications that are backed by diffusion models or LLMs.”

The third element, which involves attackers using AI, is more difficult to understand on a broad scale, though Allan said that “it’s not uniform.”

“We’re clearly seeing AI in some attacks like phishing,” he said. “Are we seeing AI-layered attacks going after software? Probably still early days on that. So, it depends on the type of attacker and the type of segment or vertical that you’re in. As to say whether that storm is fully overhead, I tend to think we’re a little bit earlier, but it’s harder for me to say, and I do think it depends on the vertical and industry.”

Securing AI Despite the Rush

A focus on security within an AI-driven development environment is needed, according to Snyk. The company last year released a report that found that in an eagerness to join the ranks of AI users, some organizations that were already using AI code-generation tools to accelerate app development either rushed or ignored security best practices.

For example, 58% of organizations surveyed said that security was the top hurdle to adopting AI developer tools, but that only 20% of them conducted a proof of concept before rolling out AI coding options. In addition, only 44% gave their developers training for AI coding tools.

At the same time, CTOs and CISOs were five times more likely than developers to be confident in the security of AI coding tools and twice as likely to believe they were “extremely” ready to adopt them.

Allan said both executives and developers are eager to get to work with AI tools, though they also feel a responsibility to ensure the safety of the code being created.

“Moving fast is a very good thing,” he said. “However, you absolutely want to pair that with both the controls to make sure that you’re fixing these issues as they’re coming out, and the visibility to not only measure the productivity, but the security of the software that’s being created.”

AI Agents Are Here

That responsibility will only grow as more AI agents enter the scene. Agents are small pieces of code that, unlike the better-known AI chatbots like OpenAI’s ChatGPTMicrosoft Copilot, and Google Gemini, work mostly autonomously to solve complex problems with little human oversight. They can find the data they need, collaborate with other agents, adapt to changes and learn from mistakes.

The AI agents — just like traditional software — needs to be secure, Allan said, noting that threat actors often target identity and credentials in their attacks.

“That is especially true in agents, because when you get into agent-to-agent communication, authorization becomes the real issue,” the CTO said. “How does the second agent know what the user is authorized for if it’s not passed across agents? One of the concerns that I have in an agentic world is, are you securing the agents that you’re creating to begin with?”

However, he doubts the world is ready to adopt full remediation of fixes by AI agents, without some human intervention. Allan added that “organizations want the manual acceptance of the generated fix rather than going fully agentic and fully autonomous for the remediation. There’s still a lack of complete trust in the AI system.”

Staying the Course in the AI Era

Allan said that with its AI Trust Platform, Snyk essentially is doing what it’s always done — allowing organizations to develop fast and stay secure. Only now it’s being done in the AI era. Now they can adopt AI tooling and the associated speed, productivity, and capabilities, and do so while trusting both the AI they’re using and the AI they’re creating.

“If you ask me, by the end of this year, I’d say we’re fully in the storm around coding assistance,” he said. “I expect by the end of this year; we will be fully in this storm of AI-native software. What we’re really providing is the confidence as they go in that direction, as that storm comes overhead, that the organization can adopt that AI without having sleepless nights and ending up in a very insecure state.”

Created with Sketch.
TNS DAILY NEWSLETTER Receive a free roundup of the most recent TNS articles in your inbox each day.