
Companies Must Embrace Bespoke AI Designed for IT Workflows

The industry is moving from process automation to decision automation and hyper-customization.
May 23rd, 2025 10:00am
Photo by Yannick Pipke on Unsplash.

Although LLMs have been readily available for the past few years, inroads into the IT sector have been minimal. We have seen successful generative AI (GenAI) model penetration into SaaS solutions and areas like help desks; however, successful GenAI integrations into security software have been few and far between.

Generally speaking, it is not easy to repurpose an LLM to work within a security domain. LLMs are optimized for natural language; they can’t immediately understand or process security elements such as flow packets, logs, alerts, and knowledge graphs.

To build out effective GenAI integration in the security sphere, it’s time to embrace bespoke, foundational AI for IT workflows.

AI Model Efficiency

The recent trend toward building out models more efficiently, as opposed to scaling at all costs, is a natural progression of GenAI tools in the enterprise space. Despite all the LLM hype, not every business problem requires an LLM solution. If you utilize LLMs within your infrastructure, it’s best to right-size them (distill them into smaller models that address specific business problems) while focusing on privacy, security, and explainability.

By right-sizing your models, compute is kept to a minimum, which prevents costs from being passed on to your customers. During the right-sizing process, it is important to consider where your customers’ pain points reside.
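
To make the idea concrete, here is a minimal sketch of what right-sizing can look like in practice: requests are routed to a small, task-specific model whenever one exists, with a larger general model kept only as a fallback. The model identifiers and tasks are hypothetical placeholders, not a specific vendor’s catalog.

```python
# A minimal sketch of "right-sizing": route each task to the smallest model
# that can handle it, and reserve the large general model as a fallback.
# Model names and task keys below are hypothetical placeholders.

SMALL_TASK_MODELS = {
    "ticket_classification": "distilled-ticket-classifier-1b",
    "log_summarization": "distilled-log-summarizer-3b",
}
GENERAL_MODEL = "general-assistant-70b"


def pick_model(task: str) -> str:
    """Prefer a small, task-specific model; fall back to the large one."""
    return SMALL_TASK_MODELS.get(task, GENERAL_MODEL)


if __name__ == "__main__":
    print(pick_model("log_summarization"))    # distilled-log-summarizer-3b
    print(pick_model("incident_postmortem"))  # general-assistant-70b
```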

Agentic AI and Contextual Integrations

The hype around agentic AI is warranted. Moreover, the most effective AI tools are those that users don’t even notice, and this can be achieved via contextual integrations across applications.

By combining reasoning engines (GenAI models that explain their output) with specific data and context, you can drive great value. Agents can effectively gather real-time data through APIs and databases; they can filter and triage that data, contextualize it, and then automate workflows. Of course, we have many different agents in the IT environment for our various workflows.
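
As an illustration, the sketch below walks through that gather, triage, contextualize, and act loop with stubbed-out data sources and actions. In a real deployment, each stub would be replaced by calls to your monitoring APIs, asset inventory, and ticketing or runbook systems.

```python
# A hypothetical sketch of the gather -> triage -> contextualize -> act loop.
# The data source, triage rule, and action are placeholders standing in for
# real APIs and workflow systems.
from dataclasses import dataclass


@dataclass
class Alert:
    source: str
    severity: int
    message: str


def gather_alerts() -> list[Alert]:
    # In practice this would call monitoring APIs or query a database.
    return [
        Alert("email-gateway", 7, "Unusual outbound mail volume"),
        Alert("cdn", 2, "Cache hit ratio dipped briefly"),
    ]


def triage(alerts: list[Alert], threshold: int = 5) -> list[Alert]:
    """Filter out low-severity noise before any model is invoked."""
    return [a for a in alerts if a.severity >= threshold]


def contextualize(alert: Alert) -> dict:
    # Placeholder: look up related assets, owners, and recent changes.
    return {"alert": alert, "owner": "it-ops", "recent_change": None}


def act(context: dict) -> None:
    # Placeholder: open a ticket, page an engineer, or trigger a runbook.
    alert = context["alert"]
    print(f"Opening ticket for {alert.source}: {alert.message}")


if __name__ == "__main__":
    for alert in triage(gather_alerts()):
        act(contextualize(alert))
```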

Combining a Right-Sized LLM With Bespoke Models for IT

Although LLMs are optimized for natural language and have difficulty monitoring meta information and understanding flow packets, they can be right-sized and combined with bespoke models for IT workflows to create great value in security and IT operations.

Security. Let’s look at a quick security example to see this in practice. An organization may have a machine learning (ML) model to detect anomalies in email traffic based on historical patterns and dynamic thresholds. This anomaly detection ML model, combined with a decision tree-based model (which looks up variables from email headers to establish context), confirms a potentially suspicious email.

At that point, a small language model assesses the “call to action” within the suspicious email’s text. Agents look up the links within the email and scrape them for further processing. A phishing detection model then sifts through domain information, web headers, and other variables; malware checks are performed on attachments in a sandbox; and finally, a massive decision tree concludes that the email is suspicious.
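
The sketch below shows how such a layered pipeline might be orchestrated. Every stage is a stub standing in for a real component (anomaly model, header decision tree, small language model, link-scraping agents, phishing classifier, sandbox); only the way their signals combine into a final verdict is illustrated, and the thresholds are made up.

```python
# A hypothetical sketch of the layered email-screening pipeline described
# above. Each stage is a stub; a real system would plug in trained models
# and services at each step.


def anomaly_score(email: dict) -> float:
    return 0.91  # stub: historical-pattern anomaly model


def header_context_suspicious(email: dict) -> bool:
    # Stub decision tree: a Reply-To that differs from From is one signal.
    return email["headers"].get("Reply-To") != email["headers"].get("From")


def call_to_action_risk(body: str) -> float:
    # Stub small language model assessing the email's call to action.
    return 0.8 if "verify your account" in body.lower() else 0.1


def link_verdicts(links: list[str]) -> list[float]:
    # Stub: agents scrape each link; a phishing model scores the results.
    return [0.7 for _ in links]


def attachment_malicious(attachments: list[str]) -> bool:
    return False  # stub: sandbox detonation result


def final_verdict(email: dict) -> bool:
    """Combine stage outputs; a real system would use a trained decision tree."""
    signals = [
        anomaly_score(email) > 0.8,
        header_context_suspicious(email),
        call_to_action_risk(email["body"]) > 0.5,
        any(v > 0.6 for v in link_verdicts(email["links"])),
        attachment_malicious(email["attachments"]),
    ]
    return sum(signals) >= 3


suspect = {
    "headers": {"From": "it@example.com", "Reply-To": "attacker@evil.test"},
    "body": "Please verify your account immediately.",
    "links": ["http://evil.test/login"],
    "attachments": [],
}
print("suspicious" if final_verdict(suspect) else "benign")
```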

IT operations. This bespoke strategy can also be used within IT operations to optimize server costs without impacting customer experience. Organizations can create a dynamic IT operations cost management system by using a right-sized LLM alongside causal knowledge graphs and bespoke models for IT.

For example, an organization may utilize an ML model to forecast demand (by analyzing usage patterns and real-time traffic data). This ML model may suggest scaling back resources and avoiding over-provisioning, saving the organization money on cloud spending while maintaining a good user experience.
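
A toy version of that forecast-and-scale decision might look like the following, with a naive moving-average forecast standing in for a real demand model and a simple headroom rule standing in for the cloud provider’s autoscaling API. All numbers are illustrative.

```python
# A minimal sketch of forecast-driven right-sizing: predict near-term demand
# from recent usage and size the server pool accordingly. The forecast here
# is a trivial moving average; a production system would use a proper ML
# model and the cloud provider's autoscaling API.


def forecast_demand(recent_rps: list[float]) -> float:
    """Naive forecast: average of the most recent observations."""
    window = recent_rps[-6:]
    return sum(window) / len(window)


def recommended_servers(predicted_rps: float, rps_per_server: float = 500.0,
                        headroom: float = 1.2) -> int:
    """Provision for predicted load plus headroom, never below one server."""
    return max(1, round(predicted_rps * headroom / rps_per_server))


recent = [2100, 1900, 1750, 1600, 1400, 1300]  # requests per second, trending down
predicted = forecast_demand(recent)
print(f"predicted ~{predicted:.0f} rps -> run {recommended_servers(predicted)} servers")
```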

Statistical ML models can also indicate when trouble is brewing in the IT setup. Such ML models work in concert with knowledge graphs built over the entire IT infrastructure (with causal strengths learned over time). A real-time reasoning agent can then help IT engineers interpret the knowledge graph.

By using decision trees with adaptive thresholds and agents that track operational processes, analyze real-time data, synthesize insights, and make recommendations, your organization can save costs, reduce MTTR, and realize an overall increase in business value.
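
For instance, an adaptive threshold can be as simple as tracking the recent mean and spread of a metric rather than comparing against a fixed limit, as in this small sketch (the latency values and multiplier are made up).

```python
# A hedged sketch of an adaptive threshold: instead of a fixed alert limit,
# the threshold tracks recent behaviour (mean plus a multiple of the
# standard deviation). Real systems pair this with the causal knowledge
# graph to decide what to do next.
import statistics


def adaptive_threshold(history: list[float], k: float = 3.0) -> float:
    return statistics.mean(history) + k * statistics.stdev(history)


def is_anomalous(value: float, history: list[float]) -> bool:
    return value > adaptive_threshold(history)


latency_ms = [120, 118, 125, 130, 122, 119, 127]
print(is_anomalous(310, latency_ms))  # True: well above the learned threshold
print(is_anomalous(135, latency_ms))  # False: within normal variation
```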

Using Open Source Models and Keeping the Data Within Your Ecosystem

While creating bespoke AI models for IT, it’s important to have a strong governance layer in place. Admittedly, this is easiest to do if you own the entire tech stack. With control over all of the layers in the stack, you aren’t dependent on third parties, and all data remains in your company’s ecosystem.

If you opt to build on top of self-hosted, open source models, such as DeepSeek, LLaMA, Qwen, or Mistral AI, your data never has to leave your database. Again, because you own the whole stack, your organizational context remains in your organization’s instance. Sensitive data will never be shared externally, and your security permissions will keep context information within your own environment.

By using retrieval-augmented generation (RAG) with LLMs that don’t contain any customer data, you can ensure that your customers’ data privacy is protected. This approach is far better than relying on foundation models built on public feeds or anonymized customer data.
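
As a rough sketch, a self-hosted RAG flow keeps documents in your own store, retrieves the most relevant snippets locally, and sends only those snippets plus the question to a model running inside your network. The retrieval below is naive keyword overlap rather than real embeddings, and ask_local_model is a placeholder for whatever self-hosted inference endpoint you run.

```python
# A minimal, hypothetical RAG sketch: documents stay in your own store, the
# retriever picks the most relevant snippets, and only those snippets plus
# the question go to a self-hosted model. `ask_local_model` and the sample
# documents are placeholders.

DOCS = {
    "runbook-42": "To rotate the mail gateway certificates, run the rotate-certs job.",
    "policy-7": "Production database credentials are rotated every 30 days.",
}


def retrieve(question: str, k: int = 1) -> list[str]:
    """Rank in-house documents by simple keyword overlap with the question."""
    q_words = set(question.lower().split())
    ranked = sorted(DOCS.values(),
                    key=lambda d: len(q_words & set(d.lower().split())),
                    reverse=True)
    return ranked[:k]


def ask_local_model(prompt: str) -> str:
    # Placeholder: call your self-hosted model here; nothing leaves your network.
    return f"[model answer based on a prompt of {len(prompt)} characters]"


question = "How do I rotate the mail gateway certificates?"
context = "\n".join(retrieve(question))
answer = ask_local_model(f"Context:\n{context}\n\nQuestion: {question}")
print(answer)
```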

To put things simply, from a security perspective, if you own the whole tech stack, agents become a great organic addition — not only to the permission layer, but also to the search layer, as well as other access layers, such as identity and access management (IAM) and privileged access management (PAM).

Generally speaking, your AI tools should always work on the data that you already have within your protected network; all permissions should be respected, and there should be an audit trail of every call made by the agents.
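
One way to enforce that, sketched below, is to funnel every agent tool call through a single wrapper that checks a permission table and writes a structured audit record before anything executes. The agents, tools, and permission table shown are illustrative, not a specific product’s API.

```python
# A hedged sketch of the guardrails described above: every tool call an agent
# makes goes through one wrapper that checks the caller's permissions and
# appends an audit record. The permission table and tools are illustrative.
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("agent-audit")

PERMISSIONS = {
    "triage-agent": {"read_alerts"},
    "remediation-agent": {"read_alerts", "restart_service"},
}


def call_tool(agent: str, tool: str, args: dict) -> str:
    """Run a tool on behalf of an agent, enforcing permissions and auditing the call."""
    allowed = tool in PERMISSIONS.get(agent, set())
    audit_log.info(json.dumps({
        "ts": datetime.now(timezone.utc).isoformat(),
        "agent": agent,
        "tool": tool,
        "args": args,
        "allowed": allowed,
    }))
    if not allowed:
        raise PermissionError(f"{agent} is not permitted to call {tool}")
    return f"{tool} executed"  # placeholder for the real tool dispatch


print(call_tool("remediation-agent", "restart_service", {"service": "mail-gateway"}))
```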

Key Takeaways

Companies should embrace agent-fueled, bespoke solutions when incorporating GenAI tools into IT workflows. As an industry, we’re moving from process automation to decision automation and hyper-customization.

While doing so, we must spend the right amount of money on compute, keep all data processing within our protected environments, and prioritize customer data privacy. I believe this is best accomplished through bespoke AI models for IT workflows.
