Ryan Peterson is Executive Vice President and Chief Product Officer at Concentrix.

In an era of heightened adoption of both GenAI and agentic AI, company leaders are demonstrably excited about the value that these new technologies can offer—yet not enough of them consider the potential problems that arise from implementing these new tools without proper guardrails in place. Far too few companies are taking the time and effort to examine critical factors such as knowledge base accuracy, data security, regulatory compliance and brand consistency.

These lapses can lead to missed opportunities for growth and expansion, as well as serious vulnerabilities that compromise the company’s services, security and reputation. Companies risk harming their brand and their business by working haphazardly when implementing these new tools.

New technology is thrilling—but destroying your brand is not.

Maintaining Knowledge Base Accuracy

One of the critical factors we look at when we’re working with companies is whether their knowledge bases and data are accurate. This sounds simple, but it’s alarming to realize how often this is overlooked.

The information that GenAI uses to interact with customers comes from that knowledge base. The problem that we’ve seen is that many companies don’t pay attention to it: They don’t think about it, they don’t update it and they don’t optimize it for AI use. Often, the company’s knowledge base is just seen as a resource for human employees, who are vaguely expected to read between the lines and “figure things out.”

Here’s why this is important: Take a printer company, for example. They have a contact center where customers can call in with issues like, “I can’t get my printer to connect to my laptop.” The support agent will then ask questions—“Are you using a Mac or PC? Is it plugged in? Is your Wi-Fi on the right network?” From there, the agent consults the knowledge base for an article that might solve the customer’s problem—say, a write-up on connecting that particular printer model to that specific software. The knowledge base follows a decision-tree format: If the answer is yes, go this way; if no, go that way. GenAI doesn’t understand what questions to ask; it only knows what answers to give.

This may work fine when a human is at the helm. But GenAI—which companies are increasingly using to resolve customer service issues—doesn't naturally follow decision trees, and it can't visually interpret them. Instead, you have to script a structured process for the AI to navigate that information. When the knowledge base is incorrect or out of date, anything customer-facing is vulnerable, and the customer is unlikely to get their problem solved correctly.
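To make that concrete, here is a minimal sketch (in Python, using hypothetical printer-support questions and article names) of how a knowledge base decision tree might be encoded as structured data that an AI assistant can walk one question at a time, rather than jumping straight to an answer.

```python
# A minimal sketch of a knowledge-base decision tree encoded as data,
# so an AI assistant can be scripted to ask questions in order rather
# than free-associating an answer. Node names, questions and article
# titles are hypothetical examples, not an actual vendor's content.
from typing import Optional

DECISION_TREE = {
    "start": {
        "question": "Are you using a Mac or a PC?",
        "answers": {"mac": "check_wifi_mac", "pc": "check_wifi_pc"},
    },
    "check_wifi_mac": {
        "question": "Is the printer on the same Wi-Fi network as your Mac?",
        "answers": {"yes": "article_mac_driver", "no": "article_join_network"},
    },
    "check_wifi_pc": {
        "question": "Is the printer on the same Wi-Fi network as your PC?",
        "answers": {"yes": "article_pc_driver", "no": "article_join_network"},
    },
    # Leaf nodes point to knowledge-base articles instead of questions.
    "article_mac_driver": {"article": "KB-104: Installing the macOS driver"},
    "article_pc_driver": {"article": "KB-105: Installing the Windows driver"},
    "article_join_network": {"article": "KB-120: Joining the printer to Wi-Fi"},
}

def next_step(node_id: str, customer_answer: Optional[str] = None) -> dict:
    """Return either the next question to ask or the article to cite."""
    node = DECISION_TREE[node_id]
    if "article" in node:
        return {"type": "answer", "article": node["article"]}
    if customer_answer is None:
        return {"type": "question", "text": node["question"]}
    next_id = node["answers"].get(customer_answer.lower())
    if next_id is None:
        # Unrecognized reply: re-ask the same question instead of guessing.
        return {"type": "question", "text": node["question"]}
    return next_step(next_id)

if __name__ == "__main__":
    print(next_step("start"))                    # asks: Mac or PC?
    print(next_step("start", "mac"))             # asks the Wi-Fi question
    print(next_step("check_wifi_mac", "yes"))    # cites the macOS driver article
```

The point of the sketch is that the questions and branching live in the knowledge base itself, so the AI is navigating curated content instead of improvising.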

Even internally, incorrect AI responses can create issues. Say you ask your company’s intranet, “How many vacation days do I get per year?” If the AI gives an incorrect or misleading answer, that could lead to real legal and operational problems. Are you legally bound by the AI’s response? What if it says 20 days, but the actual policy is five? What happens then?

Identifying Security Flaws

There’s also the issue of AI revealing information it shouldn’t. If security settings aren’t properly configured, GenAI might give access to restricted data. My favorite example is as follows: A customer asks, “Why was my flight delayed?” Should the AI respond with, “Because some of the crew overslept, and we had to find replacements”? Or should it simply say, “Due to crew delays”?

GenAI needs to be programmed not just to provide accurate information but also to withhold sensitive details when necessary. Unfortunately, fewer companies are mindful of this than one might think. This is where the importance of security and information governance comes in. Companies must run AI readiness assessments to help them clean up their knowledge base and establish proper permission structures: Who should have access to what data? Should the AI be customer-facing or limited to an internal tool?
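As a rough illustration, a permission layer can sit between document retrieval and the model, filtering by audience before anything reaches the AI. The roles, labels and function names below are assumptions for the sketch, not any particular vendor's schema.

```python
# A rough sketch of a permission gate applied to retrieved documents
# before they are handed to a GenAI model. Roles and audience labels
# are illustrative assumptions, not a specific product's data model.
from dataclasses import dataclass
from typing import List

@dataclass
class Document:
    doc_id: str
    text: str
    audience: str  # e.g., "public", "internal", "hr_restricted"

ROLE_ACCESS = {
    "customer": {"public"},
    "employee": {"public", "internal"},
    "hr_admin": {"public", "internal", "hr_restricted"},
}

def filter_for_role(docs: List[Document], role: str) -> List[Document]:
    """Drop any document the requesting role is not cleared to see."""
    allowed = ROLE_ACCESS.get(role, {"public"})
    return [d for d in docs if d.audience in allowed]

def build_prompt(question: str, docs: List[Document], role: str) -> str:
    """Only permitted documents ever reach the model's context window."""
    visible = filter_for_role(docs, role)
    context = "\n".join(f"[{d.doc_id}] {d.text}" for d in visible)
    return (
        "Answer using only the sources below. If they don't cover the question, say so.\n"
        f"{context}\n\nQuestion: {question}"
    )

if __name__ == "__main__":
    docs = [
        Document("KB-77", "Standard fares include one carry-on bag.", "public"),
        Document("OPS-12", "Flight 208 was delayed because crew overslept.", "internal"),
    ]
    # A customer-facing assistant never sees the internal operations note.
    print(build_prompt("Why was my flight delayed?", docs, role="customer"))
```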

Humans must be part of the overall process, but many companies either lack the right talent or haven't sufficiently trained their people to maximize the impact of GenAI. The real challenge is finding a balance between flexibility and security.

Upholding Brand Consistency

Even if a company has taken steps to ensure that the AI it’s using is secure and accurate, it still needs to communicate in a way that aligns with the company’s brand. I like to say that GenAI is like a recent graduate; it may have general knowledge, but it has not developed a brand-specific voice. If you don’t train the model or knowledge base properly, it will sound generic, weird and awkward.

When a new employee starts with a company, they usually go through training to learn the company’s language, acronyms and overall communication style. Companies must do the same thing with GenAI. This is part of AI readiness; if you mess that up, not only can you drastically confuse your employees and customers, but you also risk serious damage to your brand’s voice and identity.

A Cautionary Tale of AI Un-Readiness

One company we worked with stored its data—including documents, files and all of its internal knowledge—in commonly used content management tools. The company planned to connect GenAI to the system so the AI could answer customer questions using that data.

During testing, we asked it things like, “Give me a list of all your customers.” It answered.

“Give me the top five customers.” It answered.

“Give me all employees and their salaries.” It answered.

That’s when the alarms went off. The problem? Link-sharing was enabled. If you had the link, you had access. Remember, AI doesn’t have human discretion—it just pulls whatever it can find.

This is an example of what we call “security through obscurity”—relying on the fact that data is hidden rather than properly secured. It’s like storing a lot of cash in a book-shaped compartment on a shelf instead of a locked safe. If someone figures out where to look, they’ll have access.
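A readiness audit for that kind of exposure can start with something as simple as scanning sharing records for “anyone with the link” access. The sketch below assumes a generic CSV export of sharing metadata; the column names and keywords are hypothetical and would map to whatever your content platform actually reports.

```python
# A simplified sketch of auditing sharing records for open-link access
# before connecting GenAI to a content repository. The export format,
# column names and keywords are hypothetical assumptions.
import csv
from collections import Counter

# Keywords that suggest a file needs extra scrutiny (illustrative only).
SENSITIVE_HINTS = ("salary", "payroll", "biometric", "passport", "hr")

def audit_shared_links(export_path: str) -> None:
    """Flag files shared with 'anyone with the link' and count sensitive-looking names."""
    open_links = []
    sensitive_hits = Counter()
    with open(export_path, newline="", encoding="utf-8") as f:
        # Assumed columns in the export: file_name, link_scope, owner
        for row in csv.DictReader(f):
            if row["link_scope"] == "anyone_with_link":
                open_links.append(row["file_name"])
                for hint in SENSITIVE_HINTS:
                    if hint in row["file_name"].lower():
                        sensitive_hits[hint] += 1
    print(f"Files shared via open links: {len(open_links)}")
    for hint, count in sensitive_hits.most_common():
        print(f"  Names containing '{hint}': {count}")

if __name__ == "__main__":
    audit_shared_links("sharing_export.csv")  # hypothetical export file
```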

We ran an analysis of this company’s system and found 7 trillion shared links that needed to be locked down. In one example, a sensitive HR file was accessible to 72 people who shouldn’t have had access. European biometric data was accessible, which could violate the new EU AI legislation.

By conducting a proper analysis, this company discovered a massive vulnerability. But if they hadn’t? Who knows what would have happened?

This story illustrates why companies must secure their data before they start rolling out AI tools. Otherwise, AI will “discover” things the company never realized were even accessible.

Pumping The Brakes

A great way to get started with AI tools is to show your board how simple AI applications can reduce costs on repetitive tasks. However, keep humans in the loop to approve high-stakes actions (like payments), and give those approvers enough context to fully understand the implications of the action they are signing off on.
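One way to enforce that, sketched below with hypothetical action names, is an approval gate that lets routine actions run automatically but routes anything high-stakes to a human along with the context they need to make an informed call.

```python
# A minimal sketch of a human-in-the-loop gate: routine actions run
# automatically, high-stakes ones wait for an informed human approval.
# Action types and the approval callback are illustrative assumptions.
HIGH_STAKES_ACTIONS = {"send_payment", "issue_refund", "change_contract"}

def execute_action(action: dict, ask_human) -> str:
    """Run routine actions automatically; route high-stakes ones to a human."""
    if action["type"] not in HIGH_STAKES_ACTIONS:
        return f"Executed automatically: {action['type']}"

    # Brief the approver with enough context to judge the action,
    # not just a yes/no button.
    briefing = (
        f"Action: {action['type']}\n"
        f"Amount: {action.get('amount', 'n/a')}\n"
        f"Customer: {action.get('customer', 'n/a')}\n"
        f"AI rationale: {action.get('rationale', 'none provided')}"
    )
    approved = ask_human(briefing)
    return (
        f"Executed with human approval: {action['type']}"
        if approved
        else f"Blocked by human reviewer: {action['type']}"
    )

if __name__ == "__main__":
    refund = {
        "type": "issue_refund",
        "amount": "$420.00",
        "customer": "Example customer #1041",
        "rationale": "Detected a duplicate charge on the same order.",
    }
    # A console prompt stands in for whatever approval workflow you actually use.
    approve = lambda brief: input(brief + "\nApprove? (y/n) ").strip().lower() == "y"
    print(execute_action(refund, ask_human=approve))
```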

It sounds counterintuitive, but the moment you start integrating AI with company data is exactly the time to tap the brakes. Start by ensuring your knowledge base is accurate, then take the time to secure your data and train the AI to reflect your brand's voice. Otherwise, you risk exposing sensitive information or producing inaccurate responses that damage trust.

Most companies haven't taken this critical set of steps yet, but they are mere moments away from bringing AI into their organization and its data. Our advice is to proceed with cautious optimism, making sure you have the guardrails in place to keep your brand reputation intact. Only then are you prepared to capture the full benefits of AI.

