The Human + AI Workforce Brief
Issue 002 · February 1, 2026
When AI Speeds Up Work but Undermines Deep Thinking


Welcome to the February issue!

First, thank you to the more than 10,000 LinkedIn members who subscribed in our very first month. I’m genuinely grateful you’re here, and I’m excited to keep building this conversation with you as we navigate what truly changes for people when AI changes work.

January’s brief focused on what changes for people when AI changes work: identity, meaning, and psychological safety. This month turns to something quieter but just as important: how everyday use of AI is reshaping how people think at work. Across functions, employees are offloading memory, problem-solving, and first-draft thinking to AI tools. At the same time, employers are doubling down on the need for value-creating curiosity, creativity, and innovation. The risk is a widening gap between the cognitive skills leaders say they want and the cognitive habits AI overuse and reliance can produce.

For leaders, this is not a philosophical question; it is a talent, performance, and culture issue. If AI is designed into work in ways that create unintended consequences, such as dulling critical thinking, reducing the depth of human problem-solving, and rewarding fast, synthetic output, it can undermine the very strategic thinking and edge that Human + AI collaboration was meant to unlock.

Your employees will expect you to understand that the push for AI collaboration and impact must be balanced with the risk of cognitive overload, atrophy, and diminished confidence. When people fear that using AI will quietly weaken their judgment, creativity, and value over time, it can dampen their willingness to fully adopt and experiment with AI.

That’s why this month, we’ll surface critical signals to watch for: first, how overreliance on AI can change the way we think and work, and second, how that shift can directly undercut the creativity and innovation leaders are asking for. Recognizing these signals will help you better support your employees, protect their cognitive strengths, and remove hidden barriers to sustainable AI adoption in ways that strengthen, rather than erode, human capabilities.

Signal 1: When AI Becomes the First Thinker Instead of a Thought Partner

Generative AI tools and emerging AI agents now sit at the center of daily workflows. The upside is clear: faster cycles and broader access to ideas. The hidden downside is cognitive offloading at scale: as more thinking and orchestration are handed to systems, people may progressively do less of the analysis, judgment, and sense-making themselves.

In practice, this shows up as employees:

  • Letting AI write follow‑up emails and simply hitting send, instead of pausing to tailor the message to the relationship, history, or risk in the situation
  • Asking AI to “summarize the key points” of a report and reacting only to the summary, without scanning the original data or noticing what the summary left out
  • Dropping vague prompts like “create a go‑to‑market strategy” or “design a change plan” into AI and then lightly editing the output, rather than starting with their own view of the business problem, constraints, and trade‑offs
  • Generating talking points for town halls, manager meetings, or 1x1s, and failing to reflect on what their team uniquely needs to hear or how they might react to the message
  • Relying on AI to propose goals, metrics, or KPIs and tweaking the wording, rather than first deciding what they are actually trying to move and why it matters

These everyday moments train people to let AI tools think first, and for them, instead of treating AI as a partner to actively question, direct, and collaborate with.

What Can HR Leaders Do

  • Name the cognitive risk, not just the productivity upside: In your February communications and leader touchpoints, call out that AI is changing how people think, not just how fast they work. Use clear language such as, “We want AI to take friction out of work, not to take away your opportunity to build judgment, creativity, and problem‑solving and to showcase your expertise.”
  • Set expectations for AI as a thought partner, not the first thinker: Work with leaders to define simple norms like “human first pass, AI second” for key activities like strategy, change plans, performance feedback, or critical customer communication, and translate these into team agreements. Make it explicit that employees are expected to bring their own view, then use AI to expand, test, or challenge it, not to skip the thinking step.
  • Embed “human thinking first” into critical workflows: For high‑leverage processes or projects like business cases or go‑to‑market plans, require a short, human‑written problem statement and rationale before any AI‑generated content is added. Include questions like, “What do you think is happening?” and “What trade‑offs are we facing?” I love this strategy for two reasons. First, it builds mental muscle around problem identification and framing, which are critical skills workers need. Second, it drives metacognition. We use this term commonly in higher education, but it is all about pausing to think about how we think!
  • Teach practical ways to challenge AI outputs: In your AI literacy efforts, offer concrete tactics such as asking AI to argue against its own answer, generate multiple options and justify each, or identify what data or voices might be missing. Encourage teams not to treat AI as an automatic authority, and encourage leaders to engage in these conversations with employees.
  • Recognize and reward discernment, not just visible AI use: In performance conversations and recognition programs, spotlight leaders and teams that show strong judgment and that question AI recommendations, adapt strategies to local context, and involve diverse perspectives. Make discernment and critical thinking explicit parts of your Human + AI leadership competencies, so people see that thoughtful use of AI is valued.

Signal 2: When Leaning on AI Starts to Erode the Creative Edge You Need

Leaders are asking employees to be more curious, identify new AI use cases, and drive innovation, but many workers are quietly leaning on AI in ways that soften the very skills those demands require. This can result in diminishing the ability to notice patterns, connect unexpected dots, and push beyond obvious answers. Over time, genuine Human + AI originality can become harder to find, and employees can lose confidence in their ability to be creative.

In daily reality, this shows up as employees who:

  • Turn to AI immediately when asked for “new ideas” and present lightly edited lists from the tool as innovation, instead of bringing points of view rooted in customer pain points, operational realities, or frontline insights.
  • Ask AI to “create a strategy for AI use cases in our function,” then only adjust the language to fit local context, rather than starting from their own understanding of the business and using AI to pressure-test ideas.
  • Struggle to brainstorm in real time without AI because they’ve grown accustomed to having the tool supply options, and feel exposed or “less smart” when they don’t have it in the room.
  • Default to AI-generated examples from other industries or companies, while underexploring unique internal data, stories, and edge cases that could spark more differentiated solutions.

The result is a subtle but important shift. Employees become very good at curating and polishing AI-generated content, but less practiced at originating and stretching ideas themselves. While the language of innovation remains loud from leaders, the underlying human capabilities to imagine, challenge, and recombine can gradually degrade.

What Leaders Can Do

  • Make “human-led, AI‑supported” innovation the standard: Set the expectation that teams come with their own view of the problem, opportunities, and early ideas before they consult AI. Ask first, “What were you seeing or hearing that led you here?” and then, “How did you use AI to extend or stress-test this?”
  • Redesign innovation practices so AI comes in later, not first: For brainstorms, strategy sessions, or use‑case workshops, start with human-only exploration before introducing AI. Use AI in the second half to expand, combine, or challenge what the group has already generated, so it amplifies human creativity rather than replacing it.
  • Anchor use cases in real problems, not prompts: Ask leaders and teams to frame every AI use case around a specific business, worker, or customer pain point. Require a short, human-written problem statement that includes who is affected, what hurts now, and what better would look like, before any AI-generated solutioning is considered. This will also help develop complex problem framing and solving skills.
  • Build creativity and innovation reps back into work: Create small, recurring practices where teams must generate options without AI for a set period. For example, the first 10 minutes of idea generation or a weekly “no‑AI” problem-solving huddle. Position these as cognitive workouts that keep the organization’s creative edge sharp.
  • Recognize the quality of thinking, not just the volume of AI ideas: In performance and recognition processes, spotlight people who connect unexpected insights, challenge default or AI assumptions, and adapt ideas to real constraints. Make it clear that leaders value discernment, originality, and thoughtfulness over polished AI slop.
  • Wire creativity and critical thinking into how you measure “good” AI use: Update your AI adoption dashboards, pilot reviews, and leadership scorecards so they don’t just track usage and speed, but also look at things like originality of ideas, diversity of inputs, and how often teams adapt or override AI-generated options. When leaders know they’ll be asked how AI use is improving human thinking and innovation, not just productivity, they’re more likely to design work that protects and drives the creative edge you need.

The Briefing: What HR Should Do This Month - Design Work So AI Sharpens, Not Softens, Human Thinking

  1. Set a clear Human + AI thinking stance: Publish a short, practical statement that defines how your organization expects people to think with AI. It might look like, “human judgment first, AI to extend and test”, and integrate it into your AI principles and practices, leadership messages, and manager talking points.  
  2. Identify and protect “thinking-critical” work: With business leaders, name a small set of activities where you most need strong human thinking, such as complex problem framing, ethical trade‑offs, sensitive talent decisions, and early‑stage strategy, and codify guardrails so AI supports these activities without replacing the human work of sense‑making and judgment.
  3. Upgrade AI literacy to include cognitive risks and habits: Evolve AI training beyond how to use the tool to include how AI can weaken or strengthen critical thinking and creativity, practical ways to challenge outputs, and examples of “good” Human + AI thinking in your own organizational context.
  4. Align performance, recognition, and development with deep thinking: Make curiosity, critical thinking, and creativity visible in your competency models and performance criteria. Recognize leaders and teams who demonstrate these behaviors in how they use AI, not just those who automate the most.
  5. Measure cognitive health alongside AI adoption: Add a small set of items to your pulse or engagement surveys to track whether employees feel they still have opportunities to think deeply, learn actively, and challenge AI-driven decisions, and use those insights to adjust AI rollout plans, adapt training, and shape leadership expectations.

One Question for Your Next Leadership Meeting:

"If we observed our teams for a week, would we see AI deepening thinking and creativity or simply enabling faster, more polished, but less thoughtful and impactful work?"

What Dr. Terri’s Reading this Month

Each month, this section highlights an article, report or book that resonated with me and that can help you deepen your conversations about AI adoption and the Human + AI experience in your organization.

HBR: Why AI Boosts Creativity for Some Employees but Not Others

WEF:  The Human Advantage: Stronger Brains in the Age of AI

LinkedIn Learning With Dr. Terri

Want to deepen your understanding of this month’s topic? Check out Dr. Terri’s LinkedIn Learning courses for HR leaders on incorporating AI in HR practices, and her courses for leaders on Responsible AI and Career Conversations in the Age of AI. Her courses have reached more than 400,000 learners around the globe, and several are included in the Microsoft Global AI Skills Initiative and the new Microsoft AI Skills Navigator Platform. Share one course with your HR or leadership teams this month and discuss how to apply a single idea or strategy in your organization.

Dr. Terri’s LinkedIn Learning Courses

Partner with Dr. Terri on Your Human + AI Workforce Strategy

Dr. Terri partners with HR and executive teams to lead the human side of AI transformation, helping organizations shape the culture, leadership capabilities, skills, and behaviors required to adopt AI successfully, responsibly, and at scale.

If you’re leading AI‑related change and want a strategic partner to work through your specific context, challenges, and opportunities, schedule a 60‑minute complimentary discovery call with Dr. Terri and explore how her thought leadership, targeted advising, leadership training, workshops, and keynote presentations can help you build a Human + AI workforce strategy tailored to your organization.

Contact:

📧 Drterri@drterrihorton.com

🌐 www.drterrihorton.com

🔗 LinkedIn: Terri Horton EdD, MBA, MA, SHRM-CP, PHR



