Issue 001 · January 1, 2026
What Changes for People When AI Changes Work in 2026
Welcome to the inaugural issue of The Human + AI Workforce Brief. This newsletter launches at a pivotal moment, as more organizations accelerate the move from piloting AI to scaling it across the enterprise and into everyday work. As an advisor, corporate trainer, speaker, and author, I care deeply about helping leaders navigate the tension between AI-driven change and the deeply human side of adoption and implementation. In that space, questions of psychological safety, identity, and meaning at work are front and center, and if left unaddressed they can undermine performance and AI adoption.
Each issue of The Human + AI Workforce Brief is a practical note from me to you: a short opening to ground what’s happening now, two key signals that spotlight how AI is really impacting people, and a clear set of moves you can make that month. Each brief closes with a question to take into your next leadership conversation, a few things I’m reading, and ways to deepen learning and practice with your teams.
In 2026, the acceleration of AI transformation gets real. The goal of this newsletter is to help leaders unpack these tensions, develop strategies to mitigate them, and bridge the gaps between AI and human potential, so that people remain the greatest advantage in human + AI collaboration.
Signal 1: Identity and Meaning Under Pressure, and Its Impact on Employee Experience and Culture
When AI enters a workflow, most employees ask, “What does this mean for me?” and “Who am I if I no longer do this part of my job?” Workers worry about becoming interchangeable, losing status, having years of expertise reduced to a prompt, or being flat-out replaced. When these identity questions stay underground, resistance, silent non-use of AI, and disengagement emerge. The problem is that these feelings and perceptions can hinder learning, innovation, and adoption.
What Can HR Leaders Do:
- Name the reality: In your January communications, explicitly acknowledge that AI changes how people see their roles, not just their tasks. Use language like “We know AI raises questions about your work, your growth, and your future here.”
- Map meaning-critical roles: Identify roles where identity is tightly tied to craft and expertise and prioritize these groups for deeper dialogue and support as AI is introduced.
- Reframe contribution: Work with leaders to articulate how human strengths like judgment, empathy, creativity, and ethical reasoning are becoming more central, and give concrete examples tied to real jobs in your organization. This can feel abstract, so use role play where helpful.
Signal 2: Psychological Safety in Human + AI Teams, a New Priority for Leaders
In an AI-augmented workplace, psychological safety shows up in what I call the small moments: when people feel they can say, “I don’t trust this output,” or “I tried this tool and it didn’t work the way I expected.” When that safety is missing, employees may engage in shallow adoption, use AI in ways they don’t fully understand, or stay silent about errors and risks that could impact customers, people, and performance. Leaders often underestimate how much of their own modeling and explicit invitation it takes to make these conversations feel natural and safe; embracing vulnerability and authenticity is a critical way to elevate them.
Watch for warning signs:
- Teams using AI tools but never challenging outputs or raising ethical concerns in meetings
- Jokes or side comments about “being replaced” that are laughed off but never addressed
- Employees experimenting with AI only in private, without sharing what they learn or where they struggle
- Lack of curiosity or experimentation with AI in the flow of real work. For example, teams never exploring role-specific use cases, only using default prompts, or treating AI as a checkbox rather than a tool to rethink how they create value
What HR Leaders Can Do:
- Equip managers with language: Provide managers a short guide with phrases they can use, such as “Let’s walk through where we trust this tool and where we still need human judgment,” or “It’s okay to say this AI suggestion doesn’t feel right, tell me why.”
- Model fallibility at the top: Encourage senior leaders to share one story where an AI output was wrong or incomplete and how human expertise corrected it, so people see that human judgment and discernment are valued, not punished.
- Create AI listening forums: Set up small, recurring sessions or ERG‑hosted discussions where employees can safely share experiences with AI, voice fears about their roles and identity, and explore how AI can make their work more meaningful, not just faster. These forums are also critical for de‑risking worker identity, so people are seen as more than the tasks that AI can augment or replace, and for centering human value in an AI‑driven workplace.
The Briefing: What HR Leaders Should Do This Month to Humanize the AI Transition
- Center identity and meaning in AI change: In January communications, explicitly acknowledge that AI is reshaping how people see their roles, not just their tasks, and use clear language that validates questions about work, growth, and the future.
- Prioritize meaning-critical roles: Identify roles where identity is tightly tied to craft and expertise, and focus early dialogue, support, and work redesign efforts on these groups as AI is introduced.
- Make human strengths explicit: Partner with leaders to name and illustrate how judgment, empathy, creativity, and ethical reasoning are becoming increasingly central to specific jobs, using concrete examples rather than abstract value language.
- Put a psychological safety lens on AI pilots: For every AI initiative, define human metrics such as perceived control, clarity of expectations, and comfort raising concerns, and build “how this impacts people” into pilot plans and readouts.
- Normalize honest conversations about AI: Equip managers with simple phrases to invite the challenge of AI outputs, encourage senior leaders to share where human expertise corrected AI, and create AI listening forums or ERG‑hosted sessions where employees can safely surface fears, risks, and ideas for more meaningful human + AI collaboration.
- Create a simple sensing mechanism: Use quick pulse checks, manager debriefs, and ERG feedback summaries to track how identity, meaning, and psychological safety are shifting as AI rolls out, and feed those insights back into your AI and people strategies.
One Question for Your Next Leadership Meeting:
What signals are we using right now to know whether AI is increasing or eroding psychological safety, curiosity, and a sense of future for our employees?
What Dr. Terri Is Reading This Month
Each month, this section highlights an article, report, or book that can help you deepen the conversation on human experience, psychological safety, identity, and the human + AI workforce.
LinkedIn Learning With Dr. Terri
Want to deepen your understanding of this month’s topic? Check out Dr. Terri’s LinkedIn Learning courses for HR leaders on incorporating AI in HR practices, GenAI in Learning and Development, Agentic AI for HR Teams, Responsible AI for Managers, and A Manager’s Guide to Career Conversations in the Age of AI. Share one course with your HR or leadership team and discuss how to apply a single idea or strategy in your organization this month.
Partner with Dr. Terri on Your Human + AI Workforce Strategy
Dr. Terri partners with HR and executive teams to navigate AI adoption in ways that strengthen psychological safety, protect worker identity, and accelerate responsible, human‑centered use of AI that drives adoption and impact. If you’re leading AI‑related change and want a strategic thought partner to work through your specific context, challenges, and opportunities, schedule a 60‑minute complimentary discovery call with Dr. Terri. In this session, you’ll map your current state, pinpoint your highest‑impact people risks and opportunities, and explore how her targeted advising, leadership training, workshops, and keynote presentations can help you build a Human + AI workforce strategy tailored to your organization.
Contact:
🔗 LinkedIn: Terri Horton EdD, MBA, MA, SHRM-CP, PHR