What’s next for AI in 2026?

Looking back at 2025 and ahead to the new year, AI has clearly graduated from being a science topic to something that’s all around us, doing real things in the real world. What began as experimentation with passive, assistive tools is now a partnership with a proactive collaborator, one that amplifies how we work, create, and solve problems. I continue to be amazed at how AI closes meaningful gaps in healthcare, accelerates breakthroughs in research, and transforms how software is built and secured across industries. AI agents are empowering small teams, and even individuals, to achieve big things, and new safeguards are making trust and security foundational to every advance.

In a recent Microsoft blog post titled “What’s Next in AI: 7 Trends to Watch in 2026,” colleagues and I explore what to watch for in the year ahead. Here are a few highlights:

* AI will become central to the research process.
* AI is poised to shrink the world’s health gap.
* AI agents will get new safeguards as they join the workforce.
* AI is learning the language of code, and the context behind it.
* With quantum, the next leap in computing is closer than most people think.

I’m optimistic about and energized by these and other possibilities. The next wave of AI will pose big challenges, but it will also offer opportunities for more intuitive and satisfying collaboration: humans and AI working side by side to realize the potential that was always in us.

https://lnkd.in/gwCMUiyJ
This was a strong read. What really stood out is that 2026 isn’t about more AI tools… it’s about AI becoming a collaborator embedded in real work, with governance and trust built in.

The question I keep coming back to is readiness. If AI is moving this fast, our workplaces and our education systems have to catch up. Tooling isn’t the bottleneck. Fluency, judgment, and learning design are.

As a working parent, this makes me think not just about today’s workforce, but the one we’re preparing next. Who owns that readiness? Employers, schools, or both?
The real shift isn’t “AI is getting better.” It’s that leverage is collapsing. When agents let a single person do what used to require a department, the constraint stops being capital, credentials, or headcount. It becomes judgment.

In 2026, the winners won’t be the ones with the best tools. They’ll be the ones who know:

• what problems are worth solving
• where humans must stay in the loop
• and when to trust machines without outsourcing thinking

AI won’t replace people. But it will brutally expose who was adding leverage vs. hiding behind process.
The optimism around AI agents makes sense. One risk I don’t see discussed as often is that agents amplify existing organizational assumptions — including the wrong ones. In that sense, trust and safeguards aren’t just technical problems; they’re coordination problems. If institutions can’t distinguish sensing from decision authority, more powerful tools may accelerate error before correction. That feels like the frontier to watch alongside capability gains.
This framing feels right. 2025 was experimentation, 2026 is integration. The shift from tools to collaborators is the real inflection point, especially as safeguards, research acceleration, and agent governance mature. When Microsoft talks about humans and AI working side by side, it signals less hype and more responsibility. The opportunity now is not faster tech, but better judgment in how we deploy it.
Really interesting framing of where AI is headed. What stood out to me is how much of this vision depends on domain-fluent humans working with AI, not being replaced by it.

In pharma commercial analytics, AI “agents” will only be as good as their understanding of IQVIA, claims, lab, and formulary data, and the limitations baked into each source. That’s the gap we’re aiming to close with the proposed Pharmaceutical Commercial Data Certificate at BU: teaching people to interpret messy real-world datasets, govern them responsibly, and then pair that expertise with AI so decisions about access, GTN, and promotion are actually trustworthy.

If AI is becoming the digital coworker, pharma still needs people who speak both the language of the data and the business.