Possible | Andrew Bosworth on AI, wearables, and mixed reality
Is augmented reality (AR) the next step in giving humanity superpowers? Maybe the better question is: why wouldn't it be?
Since the earliest days of AR—dating back to the 1960s with Ivan Sutherland’s “Sword of Damocles” headset (which Sutherland preferred to call "Stereoscopic-Television Apparatus for Individual Use")—we’ve imagined overlaying the digital atop the physical. But for decades, AR lived in the realm of science fiction, research labs, or marketing demos. Now, we’re finally on the verge of realizing what it actually means for billions of people to wear intelligence—not in their pocket, or on their desktop, but in their field of vision.
What does this look like in the real world? Firefighters are already experimenting with AR helmets that can highlight exits through smoke and map building layouts in real time. Surgeons are experimenting with ways to use AR overlays to visualize veins and organs during operations. In warehouses and factories, AR reduces cognitive load by displaying step-by-step instructions or flagging safety hazards instantly. This is superagency at work: human decisions, augmented and elevated by machine cognition.
What really excites me is where this goes for everyday life. For consumers. For people who’ve never written a line of code or thought about Moore’s Law.
I was traveling recently and more than once wished real-time translation were available on any of my devices. That alone will change how people experience travel, encounter new cultures, and connect with each other. Or imagine walking through a grocery store and having your glasses flag ingredients that align with your health needs or budget. Imagine being neurodivergent or visually impaired, and having AR provide navigational guidance, social cues, or enhanced contrast so you can thrive on your own terms.
These scenarios are no longer science fiction. Meta’s Orion Project is one path. Apple Vision Pro, Magic Leap, and Snap’s Spectacles are others. Each is trying to answer the same core question: how do we bring intelligence into the world, instead of trapping it behind a computer screen?
In Superagency, I write that the best tools don’t replace human agency—they amplify it. They don’t narrow our options—they expand them. AR is one of the most powerful tools we’ve ever developed for doing exactly that.
So, yes, AR + AI is a superpower. But only if we deploy it with intention. Iteratively. Inclusively. And with the techno-humanist compass in hand.
Because the question isn’t: Can we do this? It’s: What kind of world do we want to see—and who gets to see it?
Audio episode ft. Meta CTO and Head of Reality Labs Andrew Bosworth: https://link.chtbl.com/zBp2jz5u
YouTube: https://youtu.be/41koPU6fRT4
Transcript: https://www.possible.fm/podcasts/boz/
You can subscribe to catch more episodes of Possible here: https://www.possible.fm/podcast/