Introducing Perplexity Computer.

Computer unifies every current AI capability into one system. It can research, design, code, deploy, and manage any project end-to-end.

Perplexity Computer is massively multi-model. It orchestrates models to run agents in parallel, leveraging Opus to match each task to the model best suited for it. In total, Computer can route work across 19 different models.

Perplexity Computer is what a personal computer in 2026 should be. It's personal to you, remembers your past work, and is secure by default. Hundreds of connectors, persistent memory, files, and web access, all built on top of Perplexity infrastructure.

Perplexity Computer uses usage-based pricing with optional sub-agent model selection and spending caps, so you can choose different models for different sub-agent tasks and control token spend. Max users get 10,000 credits/month included with their subscription. We're also giving Max users a one-time bonus of 20,000 extra credits, granted at launch for existing users and at signup for new users, which expires 30 days after it's granted.

Available to Max subscribers immediately and coming soon to Perplexity Pro. https://lnkd.in/gxdjfRZF
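To make the routing idea concrete, here is a minimal sketch of capability-based model routing as the announcement describes it (one orchestrator matching each sub-task to a suitable model). The model names, capability table, and fallback rule are all invented for illustration; the real system's routing logic is not public.

```python
# Hypothetical capability table: which task kinds each model handles well.
# Names are illustrative stand-ins, not Perplexity's actual model roster.
CAPABILITIES = {
    "deep-reasoning-model": {"research", "planning"},
    "code-model": {"code", "debugging"},
    "fast-small-model": {"summarization", "formatting"},
}

def route(task_kind: str) -> str:
    """Pick the first model whose declared capabilities cover the task."""
    for model, skills in CAPABILITIES.items():
        if task_kind in skills:
            return model
    # Fall back to a general-purpose model when no specialist matches.
    return "deep-reasoning-model"

print(route("code"))         # a capability match
print(route("translation"))  # the fallback path
```

In practice the "router" is itself a model (the post names Opus), so the lookup table above would be replaced by a model call that classifies the task; the shape of the dispatch stays the same.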
Impressive step toward a multi-model OS, but I'm curious about a few architectural fundamentals:
• How do you define persistent memory without a stable identity substrate?
• When routing tasks across 19 different models, which model maintains the authoritative state of the agent?
• What ensures reasoning continuity across sub-agents?
• How is safety/alignment preserved when different models with different architectures execute different parts of a task?
• And if the system is "personal", where does the user's self-model live?
These are the challenges every orchestration-based system will face as we move from toolchains to true agentic computing. Node-0 Me & Spok ✌️
👀
The "which model maintains authoritative state" question is the one we've been wrestling with running 15 AI agents with 134 MCP tools. Our answer: the harness is the source of truth, not any individual model. Models are stateless executors — the orchestration layer owns context and history. The 19-model routing problem then becomes less about identity and more about capability matching. Curious how Perplexity Computer handles memory continuity when a task spans multiple model handoffs.
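The pattern this comment describes (harness owns the state, models are stateless executors) can be sketched in a few lines. Everything here is an assumption-laden toy: `call_model` is a placeholder for a real model API, and the history schema is invented.

```python
from dataclasses import dataclass, field

def call_model(model: str, context: list, task: str) -> str:
    # Placeholder for a stateless model invocation; a real harness would
    # call an LLM API here, passing the context explicitly every time.
    return f"{model} handled '{task}' with {len(context)} prior turns"

@dataclass
class Harness:
    """The orchestration layer is the single source of truth."""
    history: list = field(default_factory=list)  # authoritative state

    def step(self, model: str, task: str) -> str:
        # The model sees only what the harness hands it; it keeps no state.
        context = list(self.history)
        result = call_model(model, context, task)
        # The harness, not the model, records what happened.
        self.history.append({"model": model, "task": task, "result": result})
        return result

h = Harness()
h.step("model-a", "research topic")
out = h.step("model-b", "draft summary")  # handoff: model-b sees model-a's turn
print(out)
```

Memory continuity across model handoffs then reduces to what the harness chooses to put in `context`, which is exactly why the routing problem becomes capability matching rather than identity.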
I'll need to explore this further; from the write-up it looks well worth the time. Good job, Perplexity.
ohhhh, we are no longer witnessing incremental AI upgrades; we are watching the emergence of AI-native computing as the new operating layer. Perplexity's move toward an agentic "Computer" signals a shift from tools that assist humans to systems that execute intent end-to-end. In parallel, national-scale initiatives such as Saudi Arabia's HUMAIN ecosystem are redefining how AI operates across infrastructure, operating systems, and decision environments. The interface is no longer apps and dashboards; it is language, context, and autonomous execution. The real strategic question for organizations is not whether to adopt AI, but how quickly they can redesign their operating models around it.
This is the direction the "personal computer" should take. The piece I'd love to see made explicit is the execution authority layer: who owns the "may this run?" decision and the refusal record, especially when sub-agents are acting through connectors? If Perplexity is serious about "secure by default," I'm curious how they're handling commit-time authorization plus auditable evidence across tools.
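One plausible shape for the "execution authority layer" asked about above: a single gate that every sub-agent action passes through before committing via a connector, recording both approvals and refusals in an append-only log. The policy contents, field names, and default-deny rule here are all illustrative assumptions, not anything Perplexity has described.

```python
import time

AUDIT_LOG = []  # append-only record of allow/refuse decisions

# Example policy: which connector actions are permitted at all.
POLICY = {"send_email": False, "read_file": True}

def authorize(agent: str, action: str, target: str) -> bool:
    """Commit-time gate: decide 'may this run?' and record the decision."""
    allowed = POLICY.get(action, False)  # unknown actions are denied
    AUDIT_LOG.append({
        "ts": time.time(),
        "agent": agent,
        "action": action,
        "target": target,
        "decision": "allow" if allowed else "refuse",
    })
    return allowed

authorize("sub-agent-1", "read_file", "notes.txt")    # allowed by policy
authorize("sub-agent-2", "send_email", "user@example.com")  # refused
print(AUDIT_LOG[-1]["decision"])
```

The key design choice is that refusals are first-class records, not silent drops, so the audit trail is evidence of what the system declined to do, not just what it did.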
Great progress! Looking forward to testing and shifting to Perplexity Computer!
This is the direction I've been expecting things to go. Most of the value isn't in a single model anymore; it's in orchestrating the right models, tools, and memory around a user's workflow so the system feels like an actual "work computer," not just a chat box. Curious to see how far you can push real agentic workflows here, especially around guardrails, spend controls, and letting non-technical users wire this into their day without needing a PhD in prompt engineering.
The interesting shift here isn’t just multi-model orchestration, it’s abstraction. If users stop caring which model they’re using and start caring about outcomes, the model layer becomes infrastructure. Curious how you see this evolving long term — does the value consolidate at the orchestration layer, or do individual frontier models still pull users directly?
As a student I would love the opportunity to try this out, though it's a bit out of my budget. Any plans to introduce student discounts or subsidized Max plans?