🫨 The biggest argument in AI doesn't get enough airtime. Not ChatGPT vs Claude. Something deeper: is language even the right foundation for intelligence?

🗣️ Team LLM: Language is humanity's ultimate abstraction. Every idea, every discovery, every breakthrough, compressed into text. Train on enough of it and something like understanding emerges. The results speak for themselves.

🌎 Team World Model: Language is a map, not the territory. LLMs predict tokens. They've never felt gravity, never watched a glass fall. A child learns object permanence before they learn a single word. Real intelligence is grounded in cause, effect, time and space, not autocomplete.

What if both are right, but about different things? LLMs cracked the alignment interface: the translation layer between humans and machines. World models crack the substrate: the actual architecture of reasoning.

Maybe ASI isn't LLMs or world models. Maybe it's what happens when a world model learns to speak. 🤔
The medium is the message. A different medium would be a very interesting paradigm shift.
Fundsec Pty Ltd:
From my perspective, language models are only a partial world model. A true world model would unify language, vision, and voice as different modalities of the same underlying representation, while also capturing the physical structure of reality.