Ollama has been updated with improved OpenClaw support:
- One-command installation
- Web search for up-to-date information
- Image input support
Get started: ollama launch openclaw
Ollama
Technology, Information and Internet
Palo Alto, California 167,012 followers
Get up and running with AI models.
About us
Get up and running with large language models.
- Website
- https://github.com/ollama/ollama
- Industry
- Technology, Information and Internet
- Company size
- 2-10 employees
- Headquarters
- Palo Alto, California
- Type
- Privately Held
- Founded
- 2023
- Specialties
- ollama
Locations
- Primary: Palo Alto, California 94301, US
Updates
-
Ollama now supports subagents and web search in Claude Code! Subagents can run tasks in parallel, such as file search, code exploration, and research, each in its own context.

Ollama's web search is now also built in. When a model needs current information, Ollama handles the search and returns results directly, with no additional configuration. No MCP servers to configure and no API keys required.

Try it with any model on Ollama's cloud:

ollama launch claude --model minimax-m2.5:cloud

Some models (minimax-m2.5, glm-5, kimi-k2.5) will naturally trigger subagents when needed, but you can force it by telling the model to "use/spawn/create subagents". Example prompts:

> spawn subagents to explore the auth flow, payment integration, and notification system
> audit security issues, find performance bottlenecks, and check accessibility in parallel with subagents
> create subagents to map the database queries, trace the API routes, and catalog error handling patterns

Blog post: https://lnkd.in/gPpysrUB
-
ollama run qwen3.5:cloud

Qwen3.5-397B-A17B is the first open-weight model in the Qwen 3.5 series. It's available on Ollama's cloud right now. Give it a try.

Qwen3.5 features the following enhancements:
1. Unified Vision-Language Foundation: Early fusion training on trillions of multimodal tokens achieves cross-generational parity with Qwen3 and outperforms Qwen3-VL models across reasoning, coding, agents, and visual understanding benchmarks.
2. Efficient Hybrid Architecture: Gated Delta Networks combined with sparse Mixture-of-Experts deliver high-throughput inference with minimal latency and cost overhead.
3. Scalable RL Generalization: Reinforcement learning scaled across million-agent environments with progressively complex task distributions for robust real-world adaptability.
4. Global Linguistic Coverage: Expanded support to 201 languages and dialects, enabling inclusive, worldwide deployment with nuanced cultural and regional understanding.
5. Next-Generation Training Infrastructure: Near-100% multimodal training efficiency compared to text-only training, plus asynchronous RL frameworks supporting massive-scale agent scaffolds and environment orchestration.

Model page on Ollama: https://lnkd.in/g_K6Hd4h
-
❤️ We are partnering with MiniMax to give Ollama users free usage of MiniMax M2.5 for the next couple of days!

ollama run minimax-m2.5:cloud

Use MiniMax M2.5 with OpenCode, Claude Code, Codex, or OpenClaw via ollama launch:

OpenCode: ollama launch opencode --model minimax-m2.5:cloud
Claude Code: ollama launch claude --model minimax-m2.5:cloud

Model page: https://lnkd.in/gHxGWMXQ
-
GLM-5 is on Ollama's cloud! It's free to start, with higher limits available on the paid plans.

ollama run glm-5:cloud

It's fast. You can connect it to Claude Code, Codex, OpenCode, or OpenClaw via ollama launch:

Claude Code: ollama launch claude --model glm-5:cloud
Codex: ollama launch codex --model glm-5:cloud

Model page: https://lnkd.in/gxxTeAw6

Note: Please update to the latest version of Ollama (0.15.6 or later).
-
Wow! In one prompt, Tongyi Lab's Qwen3-Coder-Next 80B generated a fully working Flappy Bird game in HTML.

(0:05) Claude Code with Qwen3-Coder-Next
(0:26) The game running

Run it fully locally: ollama pull qwen3-coder-next
Or use Ollama's cloud if you can't run it locally: ollama pull qwen3-coder-next:cloud

Try launching it with Claude Code using ollama launch.

Link to the game (single prompt, no edits): https://lnkd.in/gKZkNF7M
-
ollama run qwen3-coder-next

Qwen3-Coder-Next is a coding-focused language model from Alibaba's Qwen team, optimized for agentic coding workflows and local development. It has 80B total parameters with 3B active, and it runs on consumer hardware (64GB+ unified memory recommended).

Run it with Claude Code or your favorite tools: ollama launch claude --config

Model page: https://lnkd.in/gQT9yJSt
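Beyond the CLI, a locally running Ollama server also exposes a REST API. The sketch below drives qwen3-coder-next through Ollama's documented /api/generate endpoint; it assumes a default server at localhost:11434 and that the model has already been pulled (the helper names are illustrative, not part of Ollama).

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"

def build_generate_request(model: str, prompt: str) -> dict:
    # Non-streaming request body for Ollama's /api/generate endpoint.
    return {"model": model, "prompt": prompt, "stream": False}

def generate(model: str, prompt: str) -> str:
    # POST the request to a locally running `ollama serve` and
    # return the model's text completion from the "response" field.
    body = json.dumps(build_generate_request(model, prompt)).encode()
    req = urllib.request.Request(
        OLLAMA_URL, data=body, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

With the server running, generate("qwen3-coder-next", "Reverse a string in Python") returns the completion text; the same pattern works for any pulled model tag, including :cloud variants.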
-
GLM-OCR is amazing for document understanding. Use it to recognize text, tables, and figures, or to emit output in a specific JSON format. Drag and drop images into the terminal, script it, or access it via Ollama's API. https://lnkd.in/dGkRZ5uz
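Scripting the API route might look like the sketch below: Ollama's /api/generate accepts base64-encoded images for multimodal models, and "format": "json" constrains the reply to JSON. This assumes a default local server and uses "glm-ocr" as the model tag (inferred from the post; check the model page for the exact tag, and the prompt and helper names are illustrative).

```python
import base64
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"

def build_ocr_request(model: str, prompt: str, image_bytes: bytes) -> dict:
    # /api/generate takes images as a list of base64 strings;
    # "format": "json" asks the model for JSON-only output.
    return {
        "model": model,
        "prompt": prompt,
        "images": [base64.b64encode(image_bytes).decode("ascii")],
        "format": "json",
        "stream": False,
    }

def ocr_document(image_path: str, model: str = "glm-ocr") -> dict:
    # Send one scanned page to a locally running `ollama serve`
    # and parse the JSON the model returns.
    with open(image_path, "rb") as f:
        payload = build_ocr_request(
            model,
            "Extract all text and tables from this page as JSON.",
            f.read(),
        )
    req = urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(json.loads(resp.read())["response"])
```

ocr_document("invoice.png") would then return the recognized content as a Python dict, ready to feed into downstream processing.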