Andy Hock will join an exceptional group of leaders to discuss the next frontier of AI: Physical AI – From Virtual to Real. The next frontier of AI isn't online; it's in motion.

Andy will join:
• Raymond Lo 👓🤖 Liao, VP & Managing Director, Ventures at Samsung Electronics NEXT
• Carla Gómez Cano, Co-Founder of THEKER Robotics
• Julie Linn Teigland, Global Vice Chair – Alliances & Ecosystems at EY
• Moderated by Pep Viladomat, CEO of Northbeam

📅 Wednesday, March 4
🕛 12:10–12:50 CET
📍 Banco Sabadell Stage, Hall 8.0 – 4YFN & Partner Theatres
About us
Cerebras Systems delivers the world's fastest AI inference. We are powering the future of generative AI.

We're a team of pioneering computer architects, deep learning researchers, and engineers building a new class of AI supercomputers from the ground up. Our flagship system, Cerebras CS-3, is powered by the Wafer Scale Engine 3, the world's largest and fastest AI processor. CS-3s are effortlessly clustered to create the largest AI supercomputers on Earth, while abstracting away the complexity of traditional distributed computing. From sub-second inference speeds to breakthrough training performance, Cerebras makes it easier to build and deploy state-of-the-art AI, from proprietary enterprise models to open-source projects downloaded millions of times.

Here's what makes our platform different:
🔦 Sub-second reasoning – Instant intelligence and real-time responsiveness, even at massive scale
⚡ Blazing-fast inference – Up to 100x performance gains over traditional AI infrastructure
🧠 Agentic AI in action – Models that can plan, act, and adapt autonomously
🌍 Scalable infrastructure – Built to move from prototype to global deployment without friction

Cerebras solutions are available in the Cerebras Cloud or on-prem, serving leading enterprises, research labs, and government agencies worldwide.

👉 Learn more: www.cerebras.ai
Join us: https://cerebras.net/careers/
- Website: http://www.cerebras.ai
- Industry: Semiconductor Manufacturing
- Company size: 501-1,000 employees
- Headquarters: Sunnyvale, California
- Type: Privately Held
- Founded: 2015
- Specialties: artificial intelligence, deep learning, natural language processing, inference, machine learning, llm, AI, enterprise AI, and fast inference
Locations
- Primary: 1237 E Arques Ave, Sunnyvale, California 94085, US
- 150 King St W, Toronto, Ontario M5H 1J9, CA
- Tokyo, JP
- Bangalore, IN
Updates
💡 Partner Spotlight: AWS Marketplace

Cerebras brings the fastest inference. Amazon Web Services (AWS) Marketplace brings the simplest way to buy and deploy it. Together, we enable teams to start building faster inside the AWS environment they already use.

What this unlocks:
🟧 Access Cerebras Inference Cloud through AWS → deploy faster inside your existing environment
🟧 Consolidated AI spend → roll inference into your current AWS billing
🟧 Use EDP and committed spend → maximize existing cloud investments
🟧 Simplified procurement → streamline purchasing, billing, and internal approvals
🟧 Monthly marketplace billing → track and manage AI costs alongside AWS services

👉 Get started: https://lnkd.in/gAveM6yi
10 academic papers. Parsed, analyzed, and synthesized. In under 10 seconds. We built a research agent with Cerebras Inference and Unstructured that processes entire literature reviews so you become a subject expert faster. Try it here: https://lnkd.in/dRj_gH6t
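For readers who want to try the same pattern themselves, here is a minimal sketch (not the agent's actual implementation): it assumes the open-source unstructured library for parsing, the openai client pointed at Cerebras's OpenAI-compatible inference endpoint, a CEREBRAS_API_KEY environment variable, and an illustrative model name.

```python
# Minimal sketch: parse papers with Unstructured, then synthesize with Cerebras Inference.
# Assumptions (not from the post above): the `unstructured` and `openai` packages,
# Cerebras's OpenAI-compatible endpoint, and the illustrative model name below.
import os

from openai import OpenAI
from unstructured.partition.auto import partition

client = OpenAI(
    base_url="https://api.cerebras.ai/v1",  # OpenAI-compatible Cerebras Inference endpoint
    api_key=os.environ["CEREBRAS_API_KEY"],
)

def summarize_papers(paths: list[str]) -> str:
    # Extract plain text from each paper (PDF, HTML, etc.) with Unstructured.
    texts = []
    for path in paths:
        elements = partition(filename=path)
        texts.append("\n".join(el.text for el in elements if el.text))

    # Ask the model to synthesize a short literature review across all papers.
    response = client.chat.completions.create(
        model="llama-3.3-70b",  # illustrative model name
        messages=[
            {"role": "system", "content": "You write concise literature reviews."},
            {"role": "user", "content": "Synthesize the key findings:\n\n" + "\n\n---\n\n".join(texts)},
        ],
    )
    return response.choices[0].message.content

print(summarize_papers(["paper1.pdf", "paper2.pdf"]))
```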
Cerebras reposted this
Love the smell of power plant construction in the morning. It’s the smell of AI. Little is better than the sight of giant gas turbines being installed. A 300MW power plant under construction. Hundreds of jobs for the local community. And 100MW of power for a new data center.
Introducing ExomeBench, a benchmark for evaluating genomic models on disease-associated genetic variants, built in collaboration with Mayo Clinic.

Most genomics benchmarks focus on general sequence modeling tasks. What remains under-evaluated is whether these models can answer clinically meaningful questions about genetic variants. That's why we built ExomeBench and are releasing it to the public: a benchmark for evaluating health-relevant variant interpretation in exome regions.

ExomeBench includes:
🧬 A dataset with 158k+ single-nucleotide variants across five clinically relevant tasks with predefined train/validation/test splits
📖 Open-sourced code to reproduce dataset construction and to run benchmark evaluations

Both dataset and code are made available so the community can evaluate and iterate faster.

Explore ExomeBench and start testing your models:
GitHub: https://lnkd.in/erbzE9vd
Hugging Face (dataset): https://lnkd.in/ecfiJ8Ya

Important note on intended use: ExomeBench is a research benchmark. It is not a diagnostic tool.
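As a starting point for running your own evaluation, here is a minimal loading sketch using the Hugging Face datasets library; the repository ID, configuration name, and column names are placeholders (the post links only a shortened URL), so substitute the identifiers from the ExomeBench GitHub and Hugging Face pages.

```python
# Minimal sketch: load an ExomeBench task and score a model on the predefined splits.
# The repo ID, config name, label column, and model interface are all placeholders.
from datasets import load_dataset

ds = load_dataset("your-org/ExomeBench", name="your-task")  # placeholder identifiers
train, val, test = ds["train"], ds["validation"], ds["test"]  # predefined splits per the post

def accuracy(model, split):
    # `model.predict` is a placeholder for however your model consumes a variant record.
    correct = sum(int(model.predict(ex) == ex["label"]) for ex in split)  # placeholder label column
    return correct / len(split)

print(f"{len(train)} train / {len(val)} validation / {len(test)} test examples")
```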
Cerebras is proud to partner with G42 and MBZUAI (Mohamed bin Zayed University of Artificial Intelligence) to deliver a national-scale AI supercomputer in India with 8 exaflops of compute capacity. This cluster is designed to support researchers, startups, enterprises, and government entities, and will serve as a foundational asset under the India AI Mission, accelerating AI innovation tailored to India’s needs. It will operate under India-defined governance frameworks, ensuring full data sovereignty, security, and compliance. Thank you to G42, MBZUAI (Mohamed bin Zayed University of Artificial Intelligence) and CDACINDIA for the strong collaboration!
Cerebras reposted this
In New Delhi, India takes a major step forward in strengthening sovereign AI infrastructure and expanding domestic compute capacity for advanced AI development with the deployment of an 8 exaflop national-scale AI supercomputer. Delivered by G42 in partnership with MBZUAI (Mohamed bin Zayed University of Artificial Intelligence) and Cerebras, it will be the largest AI supercomputer in India, operating under India-defined governance frameworks, ensuring full data sovereignty, security, and compliance. Designed to support researchers, startups, enterprises, and government entities, it will serve as a foundational asset under the India AI Mission, accelerating AI innovation tailored to India’s needs. At G42, we are committed to supporting nations in building, deploying, and scaling AI within their own borders, responsibly, securely, and at scale, enabling the development of AI-Native Societies. Read more: https://lnkd.in/dn24binY #G42 #G42ai #ResponsibleAI #IndiaAIImpactSummit2026 #AI #Technology #GlobalSouth #ExponentialAI
💡 Partner Spotlight: Vercel

Vercel gives developers a production-ready way to build and ship AI-native web apps. Cerebras brings the fastest inference with industry-leading intelligence.

What this unlocks:
🟧 Call Cerebras models via the Vercel AI SDK → launch faster with minimal code changes
🟧 Instant personalization → increase engagement with immediate user experiences
🟧 Conversational copilots in web apps → deliver intelligent, always-on UX
🟧 Streaming generation at the edge → reduce latency for global users
🟧 Enterprise access via Vercel or Cerebras → scale without re-architecting

Cerebras powers compute. Vercel powers developer experience and delivery.

Learn how to integrate Cerebras's ultra-fast inference with the Vercel AI SDK for building AI-powered applications with streaming responses and structured outputs: https://lnkd.in/eYcXV-Qq
𝗪𝗵𝘆 𝘀𝗽𝗲𝗲𝗱 𝘄𝗶𝗻𝘀: 𝗳𝗮𝘀𝘁𝗲𝗿 𝗶𝗻𝗳𝗲𝗿𝗲𝗻𝗰𝗲 𝗶𝘀 𝗮𝗯𝗼𝘂𝘁 𝗺𝗼𝗿𝗲 𝘁𝗵𝗮𝗻 𝗾𝘂𝗶𝗰𝗸𝗲𝗿 𝗮𝗻𝘀𝘄𝗲𝗿𝘀—𝗶𝘁’𝘀 𝘁𝗵𝗲 𝗻𝗲𝘄 𝗽𝗮𝘁𝗵 𝘁𝗼 𝗮𝗰𝗰𝘂𝗿𝗮𝗰𝘆.

Watching the Winter Olympic Games in Milano-Cortina these past two weeks reminds us that elite performance is multi-dimensional. In biathlon, for example, you can ski fast, but you don't win without shooting accuracy, and going faster gives you margin to settle before you shoot.

Inference today is similar. State-of-the-art models are reasoning models. They deliver better accuracy through inference-time compute (planning, decomposition, tool calls, verification, and iteration) and now represent the majority of inference tokens. Faster inference creates headroom to deliver higher accuracy through more reasoning within the same latency budget, to minimize end-user latency, or to do a hybrid of both.

And much like in sports, the incumbent you may have 𝘦𝘹𝘱𝘦𝘤𝘵𝘦𝘥 to win gold doesn't stand atop the podium forever.

Read more: https://lnkd.in/gM85UUpF
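To make the headroom argument concrete, here is a tiny illustration with hypothetical numbers (the throughputs and latency budget are made up, not benchmark results): at a fixed response-time budget, a faster engine can spend far more tokens on reasoning before it answers.

```python
# Hypothetical numbers only: how many reasoning tokens fit inside a fixed latency budget?
latency_budget_s = 3.0  # time the user is willing to wait for an answer
throughputs_tok_per_s = {"slower inference stack": 100, "faster inference stack": 2000}

for name, tps in throughputs_tok_per_s.items():
    reasoning_tokens = int(latency_budget_s * tps)
    print(f"{name}: ~{reasoning_tokens} reasoning tokens within {latency_budget_s:.0f}s")

# slower inference stack: ~300 reasoning tokens within 3s
# faster inference stack: ~6000 reasoning tokens within 3s
```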