Modal has been named to the 2026 Enterprise Tech 30 list. The ET30 is an annual list by Wing Venture Capital and Eric Newcomer, voted on by 90+ leading investors and corporate development leaders. It recognizes the private companies with the most potential to shape the future of enterprise technology. Thank you to Wing Venture Capital and Eric Newcomer, and congratulations to all the companies honored this year.
Modal
Software Development
New York City, New York · 19,870 followers
AI infrastructure that developers love.
About us
Deploy generative AI models, large-scale batch jobs, job queues, and more on Modal's platform. We help data science and machine learning teams accelerate development, reduce costs, and effortlessly scale workloads across thousands of CPUs and GPUs. Our pay-per-use model ensures you're billed only for actual compute time, down to the CPU cycle. No more wasted resources or idle costs—just efficient, scalable computing power when you need it.
- Website
- https://modal.com
- Industry
- Software Development
- Company size
- 51-200 employees
- Headquarters
- New York City, New York
- Type
- Privately Held
- Specialties
- Serverless GPUs, LLM Inference, LLM Fine-Tuning, Generative Model Inference, Generative Model Training, Computational Biology, Audio Generation, Image Generation, Video Generation, Web Scraping, Batch Jobs, Batch Embeddings, and Scaling Out
Products
Modal
Platform as a Service (PaaS) Software
Modal is a serverless compute platform that makes it easy for developers to run compute-intensive workloads like ML inference, fine-tuning, and batch jobs. Our proprietary Rust-based container stack is best-in-class, allowing you to run any function in the cloud in less than a second, even on the most in-demand GPU types. We autoscale to thousands of GPUs or CPUs for your functions based on request volume so you can always meet customer demand while never paying for idle resources. Modal's Python SDK allows you to define custom images and hardware requirements in code. No more spending time on config files or cloud consoles. Let your team ship innovative AI products—we'll handle the compute.
Locations
- New York City, New York 10038, US (Primary)
- Stockholm, SE
- San Francisco, California 94103, US
Updates
-
Modal reposted this
I joined Modal in June of 2025, in the very early innings of our GTM organization. Fast forward nine months, and we just completed our first SKO: Modal Deploy. A few takeaways from the lead-up and execution of the event:

1. Revenue-minded engineers + product-minded sellers is an extremely powerful combination. Our engineers are incredibly talented presenters who deeply understand our customers' problems. Our GTM team has technical depth and the natural pull to go deeper and learn more. This is a complementary combination where 1 + 1 = 3. Akshat Bubna and I are proactively screening for these qualities; we want our respective orgs to feel like one team.

2. We have a massive opportunity ahead of us, and it's our job to bend the curve to grow even faster. Almost every tailwind is working in our favor: the move to open-source models, the proliferation of agents, the shift from AI experimentation to production. But tailwinds don't close deals. We bend the curve by treating GTM like a product: iterating constantly, sharpening our craft, and getting a little better at every interaction. Sales excellence compounds the same way great software does. That's how we increase the odds of winning, one conversation and one deal at a time.

3. Being a generational company starts with the work we do now. Modal Deploy didn't come together by accident. It came together because people cared enough to plan for something great. That's the standard. Winning companies don't operate like the company they are today; they operate like the company they're becoming. Every meeting, every deck, every customer interaction is a chance to raise the bar. Rigor isn't something you add at scale. It's how you get there.

Proud of every person who made Modal Deploy what it was. Nine months in, I'm more convinced than ever that we're building something special, and we are truly just getting started.
Huge shoutout to Lauren Wang, Greta Workman, Akshat Bubna, Erik Bernhardsson, Jonathon Belotti, Rebecka Storm, Charles Frye, Sierra Wallizer, Christopher Davis, Paul Butler, Richard Gong, Sona Dolasia, Paul Licursi, Zach T., Chris Prinz, Monishee Matin, Margaret Shen, Peyton Walters
-
Modal reposted this
How does Modal stay at the forefront of the open source world? 🌐 We caught up with engineer Ben Shababo at #NVIDIAGTC 👇
-
Social engineering attacks evolve constantly. The team at Doppel, an AI-native platform built to detect and disrupt these attacks, needs to evolve faster. Their ML engineers wrote about moving their training and inference workloads to Modal, and what's changed:
- Parallelized experimentation
- 10x faster image builds
- Lower operational overhead
-
Modal reposted this
We're in a very Chinese time of our lives… Mindshare of open-source LLM providers has shifted dramatically since a year ago. I estimated the proportional share of the most popular LLM model families deployed on Modal by paying users. The numbers corroborate what we've heard anecdotally: Chinese open models are dominating the ecosystem. What's cool is that this data represents users who are self-deploying models, so it's a more accurate snapshot of what's being chosen by businesses and sophisticated AI/ML teams.

A few observations of note 🧐

⭐️ Qwen (Alibaba) has been a major contributor to open models. Their shipping frequency and coverage of LLMs of all sizes help explain their ubiquity. Let's collectively pray that it will be business as usual despite their recent leadership shakeup.

📉 Llama (Meta), unsurprisingly, has fallen off a cliff in popularity. TBH I'm surprised it's represented at all in recent months' data.

📈 GLM (Z.ai) adoption is growing rapidly. This reflects the surge in activity around AI coding use cases.

🎁 DeepSeek's share hasn't expanded, but I expect that to change if DeepSeek v4 is all that it's hyped up to be.

Caveats: these data points are estimates and reflect relative share by number of apps, rather than usage of those apps. They also don't show the long tail of other LLM families being deployed on Modal.
-
Modal reposted this
We launched sandboxes at Modal back in 2023, but it wasn't until the summer of 2025 that everything exploded, with usage coming from:
* Vibecoding apps
* Reinforcement learning
* Background agents

We're now launching about 10M sandboxes every day, with numbers growing extremely fast. What makes me really excited is that I've always seen Modal as a very general-purpose platform, and our goal has always been to build a wide range of tools to cover the breadth of modern AI (and, even more broadly, compute-intensive problems). Inference is our biggest use case, but sandboxes are growing incredibly fast at this point. Over the next few years we hope to launch even more tools and build an even more comprehensive platform!
Over 1 billion sandboxes have been launched on Modal. Since launching three years ago, we've seen Modal Sandboxes become foundational to how AI is being built. Today, teams like Lovable, Ramp, Cognition and more are using Modal Sandboxes to power everything from coding platforms and background agents to RL infrastructure at scale.
-
Deploy Nemotron 3 Super, the latest open model from NVIDIA, today on Modal. Its hybrid architecture can deliver >50% faster token generation compared to the best open models today. We think it'll be used as a higher-throughput alternative to models like gpt-oss-120b. Details:
- Hybrid Mamba-transformer architecture
- 120B parameters with only 12B active
- 1M context window

Get started with our deployment recipe in the comments below.