Smallest.ai is an AI research lab pioneering the future of compact, powerful models. We build low-latency, high-accuracy STT, TTS, S2S, and SLM models that power Voice and Multi-Modal AI applications across 100+ industries. Our platform runs with enterprise-grade security, supports on-prem and private cloud deployments, and is fully SOC 2, GDPR, HIPAA, and PCI compliant, making it suitable for regulated and high-trust environments.
San Francisco, California, US
smallest.ai reposted this
🚨 Bengaluru and SF-based smallest.ai has launched their SoTA TTS model, Lightning V3. It almost sounds human!
51% of people have abandoned a business entirely because of how the AI voice sounded.

Lightning v3 covers 15 languages (71% of the global population) and outperforms OpenAI on naturalness 76% of the time. Let that sink in.

The entire voice industry has been solving the wrong problem - making voices that read text well instead of voices that can hold a conversation. Those are two completely different things.

Reading text is clean. Predictable. Easy to benchmark. Conversation is messy. It has rhythm, hesitation, breath. Your pacing changes when you're thinking.

Most TTS models fall apart the moment you put them in a real back-and-forth. They sound great in a scripted demo and robotic on a live call.

We built Lightning v3 from scratch for the hard version of this problem. It sounds like it's thinking. It switches between languages mid-sentence the way a real bilingual person does. It clones your voice from a 5-second clip across all 15 languages.

Want to try it? Link is in the comments.
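For a sense of how a developer might drive a cloning-capable, multilingual TTS model like this, here is a minimal sketch of assembling a synthesis request. The field names (`voice_clone`, `sample_url`, `min_seconds`) are illustrative assumptions, not smallest.ai's actual API schema:

```python
# Hypothetical request payload for a multilingual, cloning-capable TTS API.
# Field names are assumptions for illustration, NOT the real smallest.ai schema.

def build_tts_request(text: str, language: str, reference_clip: str = "") -> dict:
    """Assemble a synthesis request; a reference clip enables voice cloning."""
    payload = {"text": text, "language": language}
    if reference_clip:
        # Cloning from a short sample: the same reference voice can then be
        # reused across every supported language.
        payload["voice_clone"] = {"sample_url": reference_clip, "min_seconds": 5}
    return payload

# Code-switched text plus a 5-second reference clip (path is hypothetical).
req = build_tts_request("Hola, how are you doing today?", "es", "clips/me.wav")
```

The point of the shape: cloning is just an optional reference attached to an ordinary synthesis request, so the same call path serves both stock voices and cloned ones.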
If you are in SF, this is the place to be on Saturday!
This Saturday, smallest.ai and Imagine AI (YC F25) are hosting Golden Hour Co-Working for early-stage founders in SF building AI B2B SaaS, at the Corgi cafe with support from the Sprinto team!

This is for founders who want to co-build on a Saturday - an afternoon of focused work, Chinese pastries, and unlimited coffee, alongside the best people. We will be transforming the cafe into an elegant space with natural light and a warm golden-hour atmosphere.

It will be a good time, with even better people. Space is limited - link below to register! 🫶

#AIStartups #B2BSaaS #SanFrancisco #Founders #Coworking
Last weekend was our first conference appearance in SF, at the AI+ Renaissance Conference as the title sponsor. Sudarshan took the stage at the Voice AI panel, and we launched Hydra – our Async Thinking Multimodal LLM – live in front of the room.

This is the statement we opened with: “We are not close to passing the Turing test in voice. Not even for a single speaker, in a single language, in a single use case. And that's exactly the problem we're here to solve.”

The gap between AI voice agents and human conversation isn't subtle. Today's agents listen, then think, then respond. Humans do something fundamentally different – they think while listening, act while listening, and respond with contextual emotion. That's not a feature gap. That's an architectural gap. And offline LLMs can't be retrofitted to close it.

That's the conviction behind everything we build at smallest.ai. Small, real-time models – built from the ground up for async inference, partial context, and sub-500ms multimodal response – are the path to human-level voice intelligence. Not bigger models. Faster ones.

Hydra is our step in that direction: an async thinking Speech-to-Speech model that listens and reasons in parallel, with ~50ms latency. Paired with our Lightning TTS, Lightning ASR, and Electron SLM (which outperforms GPT-4.1 on real-time conversational tasks), the full stack is finally coming together.

A massive thank you to Joshua and Lynn for building AI Plus into the kind of event where everyone can have meaningful conversations and learn from those around them. And to Sky9 Capital and Topify.ai for co-organising the afterparty with us – 300+ signups speak for themselves. That kind of momentum doesn't happen without people who care about the ecosystem as much as the technology.

We're just getting started. The question we left the room with: attention is all you need – but attention on what?
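The "think while listening" idea above can be made concrete with a toy concurrency sketch (this is illustrative only, not Hydra's actual architecture): one task streams transcript chunks in while a second task keeps re-reasoning over whatever partial context has arrived, instead of waiting for the full utterance before starting to think.

```python
import asyncio

# Toy sketch of "thinking while listening": a listener task streams chunks
# into a shared buffer while a thinker task repeatedly drafts a response
# from the partial context. Illustrative only, not Hydra's real design.

async def listen(chunks: list, buffer: list) -> None:
    """Simulate streaming ASR: transcript chunks arrive one at a time."""
    for chunk in chunks:
        await asyncio.sleep(0.01)  # simulated audio arrival latency
        buffer.append(chunk)

async def think(buffer: list, done: asyncio.Event) -> list:
    """Re-reason over the partial context every few milliseconds."""
    drafts = []
    while not done.is_set():
        if buffer:
            drafts.append(" ".join(buffer))  # draft from partial context
        await asyncio.sleep(0.005)
    drafts.append(" ".join(buffer))  # final pass over the complete utterance
    return drafts

async def main() -> list:
    buffer, done = [], asyncio.Event()
    listener = asyncio.create_task(listen(["book", "a", "table"], buffer))
    thinker = asyncio.create_task(think(buffer, done))
    await listener  # all audio has arrived
    done.set()      # signal end of turn to the thinker
    return await thinker

drafts = asyncio.run(main())
# drafts holds intermediate hypotheses made mid-utterance;
# the last one covers the full context: "book a table"
```

Because the thinker already holds a near-complete draft when the speaker stops, the response latency after end-of-turn shrinks to roughly one refinement step rather than a full listen-then-think-then-respond cycle.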
We made a home for voice projects being built with Smallest AI.

Developers have been building incredible things with our APIs. So we built showcase.smallest.ai - a curated gallery where you can explore, learn, and share what's possible with voice AI.

→ Live demos you can try right now
→ Open-source code you can fork
→ 35+ real projects across STT, TTS, and voice agents

If you've built something with Smallest AI, this is your stage. If you haven't, this is your starting point.

Do it right now. And submit your project. Link is in the comments.
Voice AI is the new mobile. And nobody's building fast enough.

We had the same conversation about mobile in 2009. Everyone could see it coming. But the people who actually won weren't the ones who talked about it - they were the ones who shipped before it was obvious.

We're at that exact moment with voice. Right now.

So this Saturday, we're bringing together the best builders in SF for one day to just build. It's a focused, high-intensity sprint dedicated entirely to voice AI, conversational AI, and agents.

And yeah - we're giving away Meta Ray-Ban Gen 2s to the winner. We are partnering with Emergent, Entelligence.AI, Vallo, Sky9 Capital, and Topify.ai.

You'll have credits, APIs, and people in the room who are genuinely invested in what you build. If you're an engineer, founder, or student who's been wanting to go deep on voice - this is your day.

March 14. San Francisco. 11AM-6PM. Registration link in the comments.
We're hosting Voice HackSprint 2.0 in San Francisco on March 14th, in collaboration with Emergent and Entelligence.AI.

We gave away a Mac mini at our last Voice HackSprint and saw some crazy projects being built. So we're back. And this time, the community is stronger.

Voice HackSprint is a one-day, in-person build sprint for engineers, founders, indie hackers, and students who are actively building in:
- Voice AI
- Conversational AI
- AI Agents
- Developer tools for AI

If your default mode is "let's just build it" - you'll fit right in.

📍 San Francisco
March 14, 2026
11:00 AM - 6:00 PM

Prizes this time:
- Winner gets Meta Ray-Ban Gen 2
- Top 5 runners-up get $100 gift cards each

Spots are limited. If you're building in voice or agents, apply now - https://luma.com/ikzcmqld
smallest.ai reposted this
I added voice to my AI agent the wrong way. Upstream shipped an update and I was stuck.

That broken repo forced me to actually understand NanoClaw's skills architecture — three-way merging, plugin isolation, idempotent reapplication. And I found an honest gap in the design along the way.

Also: browsers block microphone access without TLS. Tailscale solved that in about 10 minutes.

---

Kudos to smallest.ai for the API keys and Warp for being a great agentic IDE.

Full journey on data-slug.com 👇 https://lnkd.in/gcccGGJd
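"Idempotent reapplication" from the post above means that applying the same skill patch twice leaves things exactly as applying it once — which is what lets an upstream update safely re-run your customizations. A toy illustration (not NanoClaw's actual code; the marker and config format are made up):

```python
# Toy illustration of idempotent reapplication: a hypothetical skill patch
# guarded by a marker so re-running it is a no-op. Not NanoClaw's real code.

MARKER = "# skill: voice"

def apply_skill(config_text: str, skill_block: str) -> str:
    """Append a skill block unless its marker shows it was already applied."""
    if MARKER in config_text:
        return config_text  # already applied; reapplying changes nothing
    return config_text.rstrip("\n") + "\n" + MARKER + "\n" + skill_block + "\n"

once = apply_skill("base config\n", "enable_voice = true")
twice = apply_skill(once, "enable_voice = true")
assert once == twice  # the defining property of idempotence
```

The marker check is the whole trick: without it, every upstream sync that replays your patches would stack duplicate blocks into the config.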