Andy Ballester
San Diego, California, United States
4K followers
500+ connections
Activity
-
Andy Ballester shared this: Incredibly honored to have EyePop.ai recognized with both Best in Video Analytics and the Judges’ Choice Award at #ISCWEST’s 2026 SIA New Products and Solutions (NPS) Awards. This recognition reinforces something we believe at a fundamental level: video is one of the richest, most underutilized data sources in the world. Unlocking it shouldn’t require a team of ML engineers. We’re focused on making visual intelligence accessible, fast, and practical. From real-time streams to structured insights, builders can go from idea to a working system without friction. Grateful to our team, partners, and customers who continue to push what’s possible. This is just the beginning! #securityindustry #SIANPS #ISCWest #VideoAnalytics #ComputerVision
-
Andy Ballester reposted this: Congratulations again to the 2026 SIA New Products and Solutions (NPS) Awards winners! Top among the winners was Ones Technology's BioAffix Secure I/O Distributor, and EyePop.ai received the prestigious Judges' Choice Award for its EyePop.ai Platform product. Honeywell received the 2026 NPS Merit Award. Learn more and see all the photos of the NPS winners at the links in the comments! #securityindustry #SIANPS ISC Security Events
-
Andy Ballester reposted this: The TL Foundation is excited to welcome and support Connect's 2026 Cool Companies to help ensure San Diego’s most promising founders have the opportunity to grow and succeed. See how TL Foundation is helping expand access to capital while advancing our mission to build a stronger, more inclusive innovation economy in San Diego for future generations. https://lnkd.in/gMJ92KiR
(Link: TL Foundation Supports Connect’s 2026 Cool Companies to Strengthen San Diego’s Innovation Economy - TL Foundation)
-
Andy Ballester reposted this: It's 2 a.m. in Tokyo. A father of two can't sleep. He's three months into a career change that isn't working, and he hasn't told his wife how scared he is. He picks up his phone and starts talking to Tony Robbins' AI Twin. A genuine conversation. He tells Tony everything. Tony holds him accountable the way only Tony can. He makes him go deep to find what he already knows. The man commits. "I can't solve anything in this state. I'm going for a run tomorrow morning, then I'm telling her everything. That's the start." The next day, he opens the app. Tony remembers. Tony asks how it went.

This is happening right now. Thousands of conversations like this, every day, across 23 languages. All powered by a platform you're about to hear a lot more about. This is Steno.ai. We build hyper-realistic AI Twins for leaders and brands: digital representations that truly capture how you think, speak, sound, and look. Your Twin remembers every conversation. It deepens over time. It becomes a genuine relationship with every person in your audience.

Peter Diamandis. Tony Robbins. Margarita Pasos. Brian Tracy. Dan Lok. Gerard Adams. Oso Trava. Justin Donald. Brands like Sleep Science Academy and Ask Slim. Doctors, scientists, coaches, and experts from around the world. The Tony Robbins app alone has a 4.8-star rating, 2,000+ reviews, and peaked at #29 in the Apple App Store.

For the past two years we've been heads-down. Building a world-class team, onboarding these customers, and quietly rebuilding our entire platform from the ground up. New product. New brand. New everything. Today we're reintroducing Steno.ai to the world. If your knowledge, voice, or brand is too valuable to stay one-way, this is what we built for you.

I wouldn't be here without the people who bet on this when it would have been easier not to. To the Steno team: you built something extraordinary in conditions that required an unreasonable amount of belief. To our customers: you trusted us with the most valuable thing you have. Your name, your voice, your reputation. To the allies who made this possible: Francis Pedraza, Andy Ballester, Kamron Palizban, Alireza Masrour, Cody Barbo, Zeb Evans, David Cohen, Jeremy Yamaguchi, Ryan Kuder, Misti Cain, Cathy Pucher, Aaron Amerling, Elena Kvochko, Brett Dovman, Usman Gul, Sirj Goswami, PhD, Justin Kahn (& the Tony Robbins team), Bryan Landers, Aman Manik, Ben Holcomb, Manuel Jaime, Duncan Street, Alan Saporta, Shar Broumand, Daniel Huss, the entire Techstars fam, all the early angels, and many, many more who know who they are.

I've built companies. I've had adventures. I've worked with the best. All of it will pale in comparison to what we build with Steno. The future is personal. We're just getting started. 🔗 Full story + product walkthrough in the comments.
-
Andy Ballester shared this: Hey CDN folks 👋 We’ve been working on something interesting at EyePop.ai — Visual Intelligence layered directly into content delivery infrastructure. Instead of just moving bits faster, what if CDNs could understand what’s inside the images/video as it ingests? Think:
• Structured answers from video, not just files
• Semantic search
• Real-time visual metadata
• AI analysis without retraining models
• Vision as a native CDN capability
We’re actively looking for a CDN partner to explore this together. If you work in edge compute, media pipelines, streaming infra, or product strategy and this sparks even mild curiosity, let’s talk. Here’s what we’re building: https://www.eyepop.ai/cdn The future CDN probably doesn’t just deliver content. It understands it.
-
Andy Ballester reposted this: Zero to One is F#@$ing hard. I know, because I’ve been in those trenches. If you're building right now, you know exactly what I mean. Founders don't just 'launch.' They survive MVPs that break at the worst possible moment, 'clean' data that’s actually a disaster, and customers with limited patience. That’s what we’ll dig into on February 25th - finding real wedge use cases, proprietary workflow data, design partners and more. I’m joining Ideja Bajra, Raman Rai, Hussein Yahfoufi, and Brad Chisum for a deep dive into Zero to One in Applied AI hosted by Hubble and Venture Forward Capital and moderated by Sasha Cayward - MSc. 🗓️ Feb 25th @ 9:30 AM PST / 5:30 PM GMT Looking forward to seeing many of you founders there. Registration link in comments.
-
Andy Ballester reposted this: You don’t need another analyst to clear your dashcam backlog. The crash takes seconds. The review takes hours. Not just “did it happen?” But the questions that slow teams down: “Where’s the exact moment of impact?” “Who entered the lane first?” “Did anyone run the light / sign?” “Was it contact — or a near miss?” “What happened right before?” With EyePop.ai's Visual Intelligence API, you can ask those questions of video and images… and get structured answers back. No custom model building. No custom pipelines. No weeks of setup. You can apply this approach to:
> impact moments across dashcam footage
> near-miss events at intersections
> hard braking / sudden swerves
> unsafe following distance
> incident review across weeks of video
Same input. Different questions. The process is simple: ask video questions, get usable answers. Early access is now available: https://hubs.ly/Q03-S6450
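As an illustration of what "structured answers" from video could look like downstream, here is a minimal Python sketch. The response schema (the `events` list and its `type`, `timestamp_s`, and `confidence` fields) is a hypothetical stand-in for this example, not EyePop.ai's actual API response:

```python
import json

# Hypothetical JSON answer a visual-intelligence API might return for
# "Where's the exact moment of impact?" -- this schema is invented for
# illustration and is not EyePop.ai's real response format.
raw = json.dumps({
    "question": "Where's the exact moment of impact?",
    "events": [
        {"type": "impact", "timestamp_s": 12.4, "confidence": 0.93},
        {"type": "near_miss", "timestamp_s": 45.1, "confidence": 0.71},
    ],
})

answer = json.loads(raw)
# Keep only high-confidence impact events for the incident report.
impacts = [e for e in answer["events"]
           if e["type"] == "impact" and e["confidence"] >= 0.8]
print(impacts)
```

The appeal of structured answers is exactly this: once the response is machine-readable, incident review becomes a filter over events rather than hours of scrubbing footage.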
-
Andy Ballester shared this: Secure automated monitoring. Simulated fall. Real vigilance. #NoPatientsHarmed #HealthTech #PatientSafety #AIinHealthcare #ComputerVision #ClinicalInnovation #ResponsibleAI
-
Andy Ballester shared this: Footage isn’t insight. It becomes insight when you can reliably answer: “Did someone fall?” “When did it happen?” “Where did it happen?” “What happened right before?” That’s what this example represents. With EyePop.ai's Visual Intelligence API, you can ask the question of video and images… and get structured answers back. No custom model building. No custom pipelines. No weeks of setup. You can apply the same approach to:
> patient safety monitoring (bed exits, wandering, high-risk zones)
> staff safety + incident reporting
> security + compliance events
> workplace and public-space slip/fall review
All with security and healthcare-grade data handling in mind (including HIPAA-compliant workflows where required). Same input. Different questions. The process is simple: ask video questions, get usable answers. Early access is now available: https://hubs.ly/Q03-s_BK0
Experience & Education
-
EyePop.ai
***** ******* *******
-
************
** *******
Volunteer Experience
-
Board Chair
TL Foundation
- Present 2 years 4 months
Science and Technology
TL Foundation is the world's first regional philanthropic investment fund. We are building a better San Diego for future generations. Our mission is to create a strong, resilient innovation economy, and to grow economic opportunity for all families and community members across the entirety of the San Diego region. We invest just like a venture fund. Except: (1) we only invest in regional San Diego start-ups. And (2) all returns go right back into the fund, to create an evergreen growing fund that then reinvests into San Diego – in perpetuity. Our Board and Management do not earn carry or management fees. We are a 501c3 non-profit organization, so all donations are tax deductible. Visit us at https://connect.org/tl-foundation/ and come help us make a long-lasting impact.
-
Lead Mentor
Techstars
- Present 8 years 1 month
Economic Empowerment
Techstars San Diego Powered by SDSU & Techstars Anywhere Lead Mentor
-
Board Member, Habitat Capital
Habitat for Humanity International
- Present 2 years 4 months
Economic Empowerment
Other similar profiles
-
samy k͓͓͓͓͓͓͓͓͓͓͓͓͓͓͓͓͓͓͓amkar͛͛͛͛͛͛͛͛͛͛͛͛͛͛͛͛͛͛͛͛
samy k͓͓͓͓͓͓͓͓͓͓͓͓͓͓͓͓͓͓͓amkar͛͛͛͛͛͛͛͛͛͛͛͛͛͛͛͛͛͛͛͛
Openpath Security Inc.
13K followersLos Angeles, CA
Explore more posts
-
Marc Gasser
Pedalix • 8K followers
Agents are the new apps. And they just went multiplayer. The software engineering landscape is undergoing a seismic shift. Google just dropped Firebase Studio. And it might just replace half the AI builder startups out there. You’ve probably seen tools like Lovable, bolt.new, Vercel's v0, or Cursor. They let you build AI-powered apps without writing much code. Describe what you want — boom — they create UIs, workflows, and automation for you. It’s fast. It’s smart. It’s changing how software is built. Now imagine that… backed by Google. What is Firebase Studio? Firebase Studio is Google’s new end-to-end app builder for AI-first products. It combines: ✅ UI generation ✅ Agent workflows (like AgentSpace) ✅ Firebase backend (auth, storage, hosting, etc.) ✅ Google’s best Gemini models AgentSpace is like a command center for AI agents. It lets you create multistep workflows, connect APIs, and manage how agents behave. Great for product teams prototyping internal tools, automations, or AI assistants. But it’s still pretty early-stage. Right now, Firebase Studio and A2A (Agent-to-Agent) are two separate moves from Google — but they are strategically aligned, and integration is very likely coming. You need to manage agents, instructions, tools, and sometimes code. Why this matters for product managers? → The barrier to building software is decreasing. → With Firebase Studio, a product manager can do what used to take a team. The AI agent is becoming the new app. Not just chat. Agents that do stuff: send emails, fetch data, run logic. Google wants to standardise agents across platforms. Their new Agent-to-Agent protocol (A2A) means your agents could soon talk to each other — across apps, tools, maybe even companies. Enter MCP: The USB-C for AI Anthropic’s Model Context Protocol (MCP) is an open standard that enables developers to build secure, two-way connections between their data sources and AI-powered tools. 
MCP is a standardised protocol designed to streamline the way AI models communicate with external systems, eliminating the need for custom integrations. With so many advancements happening rapidly, it’s challenging to keep up and navigate them within the realities of development. Yet, it has never been more exciting to be in the software engineering space. #AI #FirebaseStudio #ProductManagement #Innovation #TechTrends #MCP #A2A
16
1 Comment -
Taylor Black
Microsoft • 8K followers
Reading Metronome’s Monetization Operating Model, I kept coming back to one idea: pricing has become product. Software now delivers outcomes, not access. Yet most companies still charge as if they’re selling seats or licenses. That disconnect creates friction: for customers, unpredictability; for companies, stalled growth. The paper’s argument is simple but sharp—monetization isn’t a late-stage decision. It’s strategic infrastructure. Pricing needs the same ownership and iteration as any feature. Treat it like a surface that customers touch, not a spreadsheet buried in finance. If value is continuous and dynamic, pricing must be as well. That means product, GTM, finance, and engineering working from one system of truth. How many of us still treat pricing as an afterthought—when it should be a growth engine? https://lnkd.in/gnH7WzYf #Monetization #ProductStrategy #AI
8
-
Taylor Black
Microsoft • 8K followers
Innovation often hides behind wrappers. MCPs promise scale and speed, but let’s be honest: they’re just orchestrated prompts and APIs. Valuable? Yes. Foundational? No. The deeper question: are we funding primitives—or just better packaging? The history of technology is littered with companies that confused distribution hacks for paradigm shifts. Efficiency matters. But leaders must know the difference between an infrastructure play and a true leap in capability. So—what deserves our capital: elegant wrappers or new primitives? https://lnkd.in/g8yUQzDb #AI #InnovationStrategy #Technology
14
2 Comments -
Marco Patiño López
Pullpo • 5K followers
Today many will think I’m killing my company. From now on, at Pullpo, we’re allowing our clients to purchase the full Pullpo codebase. Every line of code. As a one-time purchase. Why we are doing this: DevEx platforms make a ton more sense when they’re fully customized to every aspect of a team: their workflows, their internal tools, their automations and the very specific metrics that matter to them. That’s the perfect DevEx platform. But as a SaaS, it’s impossible for us to deliver that level of customization for everyone. Every client has different pains and priorities, and as a company we can’t satisfy all of them quickly. Some requests are so specific that they don’t make sense to consider in a generic product. This is changing today. Just two months ago, it would have made zero sense for a small/medium team to spend precious engineering time maintaining and evolving its own DevEx platform. You’d rather ship product than build internal tooling. The cost, the complexity, and the ongoing maintenance burden were simply too high. But the equation has changed. Building in-house is getting faster and easier every month with AI. Integrations are simpler. Infrastructure is more plug-and-play. And more and more “new features” look less like multi-week projects and more like a well-structured prompt plus a thin layer of engineering. That’s why we’re providing teams with a SOC 2–audited, secure, optimized, production-proven base. So they can build their perfect DevEx platform on top of it. And we won’t just hand over the code and disappear. We’ll provide support, advice, and even forward-deployed engineers to help teams ship the custom features they need. Reach out if you want the perfect DevEx platform for your team.
41
5 Comments -
Justin Gordon
ShakaCode • 5K followers
Any best guesses on how long until Claude Code can: 1. Fire up the browser (without needing the MCP configuration) to evaluate changes 2. Fully leverage the Chrome Dev Tools 3. Use the Ruby debugger I’m finding Claude Code is creating fixes that would have been too tedious or painful. The gap right now is testing and debugging.
10
4 Comments -
Cari Davidson
forward earth • 4K followers
Sent this "stop wasting tokens" message to my team today. Point #3 made me chuckle, so I figured I’d share. My rules for AI code assist. 1. Create a product requirements document. You can even use the LLM to help you make the summary 2. Use TDD. Works great with an AI coder. Make sure they are testing the right functionality before they try to build it. 3. Write down your implementation guidelines, in detail. How you want to implement it. Think of yourself as an architect working on a ticket and you're giving instructions to a total noob junior wannabe tech bro who takes everything literally and doesn't like to admit when they don't know something. 4. Direct the LLM to context that is relevant. and don't make it too large. If you're in a monorepo or monolith codebase, that's a lot of tokens. Keep each piece of work small and manageable 5. Have it generate a development plan - this is one of the options in Antigravity and Zed - my tools of choice - probably also in Cline. Tell it to think step-by-step and explain decisions. 6. Read it. Approve it. Step through the work. Don't thank it or say please. It's just wasted tokens. 7. if it starts getting out of hand. **stop**. Revert. You cant use that slop anyway. Better yet. if it's wrong after 3 tries, stop, code it yourself, or reset the context and start over. it's not going to end well. disclaimer: this post was not generated by AI and intentionally contains mistakes and no f***king emojis
125
42 Comments -
Usman Sheikh
I never wanted to be an… • 56K followers
SaaS monetizes features. Rails monetize mistakes. The main objection to Rails: "Isn't this just network effects?" No. Rails don't create network effects. They create learning effects. Every time work flows through the rail, it doesn't only deliver an outcome, it makes the rail smarter. SaaS scaled revenue. Consulting scaled judgment. Rails scale error correction. While LegacyCos bolt AI agents onto existing workflows, NewCos build differently. They know you can't buy five years of resolved edge cases. That compound learning becomes the moat. Learning Effects > Network Effects Classic SaaS defensibility came from network effects. Slack gains utility with more teammates. But SaaS compounding is bounded. Each instance learns locally. Operating Rails compound differently. Every GitHub Copilot rejection, every Stripe fraud flag, every CrowdStrike attack blocked makes the entire network smarter. Expensify example: When you correct a miscategorized expense, it only improves your workspace, despite 15 million users generating errors daily. Imagine Expensify as a Rail: every correction across thousands of companies teaching the entire network. Your operating costs benchmarked against similar companies. Edge cases from one company preventing errors at another. This isn't about user count. It's about work count. Every pattern resolved becomes reusable logic with provenance. Rails monetize mistakes across the network. SaaS monetizes features in isolation. The Three Laws of Operating Rails Law 1: Learn faster than they copy Your rail must improve faster than competitors can imitate. A competitor can copy features in weeks. They can't compress five years of edge cases resolved, versioned, and rolled back. The moat isn't what you built; it's how fast you compound. Law 2: Make it tacit, not portable Perfect documentation is easily copyable. The real moat lives in tacit knowledge, patterns that only work with your specific context, governance, and audit trails. 
Law 3: Power without transparency kills trust As rails automate more decisions, they need more governance. One bad auto-execution can destroy years of trust. Constitutions, vetoes, one-click rollbacks aren't nice-to-haves. They're essential. NewCo Playbook: Start Boring, Compound Fast (Exclusive to newsletter subscribers) The winners won't be the ones who automate fastest. They'll be the ones who learn from failure fastest. LegacyCos are adding AI agents to workflows, hoping for magic. But intelligence in local instances doesn't compound. You don't build moats by making each silo smarter. NewCos who grasp Operating Rail laws will target boring shared burdens. They'll turn every error into network intelligence. They'll shoulder bounded liability to earn trust. And they'll compound learning at rates LegacyCo can't match. Strong operators compound errors into moats. Weak operators add features and hope. (Full version sent to subscribers)
55
24 Comments -
Boon Kgim Khur
Learn Parrot • 3K followers
Don't believe the BS that you can use Claude Code for free. Ollama recently made their API compatible with Claude Code. Many creators quickly jumped on the opportunity to farm engagement with the hook: "You can now use Claude Code for free!" My thought? Claude Code without Opus 4.5 is not Claude Code. Period. But this is exciting news. Not because I can use Claude Code for free, but because I see an opportunity to optimize costs by delegating easier tasks to local LLMs. The key question is: what tasks can local LLMs handle? I tested out 7 local LLMs. In this post, I will explain the BS and share my 1st experiment. -- Why the BS? Claude Code has been praised as one of the best AI tools by its users, not only for coding but for many other tasks. But the price feels steep to many. The $20/month plan is not enough for any serious work. You need to at least subscribe to Max 5x ($100/month). Many heavy users, including me, subscribe to Max 20x ($200/month). It’s a steal. But still, many were eager to try it but aren't ready to pay. Ollama's recent announcement means you can buy a Mac, a Strix Halo, or a GPU and use Claude Code for "free" with local LLMs. It’s appealing, as it is a one-time investment, and you can use the machine for other purposes. Creators are leveraging this opportunity to farm engagement. But the reality? Claude Code without Opus 4.5 is not the same Claude Code we praised. Local LLMs are far less intelligent. -- But for those who understand the difference, we see an opportunity to optimize costs by delegating some easier tasks to local LLMs. I'm interested in finding out what tasks local LLMs can handle. This is my typical flow when using Claude Code. This is for coding, but I have similar flows for marketing and content creation. 1. Research and planning 2. Create PRD and implementation plan 3. Break plan into bite-sized tasks 4. Implement + review with reflection pattern 5. Final review with agents 6. 
Final review and QA by me Based on my quick tests, we can forget about asking local LLMs to do research and planning. All of them failed at a simple instruction: "Visit https://learnparrot.ai/ and tell me about the website." So, I think the most viable use cases would only be (4) — implementation + review loops. While it looks like a very narrow use case, it is where we burn a lot of tokens. So I think it is worth a try. The main selection criteria for this will be instruction-following capability. One very common task is to refer to code samples or templates to code a new feature or page. This is a good test of instruction-following. So, I crafted my first test: - Used Opus to create HTML that I can screenshot as a LinkedIn carousel to display info for each model. - Turned one of the pages into a template and deleted the rest. - Asked each LLM to refer to the template to code its own page, given its specs. Swipe the carousel to see the results. Who would you hire? #LocalLLM #ClaudeCode #VibeCoding
20
10 Comments -
Jason Stokes
PLECCO • 6K followers
If you’re a FinTech founder hiring devs before defining your architecture—you’re already burning money. I’ve worked with dozens of early-stage platforms. The #1 pattern? Founders rush to build without clear: - Integration strategy (Plaid, Stripe, Dwolla, etc.) - Compliance roadmap - Modular backend that supports scale The result: rewrites, delays, and team churn. If you’re a non-technical founder navigating this, my advice: Start with a technical blueprint—before you touch code. I help founders build that plan and execute it fast. Let me know if you’re in this boat.
29
4 Comments -
Matt Turck
80K followers
🚀 “Once you use AI, there’s no going back.” That line from Guillermo Rauch, CEO of Vercel & creator of Next.js, perfectly sets the tone for this awesome episode of The MAD Podcast. 🧵👇 This is a HIGH SIGNAL discussion, very current (recorded last Sunday), where we covered: 1️⃣ V0’s overnight explosion – 100 M app generations, 7 per second, and it doubled Vercel’s user base in <12 months. 2️⃣ “Vibe-coding” vs. agentic engineering – why prompt-driven builders and hardcore engineers are converging. 3️⃣ AI Cloud & Fluid Compute – the platform already handling a trillion function calls while your DevOps pager stays silent. 4️⃣ MCP is the new HTTP – agents talking to agents → every company exposes its data through a model-friendly API. 5️⃣ The 10-person GM model – how a micro-team shipped V0 and turned it into a billion-dollar product line. 6️⃣ Career advice for 18-year-old devs – “Prompt first, code later; learn systems and taste.” …and that’s not even half of it. 🎧 Watch the full chat: https://lnkd.in/ehuwBB2x 💬 Curious what people think, drop your thoughts below—Is coding as we know it really ending? Do you vibe code? What's changing around you.
34
10 Comments -
Ben F.
Loop Software & Testing… • 17K followers
Woke up this morning thinking about a pattern I’ve been hearing a lot lately. In the last few weeks, I’ve talked to a bunch of CTOs who are all circling the same idea: AI is really good at writing automated tests. So let’s lean harder into TDD. Developers write and update all the tests. Unit, integration, API, end-to-end. All greens before you’re allowed to commit. And therefore, we don’t really need test automation teams anymore. Frankly, it’s hard to argue with the "principle". In a perfect world, that process is remarkably effective. A developer owning the whole thing, tests included, everything green before commit, that’s kind of the dream. In some ways, I actually agree it’s the right direction. But... After working with hundreds of developers over the last ~12 years, I’d say around 80%-90% of them have developed pretty rough patterns when it comes to quality. Not because they’re bad engineers, but because of the system we built. For years the implicit contract has been: “I’ll do what the ticket says. QA will catch the edge cases. Then I’ll fix whatever they find.” That mindset didn’t come from nowhere. We incentivized it. One of my QA engineers said something recently that stuck with me: “Imagine if we were building a house, and QA was the home inspector and we showed up to the house and saw it in the state as this dev environment” Developers are used to a very ephemeral world. Ship something that kind of works. Hand it off. Have another human crawl through it carefully. Get a list of cracks. Fill the cracks. Repeat. Years of that pattern don’t disappear just because AI can now write tests. What actually happens is the same handoff, just to a different place. Instead of handing half-done work to QA, they hand it to an AI. They won’t deeply review the tests. They won’t question the coverage. They won’t think hard about what’s real risk vs noise. The AI will catch 80–90% of things, sure. 
But the remaining 10–20% is where quality lives — and that still requires judgment, context, and experience. This is where I think people are underestimating the role of quality coaching. Automation isn’t just “write tests.” It’s test data. It’s environment management. It’s what you mock and what you absolutely shouldn’t. It’s avoiding duplicate effort. It’s deciding what actually matters to test. Please, for the love of God, don’t just have an engineering leader tell 5 or 10 developers to “own automation now” and expect it to work. What works is having people whose job is to audit, coach, challenge, and advocate for good quality patterns. You don’t need a massive team. One strong quality person can support multiple teams. A half-time quality coach can change everything. Elevate a senior QA. Elevate an SDET. AI makes this possible. Quality coaching is what makes it sustainable. If you skip that part, you’re not removing work, you’re just hiding it until it hurts more later.
36
13 Comments -
Ben Royce
AKQA • 9K followers
So which models produce HTML that has the least accessibility errors, and at what cost? Mapped it out here (bottom left is the best). This is helpful for understanding which models are most compliant for those with disabilities and do it efficiently. Qwen and Gemini 2.5 Flash lead the pack. Hat tip to Ben Ogilvie for pointing me to this: aimac.ai
55
2 Comments -
Alphin Tom
Mycel AI • 1K followers
The "Comprehension Debt" Debate: Are we entering the era of Post-Code Software?

I've been following the recent discourse on "vibe coding" and the rising concern within the engineering community. There is a viral chart circulating that warns of "Comprehension Debt" - the idea that while AI makes writing code easy (the honeymoon phase), it makes maintaining it a nightmare because the human author doesn't actually understand the logic they just committed.

It's a valid concern if you believe the future of software development looks exactly like the past. But as a non-technical founder currently building a comprehensive SaaS platform using these very tools, I see the trajectory differently. We aren't just seeing a faster way to write code; we are seeing the emergence of a new layer of abstraction.

The "Black Box" Reality

The debate often centers on trust. We trust the microcode in our processors and the firmware in our storage not because we've audited it line-by-line, but because the abstraction layer holds up. We focus on the input and the output, not the mechanism inside. For builders like me, AI is becoming that next abstraction layer. The shift is moving from Implementation (writing the syntax) to Intent (defining the outcome).

The "Zero-Touch" Horizon

The fear of "Comprehension Debt" assumes that a human must always step in to fix the mess when things break. But this view ignores where the technology is heading. We are rapidly approaching a workflow where a PRD (Product Requirements Document) translates directly into a working product.

1. Generation: AI handles the architecture and code.
2. Validation: AI agents manage the QA, security audits, and regression testing.
3. Iteration: User feedback is fed back into the requirements, and the system self-heals or refactors.

In this scenario, the human "debt" doesn't matter because the "repayments" are also automated. If an AI writes the code, an AI can debug the code.

The Evolution of the Builder

This doesn't mean engineering is dead - far from it. But the role is evolving. We are moving from a model where value is defined by how well you can lay the bricks, to a model where value is defined by how well you can design the building. The future belongs to those who can articulate a crystal-clear vision and rigorously define the constraints and requirements. The "how" is becoming a commodity. The "what" and "why" remain the premium assets.

We are watching the barrier to entry for innovation dissolve in real-time. It's not about ignoring the risks; it's about realizing the toolset has changed forever.

#AI #SaaS #FutureOfWork #ProductManagement #TechInnovation #BuildingInPublic 🌐 mycel-ai.de Mycel AI
Helen Yu
Communications Engineering… • 129K followers
What if writing code is no longer the bottleneck in building software?

From idea to production-ready app. Without writing code. In minutes, not months.

Emergent just hit $25M ARR in 4.5 months. Over 2.5M users are already building on the platform. That kind of adoption points to a deeper shift in how software is getting built.

Here’s what makes this different:
✅ Agentic vibe coding that handles the full software lifecycle — backend, frontend, integrations, testing, deployment
✅ Native backend that removes reliance on third-party tools like Supabase, reducing complexity and failure points
✅ Production-ready from day one with built-in database, authentication, payments, and scaling
✅ Real-time iteration with specialized testing agents validating code continuously
✅ Pro Mode with custom AI agents in isolated VMs or a 1M+ token engine with 16K reasoning tokens for complex builds
✅ True native mobile apps using React Native + Expo for both Android and iOS
✅ One-click deployment with domain management — bring your own or purchase directly in-platform
✅ Full data ownership — your apps and data stay yours, with no lock-in

This goes beyond faster prototyping. It changes who gets to build, launch, and scale real products. The real barrier now is simply deciding to start. If you want to explore what that feels like, Emergent’s standard plan is available for $5 for the first month at emergent.sh.

What do you think will be the biggest shift in product development in 2026?

Stay current with the latest trends in #Technology and #Innovation:
Subscribe to 👉 #CXOSpiceNewsletter https://lnkd.in/gy2RJ9xg
Or 👉 #CXOSpiceYouTube here https://lnkd.in/gnMc-Vpj
Jennifer Bemert
XAnge • 7K followers
🚨 ~$1B raised in Dev + Infra across Nov–Dec. The most unhinged seed rounds I’ve seen lately. 🚨

Seed is the new Series C, I guess. Almost half a billion went into infra alone - someone finally opened the AWS bill. AGI is taking a breather, small language models are everywhere. I guess size doesn't matter after all!

Here are some of the coolest rounds:

🔥 $475M Seed - Unconventional AI
Yes, seed. Backed by a16z, Sequoia, Lightspeed, Databricks, Lux. This isn’t a startup - it’s a full-on rethink of the computing substrate for AI (hardware + software + energy).

🔥 $75M Seed - Tenzai
Battery, Lux, Greylock. Builds an AI-powered penetration testing platform that continuously simulates real-world attacks against your systems. Designed to proactively surface vulnerabilities as software and AI-generated code rapidly expand the attack surface.

🔥 $70M Seed - Gradium
Audio-native foundation models. Develops a unified audio architecture covering speech generation, transcription, voice transformation, and dialogue, optimized for real-time, low-latency interactions rather than text-first pipelines.

🔥 $50M Seed - Inception
Diffusion-based language models with lower GPU requirements. Replaces traditional autoregressive generation with diffusion techniques to improve efficiency, controllability, and latency, targeting production use cases where compute cost is a bottleneck.

🔥 $41M Seed - Mirelo AI
AI-generated sound for video. Provides tools and APIs that automatically generate synchronized sound effects and music from video content, enabling faster and more scalable post-production workflows.

A lot of cool stuff being built in Europe!! Shout out to orq.ai, Yasu, Specific (YC F25), Albatross AI, INTELLIGENT CORE, NobodyWho, Open Machine