The biggest misconception about cloud native? That it's primarily a technical challenge. The 2025 CNCF survey reveals the real barriers are cultural: managing change, human dynamics, and organizational transformation. "Culture eats strategy for breakfast, and so too does it in the context of cloud native success," explains Hilary Carter, SVP of Research at The Linux Foundation. But there's another major shift: organizations have stopped building AI models from scratch. More than half now use open foundation models for inference workloads instead—a much more cost-effective approach. Carter points to research from Frank Nagle at MIT showing organizations not using open models face approximately $25 billion in opportunity costs. The takeaway: complexity in Kubernetes adoption hasn't disappeared (the Kubernetes turns 10 study confirmed it's still a barrier), but success now depends on cultural collaboration, better onboarding, and embracing open models. In this clip, Carter breaks down why showing up at events like KubeCon, fostering open source culture, and optimizing with open models defines infrastructure success today. Check out the discussion on our YouTube page: https://lnkd.in/gfMKyAKG #CloudNative #Kubernetes #Culture #OpenModels #CNCF #LinuxFoundation #AI #ChangeManagement #OpenSource #KubeCon
TFiR
Online Audio and Video Media
Arlington, VA · 681 followers
Video-first B2B media brand for enterprise technologies.
About us
Leading video publication for enterprise technologies.
- Website: http://www.tfir.io
- Industry: Online Audio and Video Media
- Company size: 2-10 employees
- Headquarters: Arlington, VA
- Type: Privately Held
- Founded: 2018
- Specialties: Containers, Cloud, Machine Learning, Open Source, IoT, Robotics, AI
Locations
- Primary: Arlington, VA 22206, US
Updates
-
2026 won't be won in the AI lab — it will be won in the AI factory. Jonathan Bryce, Executive Director at Cloud Native Computing Foundation (CNCF), says most organizations are not at factory-scale when it comes to serving AI models — and that gap is the defining infrastructure challenge of this year. "Inference really is a high demand production workload. As we move from the lab to the factory, that's where we start to get into production requirements and production problems — and those are cloud native requirements and cloud native problems." In this full conversation, we explore why inference — not training — is where AI delivers real business value, how cloud native projects like Kubernetes, vLLM, OpenTelemetry, and HAMI are closing the production readiness gap, what capabilities organizations need to scale AI reliably in the real world, and why open source remains the strongest answer to AI sovereignty and proprietary lock-in. Check out the discussion on our YouTube page: https://lnkd.in/gD3KRrqd #CNCF #AIInference #CloudNative #Kubernetes #OpenSource #AIInfrastructure #vLLM #2026Predictions
Why AI Inference Is Cloud Native's Biggest Challenge in 2026 | Jonathan Bryce, CNCF
https://www.youtube.com/
-
OWASP's Top 10 lists could be the highest-ROI security decision your team makes this year. If you're a CISO with limited budget, you need to know exactly which vulnerabilities to fix first — and OWASP has already done that work for you. Steve Winterfeld, Advisory CISO at Akamai Technologies, breaks it down clearly. "As a CIO, I have a $10 budget and $20 worth of problems. If you can fix 10 vulnerabilities — just 10 — these are the ones you should fix, because these are the most common techniques used by hackers." In this short clip, Steve explains what OWASP is, how its Top 10 lists work across web applications, APIs, LLMs, generative AI, mobile, and IoT, and why getting involved with its 30,000+ volunteer community is a career move worth making. Check out the discussion on our YouTube page: https://lnkd.in/gTeVwXVE #OWASP #Cybersecurity #AppSecurity #CISO #APISecurity #GenAISecurity #AkamaiSecurity #InfoSec #CloudSecurity #SecurityLeadership
Why Every Security Team Needs OWASP's Top 10 Lists | Steve Winterfeld, Akamai
https://www.youtube.com/
-
Your AI model is only as secure as the OS it's trained on — and most enterprises are ignoring this. Arthur Tyde of CIQ breaks down why 2026 will force organizations to treat security as a first-class requirement in AI infrastructure, not an afterthought. Arthur F. Tyde III, Senior VP of Global Business Development at CIQ, brings 30 years of open source experience — including founding roles at OSDL, Free Standards Group, and the Linux Foundation. "Your LLM leaks out, and you're really in a bad spot. Security is just as important as performance for AI." In this full conversation, we explore: why sovereign AI is becoming a hard requirement, how HPC and AI workloads are converging on shared infrastructure, the compliance velocity gap slowing enterprise adoption, CIQ's Rocky Linux hardened OS strategy, and how Fuzzball simplifies workload orchestration across on-prem and cloud. Check out the discussion on our YouTube page: https://lnkd.in/gkxkhJwB #AI #Linux #HPC #RockyLinux #AIInfrastructure #EnterpriseLinux #OpenSource #CyberSecurity #CloudNative #Predictions2026
Why AI Security Is More Critical Than Performance in 2026 | Arthur Tyde, CIQ
https://www.youtube.com/
-
Database consolidation is replacing the sprawl of specialized systems—and it's changing how AI agents access data. Madelyn Olson, Valkey Project Maintainer and Principal Engineer for AWS In-Memory Databases, predicts 2026 will see organizations consolidate from 10+ databases down to a handful of flexible systems like Postgres, Valkey, and OpenSearch. Why? Smaller expert teams powered by AI tooling can manage complex infrastructure at scale, and agents need standardized, real-time access to structured data. As Madelyn explains: "People will take the path of least resistance to get something done. AI systems are really useful, but you have to keep humans in mind when thinking about how they work—they should be focused on making individuals more effective at what they're already doing." In this full conversation, we explore how Valkey is adding hybrid search (full-text + vector similarity), improving durability for mission-critical workloads, and addressing rising RAM costs through compression and SSD integration—all to power semantic caching and retrieval-augmented generation for AI agents. Check out the discussion on our YouTube page: https://lnkd.in/gyTnw5ci #Valkey #InMemoryDatabases #AIInfrastructure #DatabaseConsolidation #SemanticCaching #VectorSearch #Redis #AWS #AgenticAI #CloudNative
Valkey 2026: In-Memory Databases, AI Agents & Real-Time Data | Madelyn Olson, AWS
https://www.youtube.com/
-
Apache Iceberg defines how tables behave—not how to operate the pipelines around them. When teams adopt Iceberg, they gain a powerful table format but inherit an operational puzzle: stitching together separate tools for ingestion, transformation, scheduling, and maintenance. When something breaks, you're debugging across four different systems with no single layer accountable for the end-to-end flow. Christian Romming, Founder & CEO at Etleap, joins us to discuss how their Iceberg Pipeline Platform closes this gap by unifying ingestion, transformation, orchestration, and table operations into one coordinated layer. One standout insight: "By having an end-to-end integrated process, we can take data quality rules and enforce them at ingestion time instead of modeling time. You don't have to ingest bad data first and then solve the problem later." In this full conversation, we explore why Iceberg adoption is accelerating, how unified pipelines enable real-time AI use cases, the shift from schedule-based to table-state-first workflows, and why data teams need operational simplicity to focus less on plumbing and more on insights. Check out the discussion on our YouTube page: https://lnkd.in/g9qPgYdT #ApacheIceberg #DataEngineering #Etleap #DataPipelines #LakeHouse
Closing the Operational Gap in Apache Iceberg Pipelines | Christian Romming, Etleap
https://www.youtube.com/
-
95% of enterprise AI POCs are failing — and the reason isn't the technology. The real problem: companies are picking AI projects through executive brainstorming instead of listening to where AI is already delivering value at the grassroots level. Arti Arora Raman, CEO of Portal26, explains how organizations can finally close the gap between AI investment and real business ROI. "People at the grassroots level have already found where AI makes a difference. Customers don't have a way to harness that user and usage information — and that's why projects fail." In this full conversation, we explore: why 95% of AI POCs never reach production, how Shadow AI and unsanctioned tool usage creates massive security risk, Portal26's three-pillar approach — visibility, security, and ROI, how license intelligence exposes costly mismatches between what's bought and what's used, and why listening to demand signals can push POC-to-production conversion from 15% to 80-95%. Check out the discussion on our YouTube page: https://lnkd.in/gSkr_Qz7 #EnterpriseAI #GenerativeAI #AIAdoption #AIROI #AISecurity #ShadowAI #AIGovernance #Portal26 #AIStrategy #CXO
Why 95% of Enterprise AI Projects Fail — And How to Fix It | Arti Raman, Portal26
https://www.youtube.com/
-
The Kubernetes community is shutting down Ingress NGINX—and 50% of production clusters need to migrate in the next six weeks. What you'll learn from this conversation:
- Why Ingress NGINX is being retired despite being critical infrastructure.
- The security risks you face if you don't migrate before March.
- How to transition to Gateway API and other modern alternatives.
Kat Cosgrove from the Kubernetes Steering Committee and Tabitha Sable from the Kubernetes Security Response Committee join us to discuss this security emergency. As Kat warns: "If you don't proactively choose to investigate whether or not you're relying on Ingress NGINX and migrate to something else, you may not know that you're vulnerable until after you're compromised." In this full conversation, we explore the technical debt that made Ingress NGINX unmaintainable, worst-case scenarios for compromised systems, and why contributing to open source is now a security imperative, not just a moral obligation. Check out the discussion on our YouTube page: https://lnkd.in/gFNMUWMd #Kubernetes #CloudNative #OpenSource #DevSecOps #CyberSecurity #CNCF #GatewayAPI #InfrastructureSecurity #ContainerSecurity #CloudSecurity
Why Half of All Kubernetes Clusters Are About to Become Vulnerable | Kat Cosgrove & Tabitha Sable
https://www.youtube.com/
-
Application security threats keep evolving—and if you're still relying on outdated vulnerability lists, you're leaving your organization exposed. OWASP just rolled out significant updates to their Top 10 list, adding two entirely new frameworks for agentic AI and large language models. For security teams that have relied on the original OWASP Top 10 as their North Star, this isn't just an update—it's a fundamental expansion of the security landscape. Steve Winterfeld, Advisory CISO at Akamai Technologies, joins us to break down what changed and what it means for your security strategy. "If you have a $10 budget and $20 worth of problems, you have to be very careful about what you're going to fix. These Top 10 lists give you the largest return on investment—you're taking out over 50% of the attacks by fixing just these 10 vulnerabilities." In this full conversation, we explore how to integrate new OWASP lists into existing security programs, practical implementation strategies across development teams, how Akamai uses these frameworks to protect customers at scale, and which industry frameworks beyond OWASP should be in your security toolbox. Check out the discussion on our YouTube page: https://lnkd.in/gbwxfaui #OWASP #ApplicationSecurity #CyberSecurity #CISO #APISecurity #GenAI #CloudSecurity #SecurityFrameworks #Akamai #ThreatIntelligence
OWASP Top 10 Updates: What Security Teams Need to Know | Steve Winterfeld, Akamai
https://www.youtube.com/
-
In 2025, AI development was about speed. In 2026, it's about trust. David Loker, VP of AI at CodeRabbit, explains why the shift from "how fast can we code?" to "how confident are we in what we're shipping?" will define enterprise software development this year. CodeRabbit's recent research revealed a stark reality: AI-assisted code generation produces 1.7x more logical and correctness bugs than traditional methods. As AI increases throughput and diff volume, human review capacity hits a ceiling—and existing quality assurance gates weren't built for AI-amplified change. "Attribution is much harder than adoption. Teams can measure usage very easily, but reliably attributing downstream outcomes like regressions or incidents to AI-assisted code changes requires instrumentation, and most organizations don't have that yet." In this full conversation, we explore David's 2026 predictions: formal AI defect metrics tracking, third-party validation tools as essential risk mitigation, multi-agent workflows for code validation, and governance frameworks for AI usage. He also shares actionable advice for enterprise leaders on instrumenting AI impact, deploying context-aware automated review tooling, and building AI governance policies before scaling adoption. Check out the discussion on our YouTube page: https://lnkd.in/gXuViCQn #AICodeReview #SoftwareDevelopment #CodeQuality #AIGovernance #DevOps #CodeRabbit #EngineeringLeadership #TechPredictions2026 #DeveloperProductivity #AITools
AI Code Quality Crisis: CodeRabbit's David Loker on Guardrails for AI-Generated Code
https://www.youtube.com/