If you're in Stockholm tomorrow for Cloud Native & Container Day, you don’t want to miss this. Kubernetes is the backbone of modern cloud environments. That’s why our very own Jonathan Kaftzan, VP of Marketing & Business Development at ARMO, is taking the stage with two powerhouse customers to share real-world insights on securing mission-critical workloads and cutting through Kubernetes security noise.

🔹 13:00-13:25 | Securing Mission Critical Workloads: NOT Mission Impossible w/ Stephen Hoekstra, Mission Critical Engineer at Schuberg Philis [Room 3 San Siro]
🔹 14:10-14:35 | Taming the Noise: Efficient Kubernetes Security Strategies w/ Jonas Larson, Head of DevOps at Proact IT Group AB [Room 3 San Siro]

These talks are deep dives into how real teams are solving real security challenges in production. If you care about cutting through alert fatigue, securing workloads without slowing innovation, and making Kubernetes security actually work, these sessions are where you need to be.

Jonathan’s ready. The customers are ready. The question is—are you?

Details & agenda: https://lnkd.in/d6uxY6wS

#CloudNative #Kubernetes #CloudSecurity #CADR #DevSecOps
ARMO VP to speak on Kubernetes security at CADR
More Relevant Posts
Leveraging Low Priority Pods for Rapid Scaling in AKS.

If you're running workloads in Kubernetes, you'll know that scalability is key to keeping things available and responsive. But there's a problem: when your cluster runs out of resources, the node autoscaler needs to spin up new nodes, and this takes anywhere from 5 to 10 minutes. That's a long time to wait when you're dealing with a traffic spike. One way to handle this is using low priority pods to create buffer nodes that can be preempted when your actual workloads need the resources.

The Problem

Cloud-native applications are dynamic, and workload demands can spike quickly. Automatic scaling helps, but the delay in scaling up nodes when you run out of capacity can leave you vulnerable, especially in production. When a cluster runs out of available nodes, the autoscaler provisions new ones, and during that 5-10 minute wait you're facing: Increased Latency: Users...

#techcommunity #azure #microsoft https://lnkd.in/gPYyze95
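As a rough illustration of the buffer-pod pattern the post describes (not the linked article's exact steps), the sketch below uses the official Kubernetes Python client to create a very low PriorityClass and a deployment of pause containers that hold spare capacity; the scheduler evicts them first when real workloads arrive, and the autoscaler replaces them in the background. The class name, namespace, replica count, and resource requests are illustrative assumptions.

```python
# Sketch: pre-provision headroom with preemptible "buffer" pods.
# Assumes the official `kubernetes` Python client and kubeconfig access.
# Names, replica count, and resource requests are illustrative only.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() inside the cluster

# 1. A very low priority class: the scheduler preempts these pods first
#    whenever real workloads need their resources.
client.SchedulingV1Api().create_priority_class(
    client.V1PriorityClass(
        api_version="scheduling.k8s.io/v1",
        kind="PriorityClass",
        metadata=client.V1ObjectMeta(name="overprovisioning"),
        value=-10,
        global_default=False,
        description="Buffer pods that yield to real workloads",
    )
)

# 2. A deployment of pause containers that does nothing but reserve node
#    capacity, hiding the 5-10 minute node scale-up delay from real traffic.
buffer = client.V1Deployment(
    api_version="apps/v1",
    kind="Deployment",
    metadata=client.V1ObjectMeta(name="cluster-buffer", namespace="default"),
    spec=client.V1DeploymentSpec(
        replicas=2,
        selector=client.V1LabelSelector(match_labels={"app": "cluster-buffer"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "cluster-buffer"}),
            spec=client.V1PodSpec(
                priority_class_name="overprovisioning",
                containers=[
                    client.V1Container(
                        name="pause",
                        image="registry.k8s.io/pause:3.9",
                        resources=client.V1ResourceRequirements(
                            requests={"cpu": "1", "memory": "2Gi"}
                        ),
                    )
                ],
            ),
        ),
    ),
)
client.AppsV1Api().create_namespaced_deployment(namespace="default", body=buffer)
```

Sizing the buffer deployment's total requests to roughly one node's worth of capacity keeps at least one warm node available without paying for much idle headroom.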
Which companies top our list of the world’s leading container platforms? This week, Technology Magazine spotlights 10 industry players in the container space, shaping the market and facilitating orchestration, monitoring, security and automation. Featuring Amazon, Microsoft, Amazon Web Services (AWS), SUSE, Google, Red Hat, Docker, Inc. and Kubernetes. Explore the full list in the comments. #Tech #containerplatforms #cloudinfrastructure
2025 cloud strategies are shifting from “cheapest provider” to “highest value per workload,” measured in unit economics, reliability, and change velocity. The winning combo: platform engineering for speed, FinOps for accountability, and policy-as-code for safety.

- Key moves: unit cost dashboards, Kubernetes cost allocation, savings plans + rightsizing, and automated guardrails in pipelines.
- Watch-outs: lift-and-shift without refactoring, untagged resources, and unmanaged egress patterns that inflate spend.

#Cloud #FinOps #PlatformEngineering #PolicyAsCode #Objectyk
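One concrete way the "automated guardrails" and "untagged resources" points meet is a small policy-as-code check in CI. The sketch below is a hedged illustration, not a tool the post names: it assumes a Terraform JSON plan and fails the pipeline when resources being created or changed are missing cost-allocation tags. The required tag keys and file name are made-up examples.

```python
# Sketch: a policy-as-code guardrail for CI that blocks untagged resources.
# Assumes a Terraform JSON plan produced with `terraform show -json plan.out > plan.json`.
# The required tag keys are a hypothetical policy, not an organizational standard.
import json
import sys

REQUIRED_TAGS = {"team", "cost-center", "environment"}  # illustrative keys


def untagged_resources(plan_path):
    """Return planned resources missing required cost-allocation tags."""
    with open(plan_path) as f:
        plan = json.load(f)

    offenders = []
    for rc in plan.get("resource_changes", []):
        change = rc.get("change") or {}
        if change.get("actions") not in (["create"], ["update"]):
            continue  # only police resources being created or modified
        after = change.get("after") or {}
        tags = after.get("tags") or {}  # AWS-style; other providers name this differently
        missing = REQUIRED_TAGS - set(tags)
        if missing:
            offenders.append(f"{rc['address']}: missing {sorted(missing)}")
    return offenders


if __name__ == "__main__":
    problems = untagged_resources(sys.argv[1] if len(sys.argv) > 1 else "plan.json")
    if problems:
        print("Tag policy violations:")
        print("\n".join(problems))
        sys.exit(1)  # non-zero exit fails the pipeline before untagged spend ships
    print("All planned resources carry the required cost-allocation tags.")
```

Running a check like this as a pipeline step is what turns the tagging rule from a wiki page into an enforced guardrail.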
𝗧𝗵𝗲 𝗖𝗵𝗮𝗹𝗹𝗲𝗻𝗴𝗲 𝗼𝗳 𝗧𝗮𝗺𝗶𝗻𝗴 𝗮 𝗠𝘂𝗹𝘁𝗶-𝗖𝗹𝗼𝘂𝗱 𝗖𝗮𝘀𝗰𝗮𝗱𝗲 (𝘌𝘷𝘦𝘳𝘺 𝘤𝘩𝘢𝘭𝘭𝘦𝘯𝘨𝘦 𝘣𝘦𝘨𝘪𝘯𝘴 𝘸𝘪𝘵𝘩 𝘢 𝘲𝘶𝘦𝘴𝘵𝘪𝘰𝘯) The nightmare scenario isn't one cloud region going down. It's a failure that cascades from AWS to Azure, taking your core services with it. You're now leading a response where you don't control the engineers, the status pages, or the timelines. The complexity isn't just technical. It's about commanding chaos across sovereign territories. 𝘐𝘯 𝘢 𝘮𝘶𝘭𝘵𝘪-𝘷𝘦𝘯𝘥𝘰𝘳 𝘤𝘳𝘪𝘴𝘪𝘴, 𝘺𝘰𝘶𝘳 𝘭𝘦𝘢𝘥𝘦𝘳𝘴𝘩𝘪𝘱 𝘢𝘯𝘥 𝘤𝘰𝘮𝘮𝘶𝘯𝘪𝘤𝘢𝘵𝘪𝘰𝘯 𝘱𝘭𝘢𝘯 𝘢𝘳𝘦 𝘺𝘰𝘶𝘳 𝘮𝘰𝘴𝘵 𝘤𝘳𝘪𝘵𝘪𝘤𝘢𝘭 𝘳𝘦𝘥𝘶𝘯𝘥𝘢𝘯𝘤𝘺. 𝙍𝙚𝙖𝙡𝙩𝙖𝙡𝙠: When your architecture spans multiple clouds, does your incident command structure have the same reach? #ITLeadership #CloudStrategy #DisasterRecovery #IncidentManagement #VendorManagement #ServiceDelivery
In today’s cloud-driven environment, balancing performance, scalability, and cost is critical. Our team recently took a deep dive into our Kubernetes setup — and the results have been truly rewarding! 💡

Here’s what we achieved through smart orchestration and optimization:
✅ Right-sized our workloads — Eliminated over-provisioned resources by implementing dynamic scaling.
✅ Optimized cluster utilization — Leveraged node auto-scaling and affinity rules for efficient scheduling.
✅ Adopted spot and reserved instances for non-critical workloads, reducing compute cost significantly.
✅ Automated monitoring & alerting using Prometheus and Grafana to identify unused resources in real time.

💰 Outcome: Reduced overall infrastructure cost by 30%, while maintaining system reliability and scalability.

Kubernetes continues to prove itself as a game-changer — not just for automation and deployment, but also for strategic cost management in modern cloud ecosystems. 🌩️

#Kubernetes #CloudOptimization #DevOps #Microservices #CostEfficiency #CloudNative #EngineeringExcellence
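For the monitoring point, here is one hedged example of what "identify unused resources" can look like in practice: querying Prometheus over its HTTP API for containers whose actual CPU usage sits far below their requests. The server URL, lookback window, and 20% threshold are assumptions for illustration, and the exact metric names depend on the exporters installed (cAdvisor and kube-state-metrics here).

```python
# Sketch: flag over-provisioned containers by comparing CPU usage to CPU requests.
# Assumes a reachable Prometheus scraping cAdvisor and kube-state-metrics;
# the URL, 7-day window, and 20% threshold are illustrative choices.
import requests

PROMETHEUS_URL = "http://prometheus.monitoring.svc:9090"  # hypothetical address

# Ratio of average CPU used over 7 days to CPU requested, per container.
QUERY = (
    'sum by (namespace, pod, container) (rate(container_cpu_usage_seconds_total[7d]))'
    ' / '
    'sum by (namespace, pod, container) (kube_pod_container_resource_requests{resource="cpu"})'
)

resp = requests.get(f"{PROMETHEUS_URL}/api/v1/query", params={"query": QUERY}, timeout=30)
resp.raise_for_status()

for series in resp.json()["data"]["result"]:
    labels = series["metric"]
    utilization = float(series["value"][1])
    if utilization < 0.2:  # using less than 20% of what it asks for
        print(
            f"{labels.get('namespace')}/{labels.get('pod')}/{labels.get('container')}: "
            f"{utilization:.0%} of requested CPU -> candidate for right-sizing"
        )
```

Feeding a report like this into the right-sizing loop is usually where the first chunk of that 30% saving comes from, since requests set what the autoscaler has to provision.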
The recent AWS outage reminded me of one big truth: even the best tech can fail.

When AWS went down this week, many AI products, including Anthropic Claude, stopped working. It wasn’t because their AI broke; it was because the cloud infrastructure did. That small event showed how much our AI products depend on one company’s servers. As product managers, we often focus on features and user delight. But real trust comes from reliability.

Here are a few lessons this outage taught me:
- Build for resilience, not just for speed.
- Know your dependencies: if AWS fails, what happens to your product?
- Have a backup plan and a communication plan for your users.
- Think multi-region or even multi-cloud. Don’t keep all your eggs in one basket.
- Beyond model accuracy and engagement, track metrics like operational health, uptime, and recovery time.

AI can do amazing things, but without a strong foundation, even smart systems can fail. If your product went down during the AWS outage, what did you learn from it?

#AIProductManagement #AWS #CloudResilience #TechLeadership #ProductManagement
Great conversation with ALEX KUAN (FinOps Lead, Arlo) on how to make long-term AWS commitments flexible instead of fragile.

Highlights:
- When “savings” stop saving: Long-term cloud discounts can backfire when usage patterns shift, locking teams into rigid spending.
- Arlo’s playbook: Commit to your baseline usage, not the peaks. Cover predictable workloads first, then layer flexible commitments that adapt to changing demand.
- 3-phase approach: Seed → Grow → Harvest — start small, scale coverage as patterns stabilize, and adjust as the business evolves.
- Automation as leverage: Continuous monitoring ensures that every dollar committed is actually utilized, turning cost management into a measurable ROI engine.

Common questions from the session:
1. How to balance simplicity and flexibility? (Use fixed commitments for the stable 80–90% of workloads, flexible ones for the rest.)
2. What’s the biggest risk? (Overcommitting. Start with data, not guesses — then expand as visibility improves.)
3. When to evolve your strategy? (Once your teams can forecast with confidence, it’s time to introduce flexible, data-driven automated commitment management.)

Replay is available: https://lnkd.in/guFmitv4
If you’d like to benchmark your own commitment efficiency, DM me.

#FinOps #CloudCostOptimization #AWS #EngineeringLeadership #CFO #CloudStrategy #nOps
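To make the "commit to your baseline, not the peaks" idea concrete, here is a small, assumption-laden sketch rather than Arlo's or nOps' actual method: given hourly on-demand spend, commit at a low percentile of observed usage so the commitment stays fully utilized even in quiet hours, and measure what share of total spend it covers. The percentile choice and the sample numbers are illustrative.

```python
# Sketch: size a commitment from the baseline (a low percentile of hourly usage),
# not the peaks. Percentile choice and sample data are illustrative only.

def baseline_commitment(hourly_spend, percentile=10):
    """Return a commitment level that is covered even during quiet hours.

    percentile=10 means roughly 90% of hours use at least this much,
    so the committed dollars are almost always fully utilized.
    """
    ordered = sorted(hourly_spend)
    index = max(0, int(len(ordered) * percentile / 100) - 1)
    return ordered[index]


def coverage_report(hourly_spend, commitment):
    """Return (commitment utilization, share of total spend covered)."""
    covered = [min(hour, commitment) for hour in hourly_spend]
    utilization = sum(covered) / (commitment * len(hourly_spend))
    coverage = sum(covered) / sum(hourly_spend)
    return utilization, coverage


if __name__ == "__main__":
    # Hypothetical hourly on-demand spend ($/hour): stable base plus spiky peaks.
    usage = [40, 42, 45, 41, 43, 80, 120, 95, 44, 42, 40, 39] * 14
    commit = baseline_commitment(usage, percentile=10)
    util, cov = coverage_report(usage, commit)
    print(f"Commit at ${commit}/hr -> {util:.0%} commitment utilization, "
          f"{cov:.0%} of total spend covered")
```

The remaining uncovered spend (the spiky portion) is where the flexible, shorter-lived commitments from the "Grow" phase would layer on as the usage pattern stabilizes.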
De-Clouding as a Strategy This week’s AWS global nap reminded everyone that even the biggest cloud in the sky can still rain on your parade. When consoles freeze and dashboards spin forever, our reflex is to patch, reboot, and pray, but I believe the smarter move is to ask: “Have we built too much of our resilience on someone else’s uptime?” “Cloud-first” doesn’t always mean “business-first”; sometimes it just means “someone-else’s-data-center-first.” So now the grown-ups are rediscovering de-clouding: not rage-quitting AWS, but bringing a few critical workloads back down to earth. It’s about control, visibility, and cost sanity. On-prem may be less glamorous, but at least when it breaks, you can actually touch it. De-clouding isn’t anti-cloud, it’s pro-resilience. The best architecture, after all, mixes clouds with concrete… because when the sky falls, it helps to have solid ground.
A single-region issue can ripple across thousands of services, as we saw with yesterday's AWS incident. The fix is to remove single points of failure with automated, tested failover across clouds. In this short clip, Todd shares how teams use TAHO to shift traffic and stay online. #reliability #DR #platformengineering
From Single Cloud Risk to Automated Failover
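The clip is about TAHO specifically; as a generic and heavily simplified illustration of the underlying idea (health-checked, automated failover rather than a manual runbook), the sketch below probes a primary endpoint and hands traffic to a standby after repeated failures. The endpoints, thresholds, and the switch_traffic hook are hypothetical placeholders, not TAHO's API.

```python
# Generic failover sketch (not TAHO's API): probe the primary endpoint and,
# after repeated failures, route traffic to a standby in another cloud.
# Endpoints, thresholds, and switch_traffic() are hypothetical placeholders.
import time
import urllib.error
import urllib.request

PRIMARY = "https://app.primary-cloud.example.com/healthz"  # hypothetical
STANDBY = "https://app.standby-cloud.example.com"          # hypothetical
FAILURES_BEFORE_FAILOVER = 3
CHECK_INTERVAL_SECONDS = 10


def healthy(url, timeout=5):
    """Return True if the endpoint answers with a 2xx within the timeout."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return 200 <= resp.status < 300
    except (urllib.error.URLError, OSError):
        return False


def switch_traffic(target):
    """Placeholder for the real traffic move: a weighted DNS update,
    a load-balancer pool change, or a call to a failover product."""
    print(f"FAILOVER: routing traffic to {target}")


def watch():
    consecutive_failures = 0
    while True:
        if healthy(PRIMARY):
            consecutive_failures = 0
        else:
            consecutive_failures += 1
            if consecutive_failures >= FAILURES_BEFORE_FAILOVER:
                switch_traffic(STANDBY)
                return  # a real system would also handle fail-back and alerting
        time.sleep(CHECK_INTERVAL_SECONDS)


if __name__ == "__main__":
    watch()
```

The point of the post stands either way: whatever performs the switch, it has to be automated and regularly tested, because a failover path that is only exercised during an outage is itself a single point of failure.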
These are exactly the kind of practical sessions we need more of. Managing dev teams has shown me that theory is nice, but what matters is making security work without becoming a bottleneck. Both topics hit the mark.