If you’re building with durable functions, this will make you dramatically faster. The Kiro power for Lambda durable functions is now available!
• Accelerate design with common patterns and proven guidelines
• Speed up implementation with ready-to-use building blocks and best practices
• Improve quality with testing patterns and debugging support tailored to durable functions
The power is available today with one-click installation from the Kiro IDE and the Kiro powers catalog. Check https://lnkd.in/dYWXdwtR for all the details. I’m really excited to see what you’ll build with it. And thanks to Michael Gasch and Thomas Gaigher for your support. #AWS #Lambda #Serverless
Excited about our team's work helping customers accelerate serverless development with Kiro Powers! Check it out and let us know what you think!
-
Most clusters don’t hit CPU limits first. They hit control-plane limits. When workloads grow, etcd becomes the bottleneck:
• write amplification
• watch cache pressure
• limited storage scalability
HariKube removes this constraint. Instead of one storage backend, HariKube introduces a workload-aware routing layer for Kubernetes state. What this means for you:
• 10–50× higher throughput for heavy workloads
• Lower latency for API operations
• Horizontal control-plane scalability
• True multi-database flexibility (vendor-agnostic)
• Better tenant and namespace isolation
• Run CRD-heavy architectures without hitting storage limits
And the best part: your Kubernetes API stays exactly the same. No changes to kubectl. No changes to controllers. No changes to RBAC. Just HariKube, without the etcd ceiling. https://harikube.info/
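The core idea of a workload-aware routing layer can be sketched in a few lines. This is an illustration of the concept, not HariKube's actual code; the prefixes and backend names are assumptions chosen to mirror the post's examples (high-churn events, heartbeat leases, CRD-heavy state):

```python
# Conceptual sketch: map Kubernetes storage keys to different backends by
# longest-prefix match, instead of sending every write to a single etcd.
PREFIX_ROUTES = {
    "/registry/events/": "timeseries-db",       # high-churn, append-heavy
    "/registry/leases/": "memory-store",        # hot heartbeat writes
    "/registry/mycrd.example.com/": "sql-db",   # CRD-heavy workload
}
DEFAULT_BACKEND = "etcd"

def route(key: str) -> str:
    """Pick the backend with the longest matching key prefix; default to etcd."""
    best_prefix, backend = "", DEFAULT_BACKEND
    for prefix, b in PREFIX_ROUTES.items():
        if key.startswith(prefix) and len(prefix) > len(best_prefix):
            best_prefix, backend = prefix, b
    return backend

assert route("/registry/events/ns1/evt-1") == "timeseries-db"
assert route("/registry/pods/ns1/web-0") == "etcd"
```

Because the routing happens below the API server's storage interface, kubectl, controllers, and RBAC would be unaffected, which is consistent with the post's "API stays the same" claim.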
-
Architecting for Vulnerability: Introducing Protective Computing Core v1.0, created by CrisisCore-Systems. Most software is built on a dangerous premise: the Stability Assumption. We assume the user has a stable network, stable cognitive capacity, a secure physical environment, and institutional trust. When those conditions hold, modern cloud-native architecture works beautifully. If you want privacy-... Read more: https://lnkd.in/gYbW3_qv (published Fri, 06 Mar 2026)
-
💸 ECS Cost Insight: You’re Probably Overpaying at the Task Level

One of the biggest (and quietest) cost leaks in ECS isn’t your architecture choice, it’s how your tasks are sized. In most environments I’ve looked at, services are defined with something like 1 vCPU and 2 GB memory “just to be safe.” But when you actually check usage, CPU often sits at 10–25% and memory rarely goes beyond 50%.

The problem is simple: in ECS (especially with Fargate), you pay for what you allocate, not what you use. That gap between allocation and actual usage is where 20–40% of your spend disappears.

The root cause is usually a mix of caution and a lack of feedback loops. Teams size for worst-case scenarios, rarely revisit task definitions, and rely on averages instead of meaningful metrics. Over time, this becomes baked into the platform and scales with every new service.

A better approach is to treat task sizing as an ongoing cost lever. Look at p95 CPU and memory usage rather than absolute peaks, and reduce allocations incrementally while monitoring performance. Most services are far more tolerant than expected. If you’re over-allocating to handle spikes, that’s usually a sign you should scale horizontally instead: adding more tasks when needed is often cheaper than permanently over-sizing each one.

The key shift is this: ECS cost optimisation isn’t just about choosing between Fargate, EC2, or Spot. It’s about aligning what you provision with how your service actually behaves. Fixing task utilisation is often the fastest way to reduce spend without changing your architecture at all.

💬 Curious: how often do you revisit task sizing in your environment?

#AWS #ECS #FinOps #CostOptimisation #Containers #Capacitas
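The p95-plus-headroom approach above can be sketched as a small calculation. The sample data and the 20% headroom factor are assumptions for illustration; in practice the utilization samples would come from CloudWatch metrics for the service:

```python
import math

def p95(samples):
    """Nearest-rank 95th percentile of a list of utilization samples."""
    s = sorted(samples)
    return s[math.ceil(0.95 * len(s)) - 1]

def recommend(allocated, samples_pct, headroom=1.2):
    """Suggest a new allocation: p95 of observed usage plus 20% headroom."""
    used = allocated * p95(samples_pct) / 100.0
    return used * headroom

# A task allocated 1024 CPU units whose CPU mostly sits in the 10-25% band:
samples = [12, 15, 18, 20, 22, 19, 25, 17, 21, 16]
new_cpu = recommend(1024, samples)
assert new_cpu < 512  # well under half the current allocation
```

Sizing to p95 rather than the absolute peak deliberately accepts brief throttling on rare spikes; as the post argues, sustained spikes are better absorbed by scaling out tasks than by permanently over-sizing each one.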
-
Runner v0.10.0 is out

We just released Runner v0.10.0. This release introduces PodDaemon, a new mechanism inspired by a tmux-like multi-process architecture.

What it does: Pods (the containers where agents run, including Claude Code execution containers) are now process-isolated from the main agentsmesh-runner process. That means if agentsmesh-runner crashes or restarts for any reason, the running agents are not interrupted. Once the runner comes back up, it can reconnect and continue forwarding the live pod status to AgentsMesh Cloud.

Why this matters: This is a big step toward making long-running agent workflows more reliable and production-ready.

Also included in this release: a long list of smaller bug fixes and improvements.
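The tmux-like survival property described here is commonly achieved with session detachment. The sketch below is an assumption about the general mechanism, not AgentsMesh's actual implementation: a child started in its own session is not reaped or signalled when its parent's process group dies, so it outlives a runner crash.

```python
import os
import subprocess

def spawn_detached(cmd):
    # start_new_session=True calls setsid() in the child: the child gets its
    # own session and process group, so signals delivered to the runner's
    # group (e.g. on crash or restart) do not reach it.
    return subprocess.Popen(cmd, start_new_session=True)

# Illustrative child process standing in for an agent pod:
proc = spawn_detached(["sleep", "5"])
# The detached child runs in a different session than the parent process.
assert os.getsid(proc.pid) != os.getsid(os.getpid())
proc.terminate()
proc.wait()
```

A runner restarting after a crash would then rediscover such detached processes (for example via recorded PIDs) and resume forwarding their status, which matches the reconnect behavior the release notes describe.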
-
Cloud native is evolving fast, and this week it’s happening in Amsterdam at KubeCon + CloudNativeCon Europe, the Cloud Native Computing Foundation (CNCF) flagship conference. Our colleagues Álvaro Vázquez Rodríguez, Juan Pontón Rodríguez and Carlos Giraldo Rodríguez are at the conference, where core CNCF projects and communities converge to define the next generation of cloud-native systems. From our work on cloud-native architectures, the edge-cloud continuum, Kubernetes-based orchestration and distributed data platforms, this is a key space to explore how scalability, resilience and interoperability are being redefined. From platform engineering to edge deployments, this is where the cloud-native stack keeps pushing forward. #CloudNativeTechnologies #CPUGPUEdgeComputing #FPGAEdgeComputing #NeuromorphicEdgeComputing
-
Hybrid Architecture Combines SOTA Planning with Small Models for Local Code Editing 📌 Developers are now leveraging a hybrid architecture that splits code-editing duties (strategic planning via SOTA cloud models, granular edits via optimized local ones) to slash latency and costs. The Qwen3.5-35B-A3B model, when fine-tuned for 16 GB GPUs, delivers blazing speed (20+ tokens/sec) while avoiding disruptive reasoning artifacts, proving small models can thrive locally when engineered right. 🔗 Read more: https://lnkd.in/dappUBiK #Sotaplanning #Localcodeediting #Smallmodels #Hybridarchitecture #Applyedit
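The plan/apply split described above can be sketched as a small dispatcher. All names here are illustrative stubs (the article's actual code is behind the link): one expensive round-trip to the large model produces a plan, then each granular edit runs on the cheap local model.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class EditStep:
    file: str
    instruction: str

def hybrid_edit(task: str,
                plan_with_cloud: Callable[[str], List[EditStep]],
                apply_locally: Callable[[EditStep], str]) -> List[str]:
    # 1. One call to the SOTA cloud model produces the strategic plan...
    steps = plan_with_cloud(task)
    # 2. ...then each granular edit is applied by the small local model,
    #    keeping per-edit latency and cost low.
    return [apply_locally(step) for step in steps]

# Stub "models" standing in for the cloud planner and local editor:
plan = lambda task: [EditStep("app.py", "rename foo to bar")]
apply = lambda step: f"edited {step.file}"
assert hybrid_edit("refactor foo", plan, apply) == ["edited app.py"]
```

The design bet is that planning needs broad reasoning (worth the cloud round-trip) while applying a well-specified edit does not, so the latency-critical inner loop stays on local hardware.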
-
Karpenter is great because it ignores the idea of node groups entirely. It looks at exactly what your pods need and talks to the AWS API to spin up the right instance at that moment 🏗️

The process is straightforward:
1. A pod can't find a home
2. Karpenter sees the CPU and memory requirements
3. It picks a matching EC2 instance from the whole AWS catalog
4. The node joins the cluster and the pod starts

It handles consolidation better than any other autoscaler I know. If a node is mostly empty, Karpenter moves the pods to a smaller instance and kills the old one.

Our architecture: we run the Karpenter controller itself on Fargate to avoid circular dependencies. If the cluster is empty, Karpenter still has a place to live, so it can scale the rest of the infrastructure. I like to say: keep the brain separate from the muscle 🧠

It makes the infrastructure feel (almost) invisible and invincible! Devs just request resources in their manifests and the compute appears 🥰
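The four steps above boil down to a fit-and-minimize selection. This is a toy sketch of that idea, not Karpenter's code, and the catalog entries and prices are illustrative numbers only:

```python
# Toy instance catalog: (name, vCPU, memory GiB, $/hour) - illustrative only.
CATALOG = [
    ("m5.large",   2,  8, 0.096),
    ("m5.xlarge",  4, 16, 0.192),
    ("c5.2xlarge", 8, 16, 0.340),
    ("r5.xlarge",  4, 32, 0.252),
]

def pick_instance(cpu_req, mem_req):
    """Cheapest instance that fits the pending pod's requests, or None."""
    fits = [i for i in CATALOG if i[1] >= cpu_req and i[2] >= mem_req]
    return min(fits, key=lambda i: i[3])[0] if fits else None

# A pod requesting 3 vCPU / 12 GiB gets the cheapest fitting shape:
assert pick_instance(3, 12) == "m5.xlarge"
# A memory-heavy pod (2 vCPU / 24 GiB) is steered to a memory-optimized type:
assert pick_instance(2, 24) == "r5.xlarge"
```

Because selection runs per pending pod against the whole catalog instead of a fixed node-group shape, the same logic also explains consolidation: re-running it against a mostly-empty node's actual usage yields a smaller, cheaper replacement.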
-
Why top AWS users are switching to Graviton-powered EC2 instances (and what you should know):

Graviton chips aren't just cheaper; they're better for performance and sustainability. Here's why engineers are adopting them fast:

1. Get more compute per dollar. Graviton2 and Graviton3 offer better price/performance than x86, up to 40% in some cases.
2. Save power without sacrificing speed. ARM architecture means lower energy draw, less cooling, and smaller bills at scale.
3. Scale smarter. Graviton supports more vCPUs per instance, ideal for microservices or compute-heavy loads.
4. Use it across workloads. C7g for compute, R7g for memory-heavy tasks, or Lambda if you're serverless.

If your EC2 bill is high and performance is flat, Graviton is your upgrade path.

Credit: Riyaz Sayyad
Follow Naresh Kumari for more insights
-
Yes! 🚀☁️ This is an overview of our work relating to Cloud Native. If you would like to know more about Gradiant and how we use these technologies in our day-to-day work, let's have a coffee and chat! #KubeCon #CloudNative #CNCF