🚀 I built an AI-powered DevOps pipeline that takes a requirements.json + a zipped app — and deploys it to AWS automatically.

Meet DevOps-Crew — a multi-agent system where specialized AI agents collaborate across the entire software delivery lifecycle. From infrastructure generation to live deployment and health verification — end to end.

Press Run, and this happens:

🧠 The Orchestrator reads your JSON and generates Terraform for VPC, ALB, ASG, ECR, Route53, ACM, CloudWatch, SSM — plus remote state (S3 + DynamoDB + KMS). If you don’t upload an app, it generates a sample Node.js service.

☁️ The Infrastructure Engineer runs Terraform across bootstrap → dev → prod, auto-handles IAM conflicts and quota limits, and wires backend outputs automatically.

🐳 The Build Engineer builds your Docker image and pushes it to ECR. If Docker isn’t available, it falls back to an EC2 build runner via SSM. Zero manual steps.

🚀 The Deployment Engineer deploys using ssh_script, ansible, or ecs — including blue/green ECS updates or EC2 rolling restarts through a bastion.

✅ The Verifier reads metadata from SSM and hits the live HTTPS endpoint, reporting pass/fail via HTTP status.

Everything runs from a single Gradio UI. Upload JSON. Upload your app. Choose region and deploy method. Add env vars. Hit Run Combined-Crew.

Pipeline: Generate → Infra → Build → Deploy → Verify. Logs stream live. Download the generated project bundle at the end.

🎯 Result: your app running behind HTTPS, load-balanced via ALB + ASG, blue/green enabled, CloudWatch alarms configured — provisioned, built, deployed, and verified entirely by AI agents.

⚠️ Current limitation: validated for simple stateless Node.js apps (Dockerfile at root, port 8080, /health endpoint). Multi-service and database support are next.

🛠 Stack: CrewAI · Terraform · AWS (EC2 / ECS / ECR / ALB / Route53 / ACM / SSM / CloudWatch / KMS) · Docker · Python · Gradio · Ansible

The hardest parts weren’t the AI — they were the operational edge cases: Docker daemon timing, Terraform conditional resources, IAM conflicts, and resilient EC2 user data.

Still evolving — but it runs end-to-end. Try it here 👇
🔗 https://lnkd.in/ggfZRnPS

📸 Attached: live blue/green deployment — Healthy status, HTTPS domain, timestamp.

#DevOps #AIEngineering #AWS #Terraform #AgenticAI #CloudInfrastructure #Docker #BuildInPublic #InfrastructureAsCode
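The post doesn't show the Verifier's internals; as a rough sketch of its final step (function names and the URL are illustrative, not from DevOps-Crew), the HTTPS health check could reduce to a status-code classification:

```python
from urllib.request import urlopen
from urllib.error import HTTPError, URLError

def classify_status(status: int) -> str:
    # Any 2xx response counts as healthy; everything else fails.
    return "pass" if 200 <= status < 300 else "fail"

def verify_endpoint(url: str, timeout: float = 10.0) -> str:
    """Hit the deployed HTTPS endpoint and report pass/fail by HTTP status."""
    try:
        with urlopen(url, timeout=timeout) as resp:
            return classify_status(resp.status)
    except HTTPError as err:   # non-2xx responses surface as HTTPError
        return classify_status(err.code)
    except URLError:           # DNS, TLS, or connection failure
        return "fail"

# e.g. verify_endpoint("https://app.example.com/health")
```

The real Verifier would first resolve the endpoint URL from the metadata published to SSM rather than hard-coding it.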
AI-Powered DevOps Pipeline for AWS Deployment
🚀 𝗕𝘂𝗶𝗹𝗱𝗶𝗻𝗴 𝗮𝗻 𝗘𝗻𝘁𝗲𝗿𝗽𝗿𝗶𝘀𝗲 𝗞𝘂𝗯𝗲𝗿𝗻𝗲𝘁𝗲𝘀 𝗣𝗹𝗮𝘁𝗳𝗼𝗿𝗺 𝗼𝗻 𝗔𝗪𝗦
𝗣𝗮𝗿𝘁 2 — 𝗜𝗻𝗳𝗿𝗮𝘀𝘁𝗿𝘂𝗰𝘁𝘂𝗿𝗲 𝘃𝘀 𝗣𝗹𝗮𝘁𝗳𝗼𝗿𝗺 𝗦𝗽𝗹𝗶𝘁

In Part 1, I focused on deterministic EKS bootstrap: making sure the cluster comes up correctly on the first apply. In Part 2, the focus shifts from 𝘤𝘳𝘦𝘢𝘵𝘪𝘰𝘯 to 𝘰𝘸𝘯𝘦𝘳𝘴𝘩𝘪𝘱.

At this point, the cluster already exists. The real question becomes: 𝗪𝗵𝗼 𝗼𝘄𝗻𝘀 𝘄𝗵𝗮𝘁 — 𝗮𝗻𝗱 𝗵𝗼𝘄 𝗱𝗼 𝘁𝗵𝗼𝘀𝗲 𝗹𝗮𝘆𝗲𝗿𝘀 𝗰𝗼𝗺𝗺𝘂𝗻𝗶𝗰𝗮𝘁𝗲?

🎯 𝗧𝗵𝗲 𝗣𝗿𝗼𝗯𝗹𝗲𝗺
In many projects, Terraform continues to manage everything:
• infrastructure
• Kubernetes addons
• workloads
• platform components
This tightly couples infrastructure lifecycle with day-2 operations. It also creates fragile dependencies via remote state and makes iteration risky. That model doesn’t scale in real environments.

🧱 𝗧𝗵𝗲 𝗗𝗲𝘀𝗶𝗴𝗻 𝗗𝗲𝗰𝗶𝘀𝗶𝗼𝗻
I explicitly separated responsibilities:

𝗜𝗻𝗳𝗿𝗮𝘀𝘁𝗿𝘂𝗰𝘁𝘂𝗿𝗲 𝗦𝘁𝗮𝗰𝗸 (𝗧𝗲𝗿𝗿𝗮𝗳𝗼𝗿𝗺) — responsible only for:
• VPC
• Amazon Elastic Kubernetes Service control plane
• system node group
• core addons
• minimal Kubernetes primitives

𝗣𝗹𝗮𝘁𝗳𝗼𝗿𝗺 𝗦𝘁𝗮𝗰𝗸 (𝗚𝗶𝘁𝗢𝗽𝘀-𝗺𝗮𝗻𝗮𝗴𝗲𝗱) — responsible for everything running 𝘪𝘯𝘴𝘪𝘥𝘦 the cluster:
• GitOps control plane via Argo CD
• observability
• alert routing
• workloads
• environment promotion

Terraform stops once the platform bootstrap is complete. From that point forward, Git becomes the source of truth.

🔑 𝗖𝗼𝗻𝘁𝗿𝗮𝗰𝘁-𝗗𝗿𝗶𝘃𝗲𝗻 𝗜𝗻𝘁𝗲𝗴𝗿𝗮𝘁𝗶𝗼𝗻
Instead of using Terraform remote state, the infrastructure stack publishes cluster metadata into AWS Systems Manager Parameter Store:
• cluster name
• endpoint
• CA data
• OIDC provider
This becomes a 𝘀𝘁𝗮𝗯𝗹𝗲 𝗰𝗼𝗻𝘁𝗿𝗮𝗰𝘁 between layers. The platform stack consumes this contract and never directly depends on Terraform state.
This provides:
✔ loose coupling
✔ independent lifecycles
✔ safer iteration
✔ GitOps-friendly workflows

🧑‍💻 𝗘𝘅𝗲𝗰𝘂𝘁𝗶𝗼𝗻 𝗖𝗼𝗻𝘁𝗲𝘅𝘁 𝗠𝗮𝘁𝘁𝗲𝗿𝘀
Platform operations are not performed from a developer laptop. They are executed from a controlled admin host using SSM:
• no SSH
• no public endpoints
• scoped permissions
This mirrors enterprise environments where:
• bootstrap is restricted
• day-2 operations are delegated safely

🧠 𝗞𝗲𝘆 𝗧𝗮𝗸𝗲𝗮𝘄𝗮𝘆𝘀
• Infrastructure and platform have different lifecycles
• Terraform should not manage workloads
• GitOps becomes the operational control plane
• Contracts scale better than shared state
• Execution context is part of the architecture

Next, I’ll dive into GitOps enablement: how applications are delivered, promoted, and controlled across environments. If you’re working with Kubernetes, GitOps, or platform design, I’d love to exchange ideas.

#Kubernetes #AWS #DevOps #Terraform #PlatformEngineering #LearningByBuilding
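The consumer side of that contract is small. A sketch in Python (the parameter paths are hypothetical, since the post doesn't name the real keys; production use would pass `boto3.client("ssm")` as the client):

```python
# Hypothetical parameter names; the post doesn't specify the real paths.
CONTRACT_KEYS = {
    "cluster_name":  "/platform/eks/cluster-name",
    "endpoint":      "/platform/eks/endpoint",
    "ca_data":       "/platform/eks/ca-data",
    "oidc_provider": "/platform/eks/oidc-provider",
}

def read_cluster_contract(ssm_client) -> dict:
    """Resolve the infra-to-platform contract from SSM Parameter Store.

    The platform stack reads only these published values and never
    touches Terraform state directly."""
    return {
        field: ssm_client.get_parameter(Name=name)["Parameter"]["Value"]
        for field, name in CONTRACT_KEYS.items()
    }

# Production usage (requires boto3 and AWS credentials):
#   import boto3
#   contract = read_cluster_contract(boto3.client("ssm"))
```

Because the function only depends on the parameter names, either side can change its Terraform or GitOps internals without breaking the other, which is the point of the contract.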
Docker Architecture Explained: All You Need to Know | Build, Pull, Run Containers Like a Pro

Containerization is one of the most important technologies powering modern cloud infrastructure, DevOps pipelines, and scalable application deployment. If you're preparing for DevOps, Cloud Engineer, or Platform Engineer roles, understanding Docker architecture is essential. But beyond learning it, companies also need reliable infrastructure to run containers in production.

Let’s break down the architecture step by step.

#DockerClient
The Docker Client is the command-line interface engineers use to interact with Docker.
Common commands:
• docker build
• docker pull
• docker run
Interview Insight: The Docker client communicates with the Docker daemon using REST APIs.

#DockerDaemon (dockerd)
The Docker Daemon runs in the background and manages all Docker operations. Responsibilities include:
• Building container images
• Managing containers
• Handling networking and storage
• Communicating with container registries

#DockerImages
Docker images are read-only templates used to create containers. Examples: Ubuntu, Nginx, Redis.
Images typically contain:
• Application code
• Runtime environment
• Required libraries
• Dependencies
This ensures consistent deployments across environments.

#DockerContainers
Containers are running instances of Docker images. Key characteristics:
• Lightweight
• Isolated execution environment
• Fast startup time
• Share the host OS kernel
This makes containers much more efficient than traditional virtual machines.

#DockerHost
The Docker Host is the system where Docker runs. It can be:
• A local development server
• A cloud VM
• A Kubernetes worker node
• A dedicated container server

#DockerRegistry
A Docker Registry stores and distributes container images. Examples include:
• Docker Hub
• AWS ECR
• Azure Container Registry
Organizations often maintain private registries for internal deployments.

#DockerWorkflow (Build → Pull → Run)
Build: developers create container images using Dockerfiles.
Pull: images are downloaded from a registry.
Run: containers are launched from images on the Docker host.
This workflow allows applications to run consistently across development, staging, and production environments.

Where Infrastructure Matters
Running containers in production requires reliable compute, fast storage, and stable networking. That’s where #ConnectQuest comes in. For teams deploying containerized AI agents and automation platforms, Connect Quest provides OpenClaw AI Agent Hosting, a production-ready environment with Docker, Redis, PostgreSQL, Python, and Node.js pre-installed so developers can deploy AI agents without complex infrastructure setup.
Learn more: https://lnkd.in/dyhE4xG7

#Docker #DevOps #Containerization #CloudComputing #Kubernetes #Microservices #CI_CD #CloudEngineering #OpenClaw #OpenClawHosting #AIAgent #AiAgentHosting #AIAgentDevOps
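The Build → Pull → Run workflow maps directly onto the CLI. A small Python helper (the image name and registry URL are illustrative) that assembles the three invocations:

```python
def docker_commands(image: str, tag: str, registry: str) -> dict:
    """Build argv lists for the Build -> Pull -> Run workflow.

    Each list can be handed to subprocess.run(...) on a host where
    the Docker daemon is available."""
    ref = f"{registry}/{image}:{tag}"
    return {
        "build": ["docker", "build", "-t", ref, "."],
        "pull":  ["docker", "pull", ref],
        "run":   ["docker", "run", "-d", "-p", "8080:8080", ref],
    }

# e.g. docker_commands("web", "v1", "registry.example.com")["pull"]
# -> ['docker', 'pull', 'registry.example.com/web:v1']
```

In a CI pipeline, "build" and a push typically run on the build agent, while "pull" and "run" happen on the Docker host or orchestrator that serves production traffic.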
𝐈 𝐛𝐮𝐢𝐥𝐭 𝐚𝐧 𝐀𝐈 𝐭𝐡𝐚𝐭 𝐫𝐞𝐯𝐢𝐞𝐰𝐬 𝐓𝐞𝐫𝐫𝐚𝐟𝐨𝐫𝐦 𝐏𝐮𝐥𝐥 𝐑𝐞𝐪𝐮𝐞𝐬𝐭𝐬.

Most infrastructure code reviews still happen manually. But Terraform changes can easily introduce problems:
• security issues
• bad practices
• inefficient resource configuration
• missing best practices

So I decided to experiment with something simple. When a developer opens a Pull Request with Terraform code, the system automatically:
• analyzes the Terraform files
• detects potential issues
• suggests improvements
• posts the feedback directly into the Pull Request

The architecture is intentionally simple:
GitHub PR → GitHub Webhook → AWS Lambda → Amazon Bedrock → PR Comment

The idea is not to replace engineers. The goal is to assist DevOps teams and speed up infrastructure reviews.

I wrote a full article explaining the architecture and implementation.
https://lnkd.in/dx6wfdVP

Would you trust AI to review your Terraform code?

#AWS #Terraform #DevOps #AI #AmazonBedrock #CloudArchitecture
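The full implementation is in the linked article; as a hedged sketch of the Lambda side (the webhook payload shape and helper names here are my assumptions, and the Bedrock and GitHub calls are left as comments), the handler might look like:

```python
import json

def build_review_prompt(tf_files: dict) -> str:
    """Assemble one review prompt covering every changed Terraform file."""
    sections = [f"### {path}\n{diff}" for path, diff in sorted(tf_files.items())]
    return (
        "Review these Terraform changes for security issues, bad practices, "
        "and inefficient resource configuration:\n\n" + "\n\n".join(sections)
    )

def handler(event, context=None):
    """Webhook entry point: keep only the .tf files from the PR payload."""
    payload = json.loads(event["body"])
    tf_files = {
        f["filename"]: f.get("patch", "")
        for f in payload.get("files", [])
        if f["filename"].endswith(".tf")
    }
    prompt = build_review_prompt(tf_files)
    # Next steps (omitted): send `prompt` to Amazon Bedrock via
    # boto3.client("bedrock-runtime"), then post the model's answer
    # back to the Pull Request through the GitHub API.
    return {"statusCode": 200, "files_reviewed": len(tf_files), "prompt": prompt}
```

Filtering to `.tf` files before prompting keeps the model focused and keeps token costs proportional to the actual infrastructure change.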
💡 Something shifted in how I approach infrastructure work recently, and I wanted to share it.

I've been hands-on with Claude Code — not as a novelty, but as a genuine part of how platform work gets done. After putting it through its paces on real AWS workflows, I wrote up a guide for engineers who live in Terraform, GitHub Actions, and EKS day to day.

A few things that surprised me along the way:

1️⃣ The CLAUDE.md file is deceptively powerful. It's just a markdown file in your repo root, but it becomes the AI's understanding of your team — your naming conventions, your tagging requirements, your "never do this" rules. It follows them in the terminal and in CI.

2️⃣ The GitHub Actions + AWS Bedrock combo is a real unlock for enterprise teams. Claude runs through your existing IAM roles. No Anthropic API key sitting in your secrets store. For orgs with strict data residency or third-party API policies, that matters.

3️⃣ Skills feel like runbooks that actually run. Write your EKS incident response steps in a structured markdown file, and Claude loads and follows them when the situation calls for it. It's a small thing that changes a lot.

The guide covers all of this with working code examples — Terraform review automation, CI failure diagnosis, headless scripting patterns, MCP for live AWS resource queries, and more.

Curious whether others on platform or SRE teams are building similar workflows, or approaching AI tooling differently. Always more to learn here.

Link in the comments if you want to dig in.
🔗 https://lnkd.in/gk6fWK8D

#AWS #PlatformEngineering #DevOps #Terraform #SRE #IaC #ClaudeCode #CloudInfrastructure #GitHubActions #AIEngineering
🚀 How Does AWS Deployment Actually Work Internally?

Many developers use AWS daily, but understanding what happens behind the scenes during deployment is essential for building reliable production systems. Here’s a simplified view of a typical CI/CD deployment pipeline on AWS.

1️⃣ Code Development
The journey starts when a developer writes code and pushes it to a Git repository.
Flow: Developer → GitHub / GitLab / Bitbucket
This push usually triggers a CI pipeline automatically.

2️⃣ Continuous Integration (CI)
The CI pipeline performs automated steps to validate the code:
• Compile the application
• Run unit tests
• Perform static code analysis
• Build an artifact (JAR, WAR, or Docker image)
Common tools: Jenkins, GitHub Actions, GitLab CI, AWS CodeBuild

3️⃣ Artifact Storage
Once the build succeeds, the artifact is stored in a repository. Examples:
• AWS S3 – stores JAR/WAR files
• AWS ECR – stores Docker images
This ensures the deployment pipeline always uses a versioned artifact.

4️⃣ Continuous Deployment (CD)
The CD pipeline deploys the application to AWS infrastructure.
Tools commonly used:
• AWS CodeDeploy
• AWS CodePipeline
• Jenkins pipelines
Deployment targets can be:
• EC2 – virtual machines running your app
• ECS – container orchestration
• EKS – Kubernetes-based deployment
• AWS Lambda – serverless functions

5️⃣ Load Balancing & Traffic Routing
Once deployed, traffic is routed through an AWS Elastic Load Balancer (ELB).
Users → Load Balancer → Application Servers
This ensures:
✔ High availability
✔ Traffic distribution
✔ Health checks

6️⃣ Auto Scaling
AWS can automatically scale infrastructure based on traffic. For example: if CPU usage or traffic spikes, new instances launch automatically. This helps handle large workloads without manual intervention.

7️⃣ Monitoring & Observability
Production systems must be monitored continuously. Common AWS tools:
• CloudWatch – metrics & logs
• CloudTrail – API auditing
• AWS X-Ray – distributed tracing

8️⃣ Safe Deployment Strategies
To avoid downtime, modern systems use deployment strategies like:
• Blue-Green Deployment – switch traffic between two environments
• Rolling Deployment – gradually update instances
• Canary Deployment – release to a small percentage of users first

🔑 Final Deployment Flow
Developer → Git Push → CI Pipeline → Build Artifact → CD Pipeline → Deploy to AWS → Load Balancer → Users

Understanding this pipeline helps engineers design scalable, reliable, and production-ready systems.

How does your team currently manage deployments — Jenkins, GitHub Actions, or AWS CodePipeline?

#AWS #DevOps #CloudComputing #CI_CD #Microservices #SoftwareEngineering
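The traffic math behind a canary rollout is simple enough to sketch. This illustrative helper assumes a linear schedule (step counts and the schedule are my own choices, not tied to any specific AWS service):

```python
def canary_weights(step: int, total_steps: int) -> tuple:
    """Return (canary %, stable %) for step N of a linear canary rollout.

    step=0 sends all traffic to the stable version; step=total_steps
    sends all traffic to the canary."""
    if total_steps <= 0 or not 0 <= step <= total_steps:
        raise ValueError("step must be within 0..total_steps")
    canary = round(100 * step / total_steps)
    return canary, 100 - canary

# A 4-step rollout shifts traffic 25% at a time:
# [canary_weights(s, 4) for s in range(5)]
# -> [(0, 100), (25, 75), (50, 50), (75, 25), (100, 0)]
```

Real canary controllers advance the step only after health checks and error-rate metrics stay within bounds, and roll back to (0, 100) otherwise.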
3 days. 6 phases. 1 fully production-grade Kubernetes platform built from scratch. No shortcuts. No half-measures. Here's what we shipped 👇

The client had a problem most growing startups hit: a monolith that couldn't scale. Deployments that broke production. Zero visibility into what was actually happening inside the system. No security posture beyond "hope for the best."

So we rebuilt it. The right way.

𝗧𝗵𝗲 𝗦𝘁𝗮𝗰𝗸:
→ Next.js 14 frontend
→ 2× Express.js microservices (users, products)
→ 1× .NET 8 microservice (orders)
→ PostgreSQL managed by Percona Operator
→ Kong API Gateway
→ Linkerd service mesh (mTLS on every call)
→ KEDA autoscaling on Prometheus metrics
→ Prometheus + Grafana + Loki + Tempo (full observability)
→ Kyverno policy engine + Sealed Secrets + NetworkPolicies
→ ArgoCD GitOps + Helm
→ All infrastructure via Terraform — zero manual AWS console clicks

𝗪𝗵𝗮𝘁 𝗺𝗮𝗱𝗲 𝘁𝗵𝗶𝘀 𝗽𝗿𝗼𝗱𝘂𝗰𝘁𝗶𝗼𝗻-𝗴𝗿𝗮𝗱𝗲:
✅ Private EKS cluster — API server never exposed to the internet
✅ OIDC auth everywhere — no long-lived AWS keys, ever
✅ Scan before push — Trivy blocks vulnerable images before they reach ECR
✅ GitOps sync waves — deployment order enforced automatically
✅ Default-deny NetworkPolicies — zero-trust between every namespace
✅ Topology spread constraints — node failure = degraded, not down

𝗧𝗵𝗲 𝗿𝗲𝗮𝗹 𝗯𝘂𝗴𝘀 𝘄𝗲 𝗵𝗶𝘁 (the ones nobody talks about):
🐛 pgBouncer returning "no such user" even when the user existed in PostgreSQL — it turned out pgBouncer maintains its own auth file, separate from PG's user list
🐛 GATEWAY_URL pointing to a cluster-internal address — browser JavaScript was calling a URL it physically cannot reach

Every bug had a root cause. Every fix was the right fix, not a patch.

𝗧𝗵𝗲 𝗳𝗶𝗻𝗮𝗹 𝗿𝗲𝘀𝘂𝗹𝘁:
Push code → GitHub Actions builds, scans, and pushes to ECR → ArgoCD detects the change → rolling update with zero downtime → KEDA scales pods to match traffic → Grafana shows you exactly what's happening → Linkerd encrypts every service-to-service call

The platform manages itself.

Full technical write-up is live on Medium 👇
https://lnkd.in/g5RdaxUC

It covers every architecture decision, every bug, and every "why" behind the choices made. If you're working on Kubernetes platforms, microservices migrations, or DevOps automation — this one's worth the read.

#DevOps #Kubernetes #AWS #EKS #Terraform #GitOps #ArgoCD #CloudNative #Microservices #PlatformEngineering #SRE #Infrastructure #Kong #Linkerd #Prometheus #Grafana #KEDA #Helm #OpenTelemetry #SoftwareEngineering #BackendDevelopment #Docker #CI #CD #SecurityEngineering #NodeJS #DotNet #NextJS #TechLeadership #SystemDesign #CloudComputing
I recently tackled a persistent problem: our main application's CI pipeline in Azure DevOps was taking an agonizing 18–20 minutes for every single commit. This wasn't just an annoyance; it meant slow feedback cycles, increased agent consumption, and a general drag on developer productivity.

For a long time, we attributed it to the "size of the codebase" or "standard cloud build times." But after some profiling within the pipeline logs, the real culprit became painfully obvious: the `dotnet restore` step. With a large .NET solution comprising dozens of projects, each build was re-downloading hundreds of megabytes (sometimes gigabytes) of NuGet packages from scratch on every agent spin-up. Azure DevOps agents are pristine by design, which is great for consistency but terrible for package restore performance if not managed.

My initial attempts involved manually copying the `nuget global-packages` folder or running `nuget locals all -clear` in odd ways, which were hacks at best and unstable at worst. The proper solution, which we should have leveraged earlier, is the `Cache@2` task. It's designed specifically for this, but getting the `key` right is paramount.

Here's the YAML snippet that finally unlocked massive improvements:

```yaml
variables:
  # Define a custom path for NuGet packages that the cache task can target
  NUGET_PACKAGES: $(Pipeline.Workspace)/.nuget/packages

steps:
- task: Cache@2
  inputs:
    # Crucial: cache key based on lock file hash
    key: 'nuget | "$(Agent.OS)" | **/packages.lock.json'
    path: '$(NUGET_PACKAGES)'
    # Fallback keys for partial matches (e.g., if OS changes but packages are same)
    restoreKeys: |
      nuget | "$(Agent.OS)"
      nuget
  displayName: Cache NuGet packages

- script: |
    dotnet restore --packages $(NUGET_PACKAGES) --nologo
  displayName: Restore NuGet packages

# ... rest of your build pipeline (build, test, publish)
```

The `key: 'nuget | "$(Agent.OS)" | **/packages.lock.json'` is the secret sauce.
Initially, I tried `key: 'nuget | "$(Agent.OS)" | $(Build.SourceVersion)'`, thinking that keying on the source version would ensure cache freshness. However, that invalidates the cache on *every* commit, even if no package dependencies changed, completely defeating the purpose. By keying off the hash of the `packages.lock.json` files, the cache is only invalidated when a dependency *actually* changes. The `restoreKeys` provide a fallback mechanism if a full key match isn't found, improving the chances of a partial cache hit.

This wasn't just a minor optimization. It slashed our CI build times by over 60%, bringing them down to a much more palatable 6–8 minutes. The impact has been significant: faster feedback loops for developers, less queue time, reduced agent consumption costs, and a generally happier team.

Now, reviewing and implementing a robust `Cache@2` task is one of the first things I check when setting up a new .NET CI pipeline or optimizing a sluggish existing one.
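The reason lock-file keying works can be shown in a few lines: `Cache@2` effectively hashes the contents of the files matched by `**/packages.lock.json` into the key. A Python sketch of that idea (the exact key format here is mine, not Azure DevOps'):

```python
import hashlib

def cache_key(os_name: str, lock_files: dict) -> str:
    """Derive a cache key from lock-file contents, mimicking how
    Cache@2 hashes files matched by **/packages.lock.json.

    The key changes only when a dependency actually changes, unlike
    keying on the commit SHA, which changes on every push."""
    digest = hashlib.sha256()
    for path in sorted(lock_files):   # stable ordering across agents
        digest.update(path.encode())
        digest.update(lock_files[path])
    return f'nuget|{os_name}|{digest.hexdigest()[:16]}'
```

Two commits that touch only application code produce identical keys, so the second one gets a full cache hit; a commit that bumps a package version produces a new key and repopulates the cache once.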
Building a Production-Ready AI Code Reviewer with Serverless and Bedrock AI code reviewers are transforming how development teams catch bugs, enforce standards, and maintain code quality at scale. This guide shows software engineers, DevOps professionals, and technical leads how to build a production-ready AI code review system using serverless architecture and AWS Bedrock. https://lnkd.in/eaDehvSu Amazon Web Services (AWS) #AWS, #AWSCloud, #AmazonWebServices, #CloudComputing, #CloudConsulting, #CloudMigration, #CloudStrategy, #CloudSecurity, #businesscompassllc, #ITStrategy, #ITConsulting, #viral, #goviral, #viralvideo, #foryoupage, #foryou, #fyp, #digital, #transformation, #genai, #al, #aiml, #generativeai, #chatgpt, #openai, #deepseek, #claude, #anthropic, #trinium, #databricks, #snowflake, #wordpress, #drupal, #joomla, #tomcat, #apache, #php, #database, #server, #oracle, #mysql, #postgres, #datawarehouse, #windows, #linux, #docker, #Kubernetes, #server, #database, #container, #CICD, #migration, #cloud, #firewall, #datapipeline, #backup, #recovery, #cloudcost, #log, #powerbi, #qlik, #tableau, #ec2, #rds, #s3, #quicksight, #cloudfront, #redshift, #FM, #RAG