Cloud Application Deployment

Explore top LinkedIn content from expert professionals.

  • View profile for Nana Janashia

    Helping millions of engineers advance their careers with DevOps & Cloud education 💙

    255,005 followers

This series has turned into weekly updates, so another exciting update for you 😁 👇

In DevSecOps Bootcamp we released the new chapter 𝗣𝗼𝗹𝗶𝗰𝘆 𝗮𝘀 𝗖𝗼𝗱𝗲! 🥳 → See the chapter introduction here: https://bit.ly/4bVM963

To give you a deep dive, in previous lectures:
👉 in the #kubernetes Access Management chapter, we secure K8s cluster access management
👉 in the #argocd chapter, we build a pipeline to automate application deployment into the cluster

And as per best practice, we have a separate application Git repo and a separate repo where the application deployment manifests are stored ☝️ ArgoCD is synced with the K8s manifests repo 🔄 So any time the deployment configuration files change there, like the deployment getting updated with a new image tag or the K8s service config changing, ArgoCD automatically pulls those changes into the cluster ✅

But now we want to add a validation step in the CD part... 🚦 Cuz what if developers who aren't knowledgeable in K8s commit misconfigured manifest files, or make changes that introduce security issues? 🙉 So before ArgoCD applies the changes it pulled from the repo, we want to validate them to make sure they follow security and production best practices 🚦 Of course, K8s admins can’t manually review every such manifest update from the different product teams deploying to the cluster 🤷

That’s where Policy as Code comes in 🚀👇 We deploy a PaC tool, OPA Gatekeeper, in the cluster. K8s admins can then define policies that tell Gatekeeper: “these are the rules we wanna enforce. If someone tries to deploy any changes to the cluster that don’t comply with these rules, reject them.” So with Policy as Code, admins can fully automate enforcing any rules they want in the cluster. We now have a CD process with automated validation that checks for security and other issues in K8s configuration changes.

So in this chapter:
👉 we deploy Gatekeeper with TF,
👉 and learn how to create policies for different rules and see how they get enforced in the cluster when ArgoCD automatically pulls changes from the GitOps repository

And as you see, this chapter builds directly on top of the previous chapters. So instead of learning each thing in isolation to keep it simple, you are building this complex setup step by step, exactly as it would look in a real project 🚀 And this is probably the most valuable thing about this bootcamp: it will allow you to directly apply this knowledge in any complex DevOps project 💪 → https://bit.ly/4bxXkSW

Last time we got so many requests for the ArgoCD handout, so we decided to provide the handout 📃 for the Policy as Code chapter as well, to anyone who wants to learn this concept. The handout alone includes lots of valuable information that you can use for learning. Just write to support@techworld-with-nana.com and we’ll give you access to the complete handout for the chapter 😊

Have a great week guys! 💙 Cheers, Nana
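To make the idea concrete — this is a minimal sketch of what a Gatekeeper policy can look like (modeled on the well-known required-labels example from the Gatekeeper policy library, not taken from the bootcamp itself; the constraint name and `owner` label are hypothetical). The ConstraintTemplate defines a reusable rule in Rego, and the Constraint applies it to Deployments:

```yaml
# ConstraintTemplate: the reusable rule, written in Rego.
apiVersion: templates.gatekeeper.sh/v1
kind: ConstraintTemplate
metadata:
  name: k8srequiredlabels
spec:
  crd:
    spec:
      names:
        kind: K8sRequiredLabels
      validation:
        openAPIV3Schema:
          type: object
          properties:
            labels:
              type: array
              items:
                type: string
  targets:
    - target: admission.k8s.gatekeeper.sh
      rego: |
        package k8srequiredlabels

        violation[{"msg": msg}] {
          provided := {label | input.review.object.metadata.labels[label]}
          required := {label | label := input.parameters.labels[_]}
          missing := required - provided
          count(missing) > 0
          msg := sprintf("missing required labels: %v", [missing])
        }
---
# Constraint: enforce the rule — every Deployment must carry an "owner" label.
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sRequiredLabels
metadata:
  name: deployments-must-have-owner
spec:
  match:
    kinds:
      - apiGroups: ["apps"]
        kinds: ["Deployment"]
  parameters:
    labels: ["owner"]
```

With this in place, any Deployment manifest ArgoCD tries to apply without an `owner` label is rejected at admission time.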

  • View profile for Milan Jovanović
Milan Jovanović is an Influencer

    Practical .NET and Software Architecture Tips | Microsoft MVP

    272,182 followers

Stop storing secrets in appsettings.json. Seriously. That file was never meant to hold production credentials, yet I see it in real projects all the time. I just published a new walkthrough where I show how to secure your .NET apps with Azure Key Vault:
- Creating your first Key Vault
- The exact RBAC roles you actually need
- Storing secrets with versioning
- Authenticating with DefaultAzureCredential
- Pulling secrets directly into ASP.NET Core configuration
- Loading connection strings + options without touching appsettings
It’s a clean setup that keeps your sensitive values out of the repo and follows Azure’s best practices. Learn more here: https://lnkd.in/eU4mUKnY If you want to upgrade how you handle secrets in .NET, this one will help you get it right from the start.
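The core of the setup can be sketched in a few lines — this is an illustrative sketch, not the walkthrough's exact code, and it assumes the Azure.Identity and Azure.Extensions.AspNetCore.Configuration.Secrets NuGet packages plus a hypothetical vault named `my-app-kv`:

```csharp
using Azure.Identity;

var builder = WebApplication.CreateBuilder(args);

// Pull every secret in the vault into IConfiguration at startup.
// DefaultAzureCredential works locally (az login, Visual Studio sign-in)
// and in Azure (managed identity) without any code changes.
builder.Configuration.AddAzureKeyVault(
    new Uri("https://my-app-kv.vault.azure.net/"),
    new DefaultAzureCredential());

var app = builder.Build();

// A secret named "ConnectionStrings--Default" in Key Vault surfaces as
// the standard "ConnectionStrings:Default" configuration key, so the
// rest of the app reads it exactly as if it came from appsettings.json.
var connectionString = app.Configuration.GetConnectionString("Default");
```

Because the Key Vault provider layers on top of the normal configuration system, options binding and connection-string lookups keep working unchanged — the secrets just never live in the repo.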

  • View profile for Brij kishore Pandey
Brij kishore Pandey is an Influencer

    AI Architect | AI Engineer | Generative AI | Agentic AI

    708,533 followers

If you're in IT and haven't embraced Docker yet, you're missing a crucial piece of the modern development puzzle.

What is Docker? Docker is an open-source platform that automates the deployment, scaling, and management of applications using containerization. It allows you to package an application with all its dependencies into a standardized unit for software development and deployment.

Key Concepts:
1. Containers
   - Lightweight, standalone executable packages
   - Include everything needed to run an application
   - Ensure consistency across different environments
2. Images
   - Read-only templates used to create containers
   - Built from layers, each representing an instruction in the Dockerfile
   - Can be shared via Docker Hub or private registries
3. Dockerfile
   - Text file containing instructions to build a Docker image
   - Defines the environment inside the container
   - Automates the image creation process
4. Docker Compose
   - Tool for defining and running multi-container Docker applications
   - Uses YAML files to configure application services
   - Simplifies complex setups with a single command
5. Docker Swarm
   - Native clustering and scheduling tool for Docker
   - Turns a pool of Docker hosts into a single, virtual host
   - Enables easy scaling and management of containerized applications

Benefits of Docker:
• Consistency: "It works on my machine" becomes a thing of the past
• Isolation: Applications and their dependencies are separated from the host system
• Efficiency: Lightweight containers share the host OS kernel, reducing overhead
• Portability: Containers can run anywhere Docker is installed
• Scalability: Easy to scale applications horizontally by spinning up new containers

Best Practices:
1. Keep images small and focused
2. Use multi-stage builds to optimize Dockerfiles
3. Leverage Docker Compose for local development
4. Implement proper logging and monitoring
5. Regularly update base images and dependencies
6. Use volume mounts for persistent data
7. Implement proper security measures (e.g., least privilege principle)

Getting Started:
1. Install Docker on your machine
2. Familiarize yourself with basic commands (docker run, build, pull, push)
3. Create your first Dockerfile and build an image
4. Experiment with Docker Compose for multi-container setups
5. Explore Docker Hub for pre-built images and inspiration

Docker has become an essential skill for developers and operations teams alike. Its ability to streamline development workflows, improve deployment consistency, and enhance scalability makes it a crucial tool in modern software development. Have I overlooked anything? Please share your thoughts—your insights are priceless to me.
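Best practices 1 and 2 above (small images, multi-stage builds) can be sketched with a Dockerfile like this — a hypothetical example for a small Python service, with file names (`requirements.txt`, `app.py`) assumed for illustration:

```dockerfile
# Stage 1: install dependencies into an isolated prefix.
FROM python:3.12-slim AS build
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir --prefix=/install -r requirements.txt

# Stage 2: copy only the installed packages and the source code,
# so pip caches and build tooling never reach the final image.
FROM python:3.12-slim
WORKDIR /app
COPY --from=build /install /usr/local
COPY . .

# Run as a non-root user (least-privilege best practice).
RUN useradd --create-home appuser
USER appuser

CMD ["python", "app.py"]
```

Only the final stage ships; everything produced in the `build` stage that isn't explicitly copied forward is discarded, which keeps the image small and its attack surface minimal.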

  • View profile for Confidence Staveley
Confidence Staveley is an Influencer

    Multi-Award Winning Cybersecurity Leader | Author | Int’l Speaker | On a mission to simplify cybersecurity, attract more women, drive AI Security awareness and raise high-agency humans who defy odds & change the world.

    97,995 followers

Using unverified container images, over-permissioning service accounts, postponing network policy implementation, skipping regular image scans, and running everything in default namespaces… What do all these have in common? Bad cybersecurity practices! It’s best to always do this instead:
1. Only use verified images, and scan them for vulnerabilities before deploying them in a Kubernetes cluster.
2. Assign the least amount of privilege required. Use tools like Open Policy Agent (OPA) and Kubernetes' native RBAC policies to define and enforce strict access controls. Avoid using the cluster-admin role unless absolutely necessary.
3. Network policies should be implemented from the start to limit which pods can communicate with one another. This can prevent unauthorized access and reduce the impact of a potential breach.
4. Automate regular image scanning using tools integrated into the CI/CD pipeline to ensure that images are always up-to-date and free of known vulnerabilities before being deployed.
5. Always organize workloads into namespaces based on their function, environment (e.g., dev, staging, production), or team ownership. This helps in managing resources, applying security policies, and isolating workloads effectively.
PS: If necessary, you can ask me in the comment section specific questions on why these bad practices are a problem. #cybersecurity #informationsecurity #softwareengineering
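Point 3 can be illustrated with a minimal NetworkPolicy — a hypothetical example (the `production` namespace and `app=frontend` label are assumed for illustration) that denies all ingress to every pod in the namespace except traffic from frontend pods:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-only
  namespace: production
spec:
  podSelector: {}          # selects every pod in the namespace
  policyTypes:
    - Ingress              # once a policy selects a pod, all other ingress is denied
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend
```

Note that NetworkPolicies are only enforced if the cluster's network plugin (e.g. Calico or Cilium) supports them — on a plugin without policy support, this object is silently ignored.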

  • View profile for Sukhpal Singh Gill

    Editor-in-Chief, SFHEA, Executive Editor, Leadership, 1.5M+ Impressions

    6,346 followers

📢 𝐄𝐱𝐜𝐢𝐭𝐢𝐧𝐠 𝐍𝐞𝐰 𝐑𝐞𝐬𝐞𝐚𝐫𝐜𝐡 𝐀𝐥𝐞𝐫𝐭 🚨 Our latest study, “𝗡𝗲𝘁𝟬𝗔𝗜𝗖𝗹𝗼𝘂𝗱”, published in IEEE 𝗜𝗻𝘁𝗲𝗿𝗻𝗲𝘁 𝗼𝗳 𝗧𝗵𝗶𝗻𝗴𝘀 𝗠𝗮𝗴𝗮𝘇𝗶𝗻𝗲, sheds light on the utilisation of 𝗔𝗿𝘁𝗶𝗳𝗶𝗰𝗶𝗮𝗹 𝗜𝗻𝘁𝗲𝗹𝗹𝗶𝗴𝗲𝗻𝗰𝗲 𝗼𝗳 𝗧𝗵𝗶𝗻𝗴𝘀 (𝗔𝗜𝗼𝗧) for 𝗖𝗹𝗼𝘂𝗱 𝗥𝗲𝘀𝗼𝘂𝗿𝗰𝗲 𝗠𝗮𝗻𝗮𝗴𝗲𝗺𝗲𝗻𝘁, enabling 𝗖𝗮𝗿𝗯𝗼𝗻 𝗡𝗲𝘂𝘁𝗿𝗮𝗹 𝗖𝗼𝗺𝗽𝘂𝘁𝗶𝗻𝗴 that contributes towards the 𝐍𝐇𝐒 𝐍𝐞𝐭 𝐙𝐞𝐫𝐨 𝐚𝐦𝐛𝐢𝐭𝐢𝐨𝐧 while managing 𝐩𝐞𝐨𝐩𝐥𝐞'𝐬 𝐡𝐞𝐚𝐥𝐭𝐡. 🤝 Kudos to Han Wang for leading it.

𝑯𝒊𝒈𝒉𝒍𝒊𝒈𝒉𝒕𝒔:
1️⃣ Design and implement an AI-driven framework, 𝗡𝗲𝘁𝟬𝗔𝗜𝗖𝗹𝗼𝘂𝗱, that dynamically schedules workloads and allocates resources using 𝗔𝗜 to minimise energy consumption and carbon emissions while maintaining QoS.
2️⃣ Integrate the framework within a cloud-edge 𝗔𝗜𝗼𝗧 architecture, proposing a comprehensive solution for achieving sustainable computing goals in modern distributed environments.
3️⃣ Validate the proposed framework through a real-world IoT healthcare application that uses the Feature Tokenizer Transformer (𝗙𝗧-𝗧𝗿𝗮𝗻𝘀𝗳𝗼𝗿𝗺𝗲𝗿) for disease prediction, demonstrating its practical effectiveness in enhancing both sustainability and service quality.
4️⃣ Demonstrate the potential to significantly improve resource utilisation and reduce energy consumption while maintaining robust service quality for AIoT applications, highlighting actionable recommendations to achieve net zero targets in 𝗜𝗻𝗳𝗼𝗿𝗺𝗮𝘁𝗶𝗼𝗻 𝗮𝗻𝗱 𝗖𝗼𝗺𝗺𝘂𝗻𝗶𝗰𝗮𝘁𝗶𝗼𝗻 𝗧𝗲𝗰𝗵𝗻𝗼𝗹𝗼𝗴𝘆 (𝗜𝗖𝗧) infrastructures.

🔗 𝑳𝒊𝒏𝒌 𝒕𝒐 𝒕𝒉𝒆 𝒂𝒓𝒕𝒊𝒄𝒍𝒆: https://lnkd.in/eEsKx-A8
🔗 𝗡𝗲𝘁𝟬𝗔𝗜𝗖𝗹𝗼𝘂𝗱 𝒊𝒔 𝒓𝒆𝒍𝒆𝒂𝒔𝒆𝒅 𝒐𝒏 𝑮𝒊𝒕𝑯𝒖𝒃: https://lnkd.in/eiEycSQT 🔬💡

🤝 Looking forward to furthering this research and its impact on future healthcare and computing systems.

#Cloudcomputing #Machinelearning #Sustainablecomputing #AI #researchpaper #computing #edge #Cloud #applications #IoT #computerscience #Research #industry #academics #journals #journal #qmul #postdoc #Scientificresearch #conference #PhD #university #publications #Computing #academiclife #ArtificialIntelligence #academia #engineering #Academic #NetZero #ieee

  • View profile for Vishakha Sadhwani

    Sr. Solutions Architect at Nvidia | Ex-Google, AWS | 100k+ Linkedin | EB1-A Recipient | Follow to explore your career path in Cloud | DevOps | *Opinions.. my own*

    139,237 followers

Here’s a quick breakdown of Kubernetes deployment strategies you should know — and the trade-offs that come with each. But first — why does this matter? Because deploying isn’t just about pushing new code — it’s about how safely, efficiently, and with what level of risk you roll it out. The right strategy ensures you deliver value without breaking production or disrupting users. Let's dive in:

1. Canary
↳ Gradually route a small percentage of traffic (e.g. 20%) to the new version before a full rollout.
↳ When to use ~ Minimize risk by testing updates in production with real users.
Downtime: No
Trade-offs:
✅ Safer releases with early detection of issues
❌ Requires additional monitoring, automation, and traffic control
❌ Slower rollout process

2. Blue-Green
↳ Maintain two environments — switch all traffic to the new version after validation.
↳ When to use ~ When you need instant rollback options with zero downtime.
Downtime: No
Trade-offs:
✅ Instant rollback with traffic switch
✅ Zero downtime
❌ Higher infrastructure cost — duplicate environments
❌ More complex to manage at scale

3. A/B Testing
↳ Split traffic between two versions based on user segments or devices.
↳ When to use ~ For experimenting with features and collecting user feedback.
Downtime: Not applicable
Trade-offs:
✅ Direct user insights and data-driven decisions
✅ Controlled experimentation
❌ Complex routing and user segmentation logic
❌ Potential inconsistency in user experience

4. Rolling Update
↳ Gradually replace old pods with new ones, one batch at a time.
↳ When to use ~ To update services continuously without downtime.
Downtime: No
Trade-offs:
✅ Zero downtime
✅ Simple and native to Kubernetes
❌ Bugs might propagate if monitoring isn’t vigilant
❌ Rollbacks can be slow if an issue emerges late

5. Recreate
↳ Shut down the old version completely before starting the new one.
↳ When to use ~ When your app doesn’t support running multiple versions concurrently.
Downtime: Yes
Trade-offs:
✅ Simple and clean for small apps
✅ Avoids version conflicts
❌ Service downtime
❌ Risky for production environments needing high availability

6. Shadow
↳ Mirror real user traffic to the new version without exposing it to users.
↳ When to use ~ To test how the new version performs under real workloads.
Downtime: No
Trade-offs:
✅ Safely validate under real conditions
✅ No impact on end users
❌ Extra resource consumption — running dual workloads
❌ Doesn’t test user interaction or experience directly
❌ Requires sophisticated monitoring

Want to dive deeper? I’ll be breaking down each k8s strategy in more detail in the upcoming editions of my newsletter. Subscribe here → tech5ense.com Which strategy do you rely on most often?
• • •
If you found this useful..
🔔 Follow me (Vishakha) for more Cloud & DevOps insights
♻️ Share so others can learn as well!
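Of the strategies above, the rolling update is the one Kubernetes supports natively in the Deployment spec. A minimal sketch (the app name and image are hypothetical):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 4
  strategy:
    type: RollingUpdate        # the default for Deployments
    rollingUpdate:
      maxSurge: 1              # at most one extra pod during the rollout
      maxUnavailable: 1        # at most one pod down at any time
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: registry.example.com/web:1.2.0
```

Changing `image` and re-applying triggers the batched replacement; setting `strategy.type: Recreate` instead gives strategy #5. Canary, blue-green, A/B, and shadow are not built into the Deployment object — they're typically layered on with a service mesh or ingress-level traffic splitting.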

  • View profile for Anurag(Anu) Karuparti

    Agentic AI Strategist @Microsoft (25k+) | Author - Generative AI for Cloud Solutions | LinkedIn Learning Instructor | Responsible AI Advisor | Ex-PwC, EY | Marathon Runner

    28,390 followers

𝐌𝐨𝐬𝐭 𝐀𝐈 𝐚𝐠𝐞𝐧𝐭𝐬 𝐟𝐚𝐢𝐥 𝐢𝐧 𝐏𝐫𝐨𝐝𝐮𝐜𝐭𝐢𝐨𝐧 𝐛𝐞𝐜𝐚𝐮𝐬𝐞 𝐭𝐡𝐞𝐲 𝐜𝐚𝐧 𝐧𝐨𝐭 𝐫𝐞𝐦𝐞𝐦𝐛𝐞𝐫 𝐂𝐨𝐧𝐭𝐞𝐱𝐭. Here is the 10-step roadmap to build agents that actually work. From my experience, successful deployments follow this exact progression:

1. Scope the Cognitive Contract
• Define task domain, decision authority, error tolerance
• Specify I/O schemas and action boundaries
• Establish non-functional requirements (latency, cost, compliance)

2. Data Ingestion & Governance Layer
• Integrate SharePoint, Azure SQL, Blob Storage pipelines
• Normalize, chunk, and version content artifacts
• Enforce RBAC, PII redaction, policy tagging

3. Semantic Representation Pipeline
• Generate embeddings via Azure OpenAI embedding models
• Vectorize knowledge segments
• Persist in Azure AI Search (vector + semantic index)

4. Retrieval Orchestration
• Encode user intent into embedding space
• Execute hybrid retrieval (BM25 + ANN search)
• Re-rank using similarity scores and metadata constraints

5. Prompt Assembly & Grounding
• System instruction + policy constraints + task schema
• Inject top-K evidence passages dynamically
• Enforce source-bounded generation

6. LLM Reasoning Layer
• Invoke GPT (Azure OpenAI) or Claude (Anthropic)
• Tune decoding parameters (temperature, top-p, max tokens)
• Validate deterministic vs creative response modes

7. Context & State Management
• Persist conversational state in Azure Cosmos DB
• Apply rolling summarization and relevance pruning
• Maintain short-term and long-term memory separation

8. Evaluation & Calibration
• Run adversarial, regression, and grounding tests
• Measure hallucination rate, retrieval precision, latency
• Optimize chunking, ranking heuristics, prompts

9. Productionization & Observability
• Deploy via Microsoft Foundry and AKS
• Implement distributed tracing, token usage, cost telemetry
• Enable human-in-the-loop escalation paths

10. Agentic Capability Expansion
• Integrate tool invocation (search, workflow, DB execution)
• Add feedback-driven self-correction loops
• Implement personalization via behavioral signals

The critical steps teams skip:
• Step 3 (Semantic Representation): Without proper vectorization, retrieval fails
• Step 7 (State Management): Without memory persistence, agents restart every conversation
• Step 8 (Evaluation): Without testing, hallucinations go to production

My recommendation: Don't skip steps. Each builds on the previous:
• Steps 1-3: Foundation (scope, data, embeddings)
• Steps 4-6: Core agent (retrieval, prompts, reasoning)
• Steps 7-9: Production readiness (memory, testing, deployment)
• Step 10: Advanced capabilities (tools, self-correction)

Which step are you currently stuck on?
♻️ Repost this to help your network get started
➕ Follow Anurag(Anu) for more
PS: If you found this valuable, join my weekly newsletter where I document the real-world journey of AI transformation. ✉️ Free subscription: https://lnkd.in/exc4upeq
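The merge-and-re-rank part of step 4 (hybrid retrieval) can be sketched with reciprocal rank fusion, one common heuristic for combining a BM25 ranking with an ANN ranking — a minimal illustration, not tied to any particular search service; the document IDs are made up:

```python
def reciprocal_rank_fusion(rankings, k=60):
    """Fuse several ranked lists of doc IDs into one ranking.

    Each document scores 1 / (k + rank) per list it appears in;
    documents ranked high in multiple lists float to the top.
    """
    scores = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

bm25_hits = ["doc3", "doc1", "doc7"]   # keyword (BM25) order
ann_hits = ["doc1", "doc9", "doc3"]    # vector (ANN) order
fused = reciprocal_rank_fusion([bm25_hits, ann_hits])
print(fused[0])  # doc1: ranked high in both lists
```

The fused list can then be filtered by metadata constraints before the top-K passages are injected into the prompt (step 5).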

  • View profile for Pooja Jain

    Storyteller | Lead Data Engineer@Wavicle| Linkedin Top Voice 2025,2024 | Linkedin Learning Instructor | 2xGCP & AWS Certified | LICAP’2022

    191,402 followers

𝗔𝗻𝗸𝗶𝘁𝗮: You know 𝗣𝗼𝗼𝗷𝗮, last Monday our new data pipeline went live in the cloud and it failed terribly. I literally had an exhaustive week fixing the critical issues.

𝗣𝗼𝗼𝗷𝗮: Ohh, so don’t you use cloud monitoring for your data pipelines? From my experience, always start by tracking these four key metrics: latency, traffic, errors, and saturation. They help you check your pipeline health — whether it's running smoothly or there’s a bottleneck somewhere.

𝗔𝗻𝗸𝗶𝘁𝗮: Makes sense. What tools do you use for this?

𝗣𝗼𝗼𝗷𝗮: Depends on the cloud platform. For AWS, I use CloudWatch—it lets you set up dashboards, track metrics, and create alarms for failures or slowdowns. On Google Cloud, Cloud Monitoring (formerly Stackdriver) is awesome for custom dashboards and log-based metrics. For more advanced needs, tools like Datadog and Splunk offer real-time analytics, anomaly detection, and distributed tracing across services.

𝗔𝗻𝗸𝗶𝘁𝗮: And what about data lineage tracking? When something goes wrong, it's always a nightmare trying to figure out which downstream systems are affected.

𝗣𝗼𝗼𝗷𝗮: That's where things get interesting. You can implement custom logging to track data lineage and create dependency maps. If the customer data pipeline fails, you’ll immediately know that the segmentation, recommendation, and reporting pipelines might be affected.

𝗔𝗻𝗸𝗶𝘁𝗮: And what about logging and troubleshooting?

𝗣𝗼𝗼𝗷𝗮: Comprehensive logging is key. I make sure every step in the pipeline logs events with timestamps and error details. Centralized logging tools like the ELK stack or cloud-native solutions help with quick debugging. Plus, maintaining data lineage helps trace issues back to their source.

𝗔𝗻𝗸𝗶𝘁𝗮: Any best practices you swear by?

𝗣𝗼𝗼𝗷𝗮: Yes, here’s my mantra for keeping my weekends free from pipeline struggles: Set clear monitoring objectives—know what you want to track. Use real-time alerts for critical failures. Regularly review and update your monitoring setup as the pipeline evolves. Automate as much as possible to catch issues early.

𝗔𝗻𝗸𝗶𝘁𝗮: Thanks, 𝗣𝗼𝗼𝗷𝗮! I’ll set up dashboards and alerts right away. Finally, we'll be proactive instead of reactive when it comes to pipeline issues!

𝗣𝗼𝗼𝗷𝗮: Exactly. No more finding out about problems from angry business users. Monitoring will catch issues before they impact anyone downstream. In data engineering, a well-monitored pipeline isn’t just about catching errors—it’s about building trust in every insight you deliver.

#data #engineering #reeltorealdata #cloud #bigdata
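The dependency map Pooja describes is, at its core, a graph walk: given which pipeline feeds which, a failure's blast radius is everything reachable downstream. A minimal sketch (pipeline names are illustrative, matching the customer-data example in the dialogue):

```python
from collections import deque

def affected_downstream(dependencies, failed):
    """BFS over the dependency map from the failed pipeline.

    dependencies maps each upstream pipeline to the list of
    pipelines that consume its output.
    """
    affected = set()
    queue = deque([failed])
    while queue:
        current = queue.popleft()
        for downstream in dependencies.get(current, []):
            if downstream not in affected:
                affected.add(downstream)
                queue.append(downstream)
    return affected

dependencies = {
    "customer_data": ["segmentation", "recommendation"],
    "segmentation": ["reporting"],
}
print(sorted(affected_downstream(dependencies, "customer_data")))
# ['recommendation', 'reporting', 'segmentation']
```

Wiring a map like this into the alerting path means a single pipeline failure can immediately page the owners of every affected downstream system instead of letting them discover stale data on their own.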

  • View profile for Faye Ellis
Faye Ellis is an Influencer

    AWS Community Hero, cloud architect, keynote speaker, and content creator. I explain cloud technology clearly and simply, to help make rewarding tech careers accessible to all

    26,445 followers

☁️ Every major cloud outage is a reminder that resilience isn’t something you can enable with a checkbox; it’s something you need to explicitly design, test, and adapt as dependencies evolve. A recent “thermal event” in Microsoft Azure’s West Europe region, caused by a cooling system fault, triggered hardware shutdowns, took storage units offline, and resulted in broader service disruption across VMs, databases, and Azure Kubernetes Service — even impacting dependent services in other Availability Zones. It serves as a reminder that zone redundancy alone isn’t going to be enough when underlying storage fabrics or control-plane dependencies span availability zones. If your replication strategy still relies on locally-redundant storage (LRS) within a single zone, or even multiple zones in the same region, you're exposed to environmental failures like this. As organizations migrate more critical workloads to the cloud, now is the moment to revisit resilient architecture. Invest in services that span multiple regions to avoid this kind of exposure, and test failover under realistic conditions, so that teams can build muscle memory and expose unexpected dependencies. https://lnkd.in/eUsDQ-gH https://lnkd.in/eBz8J3kD

  • View profile for Kai Waehner

    Global Field CTO | Thought Leader | Author | International Speaker | Follow me with Data in Motion

    39,360 followers

    "ARM CPUs + Apache Kafka = A Perfect Match for Edge AND Cloud" Real-time #datastreaming is no longer limited to powerful servers in central data centers. With the rise of energy-efficient #ARM CPUs, organizations are deploying #ApacheKafka in #edgecomputing, in addition to the widespread hybrid #cloud environments—unlocking new levels of scalability, flexibility, and sustainability. In my blog post, I explore how ARM-based infrastructure—like #AWSGraviton or industrial IoT gateways—pairs with #eventdrivenarchitecture to power use cases across #manufacturing, #retail, #telco, #smartcities, and more. ARM CPUs bring clear benefits to the world of #streamprocessing: - High energy efficiency and low cost - Compact form factors ideal for disconnected edge environments - Strong performance for modern #IoT and #AI workloads The combination of Kafka and ARM enables more cost-efficient and sustainable applications such as: - Predictive maintenance on the factory floor - Offline vehicle telemetry in #transportation and #logistics - Local compliance automation in #healthcare - In-store analytics and loyalty systems in food and retail chains Read the full post with use cases, architecture diagrams, and tips for building cost-effective, resilient, real-time systems at the edge and in the cloud: https://lnkd.in/eeJ6mcaH
