Serverless Architecture


Summary

Serverless architecture is a way to build and run applications without having to manage the underlying servers, letting cloud providers handle infrastructure while you focus on code. This approach is popular for its scalability, pay-per-use pricing, and freedom from cluster management, making it easier for teams to launch projects and handle data processing efficiently.

  • Monitor your costs: Regularly review your serverless resource usage and tweak memory and timeout settings to avoid unexpected expenses.
  • Simplify your code: Keep your deployment packages lean and versioned; smaller packages reduce cold-start delays and improve performance.
  • Embrace event-driven design: Use triggers and queues to automate workflows and keep your data fresh without manual intervention or server management.
Summarized by AI based on LinkedIn member posts
  • View profile for Yan Cui

    I teach AWS and serverless | AWS Serverless Hero | Consultant

    49,613 followers

    One of the biggest misconceptions I hear about serverless is this: "It’s just a spaghetti of Lambda functions calling each other." Yes, I’ve seen that happen. When it does, it’s ugly, and it usually comes from inexperience and a lack of design. But that’s not what serverless architectures are supposed to be. It’s like saying "cats are animals that poop on your bed": sure, accidents happen (…probably 😹), but that’s not the norm or the expected behaviour!

    So, what does a well-designed serverless architecture actually look like? From a bird’s-eye view, pretty much the same as what you'd build with containers or EC2:
    • Separate accounts per team/workload
    • The system is decomposed into independent services
    • Every service owns its own data (no shared DBs)
    • Services are loosely coupled through events
    • Centralised logging and observability

    Whether I have an API (synchronous communication) or use events (asynchronous communication) does not depend on whether I use Lambda vs. containers. A serverless architecture doesn't have to be event-driven. Equally, an event-driven architecture can run on containers or EC2. Those are orthogonal architectural choices.

    Inside each service, I use the serverless-first mindset to decide on my tech stack, e.g.
    • Prefer DynamoDB over RDS
    • Prefer API Gateway over ALB
    • Prefer Lambda functions over containers or EC2
    • Prefer EventBridge over Kafka

    The guiding principle is simple: pick the service that does the most heavy lifting. And with serverless technologies like Lambda, you get built-in multi-AZ redundancy, scalability, a reduced attack surface, no infrastructure to manage, simplified deployment, and pay-per-use pricing.

    So no, you don’t expose "a bunch of Lambdas" as your service boundary. That’s not the goal. That’s just a mistake.
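    A minimal sketch of the kind of service boundary described above: one Lambda behind API Gateway that owns its data and emits a domain event for other services to consume. All names here (event bus, source, detail-type, table) are illustrative assumptions, not from any specific codebase; the boto3 calls are shown as comments so the sketch stays self-contained.

    ```python
    import json
    import os


    def build_order_event(order: dict) -> dict:
        """Build an EventBridge entry for a hypothetical OrderPlaced event."""
        return {
            "Source": "orders-service",          # assumed service name
            "DetailType": "OrderPlaced",         # assumed event type
            "Detail": json.dumps(order),
            "EventBusName": os.environ.get("EVENT_BUS_NAME", "default"),
        }


    def handler(event, context):
        """Sketch of a Lambda handler: persist the order, then emit the event.

        In a deployed function the boto3 clients would be created at module
        scope for connection reuse; here they are left as comments.
        """
        order = json.loads(event.get("body", "{}"))
        entry = build_order_event(order)
        # import boto3
        # boto3.resource("dynamodb").Table("orders").put_item(Item=order)
        # boto3.client("events").put_events(Entries=[entry])
        return {"statusCode": 202, "body": json.dumps({"accepted": True})}
    ```

    Other services subscribe to the event rather than calling this function directly, which is what keeps the coupling loose.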

  • View profile for Zayne Turner

    Developer Relations & AI Engineering

    6,072 followers

    Just published: "Serverless MCP: Stateless Execution for Enterprise AI Tools"

    Most teams build MCP servers with persistent connections and session state. For enterprise workflows, where tools orchestrate across Salesforce, Stripe, and other systems of record, there's a better way.

    What serverless architecture eliminates:
    - Server affinity and connection limits
    - Session state synchronization
    - Cache staleness and stale reads
    - Complex failure recovery (no connection state to reconstruct)

    What stateless execution forces:
    - Backend systems as the source of truth (your CRM, ERP, payments, not cached copies)
    - Idempotent operations by design (no duplicate charges, no duplicate records)
    - Self-contained requests (any worker handles any call)
    - Cleaner separation between protocol and execution layers

    The article explains:
    - The three architectural choices that define serverless MCP
    - When stateless execution matters (and when it doesn't)
    - A side-by-side server architecture comparison
    - How to decide which pattern fits your system

    Includes a complete open-source reference implementation (the Dewy Resort sample app) demonstrating the patterns. Read it here: https://lnkd.in/gTKSDg6d

    Understanding the tradeoffs matters more than following trends.
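    The "idempotent operations by design" point above can be sketched in plain Python: derive a deterministic key from the request, and let the backend system of record deduplicate on it, so a retried call from any stateless worker cannot create a duplicate charge. The `PaymentsBackend` class is a stand-in for a real payments API, not part of the article's implementation.

    ```python
    import hashlib


    def idempotency_key(customer_id: str, order_id: str, amount_cents: int) -> str:
        """Deterministic key: the same logical request always maps to it."""
        raw = f"{customer_id}:{order_id}:{amount_cents}"
        return hashlib.sha256(raw.encode()).hexdigest()


    class PaymentsBackend:
        """Stand-in for the real system of record (e.g. a payments API).

        All state lives here, not in the MCP server, so any worker can
        handle any call.
        """

        def __init__(self):
            self._charges = {}

        def charge(self, key: str, amount_cents: int) -> dict:
            # First write wins; a replayed request returns the original
            # result instead of charging again.
            if key not in self._charges:
                self._charges[key] = {"key": key,
                                      "amount_cents": amount_cents,
                                      "status": "captured"}
            return self._charges[key]


    backend = PaymentsBackend()
    k = idempotency_key("cust-1", "ord-9", 4999)
    first = backend.charge(k, 4999)
    retry = backend.charge(k, 4999)  # same request replayed by another worker
    ```

    Because the key is derived from the request itself, no session state has to be shared between workers for the deduplication to work.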

  • View profile for Hasnain Ahmed Shaikh

    Software Dev Engineer @ Amazon | Driving Large-Scale, Customer-Facing Systems | Empowering Digital Transformation through Code | Tech Blogger at Haznain.com & Medium Contributor

    5,862 followers

    What Does a Serverless Event-Driven Architecture Really Look Like?

    Let’s break it down using a real-world scenario: an e-commerce platform. Traditional monoliths or tightly coupled services often struggle with scalability and flexibility. A serverless event-driven setup solves that by breaking the system into modular microservices that only run when triggered.

    Here is how it works, step by step:
    - The user interacts with the frontend. All requests are routed through API Gateway
    - Each business function (product management, basket operations, order processing) runs independently on AWS Lambda
    - Data is persisted in DynamoDB, a fully managed, serverless database
    - When the user completes a checkout, a Checkout Completed event is published to Amazon EventBridge
    - EventBridge evaluates routing rules and triggers downstream systems, like order fulfilment or analytics
    - No polling. No idle servers. Everything responds in real time

    Why this architecture matters:
    - Microservices are fully decoupled and independently deployable
    - The system scales automatically with load; no manual provisioning required
    - Costs stay low since compute runs only when needed
    - Teams can move faster and ship features independently

    This is not just a shift in technology. It is a shift in how we think about building software: reactive, modular, and cloud-native by design. Would you design your next platform this way? Let’s discuss.
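    The routing step above (EventBridge evaluating rules and fanning out to downstream systems) can be illustrated with a tiny stdlib-only simulation. The rule patterns and target names are made up for the sketch; real EventBridge patterns support much richer matching than this exact-value subset.

    ```python
    def matches_rule(event: dict, pattern: dict) -> bool:
        """Tiny subset of EventBridge pattern matching: every pattern field
        must list an allowed value for the corresponding event field."""
        return all(event.get(field) in allowed
                   for field, allowed in pattern.items())


    # Illustrative routing rules: (pattern, target). One event can match
    # several rules, which is how the fan-out happens.
    RULES = [
        ({"detail-type": ["CheckoutCompleted"]}, "order-fulfilment"),
        ({"detail-type": ["CheckoutCompleted"]}, "analytics"),
        ({"detail-type": ["BasketUpdated"]}, "recommendations"),
    ]


    def route(event: dict) -> list:
        """Return the downstream targets the bus would trigger for an event."""
        return [target for pattern, target in RULES
                if matches_rule(event, pattern)]


    checkout = {"source": "shop.checkout", "detail-type": "CheckoutCompleted"}
    ```

    A single `CheckoutCompleted` event triggers both fulfilment and analytics, with no polling and no service knowing about the other.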

  • View profile for Pratik Gosawi

    Senior Data Engineer | LinkedIn Top Voice ’24 | AWS Community Builder

    20,547 followers

    Data Engineers: Serverless Delta Lake Architecture on AWS

    Imagine you have data in your company's local servers (on-premises) and want to:
    1. Move this data to AWS
    2. Analyze it without managing servers
    3. Use an event-driven approach

    Here's how TrueBlue, a company facing this challenge, solved it using AWS services:

    1. Data Migration
    • Used AWS Database Migration Service to copy data from on-premises databases to Amazon S3
    • Ensures up-to-date information for jobs, job requests, and workers
    • Enables accurate job matching

    2. Event-Driven Architecture
    • Set up S3 event notifications for when new data arrives
    • Used Amazon SQS (Simple Queue Service) to capture these events
    • Created 3 SQS queues for different update frequencies: 10-minute, 60-minute, and 3-hour updates
    • Amazon EventBridge rules trigger Step Functions on these intervals
    • Step Functions orchestrate AWS Glue jobs for data processing

    3. Serverless Processing
    • Chose AWS Glue over Amazon EMR (Elastic MapReduce) for serverless data processing
    • Reasons for choosing Glue: the team's expertise in serverless development, easier management and debugging, and results similar to EMR without server management
    • Glue jobs transform and load data into the Delta Lake format

    4. Analytics
    • Data scientists use PySpark SQL to query the Delta Lake
    • The Delta Lake has three tiers: Bronze (raw data from source systems), Silver (cleaned and joined data from the bronze tier), and Gold (prepared data for machine learning, i.e. a feature store)
    • Glue jobs keep the Delta Lake up to date with reliable upserts (updates and inserts)
    • This enables data scientists to perform accurate job matches, extract datasets for analysis, and build and train machine learning models

    Benefits of this Architecture:
    1. Serverless: No need to manage infrastructure
    2. Scalable: Can handle increasing data volumes
    3. Cost-effective: Pay only for resources used
    4. Real-time: Event-driven updates keep data fresh
    5. Flexible: Supports various data processing needs

    This architecture showcases how to build a modern, serverless data lake using AWS services, enabling efficient data migration, processing, and analytics without the complexity of managing servers.

    #dataengineer #dataengineering #deltalake #aws
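    The upsert (merge) semantics that keep the silver tier fresh can be sketched in plain Python. The table and column names are invented for the example; the docstring shows the shape of the equivalent Delta Lake MERGE statement a Glue job would run.

    ```python
    def upsert(table: list, updates: list, key: str = "worker_id") -> list:
        """Pure-Python sketch of a Delta Lake MERGE (upsert).

        Roughly equivalent Delta SQL (illustrative names):
            MERGE INTO silver.workers t
            USING updates u ON t.worker_id = u.worker_id
            WHEN MATCHED THEN UPDATE SET *
            WHEN NOT MATCHED THEN INSERT *
        """
        by_key = {row[key]: row for row in table}
        for row in updates:
            by_key[row[key]] = row  # matched -> update, new -> insert
        return sorted(by_key.values(), key=lambda r: r[key])


    silver = [{"worker_id": 1, "status": "idle"},
              {"worker_id": 2, "status": "on-job"}]
    batch = [{"worker_id": 2, "status": "idle"},     # update existing row
             {"worker_id": 3, "status": "on-job"}]   # insert new row
    merged = upsert(silver, batch)
    ```

    The point of doing this in Delta Lake rather than raw S3 files is that the merge is transactional, so readers never see a half-applied batch.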

  • View profile for Chandresh Desai

    Founder | Data Solutions Architect | Data & AI Architect | Cloud Solutions Architect | Senior Data Engineer

    125,674 followers

    Lessons from the AWS us-east-1 Outage: Designing a Multi-Cloud Serverless Architecture for Resilience

    When the AWS us-east-1 outage disrupted major global platforms last year, it was a wake-up call for every architect and engineer: no single cloud can guarantee 100% uptime. That incident underscored the need for multi-cloud resilience, where systems can shift workloads intelligently between providers like AWS and Azure without impacting the end-user experience. In response, we designed a multi-cloud, serverless, GitOps-driven architecture that embodies the Well-Architected Framework principles, balancing reliability, performance efficiency, cost optimization, and operational excellence across clouds.

    Dataflow: The user’s app connects from any source to our gateway app, which distributes requests equally between Azure and AWS. This dual-cloud setup ensures both robustness and availability, with all responses routed through an API Manager gateway for a unified and smooth experience.

    The Serverless Framework: At the core of this architecture is the Serverless Framework. It abstracts infrastructure complexity, automates deployments, and supports GitOps-driven workflows, enabling a truly multi-cloud serverless deployment model that’s scalable and cloud-agnostic.

    CI/CD with GitOps: The CI/CD pipeline is built around GitOps principles, automating build, test, and deploy stages across multiple cloud providers. It ensures that code changes flow securely and reliably, maintaining consistency and compliance throughout the delivery process.

    Potential Use Cases:
    • Build cloud-agnostic APIs for client applications running across environments.
    • Deploy microservices to multiple cloud platforms with a single manifest file.
    • Maintain cross-cloud redundancy to prevent downtime during regional failures.
    • Run serverless functions in the most cost-efficient or lowest-latency region dynamically.

    Blue-Green Deployment: Each cloud platform hosts two duplicate sets of microservices, creating active-passive environments that allow instant failover. This approach ensures continuous availability and low-risk deployments across cloud regions and providers.

    In today’s world, multi-cloud is not just a choice; it’s a necessity for businesses aiming to stay resilient, cost-optimized, and future-ready. The Serverless Framework, combined with GitOps and Well-Architected principles, helps achieve just that.

    💡 Follow me for upcoming posts where I’ll share new, innovative architecture blueprints: real-world examples showing how to design well-architected, reliable, and cost-efficient infrastructure for your business platforms.

    #cloudcomputing #aws #azure #cloudarchitecture #serverless #gitops #multicloud #devops #wellarchitected
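    The gateway behaviour described above (distribute requests across both clouds, fail over when one is unavailable) can be sketched in stdlib Python. The endpoint names, the health model, and the `route` function are all illustrative assumptions; a real gateway would use health checks and weighted DNS or an API manager rather than in-process objects.

    ```python
    import random


    class Endpoint:
        """Stand-in for a serverless backend in one cloud; names are made up."""

        def __init__(self, name: str, healthy: bool = True):
            self.name = name
            self.healthy = healthy

        def handle(self, request: dict) -> dict:
            if not self.healthy:
                raise ConnectionError(f"{self.name} unavailable")
            return {"served_by": self.name, "path": request["path"]}


    def route(request: dict, endpoints: list) -> dict:
        """Gateway sketch: spread load across clouds, fail over if one is down."""
        for ep in sorted(endpoints, key=lambda _: random.random()):  # even split
            try:
                return ep.handle(request)
            except ConnectionError:
                continue  # try the other cloud
        raise RuntimeError("all clouds unavailable")


    aws = Endpoint("aws-us-east-1")
    azure = Endpoint("azure-east-us")
    ```

    With both endpoints healthy, traffic splits roughly evenly; when one cloud goes dark, every request silently lands on the other, which is the resilience property the outage exposed the need for.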

  • View profile for Satish Patil

    Senior Solutions Architect | Kubernetes (CKA) | 2X AWS Certified including SA Pro | Terraform Certified | Designed, Deployed, & Migrated Apps on AWS & Kubernetes | Delivered projects for HealthCare & Financial Clients

    1,799 followers

    ⚡ Stop overspending on Serverless: unlock high performance without breaking the bank!

    Last week, I had the privilege of running a hands-on AWS workshop for one of our strategic customers, focused on serverless performance and cost optimization with AWS Lambda.

    Serverless is amazing: it scales effortlessly and eliminates infrastructure management. But without the right optimization approach, costs can quietly creep up. That’s exactly what we tackled together. Here’s what we covered and why it matters:

    🔍 Key Workshop Highlights
    • Understanding Lambda performance tiers: how memory and timeout settings impact throughput and latency.
    • Cost vs. performance trade-offs: finding the sweet spot where faster doesn’t mean pricier.
    • Real-world tuning techniques, including provisioned concurrency strategies and cold start reduction.
    • Monitoring and observability: using CloudWatch and X-Ray to pinpoint where optimization delivers the best ROI.
    • Hands-on labs: customers experimented with real Lambda functions, measured results, and saw cost drops in real time.

    Each participant walked away with actionable insights, not just theory, to reduce their Lambda bills while keeping performance high. That’s practical cloud engineering in action.

    🤵 What Our Customer Achieved
    Before the workshop:
    • Lambda costs were unpredictable
    • Functions experienced occasional latency spikes
    • There was no formal performance benchmarking
    After the session:
    ✔️ Clear performance baselines established
    ✔️ Cost-effective configurations implemented
    ✔️ Team empowered with tooling and metrics to iterate independently
    The difference was noticeable, not just on the dashboard, but in team confidence.

    📶 Why This Matters For You
    Serverless is a powerful paradigm, but only when optimized with intent. In almost every environment I see:
    • Over-allocated memory
    • A lack of observability
    • Minimal performance profiling
    Even small tweaks can result in measurable cost savings and smoother end-user experiences.

    🏗️ Workshop link: https://lnkd.in/exZQmbhw

    🥡 Here’s a simple takeaway: if you’re using AWS Lambda in production, treat performance and cost optimization as a first-class concern, not an afterthought.

    What’s the biggest cost challenge you’re seeing with serverless workloads right now?

    #AWS #Serverless #Lambda #CloudOptimization #CostEfficiency
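    The memory/duration trade-off above comes down to simple arithmetic: Lambda bills per GB-second, so the sweet spot is where memory x duration is minimised, not where memory is lowest. A back-of-envelope estimator (the unit prices are illustrative list prices for x86 Lambda; check current pricing before relying on the numbers, and the invocation counts and durations below are invented):

    ```python
    def lambda_cost(invocations, avg_ms, memory_mb,
                    price_per_gb_s=0.0000166667, price_per_request=0.0000002):
        """Rough monthly Lambda bill: GB-seconds plus per-request charge."""
        gb_seconds = invocations * (avg_ms / 1000.0) * (memory_mb / 1024.0)
        return gb_seconds * price_per_gb_s + invocations * price_per_request


    # More memory often shortens duration a little, but the bill tracks the
    # product memory x duration. Hypothetical profiles for 10M invocations:
    over_allocated = lambda_cost(10_000_000, avg_ms=120, memory_mb=3008)
    right_sized = lambda_cost(10_000_000, avg_ms=140, memory_mb=1024)
    ```

    In this made-up profile the right-sized function runs slightly slower per invocation yet costs roughly a third as much, which is exactly the kind of result the workshop labs surface.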

  • View profile for Jayas Balakrishnan

    Director Solutions Architecture & Hands-On Technical/Engineering Leader | 8x AWS, KCNA, KCSA & 3x GCP Certified | Multi-Cloud

    2,931 followers

    Serverless First: Is It Always the Right Choice?

    The cloud world is buzzing about “Serverless First” strategies. But is it the best path for every workload? Let’s compare serverless and containerized approaches with actionable criteria to help you decide.

    Serverless vs. Containers: Key Considerations
    1. Scalability:
    • Serverless: Auto-scales to zero. Perfect for unpredictable traffic (e.g., APIs, event-driven tasks).
    • Containers: Manual or cluster-based scaling. Better for steady, high-volume workloads (e.g., microservices, data pipelines).
    2. Cost:
    • Serverless: Pay-per-execution. Cost-effective for sporadic use, but bills can spike at scale.
    • Containers: Fixed costs for reserved resources. Economical for consistent, long-running processes.
    3. Operational Overhead:
    • Serverless: No infrastructure management. Focus on code.
    • Containers: Require orchestration (Kubernetes, ECS) but offer granular control.
    4. Customization:
    • Serverless: Limited to the provider’s runtimes and configuration.
    • Containers: Full control over the OS, libraries, and dependencies.
    5. Vendor Lock-In:
    • Serverless: Higher dependency on a cloud provider (AWS Lambda, Azure Functions, Google Cloud Functions).
    • Containers: Portable across platforms if built agnostically.

    Decision-Making Checklist
    Choose Serverless if:
    • Your workload is event-driven or has erratic traffic.
    • You want to minimize DevOps overhead.
    • Your tasks are short-lived (e.g., image processing, cron jobs).
    Choose Containers if:
    • You have predictable, high-performance needs (e.g., gaming backends).
    • You run complex apps requiring custom environments.
    • Avoiding vendor lock-in is a priority.

    💡 The Bottom Line
    “Serverless First” isn’t a one-size-fits-all mantra. It’s about matching the tool to the job. Use serverless for agility and cost efficiency in the right scenarios. Opt for containers when control, portability, and performance are non-negotiable.

    What’s your take? Have you faced a “serverless vs. containers” dilemma?

    #AWS #awscommunity

  • View profile for Mike Thornton

    🔸Unpacking Software Architecture

    21,486 followers

    Is it serverless?

    Is it serverless because the functions are built and run on demand?
    🔸 What if the code runs in containers?
    🔸 What if the containers are pre-built?
    🔸 What if it's microservices?
    🔸 What if it's a monolith?
    Is it still serverless?

    Someone told me, "Only Functions as a Service is serverless." That seems silly, because I've used serverless solutions with monoliths, microservices, databases, and machine learning without a single server to worry about. Whether it’s serverless has nothing to do with whether it's built on demand or whether it uses FaaS like AWS Lambda or Google Cloud Functions.

    What matters is that you don't have to manage the servers. That instances can scale to zero, or there are no instances to think about. That it scales up with demand. That you are billed only for what you use.

    OpenFaaS uses pre-built containers. AWS Fargate and Google Cloud Run manage and run containers for you. S3 is serverless storage. Amazon Aurora Serverless is a serverless relational database. Monoliths, microservices, databases, storage, API gateways, event buses, and even complex software like machine learning models can be serverless.

    What makes it serverless?
    🔻 Not where it's built.
    🔻 Not whether it uses functions.
    🔻 Not whether it uses containers.
    🔻 Not how many APIs it exposes.
    The abstraction of infrastructure management makes it serverless.

    (Because "serverless" was originally associated with FaaS, people now use "Serverless Functions" and "Serverless Containers" to distinguish the type of serverless architecture.)
    ___
    📭 Subscribe to my newsletter for more on Software Architecture https://lnkd.in/gpGG25TK

  • View profile for Chafik Belhaoues

    Founder of Brainboard.co (YC W22). Former CTO @Scaleway.

    20,436 followers

    📌 Azure serverless automated document classification and processing using AI services.

    In the era of AI, data processing is a key element, especially since most enterprise data is private.

    ✅ Based on the Microsoft Azure Architecture Center pattern for document classification automation, this architecture lets you deploy a comprehensive Azure setup for automated document classification and processing using Azure AI services, with Terraform.

    👉 You can use it as is (it is tested and ready to deploy) or as inspiration for your own architecture.

    🏛️ Architecture Overview
    The solution implements a serverless document processing pipeline that:
    - Ingests documents through a web application
    - Stores documents in Azure Blob Storage
    - Triggers processing via Azure Service Bus messaging
    - Orchestrates document processing using Azure Durable Functions
    - Analyzes documents with Azure AI Document Intelligence
    - Stores metadata in Azure Cosmos DB
    - Creates embeddings and indexes in Azure AI Search
    - Enables natural language interaction via Azure OpenAI

    👉 You can clone this architecture here: https://lnkd.in/eGeKCuPZ
    It has a full README with instructions on how the components work, the connectivity between them, and the workflow sequences.

    🚀 Your production infrastructure needs a production-grade solution.

    #azure #data #document #ai #brainboard #serverless
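    The pipeline's stage ordering can be illustrated with a stdlib-only simulation, where each Azure service from the list above is replaced by a plain function. Everything here (the filename-based classifier, the dict index, the stage names) is a made-up stand-in, not the repository's Terraform or Functions code.

    ```python
    def classify(doc: dict) -> dict:
        """Stand-in for Azure AI Document Intelligence; this toy version just
        keys off the filename instead of analysing content."""
        doc["category"] = "invoice" if "invoice" in doc["name"] else "other"
        return doc


    def index(doc: dict, search_index: dict) -> dict:
        """Stand-in for persisting metadata (Cosmos DB) and embeddings/indexes
        (AI Search); here both collapse into one dict."""
        search_index[doc["name"]] = doc["category"]
        return doc


    def orchestrate(doc: dict, search_index: dict) -> dict:
        """Durable-Functions-style orchestrator: run the stages in order.

        In the real architecture each arrow is an Azure service (Blob Storage
        -> Service Bus -> Durable Functions -> Document Intelligence ->
        Cosmos DB -> AI Search); here they are plain function calls.
        """
        return index(classify(doc), search_index)


    idx = {}
    result = orchestrate({"name": "invoice-042.pdf"}, idx)
    ```

    The value of the orchestrator pattern is that the stage sequence lives in one place, so retries and fan-out can be added per stage without the stages knowing about each other.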
