Real-time Sports Scores with AWS AppSync Events API

AWS AppSync's Events API is a powerful tool for building real-time features, and its value goes well beyond generic chat apps and live dashboards. Imagine real-time sports score updates. 🎯

A backend service receives score changes from an external API. Instead of polling or a complex pub/sub setup, the service publishes an event like { gameId: "game-101", homeScore: 3, awayScore: 2 } to an AppSync channel for that specific game. Connected clients subscribed to game-101 receive the update immediately. Publishing a new event is as easy as sending a POST request to the API with your payload.

This pattern is far more efficient and scalable than maintaining persistent connections for every user or implementing a custom fan-out mechanism. The key benefit is treating events as first-class data objects that AppSync distributes: you define your channels and publish messages, and AppSync handles the underlying WebSocket infrastructure, subscription management, and delivery.

That dramatically reduces the boilerplate code and operational overhead usually associated with real-time features. Instead of managing connection state, subscription routing, and broadcast logic, you focus on your application's core domain and UI.

---
Liked this post?
🔔 Follow me for more insights on building with DynamoDB.
♻️ Repost to share it with your network.
📬 Subscribe to my newsletter for weekly posts on AWS/DynamoDB.
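As a rough sketch of how simple that publish call is, here is what building the HTTP request might look like in Python. The endpoint URL, API key, and channel namespace are placeholders, and the exact request shape should be checked against the AppSync Events API docs:

```python
import json

# Placeholder values -- substitute your Events API HTTP endpoint and key.
EVENTS_HTTP_ENDPOINT = "https://example1234.appsync-api.us-east-1.amazonaws.com/event"
API_KEY = "da2-xxxxxxxx"

def build_publish_request(game_id: str, home: int, away: int) -> dict:
    """Build the HTTP publish request for an Events API channel.

    The Events API takes a channel path plus a list of events, where
    each event is serialized as its own JSON string.
    """
    score = {"gameId": game_id, "homeScore": home, "awayScore": away}
    return {
        "url": EVENTS_HTTP_ENDPOINT,
        "headers": {"content-type": "application/json", "x-api-key": API_KEY},
        "body": json.dumps({
            "channel": f"default/{game_id}",  # namespace/segment
            "events": [json.dumps(score)],    # each event is a JSON string
        }),
    }

request = build_publish_request("game-101", 3, 2)
# POST request["body"] to request["url"] (e.g. with urllib or requests)
# and every client subscribed to default/game-101 receives the update.
```

Subscribers simply open a WebSocket to the same API and subscribe to the channel; no broadcast code is needed on your side.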
More Relevant Posts
-
LambdaDB Cloud is now in public preview. Get started in 5 minutes → https://lnkd.in/gXDXFJ_b

We built LambdaDB because most "serverless" AI databases aren't actually serverless — they're server-based products with a serverless API. That means limited region availability, performance that degrades under load, and costs that grow faster than your usage.

LambdaDB is built differently — fully distributed on AWS Lambda and S3, all the way down:
→ Compute, memory, and storage scale independently with no manual sharding
→ Full-text, multi-vector, and hybrid search in a single query on a flexible document model
→ 33 AWS regions — run it where your data needs to live
→ >1 GB/s write throughput per serverless collection
→ Git-like data branching — fork a production index, test new embedding models, promote when ready
→ Configurable strong consistency and point-in-time recovery
→ $0 monthly minimum — pay only for what you use

If you're building with LLMs and tired of infrastructure that can't keep up, we'd love for you to try it. Get started in 5 minutes → https://lnkd.in/gXDXFJ_b

Feedback welcome — we're actively building in the open.
-
15 years of building data infrastructure taught me one thing: "serverless" is meaningless if the underlying architecture is still server-based. That's the problem we built LambdaDB to solve. Public preview is live today — would love to hear what you think.
-
🚀 Just shipped: TravelEase: AI-Powered Serverless Contact Form on AWS

I recently built a production-ready serverless application that solves a real business problem: travel agencies losing customer inquiries through unreliable email links, with no tracking, no confirmation, and no way to prioritize leads.

What the solution does:
→ Captures customer travel inquiries via a CloudFront-hosted HTTPS form
→ Generates AI-powered insights using the Anthropic Claude API, giving TravelEase employees instant context on every submission before they respond
→ Persists data to DynamoDB and sends automated confirmations via Amazon SES
All within ~3 seconds, at ~$0.50/month.

What I built:
→ Serverless architecture on AWS (Lambda, API Gateway, DynamoDB, SES, Secrets Manager, CloudFront)
→ 100% Infrastructure as Code using Terraform
→ CI/CD pipeline with GitHub Actions using OIDC authentication, so no static credentials are stored anywhere
→ Human-in-the-loop deployment gates: the Terraform plan is reviewed on every pull request before apply runs
→ Active monitoring with CloudWatch alarms and SNS for real-time alerting
→ Defense-in-depth security across every layer of the architecture

What this means:
Serverless solutions reduce operational overhead so TravelEase can focus entirely on what matters most: the customer relationship. AWS managed services handle the scalability behind the scenes, and the Claude AI integration gives employees instant context on every inquiry, helping them respond faster and more meaningfully. AI doesn't replace the human connection here; it strengthens it.

Documentation on Medium: https://lnkd.in/eCAK9PpF
GitHub Repository: https://lnkd.in/eGVgYjYc
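The validation step of a handler like this can be sketched in a few lines. The field names and response shapes below are illustrative, not the actual TravelEase code, and the DynamoDB/Claude/SES calls are stubbed out as comments:

```python
import json

REQUIRED_FIELDS = ("name", "email", "message")  # hypothetical form schema

def lambda_handler(event, context):
    """Validate an inquiry before persisting it (persistence calls omitted)."""
    try:
        form = json.loads(event.get("body") or "{}")
    except json.JSONDecodeError:
        return {"statusCode": 400, "body": json.dumps({"error": "invalid JSON"})}

    missing = [f for f in REQUIRED_FIELDS if not form.get(f)]
    if missing:
        return {"statusCode": 400,
                "body": json.dumps({"error": f"missing fields: {missing}"})}

    # In the real project this is where the DynamoDB put_item, the Claude
    # enrichment call, and the SES confirmation email would go.
    return {"statusCode": 200, "body": json.dumps({"received": form["name"]})}
```

Rejecting malformed submissions up front keeps the downstream AI and email steps from ever running on bad input.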
-
🚀 Last week I completed the course “Amazon Bedrock AgentCore: Build & Deploy Any AI Agent on AWS.” It gave me a clear picture of how easily we can take agentic AI POCs to production using AWS’s structured ecosystem. Here’s a quick breakdown of the 7 core components I learned:

1. AgentCore Runtime
The execution layer for AI agents, similar to AWS Lambda. It works with frameworks like LangGraph, CrewAI, and Strands Agents, and with models from OpenAI, Anthropic, Gemini, and others. A runtime decorator handles routing, interactions, and orchestration, and the runtime exposes two endpoints:
1.1 /invocations (the main endpoint)
1.2 /ping (health check)
Before a request reaches the invocations endpoint, the caller's authentication and authorization are checked, which is handled by AgentCore Identity.

2. AgentCore Identity
Ensures who is calling the agent and what they're allowed to access.
2.1 Inbound: authenticates users with providers like Amazon Cognito or Okta. Once the user has authenticated with Cognito, the bearer token Cognito issues is sent to AgentCore Identity, which validates it; if approved, the user can call the invocations endpoint.
2.2 Outbound: controls which resources the agent itself can reach (S3, databases, SharePoint, etc.).

3. AgentCore Memory
Provides:
3.1 Short-term memory for the current session.
3.2 Long-term memory, with multiple strategies: for example, today's conversation can be summarized, stored, and reused in the next conversation. It also provides "semantic memory," which retains factual information for use in related conversations.

4. AgentCore Observability
Monitoring plus GenAI-focused tracing to help understand agent behavior and performance.

5. AgentCore Gateway
Validates tool access and permissions before the agent fetches any third-party data.

6. AgentCore Browser
Allows agents to retrieve live web information securely.

7. AgentCore Code Interpreter
A safe sandbox where agents can execute and test code, useful for scenarios like automated debugging or patch testing.
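The two-endpoint runtime contract described above can be sketched as a tiny router. This is an illustration of the idea only, with the agent itself stubbed out (a real runtime would hand the request to LangGraph, CrewAI, Strands Agents, etc.):

```python
import json

def handle_request(method: str, path: str, body: str = "") -> tuple[int, dict]:
    """Route the two runtime endpoints; the agent logic is stubbed out."""
    if method == "GET" and path == "/ping":
        # Health check used by the platform to decide the runtime is alive.
        return 200, {"status": "Healthy"}
    if method == "POST" and path == "/invocations":
        # By this point the identity layer has already validated the caller's
        # bearer token; here the payload would be handed to the agent framework.
        payload = json.loads(body or "{}")
        return 200, {"result": f"agent saw: {payload.get('prompt', '')}"}
    return 404, {"error": "unknown route"}
```

Everything else in the list above (identity, memory, observability, gateway) wraps around this same request path.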
-
Build an MCP server for LLMs in minutes — TypeScript + azd + Azure Container Apps. 🚀

This post walks through building a Model Context Protocol (MCP) server from the powergentic/azd-mcp-ts template. You’ll see how to define resources/tools with the MCP SDK, stream responses via SSE, containerize with Docker, and deploy to Azure using the Azure Developer CLI (azd).

Key takeaways:
- What MCP is and why it matters for LLM integration 🧩
- Use TypeScript + @modelcontextprotocol/sdk and the powergentic template
- SSE as the simple, reliable server→client streaming transport ⚡
- Dockerize locally and deploy to Azure Container Apps with azd 🐳
- How to add tools (example: calculate-bmi) and extend the server 🔧

Read it to get a practical, production-ready path for exposing structured data and actions to LLMs: https://lnkd.in/ec9chm2p

#MCP #TypeScript #azd #Azure #Docker
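The post's calculate-bmi tool is written with the TypeScript @modelcontextprotocol/sdk, but the shape of an MCP tool is language-neutral: a name, a JSON Schema the client advertises to the LLM, and a handler. A minimal Python sketch of that shape (the registry below is illustrative, not the real SDK API):

```python
def calculate_bmi(weight_kg: float, height_m: float) -> dict:
    """Tool handler: returns the text content payload an MCP server would send."""
    bmi = weight_kg / (height_m ** 2)
    return {"content": [{"type": "text", "text": f"{bmi:.1f}"}]}

TOOLS = {
    "calculate-bmi": {
        "inputSchema": {  # JSON Schema advertised to the LLM client
            "type": "object",
            "properties": {
                "weight_kg": {"type": "number"},
                "height_m": {"type": "number"},
            },
            "required": ["weight_kg", "height_m"],
        },
        "handler": calculate_bmi,
    }
}

def call_tool(name: str, args: dict) -> dict:
    """Dispatch a tools/call request to the registered handler."""
    return TOOLS[name]["handler"](**args)
```

In the real server, the SDK handles the schema advertisement and dispatch for you; extending the server is mostly a matter of adding entries like this one.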
-
On hype topics in the software industry:

#Kubernetes: Don't use it if you're a team of 300; you are not Google.
#Microservices: If your team fits into a single room, stick with a monolith. You're not saving the world.
#Kafka: If you don't have 1 million events streaming per second, don't install it; you'll get crushed under the complexity. What's wrong with RabbitMQ/NATS/Redis?
#Rust: There's no need to split the atom for a simple CRUD app.
#Multi-Cloud: You can't even use a single cloud provider efficiently yet; don't double your bill out of fear of "vendor lock-in."
#NoSQL: If you know the relationships between your data, don't force a document database and end up simulating joins yourself.
#CleanArchitecture: Don't get lost in a 15-folder "onion architecture" when a simple 3-tier structure is enough. Code readability is more important than the number of files.
#Serverless: Don't turn your architecture upside down for a function that gets 100 requests a month; just get a VPS and find peace.
#AI/LLM: Don't try to integrate a chatbot or RAG into every problem; sometimes a well-written if-else block saves the day.
-
AWS Lambda durable functions let developers reliably replay and pause code (and therefore billing) while extending the possible run time of a Lambda function. However, the feature is new, and coding tools still struggle with its concepts and syntax. If you are using Kiro, we can help! https://lnkd.in/erfgks4z
-
AWS launched Agent Plugins yesterday. Production-ready.

You type "deploy to AWS" and get a complete CI/CD pipeline with architecture diagrams. Bedrock AgentCore now handles memory, identity, and tool integrations as standard cloud services.

The infrastructure layer for AI agents is settling. We're moving from "experiment with AI" to "run AI as operations." Three things changed:

1. Deployment looks like DevOps now
CloudFormation templates for agent architectures. IAM policies for agent permissions. Familiar patterns for infrastructure teams.

2. Memory is a managed service
Vector databases and conversation persistence come as configuration options now. Less plumbing to maintain.

3. Tool integrations standardize
The deploy-on-aws plugin generates pipelines automatically. Third-party tools get consistent interfaces.

At LFG Labs, we've deployed agents on VPS, Mac Mini fleets, and cloud instances. (We've seen the full spectrum from "works great" to "why is this so expensive?") AWS's move validates what we've observed: agent infrastructure is maturing fast.

But managed services have costs that creep. One client's bill grew 4x in 90 days because they didn't model token volume correctly.

AWS lowered the barrier to entry. That's great for experimentation. For production at scale, you still need to pick your infrastructure deliberately. Start with the problem. Match the tool. Then scale.
-
Hi everyone, I’m sharing a new feature from Holding The Load that can save you money and protect your automation workflows.

Now you can validate webhook data before it reaches your automation. That means:
- You can check if the webhook contains the required fields
- You can verify if the data types are correct
- Only valid requests are stored and processed

Why does this matter? If you are running automations with tools like n8n (self-hosted on a VPS or using n8n Cloud), every workflow execution consumes resources. When invalid or incomplete webhooks trigger your automation:
- Your VPS wastes CPU and memory
- Your workflows execute unnecessarily
- On cloud plans, you may pay for executions that should never have happened

With this validation layer in front of your automation, you block bad requests before they cost you time or money.

Link project: https://lnkd.in/dtWXa2zi
Video (PT-BR): https://lnkd.in/dZQyEgf7
Video (EN-US): https://lnkd.in/dPPghkCH

If you are building automations and want more control over cost and stability, this feature was made for you.

#n8n #ai #chatbot #aichatbot #vibecode #automation #aiagent #python #php #node #javascript #java #ruby #golang #csharp
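The idea of a validation layer in front of a workflow can be sketched in a few lines of Python. The schema below is a made-up example, not Holding The Load's actual API:

```python
# Hypothetical schema: field name -> expected type, for an order webhook.
SCHEMA = {"order_id": str, "amount": float, "email": str}

def validate_webhook(payload: dict) -> tuple[bool, list[str]]:
    """Check required fields and types before the workflow ever runs."""
    errors = []
    for field, expected in SCHEMA.items():
        if field not in payload:
            errors.append(f"missing field: {field}")
        elif not isinstance(payload[field], expected):
            errors.append(f"{field} should be {expected.__name__}")
    return (not errors, errors)

ok, errors = validate_webhook({"order_id": "A-1", "amount": "12.50"})
# ok is False here: amount has the wrong type and email is missing, so the
# workflow is never triggered and no execution is billed.
```

Only payloads that pass this gate get stored and forwarded to n8n, which is exactly the cost-saving behavior the feature describes.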
Holding the load - Data validation before store webhook requests #n8n #vibecoding #microsaas #saas
-
🚀 Unlocking Real-Time Data Processing with Azure Service Bus vs AWS SQS

In modern backend development, processing real-time data is crucial for efficient decision-making. Two popular message broker services, Azure Service Bus and AWS SQS, help developers achieve this by handling high volumes of messages. This post explores the key differences between the two services and when to use each.

Why it matters: With the increasing demand for real-time data processing, choosing the right message broker can make a significant difference in your application's performance and scalability.

Key differences: Azure Service Bus is a fully managed platform with a more comprehensive feature set, including support for multiple messaging patterns (queue-based, topic-based, and request-response). AWS SQS, on the other hand, is designed primarily for queue-based messaging and excels at handling large volumes of messages.

Real-world example: Imagine you're building an e-commerce platform that requires real-time inventory updates. You can use Azure Service Bus to fan those updates out efficiently, or opt for AWS SQS if your primary concern is handling a massive volume of order-processing requests.

When to use which: for applications that require real-time data processing, such as financial trading platforms or gaming services, choose Azure Service Bus for its comprehensive feature set, or AWS SQS if you're primarily handling large volumes of messages.

✔️ Use Service Bus → if you need pub/sub, ordering, complex workflows
✔️ Use SQS → if you need massive scale with simplicity

👉 There’s no “best” — only the right fit for your architecture.

#dotnet #microservices #cloudarchitecture #azure #aws #sqs #servicebus #eventdrivenarchitecture
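The queue-vs-topic distinction this comparison rests on can be made concrete with a toy in-memory model. This is an illustration only; real SQS and Service Bus add delivery guarantees, visibility timeouts, dead-lettering, and much more:

```python
from collections import deque

class Queue:
    """SQS-style point-to-point: each message is consumed exactly once."""
    def __init__(self):
        self._messages = deque()
    def send(self, msg):
        self._messages.append(msg)
    def receive(self):
        # One consumer takes the message; it is gone afterwards.
        return self._messages.popleft() if self._messages else None

class Topic:
    """Service Bus-style pub/sub: every subscription gets its own copy."""
    def __init__(self):
        self._subscriptions = {}
    def subscribe(self, name):
        self._subscriptions[name] = Queue()
        return self._subscriptions[name]
    def publish(self, msg):
        # Fan out: each subscription receives an independent copy.
        for sub in self._subscriptions.values():
            sub.send(msg)
```

If every message has exactly one processor, the queue model (SQS) is the simpler fit; if several independent systems need the same event, the topic model (Service Bus) saves you from building fan-out yourself.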