Wondering if ClickStack is worth exploring? 🔍 Now you don't need to set anything up to find out. With ClickHouse 26.2, the ClickStack UI is embedded directly in the binary. Install ClickHouse, open localhost:8123, and you can start exploring straight away.
🔹 Full ClickStack UI ships inside the ClickHouse binary — under 4.2 MB added
🔹 Works with any ClickHouse table — point it at your own data and explore
🔹 Built-in preset dashboard for ClickHouse system metrics: query latency, CPU, memory, and inserts
🔹 Great for local experimentation, demos, and learning the product
🔹 For production, Docker and managed Cloud deployments remain the recommended path
A fast way to get a feel for what ClickStack can do with your data. 🚀 🔗
Explore ClickStack with ClickHouse 26.2
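If you want to try the "point it at your own data" part, here is a minimal sketch that creates and loads a table over the ClickHouse HTTP interface on localhost:8123, the same port where the embedded UI is served. It assumes a local server with the default user and no password; the table name and schema are just an example.

```python
import requests

CLICKHOUSE = "http://localhost:8123"  # HTTP interface; the embedded UI lives here too

def query(sql):
    """Send one SQL statement to ClickHouse over HTTP and return the response body."""
    response = requests.post(CLICKHOUSE, data=sql)
    response.raise_for_status()
    return response.text

# Example table to explore in the embedded ClickStack UI.
query("""
    CREATE TABLE IF NOT EXISTS demo_events (
        ts DateTime,
        level String,
        message String
    ) ENGINE = MergeTree ORDER BY ts
""")
query("INSERT INTO demo_events VALUES (now(), 'info', 'hello from the demo')")
print(query("SELECT count() FROM demo_events"))
```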
More Relevant Posts
-
🚀 AWS just made building resilient, long-running applications even easier — introducing the Lambda Durable Functions Kiro Power!
If you've been working with Lambda durable functions, you know the complexity involved: replay model best practices, step and wait operations, concurrent execution patterns, error handling with retry strategies and compensating transactions... it's a lot to keep in mind.
Now, with the new Kiro Power for Lambda Durable Functions, an AI agent dynamically loads all that expertise directly into your local development environment as you code. No more context switching between docs and your IDE.
🧠 What the AI agent brings to your workflow:
- Replay model best practices
- Step and wait operations
- Concurrent execution with map and parallel patterns
- Error handling: retry strategies & compensating transactions
- Testing patterns
- Deployment with CloudFormation, AWS CDK, and AWS SAM
💡 Real-world use cases it accelerates:
- Order processing pipelines
- AI agent orchestration with human-in-the-loop approvals
- Payment coordination workflows
The power is available today with one-click installation from the Kiro IDE and the Kiro powers page — and the source is open on GitHub. If you're building multi-step, long-running applications or AI workflows on AWS, this is a must-try. The gap between "idea" and "working durable function" just got a lot smaller.
👉 Get started: https://lnkd.in/dBqtMy7S
#AWS #Lambda #DurableFunctions #Kiro #ServerlessComputing #AIAgents #CloudDevelopment #AWSLambda #GenerativeAI #DevTools
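For readers new to the replay model the post leads with, here is a self-contained sketch of the core idea in plain Python. None of these names come from the actual Lambda durable functions SDK; `DurableContext`, `step`, and the journal are invented for illustration, but they show why step boundaries and determinism matter.

```python
# Illustration of the replay model behind durable functions (invented API,
# not the AWS SDK). Step results are journaled, so a replayed execution
# skips completed steps instead of re-running their side effects.

class DurableContext:
    def __init__(self, journal):
        self.journal = journal        # persisted step results from prior runs
        self.cursor = 0               # how far replay has progressed

    def step(self, name, fn, *args):
        if self.cursor < len(self.journal):
            recorded_name, result = self.journal[self.cursor]
            assert recorded_name == name, "steps must replay deterministically"
            self.cursor += 1
            return result             # replay: skip the side effect entirely
        result = fn(*args)            # first execution: run and journal it
        self.journal.append((name, result))
        self.cursor += 1
        return result


def charge(order_id):
    print(f"  charging card for {order_id}")      # side effect: runs only once
    return f"payment-{order_id}"


def handler(ctx, order_id):
    payment = ctx.step("charge", charge, order_id)
    return ctx.step("confirm", lambda p: f"confirmed {p}", payment)


journal = []                          # a real runtime persists this durably
print(handler(DurableContext(journal), "o-1"))    # runs both steps
print(handler(DurableContext(journal), "o-1"))    # replays; no second charge
```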
-
Types of APIs in AWS API Gateway – Choosing the Right One
AWS API Gateway enables developers to build secure, scalable, and highly available APIs for cloud-native applications. It supports three main API types, each designed for different architectural needs.
🔹 REST APIs
Best suited for enterprise-grade applications that require advanced features such as request/response transformations, API keys, caching, and detailed monitoring. Ideal for exposing legacy systems or managing partner APIs.
🔹 HTTP APIs
A modern and lightweight alternative to REST APIs. Provides lower latency and reduced cost, making it perfect for microservices and serverless architectures using services like AWS Lambda.
🔹 WebSocket APIs
Designed for real-time, bidirectional communication between clients and backend services. Commonly used in chat applications, live dashboards, gaming platforms, and notification systems.
🔗 Read the full article: https://lnkd.in/guRT2NEW
#AWS #APIGateway #CloudArchitecture #Serverless #Microservices #WebSocket #RESTAPI #CloudComputing #SmartCodersConsulting
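To make the difference concrete, here is a minimal boto3 sketch of creating an HTTP API and a WebSocket API. The `apigatewayv2` client covers both (REST APIs live under the older `apigateway` client); the API names and the Lambda target ARN are placeholders.

```python
import boto3

apigw = boto3.client("apigatewayv2")  # handles both HTTP and WebSocket APIs

# HTTP API: lightweight, low-latency proxy in front of a Lambda function
# (quick-create via the Target parameter).
http_api = apigw.create_api(
    Name="orders-http-api",                                           # placeholder
    ProtocolType="HTTP",
    Target="arn:aws:lambda:us-east-1:123456789012:function:orders",   # placeholder ARN
)

# WebSocket API: routes are selected from the message body, enabling
# bidirectional, real-time communication.
ws_api = apigw.create_api(
    Name="chat-ws-api",                                               # placeholder
    ProtocolType="WEBSOCKET",
    RouteSelectionExpression="$request.body.action",
)

print(http_api["ApiEndpoint"], ws_api["ApiEndpoint"])
```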
-
Real-time event processing using AWS EventBridge for ML use cases like dynamic pricing and multi-item delivery processing within a single order. https://lnkd.in/g87zQEke
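As a rough sketch of the pattern, an order service can publish one event per item to a custom EventBridge bus, where rules fan them out to an ML pricing consumer and a delivery consumer. The bus name, source, and detail-type below are illustrative, not from the linked article.

```python
import json
import boto3

events = boto3.client("events")

# Publish one event per item in the order; EventBridge rules can then route
# them to a dynamic-pricing model and to per-item delivery processing.
order = {"order_id": "o-123", "items": [{"sku": "A1", "qty": 2}, {"sku": "B7", "qty": 1}]}

entries = [
    {
        "EventBusName": "orders-bus",       # illustrative bus name
        "Source": "shop.orders",            # illustrative source
        "DetailType": "ItemOrdered",
        "Detail": json.dumps({"order_id": order["order_id"], **item}),
    }
    for item in order["items"]
]

response = events.put_events(Entries=entries)
print(response["FailedEntryCount"])
```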
-
Why most "Cloud Cost" conversations fail in the boardroom.
Reading through posts on forums, I noticed that founders seem to face a "Translation Layer" problem in the cloud space. Engineering sees a list of SKU costs... finance sees a rising monthly bill, and neither is speaking the language of unit economics.
When teams can't tell their board exactly how much it costs to serve a single customer, they aren't managing a cloud—they're managing a black box.
With this in mind, I spent the last couple of hours building a Mock FinOps Engine to solve this. Instead of showing teams "what we spent," this tool correlates raw Azure billing data with real-world application telemetry to answer one question: what is our marginal cost per transaction?
What I built:
- The Fabricator: a script that mimics real Azure Consumption API schemas (no active subscription required).
- The Translator: tag-based logic that filters out "DevOps noise" to find the true product cost.
- The Architect: a report engine that generates a C-suite-ready "Economic Narrative."
The goal? One key thought in mind... to ensure that when a technical founder talks to an investor, they don't just say "our cloud is expensive." They say: "Our unit cost is $0.15, and here is how we've engineered it for scale."
If you're the "code" type, you can check out the repo here: https://lnkd.in/d6JFUFJg
And if not... you'll see something on Medium soon.
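A minimal sketch of the "Translator" idea: join tagged billing line items to transaction counts from telemetry to get a marginal cost per transaction. The tag names, services, and data shapes are invented for illustration; the repo's actual schema will differ.

```python
from collections import defaultdict

# Invented sample data: Azure-style tagged billing line items, plus
# per-service transaction counts pulled from application telemetry.
billing = [
    {"service": "checkout", "tags": {"env": "prod"}, "cost_usd": 420.0},
    {"service": "checkout", "tags": {"env": "dev"},  "cost_usd": 75.0},   # DevOps noise
    {"service": "search",   "tags": {"env": "prod"}, "cost_usd": 180.0},
]
telemetry = {"checkout": 2_800, "search": 12_000}  # transactions this period

# Translator: keep only production spend, then divide by transaction volume.
prod_cost = defaultdict(float)
for item in billing:
    if item["tags"].get("env") == "prod":
        prod_cost[item["service"]] += item["cost_usd"]

for service, cost in prod_cost.items():
    unit_cost = cost / telemetry[service]
    print(f"{service}: ${unit_cost:.4f} per transaction")
```

With the invented numbers above, checkout lands at $0.15 per transaction — exactly the kind of figure the post wants founders to bring to the board.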
-
Use inference where it is needed. Logic everywhere else.
At a recent Quality Engineering Suite demo, the company representatives we presented to asked about the Suite's token usage. Very good question! Since cloud-based inference is most probably not going to get any cheaper, this FinOps point is well worth noting.
Arguably the most important lever right now is controlling the size of the context. We optimize token usage by tailoring the limits of what individual agents can do, for example. The use of inference needs to fit the context, and which model is used in which use case needs to follow this rule too. Avoiding overly large output artifacts is another lever.
In the big picture, as my colleague Niko Nousiainen pointed out in our recent discussion, the goal for these systems, especially in software development, should be to use inference to learn, then turn that learning into cost-effective operations by using ML or plain logic-based components for scaling and long-term use.
#qualityengineering #hiddenisnotsecret #finops
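One concrete form of the context-size lever: trim conversation history to a fixed token budget before each inference call. A minimal sketch, assuming a simple whitespace token estimate; a production system would use the model's actual tokenizer.

```python
def trim_to_budget(messages, max_tokens=2000):
    """Keep the most recent messages that fit inside a token budget.

    Token counts are estimated by whitespace splitting for illustration only.
    """
    kept, used = [], 0
    for message in reversed(messages):      # walk newest first
        tokens = len(message.split())
        if used + tokens > max_tokens:
            break
        kept.append(message)
        used += tokens
    return list(reversed(kept))             # restore chronological order


history = [f"message {i} " + "word " * 50 for i in range(100)]
print(len(trim_to_budget(history, max_tokens=1000)))  # only recent messages survive
```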
-
AWS orgs overspend by 23-38% - and most isn't where you think.
Everyone tunes EC2 and Lambda memory. What compounds silently: service dependencies and provisioned resources at zero utilization.
Here's the pattern I see consistently:
Hidden cost chains: one S3 request triggers a CloudTrail log. That log feeds GuardDuty analysis. GuardDuty analysis costs money. Now your S3 activity has a billing tail you weren't accounting for.
Ghost resource accumulation: Bedrock provisioned throughput doesn't pause when you stop using it. OpenSearch Serverless collections keep running between queries. NAT Gateways in idle AZs still charge $300-1,200/month.
53% of AWS organizations use zero-commitment pricing - no Savings Plans, no Reserved Instances. That's significant unrealized savings. But teams who fixed commitments and still have high bills? They missed the idle layer entirely.
Three fixes that actually move the needle:
1. Lambda Power Tuning - run it against every function. One idle Lambda cost $31K/year; the same function came in at $730/year after tuning. A 97.7% reduction.
2. Graviton migration - 30-40% compute savings, with minimal architecture changes for most workloads.
3. A daily idle-audit Lambda - checks Bedrock throughput, OpenSearch collections, and NAT Gateways against a 48-hour zero-usage threshold and auto-cancels anything unused. (A sketch of the NAT Gateway check follows below.)
Cost Explorer shows the bill. It doesn't show which services bill at full rate while doing nothing.
What's your process for catching idle resources before they compound?
#AWS #FinOps #CloudOptimization #DevOps #CloudNative
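Here is a minimal sketch of the NAT Gateway half of that idle audit, using the standard `AWS/NATGateway` CloudWatch namespace. It only prints candidates; a real audit would notify or delete, and would add the Bedrock and OpenSearch checks alongside.

```python
from datetime import datetime, timedelta, timezone

import boto3

ec2 = boto3.client("ec2")
cloudwatch = boto3.client("cloudwatch")

IDLE_WINDOW = timedelta(hours=48)
now = datetime.now(timezone.utc)

for gateway in ec2.describe_nat_gateways()["NatGateways"]:
    if gateway["State"] != "available":
        continue
    # Total bytes sent through the gateway over the idle window.
    stats = cloudwatch.get_metric_statistics(
        Namespace="AWS/NATGateway",
        MetricName="BytesOutToDestination",
        Dimensions=[{"Name": "NatGatewayId", "Value": gateway["NatGatewayId"]}],
        StartTime=now - IDLE_WINDOW,
        EndTime=now,
        Period=3600,
        Statistics=["Sum"],
    )
    total_bytes = sum(point["Sum"] for point in stats["Datapoints"])
    if total_bytes == 0:
        print(f"IDLE: {gateway['NatGatewayId']} - no traffic in 48h, deletion candidate")
```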
-
Multi-AZ is the baseline for critical applications. Most teams get that right… What they miss❓ What happens when recovery automation fires across every layer at the same time.
RunInstances. CreateStack. RunTask. All hitting per-account API rate limits simultaneously. ThrottlingException. Recovery stalls.
The data plane was fine the entire time. The issue was a recovery path that depended on control plane APIs without planning for burst capacity.
Amazon Web Services (AWS) published the fix in the Builders' Library and Well-Architected (REL11-BP04): rely on the data plane during recovery. Pre-provision capacity. Stagger automation. Rate-limit your runbooks.
The platform gives you the tools. The gap is in recovery design.
3 slides on the pattern and how to fix it. ↓
Help share awareness of resilient systems: repost! And follow for more!
#CloudResilience #AWS #WellArchitected
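A minimal sketch of the "stagger automation, rate-limit your runbooks" advice: route every control-plane call through a shared rate limiter with jitter so parallel recovery runbooks cannot stampede the API. The rate limit and the recovery action are placeholders, not AWS-published values.

```python
import random
import threading
import time

class ControlPlaneLimiter:
    """Simple shared limiter: spaces out control-plane calls with jitter."""

    def __init__(self, calls_per_second=2.0):
        self.interval = 1.0 / calls_per_second
        self.lock = threading.Lock()
        self.next_slot = time.monotonic()

    def acquire(self):
        with self.lock:
            now = time.monotonic()
            self.next_slot = max(self.next_slot, now) + self.interval
            wait = self.next_slot - now - self.interval  # time until our slot
        if wait > 0:
            # Jitter spreads out runbooks that all fired at the same instant.
            time.sleep(wait + random.uniform(0, 0.25))


limiter = ControlPlaneLimiter(calls_per_second=2.0)   # placeholder limit

def recover_instance(instance_id):
    limiter.acquire()
    # Placeholder for the real control-plane call, e.g. ec2.run_instances(...).
    print(f"recovering {instance_id}")

threads = [threading.Thread(target=recover_instance, args=(f"i-{n}",)) for n in range(10)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```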
-
Last week, I went hands-on with Amazon Bedrock AgentCore using Strands Agents + Nova Pro. I got to explore what it actually takes to run production-grade AI agents with the BeSA Agentic AI on AWS program.
Some things I put into practice:
• Deploying agents with AgentCore Runtime for scalable, serverless execution
• Enabling agents to run Python and automate web tasks using the Code Interpreter and Browser tools
• Securing external API access with AgentCore Identity instead of hardcoded credentials
• Turning APIs into agent tools with AgentCore Gateway + MCP servers
• Adding memory (the coolest part), observability, and tracing to monitor agent behavior in production
Building an agent is actually the easy part... but building the infrastructure that lets agents run securely, remember context, use tools, and scale is the real architecture challenge. 🤖☁️
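For anyone curious what the Strands side of that stack looks like, this is roughly the minimal starting point. Hedged heavily: the import paths and the Nova Pro model ID below are written from memory and should be checked against the current Strands documentation before use.

```python
# Minimal Strands Agents starting point (verify imports and model ID against
# the Strands docs; both are assumptions here, not confirmed by the post).
from strands import Agent
from strands.models import BedrockModel

agent = Agent(
    model=BedrockModel(model_id="us.amazon.nova-pro-v1:0"),  # Nova Pro via Bedrock
    system_prompt="You are a concise assistant.",
)

result = agent("Summarize what AgentCore Runtime does in one sentence.")
print(result)
```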
-
Your employees are building cloud apps with AI right now.
Using tools like Cursor, Replit, and Bolt, anyone can create a fully functional web application — complete with file uploads and data sharing — in minutes. Not months. Minutes.
Every one of those apps is a data loss vector. And not a single one exists in any CASB vendor's database.
We wrote about why legacy CASB can't survive the Vibe Code era → https://bit.ly/40MkxNo
And we're showing the solution live at Booth 6359 at #RSA2026
-
This hits close to home for every MSP I talk to.
Shadow IT used to mean an employee signing up for Dropbox without asking. Now it means spinning up a custom web app, with a database and file sharing, in 10 minutes using AI.
Your client has no idea. Their IT team has no idea. And your CASB? It's never seen these apps either.
This is the new shadow IT. And it's moving faster than any catalog-based tool can keep up with.
As an MSP, this is both a risk conversation and a revenue conversation. The providers who can identify and control these vectors before a breach happens are the ones clients will pay a premium to keep.
iboss solves this with inline inspection that doesn't rely on a static app catalog. It sees and controls behavior in real time.
If you're an MSP and want to turn this into a client conversation, DM me.
#iboss #SASE #MSP #ZeroTrust
-