Source-to-Image, or s2i, is one of #OpenShift's coolest features, IMO. Zero to Kubernetes in minutes; no Dockerfiles required; almost any language! Yes, even #dotnet apps! My colleague Piotr Mińkowski put together a great guide to help you get started. https://lnkd.in/g9M5R5B7
How to deploy any app with OpenShift's s2i
-
🚀 What if you could give AI agents full system access—without risking your host?

Meet BoxLite: the SQLite for VMs. A lightweight, embeddable sandbox that lets AI agents code, install, and explore freely—while keeping your system 100% isolated.

🔥 Why this changes the game:
✅ Hardware-level isolation – each "Box" is a micro-VM with its own kernel (no shared host risks)
✅ No daemon, no root, no hassle – just a library you embed, like SQLite for compute
✅ Full Linux freedom – AI can install packages, run servers, or debug—safely
✅ OCI-compatible – pull any Docker image (python:slim, node:alpine, etc.) in seconds
✅ Cross-platform – works on macOS (Apple Silicon) and Linux (x86/ARM)

This isn’t just another container. It’s the missing link between AI creativity and enterprise-grade security—whether you’re building agentic workflows, hosting customer AIs, or just tired of Docker’s limitations.

💡 Two questions for the builders here:
1. What’s the riskiest thing you’ve let an AI agent do in a sandbox? (And did you regret it?)
2. If you could snap your fingers and fix one pain point in AI execution environments, what would it be?

👉 Hit reply and let’s talk—especially if you’ve battled container breakouts or VM bloat.

#AIDevelopment #DevOps #AIAgents #Virtualization #EmbeddedSystems #CloudNative #TechInnovation #FutureOfCompute
-
Unpopular opinion that’ll get me excommunicated by the CNCF cult:

You wanted: docker run -p 80:80 my-app

You got:
🗒️ a 3-node etcd that panics if someone breathes
🗒️ a CNI having a mental breakdown in Go
🗒️ 47 Helm charts auto-updating every 6 minutes because ArgoCD is clingy
🗒️ Istio adding 600 ms of latency to trace your 3 weekly requests
🗒️ 3,200 lines of YAML just to open port 80
🗒️ PagerDuty at 3 AM because a pod is emotionally unavailable and won’t terminate

Your entire Series B company is a freemium waitlist page for AI-powered todo lists, yet your cluster has more nodes than actual paying customers.

You didn’t go cloud-native. You hired six DevOps bros with MacBook Pros to LARP as Google while shipping CRUD for an overengineered todo app “but at scale”.

#Docker #Kubernetes #OverEngineering #TouchGrass #Security
-
Buzzword galore incoming!!!! 😄

Let’s start in layman’s terms. Imagine you’re scrolling Instagram and Ronaldo posts a pic. Suddenly millions (more likely billions) of people want to see the same thing at the same time. If you only had a fixed number of servers, one of two bad things happens:
1. the app gets very slow, or...
2. it just dies

So instead of guessing traffic, modern systems scale automatically. That’s where Kubernetes + autoscaling come in. Now here are some TLDRs for all of the matter 👇

Scaling on CPU only is kinda… dumb in real systems. Users don’t care about CPU. They care about:
* how slow the app feels (latency)
* whether requests are stuck in queues
* whether the app recovers fast after spikes

That’s where SLOs come in. An SLO (Service Level Objective) is basically:
“I want p95 latency under X ms”
“I don’t want backlog to grow uncontrollably”

So instead of reacting after things break, you scale to protect those goals. This is where KEDA shines: it lets Kubernetes scale workloads based on real signals, not just “CPU went brrr”.

Sooooooo... I put together a small event-driven demo app where:
1. traffic ramps from 1 → 1000 RPS
2. jobs go through a real message queue (NATS JetStream)
3. workers are written in Go (real CPU work, not fake sleep())
4. metrics are scraped every second (cos I am NOT testing this for 30 mins lol)
5. KEDA decides how many worker pods should exist

I’m posting a short ramp-up video (1 → 1000 RPS) below, and I’ll push the project to GitHub with a proper README + manifests so anyone can reproduce the demo.

If nothing else, this was a fun way to really feel how autoscaling behaves under pressure 🙂

#kubernetes #keda #autoscaling #golang #observability #mandatoryTagsLol
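For the curious: KEDA hands its scaler metric to the HPA, which computes desiredReplicas = ceil(currentMetric / target). Here's a minimal Python sketch of that math — the function name, the clamping bounds, and the 1000-job example are my own illustration, not taken from the demo:

```python
import math

def desired_replicas(queue_length: int, target_per_replica: int,
                     min_replicas: int = 1, max_replicas: int = 50) -> int:
    """KEDA-style scaling math: aim for at most `target_per_replica`
    pending jobs per worker pod (mirrors ceil(currentMetric / target))."""
    if queue_length <= 0:
        return min_replicas  # scaler is idle, fall back to the floor
    desired = math.ceil(queue_length / target_per_replica)
    # Clamp to the configured bounds, like minReplicaCount/maxReplicaCount.
    return max(min_replicas, min(desired, max_replicas))

print(desired_replicas(1000, 50))  # → 20
```

With 1000 jobs queued and a target of 50 per worker, the formula asks for 20 pods; the clamping mirrors KEDA's minReplicaCount/maxReplicaCount settings on a ScaledObject.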
-
Just spent my weekend nerding out on n8n vs Make for AI automation workflows... because that's apparently what I do for fun now 🤓

The verdict? It's not as simple as picking a winner.

n8n is a proper tech playground if you're comfortable getting your hands dirty. Self-hosting option, JavaScript/Python support, and advanced AI orchestration. The pricing (per workflow rather than per step) makes more sense for complex automations.

Make gives you 1,500+ ready integrations and a friendlier UI that won't terrify your non-technical colleagues. Quick to set up, but you're locked into their cloud.

It's like choosing between a custom gaming PC and a Mac - depends if you want to tinker under the hood or just want something that works straight away.

I've been implementing n8n for clients who need that extra control and security of self-hosting. The ability to chain multiple AI nodes together for complex decision trees is brilliant for those edge cases where standard automation falls apart.

What automation tools are you lot using? Drop me a DM if you're considering either platform - happy to share my detailed findings and help you figure out which suits your use case best.

#AIAutomation #WorkflowAutomation #n8n #DevTools
-
Kubernetes doesn’t solve problems. It exposes them.

Teams adopt K8s hoping for:
✔️ Scalability
✔️ Reliability
✔️ Magic

Instead they get:
✔️ Complex networking
✔️ Debugging nightmares
✔️ Cost surprises

K8s works when:
✔️ Your app is stateless by design
✔️ Observability is built-in
✔️ You understand failure as normal

Actionable takeaway: adopt Kubernetes after fixing your application architecture.

Was K8s a win or a headache for your team?

#Kubernetes #CloudNative #DevOpsReality #SystemDesign
-
𝗧𝗵𝗲 𝗽𝗼𝗱 𝘄𝗮𝘀 𝗥𝘂𝗻𝗻𝗶𝗻𝗴. 𝗧𝗵𝗲 𝗮𝗽𝗽 𝘄𝗮𝘀 𝘀𝘁𝗶𝗹𝗹 𝗱𝗼𝘄𝗻.

I’ve seen this in production:
• Pod status: Running ✅
• Container: Healthy (as per Kubernetes)
• Alerts still firing ❌
• Users still complaining ❌

At first, it feels like Kubernetes is lying. It isn’t. What actually happened 👇

Kubernetes only knows 𝗰𝗼𝗻𝘁𝗮𝗶𝗻𝗲𝗿 𝗵𝗲𝗮𝗹𝘁𝗵, not 𝗮𝗽𝗽𝗹𝗶𝗰𝗮𝘁𝗶𝗼𝗻 𝗿𝗲𝗮𝗱𝗶𝗻𝗲𝘀𝘀.

My app:
• Had an open port
• Started the process
• Passed the liveness check

But:
• It couldn’t talk to the database
• Or was still warming caches
• Or failed a downstream dependency

Kubernetes saw “alive”. Users saw “down”.

The lesson: Kubernetes doesn’t manage 𝗯𝘂𝘀𝗶𝗻𝗲𝘀𝘀 𝗮𝘃𝗮𝗶𝗹𝗮𝗯𝗶𝗹𝗶𝘁𝘆. It manages 𝗱𝗲𝘀𝗶𝗿𝗲𝗱 𝗰𝗼𝗻𝘁𝗮𝗶𝗻𝗲𝗿 𝘀𝘁𝗮𝘁𝗲. If you don’t teach Kubernetes what “ready” means, it will happily route traffic to something that isn’t.

The fix:
• A proper 𝗿𝗲𝗮𝗱𝗶𝗻𝗲𝘀𝘀 𝗽𝗿𝗼𝗯𝗲 that checks real dependencies
• Delaying traffic until the app is truly ready
• A clear separation between “𝘢𝘭𝘪𝘷𝘦” and “𝘳𝘦𝘢𝘥𝘺 𝘵𝘰 𝘴𝘦𝘳𝘷𝘦”

#Kubernetes #DevOps #SiteReliabilityEngineering #CloudNative #ProductionLessons #RealWorldScenarios #LearningFromIncidents
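The alive-vs-ready split fits in a few lines. This is a minimal Python sketch of what the two probe handlers might return; the db_ok / cache_warm / downstream_ok flags are hypothetical stand-ins for real dependency checks, not anything from the incident above:

```python
def liveness() -> int:
    # Liveness: the process is up and can answer at all.
    # A failure here tells Kubernetes to restart the container.
    return 200

def readiness(db_ok: bool, cache_warm: bool,
              downstream_ok: bool) -> tuple[int, list[str]]:
    # Readiness: safe to receive traffic. Each flag stands in for a
    # real check (DB ping, cache warm-up, downstream health call).
    failing = [name for name, ok in [
        ("database", db_ok),
        ("cache", cache_warm),
        ("downstream", downstream_ok),
    ] if not ok]
    # 503 keeps the pod out of Service endpoints WITHOUT restarting it;
    # restarts are the liveness probe's job, not readiness's.
    return (200 if not failing else 503), failing

print(readiness(db_ok=True, cache_warm=False, downstream_ok=True))
```

Wire something like this to httpGet readinessProbe and livenessProbe paths, and Kubernetes stops routing to a pod whose process is alive but whose dependencies aren't.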
-
This article explains why Kubernetes needs a higher-level, intent-based layer so you can ship apps without writing huge YAML manifests. It also highlights approaches like KubeVela, OpenChoreo, and Score. More: https://ku.bz/MvCTVVWb0
-
2025 was a landmark year for developers—and Kotlin was right in the middle of it.

From KotlinConf 2025 teasing upcoming language features like guard conditions, context parameters, and enhanced nullability, to deeper integration with AI tooling and multiplatform momentum, it’s clear Kotlin is evolving to meet a more complex, agent-driven world.

At the same time, we saw the Linux Foundation form the Agentic AI Foundation, signaling a shift toward open standards like MCP and AGENTS.md—foundations Kotlin developers can build on as AI agents become first-class citizens in modern applications.

Add in Docker Compose’s new agent-friendly workflows and Wasm 3.0 opening the door to larger, more powerful non-web workloads, and the message is clear:

✅ Kotlin’s future isn’t just mobile—it’s agentic, multiplatform, and deeply integrated into modern cloud-native stacks.

2025 set the stage. 2026 is where Kotlin devs ship what’s next.
-
If you use AI-powered coding tools like Gemini CLI, you can now use Crashlytics MCP tools and prompts 🔧 This allows you to use natural language prompts to manage, prioritize, debug, and even fix Crashlytics issues directly within the context of your codebase. Learn more: https://goo.gle/4prRfOQ
-
🧑💻 Devs, rejoice! Keet's Pear Runtime mods make extending with AI/crypto a breeze. Open-source stack rooted in Hypercore—build hybrid apps that scale sans servers. Satscryption's guide for creators pushing boundaries. Code the revolution. https://lnkd.in/eWRcsZVu #KeetMessenger #DevTools #PearRuntime #HypercoreDev #OpenSourceDev #P2PDev #AICrypto #AppDevelopment #TechCreators #Satscryption
Ok so that is VERY cool