Running distributed SQL on Kubernetes shouldn’t mean brittle scripts or late-night toil. The new CockroachDB Operator is purpose-built for automation, using custom resources to eliminate complexity. Multi-region deployments are straightforward, and zero-downtime scaling or upgrades keep always-on applications truly always on. The Operator is in Public Preview today, production-tested and ready for evaluation, with more ahead of GA in 2025. Learn more: https://cockroa.ch/4ns4g9E
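As a rough illustration of the custom-resource approach the post describes, here is a minimal cluster spec. The kind and field names follow the existing public CockroachDB operator's `CrdbCluster` CRD; the new Operator's schema may differ, so treat this as a sketch and consult the linked docs before use.

```yaml
# Illustrative only: kind and fields follow the public CockroachDB
# operator's CrdbCluster CRD; the new Operator's schema may differ.
apiVersion: crdb.cockroachlabs.com/v1alpha1
kind: CrdbCluster
metadata:
  name: cockroachdb
spec:
  nodes: 3                      # scale by editing this and re-applying
  cockroachDBVersion: v24.1.0   # hypothetical version tag
  dataStore:
    pvc:
      spec:
        resources:
          requests:
            storage: 10Gi
```

Scaling or upgrading then becomes an edit to the resource followed by `kubectl apply`, with the Operator driving the rollout.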
CockroachDB Operator for Kubernetes: Simplify Distributed SQL
More Relevant Posts
-
Restart-Operator is a Kubernetes operator that schedules recurring restarts of workloads using cron expressions. It watches custom `RestartSchedule` resources, and when a schedule fires it patches the workload's pod template with a `restart-operator.k8s/restartedAt` annotation, triggering a rolling restart. More: https://ku.bz/0kzldJ2Ql
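A hypothetical manifest for the CRD described above. The `spec` field names (`schedule`, `targetRef`) are assumptions based on the post's description, not taken from the project's actual schema:

```yaml
# Hypothetical RestartSchedule resource; field names are assumed
# from the post's description, check the project docs for the real CRD.
apiVersion: restart-operator.k8s/v1alpha1
kind: RestartSchedule
metadata:
  name: nightly-restart
spec:
  schedule: "0 3 * * *"   # cron: every day at 03:00
  targetRef:
    kind: Deployment
    name: my-api
```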
-
12‑factor is still the fastest way to spot reliability debt.

Start with a lightweight assessment across services. Document risks, owners, and evidence (runbooks, pipeline links, dashboards), and score each factor so remediation is auditable and sequenced by risk, not noise.

Design opinionated defaults: GitOps overlays for config and secrets, SBOM generation in CI for dependencies, and SLO dashboards for logs. Publish templates in your developer portal so new services start compliant by default.

Automate enforcement. Add pipeline gates for dependency drift and secret scanning, plus runtime monitors that flag non‑conformant deployments. Make failures actionable with docs and self‑serve fixes. Review factors quarterly and celebrate full compliance to reinforce the behavior.

The payoff: reproducible builds, cleaner rollbacks, and fewer snowflake environments. Get the modernization checklist and start standardizing today: https://zurl.co/J8k9x
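One possible shape for the "pipeline gates" step, sketched as a CI job. This assumes a GitHub Actions pipeline and the `gitleaks` and `syft` CLIs; your CI system and tooling will differ:

```yaml
# Sketch of a compliance-gate CI job (GitHub Actions syntax assumed).
# Tool invocations are illustrative; pin tool versions in a real pipeline.
jobs:
  compliance-gates:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Secret scanning gate
        run: gitleaks detect --source . --no-banner   # non-zero exit fails the build
      - name: Generate SBOM for dependency auditing
        run: syft dir:. -o spdx-json > sbom.spdx.json
      - name: Upload SBOM as audit evidence
        uses: actions/upload-artifact@v4
        with:
          name: sbom
          path: sbom.spdx.json
```

The uploaded SBOM doubles as the "evidence" artifact the assessment step asks for.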
-
The Kubernetes community has rolled out alpha support for a changed block tracking mechanism, enhancing storage efficiency. I found it interesting that this feature could significantly streamline operations for cloud-native applications. What are your thoughts on the impact of improved block tracking on data management strategies in Kubernetes?
-
Exciting news from the Kubernetes team! They have announced alpha support for a changed block tracking mechanism, which enhances efficiency in data management. What stood out to me was the potential impact this could have on optimizing storage solutions within Kubernetes environments. How do you think this feature might change data management strategies in cloud-native applications?
-
You may have seen that Alexis Richardson mentioned that we're going to be talking about "config as data" at Kubecon (link to the talk below). What is Configuration as Data? Why would anyone want to use that approach to manage configuration? I wrote a brief post to explain it: https://lnkd.in/gS9dF-iz If you already use the rendered manifest pattern, you're partway there, but still tethered to templates and/or patches. The talk: https://lnkd.in/gE6XBSnt #Kubernetes #GitOps #InfrastructureAsCode
-
🔍 Case Study: When the Apache Airflow Scheduler Refused to Update

Imagine having to restart your Scheduler pod every time new DAGs are deployed. That's exactly what our client faced when their Airflow Scheduler stopped auto-updating files from its PVC, while other pods worked perfectly.

Our expert reviewed caching settings, database health, and .pyc file conflicts before tracing the issue to a known bug in the client's Airflow version. The fix? Upgrading to Airflow 2.10.4. The result: seamless DAG updates, no more manual restarts, and a much more reliable data pipeline.

👉 Check the comments for the full case study.

#ApacheAirflow #DataEngineering #DevOps #CloudNative #OpenSource #Kubernetes #Automation #Hossted
-
The recent announcement of alpha support for a changed block tracking mechanism in Kubernetes is a significant development for enhancing data management efficiency. I found it interesting that this feature aims to optimize performance for workloads that require rapid data access. As Kubernetes continues to evolve, what capabilities do you think will be most impactful for developers in the next few years?
-
🚀 Day 57 of #100DaysOfDevOps – Deploying Grafana on Kubernetes

Today's task: deploy a Grafana app in Kubernetes using a Deployment, and expose it via a NodePort Service (32000) to reach the Grafana login page.

Task breakdown:
1️⃣ Checked cluster info and nodes (kubectl cluster-info, kubectl get nodes).
2️⃣ Created a Deployment YAML for grafana-deployment-datacenter.
3️⃣ Created a Service YAML of type NodePort.
4️⃣ Applied both YAMLs with kubectl apply -f.
5️⃣ Verified Deployment and Pod status (kubectl get deployments, kubectl get pods).

Key learnings:
• Deployments provide replica management and rolling updates.
• NodePort makes the service reachable from outside the cluster, handy for quick testing.
• Keeping YAMLs modular (separate deployment and service files) improves maintainability.
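A minimal sketch of the two manifests the task describes. The deployment name and node port come from the post; the labels and the `grafana/grafana` image tag are assumptions:

```yaml
# Deployment: manages the Grafana replica and rolling updates.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: grafana-deployment-datacenter
spec:
  replicas: 1
  selector:
    matchLabels:
      app: grafana
  template:
    metadata:
      labels:
        app: grafana
    spec:
      containers:
        - name: grafana
          image: grafana/grafana:latest   # pin a specific version in real use
          ports:
            - containerPort: 3000         # Grafana's default HTTP port
---
# Service: exposes Grafana outside the cluster on node port 32000.
apiVersion: v1
kind: Service
metadata:
  name: grafana-service
spec:
  type: NodePort
  selector:
    app: grafana
  ports:
    - port: 3000
      targetPort: 3000
      nodePort: 32000
```

Apply both with `kubectl apply -f`, then browse to any node's IP on port 32000 to reach the login page.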
-
Finally! I published my article about how we automatically repair nodes in Kubernetes clusters. We currently have over 70 clusters with over 4,000 nodes. For us, node failures are routine. In the article, I described how I implemented automation that freed us from this burden. https://lnkd.in/dvQExpV3