
The latest news and information on Continuous Integration and Delivery, and related technologies.

Deploy on Friday, Ep. 139

It's Friday, which means it's time to deploy! This week we're covering two weeks of news. On the Octopus side, we have new videos on vibe deployments and proving ROI with the Value Metrics Dashboard, a new Kubernetes migration webinar, and more! In the wider ecosystem, Kubernetes 1.36 "Haru" shipped with user namespaces going GA and Ingress-NGINX officially retired. Docker launched microVM sandboxes for AI coding agents. And Google said developer loyalty to AI tools is at zero.

Dr. Argo Called - Do your custom resources need a check-up? | Argo Unpacked Ep. #26

In this episode of Dr. Argo Called, we examine the GitOps “manifest dilemma” by comparing Helm, Kro, Kustomize, and Crossplane for Kubernetes deployments, and explore what their differences mean in a GitOps-driven workflow. We also delve into the often-overlooked topic of custom resource health checks in Argo CD—why they matter, why they shouldn’t reside within Argo CD, and how they could be designed more effectively.

The 2026 software supply chain security gap

AI-generated code is now nearly universal. Enforcement is not. That gap is where your software supply chain is most exposed. Cloudsmith's CEO Glenn Weinstein, Co-Founder & CTO Lee Skillen, and VP of Product Alison Sickelka join Product Marketing Manager Meghan McGowan to unpack the 2026 State of Artifact Management report – a survey-based look at how AI development is reshaping the threat landscape, what organizations are getting wrong, and what the highest-leverage fix actually looks like.

Accelerating AI Agent Development on Google Cloud with JFrog MCP Registry

Developers building agentic AI on Google Cloud have powerful infrastructure at their fingertips: Gemini 3 for reasoning, Google’s Agent Development Kit (ADK) for orchestration, and a rapidly expanding ecosystem of Model Context Protocol (MCP) servers that connect agents to data and tools. So why are so many teams still waiting weeks to ship their first agent to production?

Shipping trustworthy code with Chunk CLI

AI coding agents are fast. They generate functions, refactor modules, and wire up boilerplate faster than any human. What they don't do by default is enforce the conventions a specific team has agreed on: the lint rules and the review patterns that senior engineers flag on every PR. A generated diff looks clean until someone runs CI or reads it carefully.

The Hidden Cost of DIY DevOps: Why Growing Companies Bring in the Experts

Companies are scaling faster than ever, but infrastructure rarely keeps up with the product. When developers take on operational work on top of everything else, it feels like a smart way to cut costs. In practice, it's one of the most expensive mistakes a growing software team can make. This article breaks down what DIY DevOps actually costs and how a structured approach changes the equation.

Cloudsmith raises $72M Series C to secure the AI software supply chain

Cloudsmith raised $72 million in Series C funding, led by TCV and Insight Partners, to build the operating system for the modern software supply chain. AI agents are writing code faster than teams can secure it. That shifts the risk calculus because more software, built faster, means more attack surface. Artifact management is the control point between every software producer and consumer, and it's where Cloudsmith sits.

Under the Hood: Engineering JFrog Premium Availability

In the modern software factory, 99.9% uptime is no longer the gold standard. A standard 99.9% SLA translates to approximately 43 minutes of unexpected downtime per month. While industry data shows that a single minute of downtime costs an average of $9,000, for large global enterprises, that figure can easily be 5x higher. At tens of thousands of dollars per minute, those 43 minutes quickly compound into a catastrophic financial and operational risk.
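The downtime figures above follow directly from the SLA arithmetic. A quick sketch (using the article's own assumptions of a 30-day month and $9,000 per minute; the function name is illustrative):

```python
# Downtime budget implied by an availability SLA, per the figures above.
# Assumes a 30-day month and the article's $9,000/minute cost estimate.

def monthly_downtime_minutes(sla: float, days: int = 30) -> float:
    """Minutes of allowed downtime per month for a given availability SLA."""
    return days * 24 * 60 * (1 - sla)

downtime = monthly_downtime_minutes(0.999)  # roughly 43.2 minutes
cost = downtime * 9_000                     # roughly $388,800 per month
print(f"{downtime:.1f} min/month -> ${cost:,.0f} at $9,000/min")
```

At the enterprise multiple the article cites (5x, or about $45,000 per minute), that same 43-minute budget approaches $2 million per month.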

Terminal dependencies for CircleCI workflows: Always run what matters

When a job fails, gets canceled, or never runs, the work that still needs to happen afterward (cleanup, notifications, teardown) has no clean way to trigger. There is no easy way to express “run this no matter what” in your pipeline config without duplicating jobs or adding fragile workaround branches. Terminal jobs change that.
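Conceptually, a terminal job gives a workflow the equivalent of a `finally` block: the teardown runs whether upstream jobs succeeded, failed, or were canceled. A minimal sketch of that semantics in Python (the job names and `run_pipeline` helper are hypothetical illustrations, not CircleCI configuration):

```python
# Illustrative sketch of "run this no matter what" semantics via try/finally.
# Job names and the run_pipeline helper are hypothetical, not CircleCI config.

def run_pipeline(jobs, results):
    """Run jobs in order, stopping on the first failure; always run teardown."""
    try:
        for name, job in jobs:
            results[name] = job()  # a job "fails" by raising an exception
    finally:
        results["teardown"] = "ran"  # terminal step: cleanup, notifications

def build():
    return "ok"

def deploy():
    raise RuntimeError("deploy failed")

outcome = {}
try:
    run_pipeline([("build", build), ("deploy", deploy)], outcome)
except RuntimeError:
    pass  # the failure still surfaces, but teardown has already run
```

Expressing this in a pipeline config, rather than duplicating the teardown job across every success and failure branch, is exactly what terminal jobs make declarative.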