
Modern IT and the Burden of Accountability

The leaders responsible for modern IT environments rarely talk about features first. They talk about responsibility. In conversations at Nexus Live 2025, ScienceLogic’s annual customer conference, executives and architects across healthcare, federal systems, managed services, telecom, and enterprise IT described modernization not as a tooling upgrade, but as an escalation of accountability.

Manage service tracing across hosts with Single Step Instrumentation rules

Single Step Instrumentation (SSI) simplifies Datadog Application Performance Monitoring (APM) by automatically discovering and instrumenting services across a host. For many teams, SSI is the ideal starting point because it helps them achieve full visibility with minimal setup. However, as environments grow, teams often want more control over which services get traced. Auxiliary workloads such as batch jobs and cron tasks might not require distributed tracing.
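As a minimal sketch of that kind of opt-out, assuming Datadog's per-process SSI escape hatch (the `DD_INSTRUMENT_SERVICE_WITH_APM=false` environment variable described in Datadog's SSI docs; verify against current documentation). The launcher and the job it runs are hypothetical:

```python
import os
import subprocess

def run_untraced(cmd: list[str]) -> subprocess.CompletedProcess:
    """Launch an auxiliary workload (batch job, cron task) with Single Step
    Instrumentation opted out, so the SSI injector skips this process.

    DD_INSTRUMENT_SERVICE_WITH_APM=false is Datadog's documented per-process
    opt-out for SSI (check current Datadog docs before relying on it).
    """
    env = os.environ.copy()
    env["DD_INSTRUMENT_SERVICE_WITH_APM"] = "false"
    return subprocess.run(cmd, env=env, check=True)

if __name__ == "__main__":
    # Hypothetical nightly batch job that doesn't need distributed tracing.
    run_untraced(["python", "nightly_rollup.py"])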

Route OTel data from AI apps to ClickHouse and Datadog using Observability Pipelines

As organizations continue to invest heavily in AI and build more agentic workflows, their telemetry data volumes can surge quickly, and the associated costs can become unpredictable. To regain control of their data, many AI-forward teams are turning to high-throughput, low-latency pipelines that collect and route data using tools such as OpenTelemetry (OTel) and ClickHouse. But these self-hosted solutions come with drawbacks.
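As a rough sketch of the pattern the post argues for, the application exports once to a single OTLP endpoint and leaves fan-out to the pipeline. The standard OpenTelemetry Python SDK calls are real; everything else is an assumption, including the local endpoint, the service and span names, and the idea that an Observability Pipelines Worker sits at `localhost:4317` routing onward to ClickHouse and Datadog:

```python
# Requires: opentelemetry-sdk, opentelemetry-exporter-otlp-proto-grpc
from opentelemetry import trace
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter

# The app knows about exactly one OTLP endpoint (assumed to be a local
# pipeline worker); routing, sampling, and backend choice live in the
# pipeline config, not in application code.
provider = TracerProvider(resource=Resource.create({"service.name": "agentic-workflow"}))
provider.add_span_processor(
    BatchSpanProcessor(OTLPSpanExporter(endpoint="http://localhost:4317", insecure=True))
)
trace.set_tracer_provider(provider)

tracer = trace.get_tracer("ai-app")
with tracer.start_as_current_span("llm.call") as span:
    span.set_attribute("llm.model", "example-model")  # hypothetical attribute
```

The design point is that the export target never changes when the routing does: dropping, sampling, or adding a destination is a pipeline-side edit, which is how cost control stays centralized as volumes surge.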

You're Running Agents. Your Tooling Is Still Catching Up.

Introducing GitKraken Desktop 12.0. At some point in the last year, the question shifted. It stopped being “should I use AI coding agents?” and became “how do I run more than one at a time without losing my mind?” If you’ve been there, you know what the management layer looks like. A terminal per agent. A worktree created by hand before each session.
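For anyone still doing that step by hand, here is a minimal sketch of the per-agent worktree routine that GitKraken Desktop 12.0 is presumably automating. The `git worktree add` invocation is standard Git; the repo path, branch naming scheme, and agent names are all hypothetical:

```python
import subprocess
from pathlib import Path

def worktree_for_agent(repo: Path, agent: str, base: str = "main") -> Path:
    """Create an isolated worktree and branch for one coding agent."""
    path = repo.parent / f"{repo.name}-{agent}"
    branch = f"agent/{agent}"  # assumed naming convention
    # `git worktree add -b <branch> <path> <base>` checks out a new branch
    # in its own working directory, so concurrent agents can't clobber
    # each other's uncommitted changes.
    subprocess.run(
        ["git", "-C", str(repo), "worktree", "add", "-b", branch, str(path), base],
        check=True,
    )
    return path

if __name__ == "__main__":
    for agent in ("claude", "codex"):  # hypothetical agent names
        print(worktree_for_agent(Path("./my-repo"), agent))
```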

Why post-mortem action items die

You can run the best debrief of your life. Honest timeline, blameless tone, real insights. People leave the room nodding. And then nothing happens. This is the last-mile problem of post-mortems, and it's an easy trap to fall into. When you've just been through a stressful incident, getting the service back up is the priority. Once it's over, the post-mortem itself can feel like the finish line. You've documented what happened, been honest about it, identified what went wrong. It feels like the work is done.

You Don't Need Three Pillars, You Need Single Threads

Last week was a great reminder for me about the challenges of the traditional model of observability defined by the “three pillars” of metrics, logs, and traces. One of the customers I’m currently working with is a large financial institution with a robust three-pillar implementation. Every critical application ships its telemetry to a cloud-native tool, a central tool, or both.

Cloud Cost Visibility at Scale: Why It Fails & How to Fix It

Why does your cloud cost visibility break down the moment someone spins up a Kubernetes cluster in a new region without telling anyone? You get the alert three weeks later, when the bill arrives, and by then nobody remembers which experiment justified the spend, or which team should own it. This scenario repeats constantly across platform teams managing multi-cloud environments at scale. Cloud cost visibility works fine when you have five services and one AWS account.
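One common first response is a tag-compliance sweep that surfaces unowned clusters before the bill does. Here is a sketch using boto3, not anything from the Harness post itself; the required tag keys are an assumed policy, and pagination is omitted for brevity:

```python
import boto3

REQUIRED_TAGS = {"team", "cost-center"}  # assumed tagging policy

def untagged_eks_clusters() -> list[tuple[str, str, set[str]]]:
    """Scan every enabled region for EKS clusters missing cost-allocation
    tags, so a surprise cluster surfaces in hours instead of on the bill."""
    findings = []
    regions = [r["RegionName"]
               for r in boto3.client("ec2").describe_regions()["Regions"]]
    for region in regions:
        eks = boto3.client("eks", region_name=region)
        for name in eks.list_clusters()["clusters"]:
            tags = eks.describe_cluster(name=name)["cluster"].get("tags", {})
            missing = REQUIRED_TAGS - tags.keys()
            if missing:
                findings.append((region, name, missing))
    return findings

if __name__ == "__main__":
    for region, name, missing in untagged_eks_clusters():
        print(f"{region}/{name}: missing tags {sorted(missing)}")
```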