
The latest News and Information on Log Management, Log Analytics and related technologies.

AI SRE Update: Your Feedback Shaped Our Latest Release

A note from Lauren Nagel, Mezmo's VP of Product: At Mezmo, we believe the best observability tools aren't just built for users; they're built with them. Since the launch of Mezmo's AI SRE agent, we've listened to and learned from our customers. The feedback and insights have been invaluable in helping our teams refine and enhance the experience. Today, we're excited to share our latest release, packed with improvements and powerful new capabilities that make our AI SRE even faster and more intuitive.

Elevating global operations: Mastering multi-cluster Elastic deployments with Fleet

In today's global enterprises, distributed infrastructure is the norm, not the exception. Organizations operate across continents, driven by customer proximity and regulatory requirements. For the Elastic Stack, this reality often translates into a multi-cluster deployment model, where data is collected and stored in multiple geographically dispersed Elasticsearch clusters. But why adopt this complexity? The decision to decentralize data storage is generally driven by three critical factors.

Cribl Search Pack for Outlook Email Activity

Email is still mission-critical, but most teams have very little visibility into what’s actually happening behind the scenes. In this video, I give a quick walkthrough of an inbox intelligence dashboard built on Cribl Search. It shows email volume, delivery health, and unusual activity at a glance, without digging through raw logs (unless, of course, you like doing that).

Your Easiest 2026 Resolution: Simplify the Collection Layer and Move to OTel Without the Agent Sprawl

This is blog 2 in our New Year, New Resolution Series on OTel migrations. Read the first post, "New Year, New Telemetry: Resolve to Stop Breaking Dashboards", here. Most New Year’s resolutions fail because they require a "big bang" change. If your 2026 mandate is to migrate to OpenTelemetry (OTel), the traditional approach is the definition of friction.
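Consolidating the collection layer typically means replacing a sprawl of per-vendor agents with a single OpenTelemetry Collector per host. As a rough, hypothetical sketch (the endpoints and backend exporter below are placeholders, not anything the post specifies), one Collector can receive OTLP and route logs, metrics, and traces through shared pipelines:

```yaml
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317
      http:
        endpoint: 0.0.0.0:4318

processors:
  batch: {}

exporters:
  otlphttp:
    endpoint: https://telemetry.example.com   # placeholder backend

service:
  pipelines:
    logs:
      receivers: [otlp]
      processors: [batch]
      exporters: [otlphttp]
    metrics:
      receivers: [otlp]
      processors: [batch]
      exporters: [otlphttp]
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [otlphttp]
```

Because every signal flows through the same receiver and processor chain, swapping a backend becomes a one-line exporter change rather than a new agent rollout.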

Logging in React Native with Sentry

Logs are often the first place dev teams look when they investigate an issue. But logs are frequently added as an afterthought, and developers struggle to balance logging too much against logging too little. As a seasoned developer, you may remember a time when you were asked to investigate an issue and then handed a 200 MB plaintext log file. Three hours and four Python scripts later, you'd realize the problem was in a different component.
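One common answer to the too-much-vs-too-little dilemma is the breadcrumb pattern that Sentry popularized: log freely into a small ring buffer, and only ship that recent context when an error is actually reported. This is a minimal sketch of the idea, not Sentry's actual API; the class and method names are invented for illustration.

```typescript
// Minimal breadcrumb buffer: cheap to write to, and only the most
// recent entries survive, so logs stay small until an error needs them.
class BreadcrumbLog {
  private buf: string[] = [];

  constructor(private capacity: number = 20) {}

  // Record a breadcrumb; the oldest entry falls off once at capacity.
  add(message: string): void {
    this.buf.push(message);
    if (this.buf.length > this.capacity) {
      this.buf.shift();
    }
  }

  // On failure, attach the recent context alongside the error itself.
  report(err: Error): { error: string; breadcrumbs: string[] } {
    return { error: err.message, breadcrumbs: [...this.buf] };
  }
}
```

In the real SDK this corresponds roughly to `Sentry.addBreadcrumb(...)`, with breadcrumbs attached automatically to captured events instead of via an explicit `report` call.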

Not everything that breaks is an error: a Logs and Next.js story

Stack traces are great, but they only tell you what broke. They rarely tell you why. When an exception fires, you get a snapshot of the moment things went sideways, but the context leading up to that moment? Gone. That's where logs come in. A well-placed log can be the difference between hours of head-scratching and a five-minute fix. Let me show you what I mean with a real bug I encountered recently.
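To make "a well-placed log" concrete, here is a hypothetical sketch (the `parseAmount` helper is invented for illustration, not from the post): logging the raw input and its type before parsing, so that when parsing fails downstream you can see what the client actually sent rather than just where the exception fired.

```typescript
type ParseResult =
  | { ok: true; amount: number }
  | { ok: false; reason: string };

function parseAmount(raw: unknown): ParseResult {
  // The well-placed log: capture the input and its type up front.
  // This is the context a stack trace alone would never give you.
  console.log(JSON.stringify({ msg: "parseAmount input", raw, type: typeof raw }));

  const n =
    typeof raw === "number" ? raw :
    typeof raw === "string" ? Number(raw) :
    NaN;

  if (!Number.isFinite(n)) {
    return { ok: false, reason: `expected a number, got ${typeof raw}` };
  }
  return { ok: true, amount: n };
}
```

When a browser unexpectedly sends `null` or a localized string here, that one structured line tells you so immediately, instead of leaving you to reconstruct the request from a stack trace.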

Bindplane + Oodle.ai: AI-Native Observability Meets AI-Driven Telemetry Pipelines

Today, we’re excited to announce a new integration between Bindplane and Oodle.ai — combining an AI-driven, OpenTelemetry-native telemetry pipeline with an AI-native observability platform built for extreme scale. With Bindplane acting as the control plane for telemetry and Oodle.ai providing AI-powered analysis across logs, metrics, and traces, you get a single, intelligent, vendor-neutral pipeline from raw telemetry to actionable insight.

How to debug a Next.js production bug with Logs and Sentry

Stack traces tell you what broke. They rarely tell you why. In this video, Serge walks through a real Next.js production bug that only affected Firefox and Safari. The error showed up clearly in Sentry, but the stack trace alone wasn’t enough to explain what was going wrong. The missing piece turned out to be logs. You’ll see how adding logs to a Next.js API route exposed unexpected request data, how those logs connected back to traces, and how that context made the root cause obvious and easy to fix.