
From Vibes to Signals: Observing Your AI Coding Workflow

Agentic coding tools like Claude Code and Codex have taken centre stage, inserting themselves into the critical path of software development. This shift has happened fast, and for most teams, visibility hasn’t caught up. Until now, we’ve been evaluating our vibe coding the same way – on vibes. You might say “this feels faster” or “that seems like a better approach”. That’s not going to scale.

What "AI-Ready Data" actually means for observability teams

Many organizations deploying AI are learning similar lessons right now: the challenge isn’t this or that AI model, it’s the data. According to Gartner, organizations will abandon 60% of AI projects because they fail to support them with AI-ready data, and 63% of organizations either lack the right data management practices to get there or aren’t sure they have them.

AI agents are only as smart as the data you feed them

AI is only as useful as the context you give it. An autonomous observability agent can unlock serious value from your telemetry, but only when the foundation is right: good telemetry, a strong data layer, and efficient access to the data. Annie Freeman and Lewis Isaac had a lot to say about this at AWS Summit London this week! #Observability #AI #AWSSummitLondon #DevOps #OpenTelemetry

DataPrime at Ingest: Fine-Grained TCO Routing with DPXL

The real economic decision for observability happens at ingest, before storage, billing, and retention choices are locked in. Until now, the logic governing that decision could only see three broad fields: application, subsystem, and severity. That just changed. TCO routing now matches on any field in the event payload, including nested keys, custom fields, and event body content, using DPXL, the DataPrime Expression Language.
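As a rough sketch of what routing on nested payload fields could look like, here are two illustrative DPXL-style conditions. The field names, values, and exact syntax are assumptions for illustration only, not taken from the product documentation:

```
# Hypothetical: send free-tier debug noise to low-priority storage
$d.tenant.tier == 'free' && $m.severity == 'Debug'

# Hypothetical: keep payment-subsystem timeout errors in high priority
$l.subsystemname == 'payments' && $d.msg.contains('timeout')
```

The point of the change is visible even in a sketch like this: the condition can reach into nested payload keys (`$d.tenant.tier`) and body content, not just application, subsystem, and severity.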

Observability is a design problem: Live Laugh Logs ep. 1 - KubeCon Amsterdam 2026

What happens when 20,000 engineers descend on Amsterdam to talk about Kubernetes and AI? Welcome to Episode 1 of Live Laugh Logs, the podcast from Annie, Lewis and Andre from the Coralogix Developer Relations team where we will get together and recap everything going on in our worlds! We had an amazing time at KubeCon in Amsterdam and had loads of insights from the talks we went to around designing observability systems, all the AI tools being created and how to observe them, and using agent-generated code.

Building Audit-Ready Observability for Digital Banking

Most observability platforms are built to answer one question: what’s broken right now. Regulators are asking a different one: what happened, exactly, and can you prove it? Digital banking operates under constant regulatory scrutiny, where frameworks like DORA, PCI-DSS, and GDPR require every incident to be fully reconstructed across systems, timelines, and access. Systems can recover quickly, but the ability to explain what happened often remains fragmented across tools and teams.

Debug frontend issues with AI: Real user monitoring meets the Coralogix MCP server

It is 2 AM. Someone on-call gets paged. Conversion rates on the checkout page dropped 30 percent in the last hour. The immediate questions are familiar. Is this a JavaScript error? A slow API call? A broken third-party script? A performance regression that never throws an exception but quietly drives users away? In most teams, answering those questions is hard not because the data is missing, but because the investigation is split across too many places.

The End of Manual Instrumentation: Scaling Observability with OTel OBI & Coralogix

Traditionally, achieving deep visibility into distributed systems required significant trade-offs in engineering time. Collecting meaningful application metrics and traces required teams to embed language-specific agents, modify source code, or manage complex library dependencies across every service.

Spending More, Seeing Less: How Indexing Limits Capital Markets Visibility

Capital markets systems don’t scale linearly. A macro event, an earnings release, or a sudden liquidity shift can double telemetry volume in seconds. In most observability platforms today, that spike means one thing: every byte gets written to a high-cost index before a single query can touch it. There’s no middle ground. You pay full indexing cost for the compliance log that no one queries for six months, the same way you pay for the execution trace you need right now.