Honeycomb Service Level Objectives (SLOs) can notify your team when one of your error budgets is being exhausted. See how the SLO view gets you from zero information to identification in one click!
As software teams race to integrate AI into their development workflows, we need to ask ourselves: are AI-powered tools actually making software better? The latest research from DORA confirms what many engineers have long suspected, and what we at Honeycomb have said for a long time: AI tools don’t magically lead to better software. In fact, without careful implementation, AI can introduce a whole slew of challenges, including decreased productivity and unreliable code.
I first started using AI coding assistants in early 2021, with an invite code from a friend who worked on the original GitHub Copilot team. Back then, the workflow was just single-line tab completion, but you could also guide code generation with comments and it’d try its best to implement what you wanted. Fast forward to 2025: there’s now a wide range of coding assistants packed with features.
AI is everywhere, and its impressive claims are leading to rapid adoption. At this stage, I’d qualify it as charismatic technology—something that under-delivers on what it promises, but promises so much that the industry still leverages it because we believe it will eventually deliver on these claims. This is a known pattern.
OpenTelemetry (often abbreviated as OTel) is the gold standard observability framework, allowing users to collect, process, and export telemetry data from their systems. OpenTelemetry’s framework is organized into distinct signals, each offering a different aspect of observability. Among these signals, OpenTelemetry metrics are crucial in helping engineers understand their systems.
OpenTelemetry is a big, big project. It’s so big, in fact, that it can be hard to know what part you’re talking about when you’re talking about it! One particular critique I’ve seen going around recently, though, is about how OpenTelemetry is just ‘three pillars’ all over again. Reader, this could not be further from the truth, and I want to spend some time on why.
One of the main pieces of advice about Service Level Objectives (SLOs) is that they should focus on the user experience. Invariably, this leads to people further down the stack asking, “But how do I make my work fit the users?”—to which the answer is to redefine what we mean by “user.” In the end, a user is anyone who uses whatever it is you’re measuring.
Are you attempting to connect Honeycomb to a standalone (not hosted with Vercel) Next.js application? Most of the Next.js OpenTelemetry samples in the wild show how to connect Next.js to Vercel’s observability solution when hosting on their platform. But what if you’re hosting your own standalone Next.js server on Node.js? This blog post will get you started ingesting your Next.js application’s telemetry into Honeycomb.
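As a rough sketch of the setup the post walks through: a standalone Next.js server can direct the standard OTLP exporter environment variables at Honeycomb’s API endpoint (the API key and service name below are placeholders you’d replace with your own):

```shell
# Point the OTLP exporter at Honeycomb instead of Vercel's platform.
export OTEL_EXPORTER_OTLP_ENDPOINT="https://api.honeycomb.io"
# Honeycomb authenticates via the x-honeycomb-team header.
export OTEL_EXPORTER_OTLP_HEADERS="x-honeycomb-team=YOUR_API_KEY"
# Sets the service name your telemetry appears under in Honeycomb.
export OTEL_SERVICE_NAME="my-nextjs-app"
```

These are the standard OpenTelemetry environment variables, so they apply regardless of which Node.js instrumentation package your Next.js app uses.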
Here at Honeycomb, we emphasize that organizations are sociotechnical systems. At a high level, that means that “wet-brained” people and the stuff they do are irreducible to “dry-brained” computations. That cashes out as the inability to ultimately remove or replace people in organizations with computers, in spite of what artificial general intelligence (AGI) ideologues would have you believe.
With more and more people adopting OpenTelemetry, and specifically the tracing signal, I’ve seen an uptick in people wanting to add the entire request and response body as an attribute. This isn’t ideal, just as it wasn’t when people logged the body in text logs. In this blog post, I’ll explain why this is a bad idea, what the pitfalls are, and more importantly, what you should do instead.
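To preview the alternative in one sketch (the field names and size cap here are hypothetical, not a prescribed schema): rather than attaching the raw body to a span, parse it and record only a bounded allowlist of fields as attributes:

```python
import json

MAX_ATTR_LEN = 512  # cap per-attribute size; tune to your backend's limits
ALLOWED_FIELDS = {"order_id", "customer_tier", "item_count"}  # hypothetical allowlist

def body_to_attributes(raw_body: bytes) -> dict:
    """Extract a small, bounded set of span attributes from a request body
    instead of attaching the entire payload."""
    try:
        payload = json.loads(raw_body)
    except ValueError:
        return {"request.body.parse_error": True}
    if not isinstance(payload, dict):
        return {"request.body.parse_error": True}
    attrs = {}
    # Only copy allowlisted keys, truncating each value to a safe length.
    for key in ALLOWED_FIELDS & payload.keys():
        attrs[f"request.body.{key}"] = str(payload[key])[:MAX_ATTR_LEN]
    # Record the size so you can still spot oversized payloads.
    attrs["request.body.size_bytes"] = len(raw_body)
    return attrs
```

The resulting dict can be passed to a span (e.g. `span.set_attributes(attrs)`), keeping cardinality and payload size under control while preserving the fields you actually query on.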