
How I Code With LLMs These Days

I first started using AI coding assistants in early 2021, with an invite code from a friend who worked on the original GitHub Copilot team. Back then, the workflow was just single-line tab completion, but you could also guide code generation with comments and it’d try its best to implement what you wanted. Fast forward to 2025: there’s now a wide range of coding assistants packed with features.

AI: Where in the Loop Should Humans Go?

AI is everywhere, and its impressive claims are leading to rapid adoption. At this stage, I’d qualify it as charismatic technology—something that under-delivers on what it promises, but promises so much that the industry still leverages it because we believe it will eventually deliver on these claims. This is a known pattern.

OpenTelemetry Metrics Explained: A Guide for Engineers

OpenTelemetry (often abbreviated as OTel) is the gold standard observability framework, allowing users to collect, process, and export telemetry data from their systems. OpenTelemetry’s framework is organized into distinct signals, each offering an aspect of observability. Among these signals, OpenTelemetry metrics are crucial in helping engineers understand their systems.

OpenTelemetry Is Not "Three Pillars"

OpenTelemetry is a big, big project. It’s so big, in fact, that it can be hard to know what part you’re talking about when you’re talking about it! One particular critique I’ve seen going around recently, though, is about how OpenTelemetry is just ‘three pillars’ all over again. Reader, this could not be further from the truth, and I want to spend some time on why.

Slicing Up—and Iterating on—SLOs

One of the main pieces of advice about Service Level Objectives (SLOs) is that they should focus on the user experience. Invariably, this leads to people further down the stack asking, “But how do I make my work fit the users?”—to which the answer is to redefine what we mean by “user.” In the end, a user is anyone who uses whatever it is you’re measuring.

Wiring Up a Next.js Self-Hosted Application to Honeycomb

Are you attempting to connect Honeycomb to a standalone (not hosted with Vercel) Next.js application? Most of the Next.js OpenTelemetry samples in the wild show how to connect Next.js to Vercel’s observability solution when hosting on their platform. But what if you’re hosting your own standalone Next.js server on Node.js? This blog post will get you started ingesting your Next.js application’s telemetry into Honeycomb.
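For a standalone Node.js deployment, one minimal starting point is the standard OpenTelemetry exporter environment variables, which the Node SDK reads at startup. This is a sketch, not the post’s full wiring: the service name is illustrative, and the API key placeholder is yours to fill in.

```shell
# Standard OpenTelemetry exporter settings, read by the OTel Node SDK at startup.
export OTEL_SERVICE_NAME="my-nextjs-app"                      # illustrative service name
export OTEL_EXPORTER_OTLP_ENDPOINT="https://api.honeycomb.io" # Honeycomb's OTLP endpoint
export OTEL_EXPORTER_OTLP_HEADERS="x-honeycomb-team=YOUR_API_KEY"
```

With these set, any OTLP-speaking SDK in the Node.js process will export to Honeycomb without code changes to the exporter configuration.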

Preempting Problems in a Sociotechnical System

Here at Honeycomb, we emphasize that organizations are sociotechnical systems. At a high level, that means that “wet-brained” people and the stuff they do are irreducible to “dry-brained” computations. That cashes out as the inability to ultimately remove or replace people in organizations with computers, in spite of what artificial general intelligence (AGI) ideologues would have you believe.

Stop Logging the Request Body!

With more and more people adopting OpenTelemetry and specifically using the tracing signal, I’ve seen an uptick in people wanting to add the entire request and response body as an attribute. This isn’t ideal, just as it wasn’t when people were logging the body as text logs. In this blog post, I’ll explain why this is a bad idea, what the pitfalls are, and more importantly, what you should do instead.
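A minimal sketch of the alternative, not taken from the post: instead of attaching the whole body to a span, extract a small allowlist of safe, bounded fields. The field and attribute names here are illustrative, not a standard.

```python
import json

# Allowlist of body fields that are safe and useful as span attributes.
# These names are illustrative, not a convention.
SAFE_FIELDS = {"order_id", "item_count", "currency"}
MAX_ATTR_LEN = 200  # guard against oversized attribute values

def body_to_attributes(raw_body: str) -> dict:
    """Extract a small, bounded set of span attributes from a request body
    instead of attaching the entire payload."""
    try:
        body = json.loads(raw_body)
    except json.JSONDecodeError:
        return {"request.body.parse_error": True}
    attrs = {}
    for field in SAFE_FIELDS & body.keys():
        attrs[f"request.body.{field}"] = str(body[field])[:MAX_ATTR_LEN]
    return attrs

# Sensitive fields like "password" are never attached; only allowlisted
# fields survive, each truncated to a bounded length.
attrs = body_to_attributes('{"order_id": "A-123", "item_count": 3, "password": "hunter2"}')
```

The resulting dict can be passed to a span attribute-setting call; the point is that what reaches your telemetry backend is a deliberate, bounded selection rather than an arbitrary payload.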

Frontend Monitoring: Deliver Seamless and Performant User Experiences

88% of online consumers are less likely to return to a site after a bad user experience. This means that addressing frontend issues such as slow load times, broken features, and unresponsive elements is crucial. Frontend monitoring helps development and IT teams proactively catch and resolve these issues to improve their user experience.

Why Observability 2.0 Is Such a Gamechanger

One of the hardest parts of my job is getting people to appreciate just how big a difference Honeycomb/observability 2.0 makes compared to their current way of working. It’s not a small step up or a linear improvement. Rather, it’s an entire step change in the way that you write, deploy, and operate software for your customers.