
Latest Posts

Counting Forest Fires

If you were asked to evaluate how good crews were at fighting forest fires, what metric would you use? Would you consider it a regression on your firefighters’ part if you had more fires this year than last year? Would the size and impact of a forest fire be a measure of their success? Would you look for the cause, such as a person lighting it or an environmental factor, and act on it? Chances are that yes, that’s what you’d do.

Authors' Cut Spark Notes Edition: Jumpstart Your Observability Journey

Whether you’ve been following along with our Authors’ Cut series or doing some self-paced learning, our O’Reilly book Observability Engineering is one of the best resources for jumpstarting your observability journey. It serves as a blueprint to help you understand and map out the technical and cultural requirements of implementing observability in your organization.

3 Easy Ways to Get Started With Distributed Tracing

Not to put too fine a point on it, but we think distributed tracing gets a very bad rap for being too complicated and labor-intensive. We’re here to show you three ways you can jumpstart a distributed tracing effort, starting small and expanding as it makes sense. These examples involve only a little code and perhaps a bit of a mindset change. Starting small with distributed tracing can even be fun, because who doesn’t like getting customized results without much work?
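To give a flavor of how little code “starting small” can mean, here is a minimal sketch of manual tracing with the OpenTelemetry Python SDK. This is our illustration, not an excerpt from the post: the service name, span names, and attributes are made up, and you’d normally swap the console exporter for an OTLP exporter pointed at your tracing backend.

```python
# Minimal manual-tracing sketch with the OpenTelemetry Python SDK.
# Service name, span names, and attributes are illustrative only.
from opentelemetry import trace
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor, ConsoleSpanExporter

# Print spans to stdout for this sketch; in practice you would configure
# an OTLP exporter that sends spans to your observability backend.
provider = TracerProvider(resource=Resource.create({"service.name": "checkout"}))
provider.add_span_processor(BatchSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(provider)

tracer = trace.get_tracer(__name__)


def handle_checkout(cart_id: str, item_count: int) -> None:
    # One span per unit of work, annotated with the fields you care about.
    with tracer.start_as_current_span("handle_checkout") as span:
        span.set_attribute("cart.id", cart_id)
        span.set_attribute("cart.item_count", item_count)
        with tracer.start_as_current_span("charge_payment"):
            pass  # call your payment provider here


handle_checkout("cart-123", 3)
```

Even this tiny amount of instrumentation gives you per-request spans you can slice by your own attributes, which is the mindset shift the post is really after.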

Authors' Cut: A Sample of Sampling, and a Whole Lot of Observability at Scale

Brick by brick, block by block—if you’ve been with us throughout our Authors’ Cut blog series (and if you haven’t, you can go catch up), you’ve seen us build the case for observability from the ground up. We’ve covered structured events, the core analysis loop, and use cases for managing applications in production—and that’s just to start.
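If you haven’t met those building blocks yet, a structured event is easy to picture: one wide record per unit of work, carrying every field you can attach. A minimal sketch follows; the field names are purely illustrative, not taken from the book or the series.

```python
# Illustrative only: what a single wide, structured event for one request
# might look like. Field names here are hypothetical.
import json
import time

event = {
    "timestamp": time.time(),
    "service.name": "frontend",
    "http.method": "GET",
    "http.route": "/cart/:id",
    "http.status_code": 200,
    "duration_ms": 37.4,
    "user.id": "user-42",
    "build.id": "2022-06-01.3",
    "feature_flag.new_checkout": True,
}

# The core analysis loop then slices and groups across any of these fields
# to find which dimension correlates with an anomaly.
print(json.dumps(event, indent=2))
```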

NS1 Implements Honeycomb to Democratize Their Code and Spark Customer Joy

The line from observability to customer joy is straighter than you think. We recently learned this from NS1, a managed DNS provider and Honeycomb customer, in a panel discussion with Nate Daly, Head of Architecture at NS1, and Chris Bertinato, Software Architect at NS1.

New Honeycomb Integrations Let You Bubble Up Lurking AWS Issues

Today, we’re announcing the expansion of Honeycomb integrations with various AWS services. This update covers a much wider swath of AWS services and makes it easier to integrate your AWS stack with Honeycomb. And with our new BubbleUp enhancements, you’ll identify and debug hidden issues in your AWS stack faster than ever.

The Incident Retrospective Ground Rules

I joined Honeycomb as a Staff Site Reliability Engineer (SRE) midway through September, and it’s been a wild ride so far. One thing I was especially excited about was the opportunity to see Honeycomb’s incident retrospective process from the inside. I wasn’t disappointed! The first retrospective I took part in was for our ingestion delays incident on September 8th.

Scaling Ingest With Ingest Telemetry

With the introduction of Environments & Services, we’ve seen a dramatic increase in the creation of new datasets. These new datasets are smaller than ones created with Honeycomb Classic, where customers would typically place all of their services under a single, large dataset. This change presented some interesting scaling challenges. In this post, I’ll detail those challenges, the solution we used, and how we leveraged Honeycomb’s own telemetry to scale Honeycomb.