Always Bee Tracing
If you are running a distributed system and have reached a point of scale where you’re running 5+ services, you are more than likely experiencing difficulties when troubleshooting. Does the following sound somewhat familiar?
In just a few days, we’ll all be embarking on the new year, kicking its tires and planning our conquests for the months to come. This holiday in-between week, while the days are just barely beginning to get a little longer, seems like a good time to look back on the events of the past year and remember some of the good times we’ve shared. So join me for a little retrospective fun.
At Honeycomb, we are frequently asked how we compare to what else is out there. Do these other tools offer observability? Do I need them all? What’s important? Metrics? Logs? What’s the best way to monitor application performance?
What should one pay for observability? How much observability is enough? How much is too much, or is there such a thing? Is it better to pay for one product that claims (dubiously) to do everything, or twenty products that are each optimized to do a different part of the problem super well? It’s almost enough to make a busy engineer say “Screw it, I’m spinning up Nagios”. (Hey, I said almost.)
Happy December! Back in October, we cohosted a SPOOKY HALLOWEEN meetup with our pals at LaunchDarkly about testing in production. Here’s a review of the talks we saw!
This blog miniseries talks about how to think about doing data analysis the Honeycomb way. In this episode, we announce an exciting new feature, currently in beta. Honestly, we’re so excited to get this out the door that we haven’t settled on a final name, so for now we’re going with “Codename: Drilldown.”
In this blog miniseries, I’m talking about how to think about doing data analysis, the Honeycomb way. In Part I, I talked about how heatmaps help us understand how data analysis works. In Part II, I’d like to broaden the perspective to include the subject of actually analyzing data.
Honeycomb has always been about flexibility, power, and speed — and about working with your data in a way that other vendors say is impossible. But now Honeycomb also makes it easier than ever to orient yourself and start getting value out of your data right away.
You probably know that Honeycomb is the most flexible observability tool around. Its powerful high-cardinality search makes working with real raw data quick and easy. But as you may have learned through hard experience, fetching those dots can still be quite a challenge.
In this blog miniseries, I’d like to talk about how to think about doing data analysis “the Honeycomb way.” Welcome to part 1, where I cover what a heatmap is—and how using them can really level up your ability to understand what’s going on with distributed software. Heatmaps are a vital tool for software owners: if you’re going to look at a lot of data, then you need to be able to summarize it without losing detail.
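To make the idea concrete, here is a minimal sketch of what a heatmap is under the hood: a 2D histogram that buckets many events by time and by value, so the full distribution stays visible. This uses NumPy on synthetic latency data; the field names and bin counts are illustrative, not Honeycomb’s implementation.

```python
import numpy as np

# Synthetic events: 10,000 requests over a 60-second window,
# with log-normally distributed latencies (a typical skewed shape).
rng = np.random.default_rng(seed=42)
timestamps = rng.uniform(0, 60, size=10_000)                  # seconds
latencies = rng.lognormal(mean=3.0, sigma=0.5, size=10_000)   # milliseconds

# Bucket events into a grid: 12 time bins x 20 latency bins.
# Each cell counts the events that fall into that (time, latency) bucket.
counts, time_edges, latency_edges = np.histogram2d(
    timestamps, latencies, bins=(12, 20)
)

# Rendering `counts` as colored cells gives the heatmap: detail per
# bucket is preserved, yet 10,000 points collapse to 240 cells.
print(counts.shape)        # (12, 20)
print(int(counts.sum()))   # 10000 — every event lands in exactly one cell
```

The point of the summary is that no averaging happens: a bimodal latency distribution, which a mean or even a percentile line would hide, shows up as two distinct bands of dense cells.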