Honeycomb

Jun 7, 2019   |  By Molly Stamos
Everyone wants to be more efficient: to spend less time on tedious things and more time on the things that move the needle. If you can automate those tedious things, you should. Honeycomb helps you understand how your application behaves in production by letting you iteratively ask questions of your instrumentation data, no matter how granular. Honeycomb triggers notify you when specific things happen in your system.
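Conceptually, a trigger is a saved query that Honeycomb runs on a schedule and compares against a threshold, notifying you when it’s crossed. Here is a minimal sketch of that evaluation loop, with made-up query and notification stubs (Honeycomb evaluates triggers server-side; you never run this loop yourself):

```python
import random
import time

def error_count_last_10m():
    # Stand-in for a Honeycomb query, e.g. COUNT WHERE status_code >= 500
    return random.randint(0, 150)

def notify(message):
    # Stand-in for a trigger recipient (email, Slack, PagerDuty, webhook)
    print(message)

def evaluate_trigger(run_query, threshold, interval_s=60, checks=5):
    """Run the query on a schedule; fire whenever the threshold is crossed."""
    for _ in range(checks):
        value = run_query()
        if value > threshold:
            notify(f"Trigger fired: {value} > {threshold}")
        time.sleep(interval_s)

evaluate_trigger(error_count_last_10m, threshold=100, interval_s=1)
```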
Jun 4, 2019   |  By Peter Tuhtan
Our latest product update features an intuitive home (landing) page that gives you a quick, real-time view of what’s happening right now in your production systems. Home displays commonly used queries and breakdowns, and provides a jumping-off point for exploring your production data.
May 17, 2019   |  By Liz Fong-Jones
Last week, Rachel published a guide describing the advantages of dynamic sampling. In it, we discussed varying sample rates to achieve a target collection rate overall, and having different sample rates for distinct kinds of keys. We also teased the idea of combining the two techniques to preserve the most important events and traces for debugging without drowning them out in a sea of noise.
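To make that combination concrete, here is a minimal sketch of one way to do it (our illustration, not Honeycomb’s dynsampler library): each interval, recompute per-key sample rates so that every key gets roughly an equal share of a kept-event budget derived from the overall target rate. Rare keys are kept in full, while high-volume keys are sampled more aggressively.

```python
import random
from collections import defaultdict

class DynamicSampler:
    """Recompute per-key sample rates each interval from observed traffic."""

    def __init__(self, target_rate=10):
        self.target_rate = target_rate  # aim to keep ~1 in N events overall
        self.counts = defaultdict(int)  # traffic observed this interval
        self.rates = {}                 # per-key rates from the last interval

    def end_interval(self):
        total = sum(self.counts.values())
        if total:
            # Split the kept-event budget evenly across keys, so rare
            # keys survive and hot keys absorb most of the sampling.
            budget_per_key = (total / self.target_rate) / len(self.counts)
            self.rates = {k: max(1, round(c / budget_per_key))
                          for k, c in self.counts.items()}
        self.counts.clear()

    def should_keep(self, key):
        """Return (sample_rate, keep?) for one incoming event."""
        self.counts[key] += 1
        rate = self.rates.get(key, 1)   # unseen keys default to keep-all
        return rate, random.random() < 1.0 / rate

sampler = DynamicSampler(target_rate=10)
for endpoint in ["/home"] * 990 + ["/rare"] * 10:
    sampler.should_keep(endpoint)
sampler.end_interval()
print(sampler.rates)  # {'/home': 20, '/rare': 1}
```

Whatever scheme you use, record the sample rate on each kept event so the backend can weight it back up when computing counts and percentiles.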
May 16, 2019   |  By Ben Hartshorne
Let’s not bury the lede here: we use Observability-Driven Development at Honeycomb to identify and prevent DB load issues. Like every online service, we experience the familiar cycle of database growth and load. This is not a bad thing! It’s a normal thing. Databases are easy to start with and do an excellent job of holding important data. There are many variations of this story and many ways to manage that growth, from caching to read replicas to sharding to separating different types of data, and so on.
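In that spirit, the instrumentation behind it can be as simple as wrapping each database call and sending one wide event per query. A minimal sketch using the libhoney Python SDK against a sqlite3-style connection (the write key, dataset, and field names are placeholders, not a prescribed schema):

```python
import time
import libhoney  # pip install libhoney

libhoney.init(writekey="YOUR_WRITE_KEY", dataset="db-queries")  # placeholders

def timed_query(conn, name, sql, params=()):
    """Run a query and emit one Honeycomb event describing it."""
    ev = libhoney.new_event()
    ev.add_field("query_name", name)
    start = time.time()
    try:
        rows = conn.execute(sql, params).fetchall()
        ev.add_field("rows_returned", len(rows))
        return rows
    except Exception as exc:
        ev.add_field("error", str(exc))
        raise
    finally:
        ev.add_field("duration_ms", (time.time() - start) * 1000)
        ev.send()
```

Once every query emits an event like this, “which queries are loading the database” becomes a single breakdown by query_name summed over duration_ms.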
May 9, 2019   |  By Rachel Perkins
One of the most common questions we get at Honeycomb is how to control costs while still achieving the level of observability needed to debug, troubleshoot, and understand what is happening in production. Historically, the answer from most vendors has been to aggregate your data: to offer calculated medians, means, and averages rather than the deep context you gain from access to the actual events coming from your production environment.
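As a toy illustration of what that aggregation costs (made-up latencies): once each window has been reduced to a median, the interesting event is simply gone, while raw events still answer questions you hadn’t thought to ask yet.

```python
import statistics

# Raw events from two one-minute windows (request latency, ms).
minute1 = [12, 15, 14, 13, 900]  # contains one pathological request
minute2 = [11, 12, 13, 14, 15]

# What a pre-aggregating vendor keeps: one median per window.
stored = [statistics.median(minute1), statistics.median(minute2)]
print(stored)  # [14, 13] -- the 900 ms request has vanished

# What event-level storage still lets you ask, after the fact:
events = minute1 + minute2
print(max(events))                     # 900
print([e for e in events if e > 500])  # [900] -- drill into the outlier
```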
Aug 11, 2018   |  By Honeycomb
This document discusses the history, concept, goals, and approaches to achieving observability in today’s software industry, with an eye to the future benefits and potential evolution of the software development practice as a whole.
May 20, 2019   |  By Honeycomb
In this interview with Honeycomb software engineer Ben Hartshorne, we get to see and hear valuable insights on why observability, distributed tracing, and Honeycomb help engineers gain a deeper understanding of how software behaves in all stages of development. Ben tells you how he builds software, instruments his code, and uses Honeycomb to constantly update his “mental model” of how the software really works.
Dec 18, 2018   |  By Honeycomb
Visualize your Thundra monitoring data with Honeycomb. Identifying critical issues in stateless serverless environments can be difficult; often, you are left guessing at where the problems may lie. Learn how to pinpoint critical issues in your AWS Lambda environment with Honeycomb’s deep querying and end-to-end tracing.
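At the event level, end-to-end tracing boils down to each invocation emitting a span-like event that shares a trace id with its upstream callers. Here is a rough sketch of a Lambda handler doing that with the libhoney Python SDK; the trace.* fields follow Honeycomb’s span naming conventions, while the write key, dataset, and do_work helper are placeholders:

```python
import time
import uuid
import libhoney  # pip install libhoney

libhoney.init(writekey="YOUR_WRITE_KEY", dataset="lambda-traces")  # placeholders

def do_work(event):
    return {"statusCode": 200}  # stand-in for real business logic

def handler(event, context):
    # Reuse a trace id propagated by an upstream service, if present.
    trace_id = event.get("trace_id") or str(uuid.uuid4())
    ev = libhoney.new_event()
    ev.add_field("trace.trace_id", trace_id)
    ev.add_field("trace.span_id", str(uuid.uuid4()))
    ev.add_field("name", "handler")
    ev.add_field("service_name", context.function_name)
    start = time.time()
    try:
        return do_work(event)
    except Exception as exc:
        ev.add_field("error", str(exc))
        raise
    finally:
        ev.add_field("duration_ms", (time.time() - start) * 1000)
        ev.send()
        libhoney.flush()  # Lambda can freeze before background sends finish
```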
Dec 18, 2018   |  By Honeycomb
In this Honeycomb.io demo, we see how the world’s fastest tool for visualizing, understanding, and debugging software does just that: solving a particularly hard problem, fast, with features such as Honeycomb BubbleUp.
Dec 4, 2018   |  By Honeycomb
A change to a single line of code sent the prices of thousands of products on Amazon to a penny. Taking care of customers and focusing on engineering best practices allowed a company to survive and thrive after a "make or break" event.
Dec 4, 2018   |  By Honeycomb
Once upon a time, our hosted DB provider had a terrible security incident, causing us to take down the entire product for 24 hours. This is a story about the aftermath: the downtime, the incident response, how we got back up, and how we communicated with customers.