  |  By Midge Pickett
Two things happen when engineers first connect the Honeycomb MCP to their AI assistant. The first is the blank page problem. The Honeycomb UI gives you something to react to: a heatmap, a query builder, a trace to click into. An AI assistant gives you a cursor and nothing else. When you don't know where to start, that's a hard place to be. The second shows up right after you get past the first one. You ask a question, you get a confident-sounding answer, and you're not sure whether to trust it.
  |  By Austin Parker
One early spring morning in 1535, the residents of Stockholm awoke to a most curious sight. Six suns lit up the sky, connected by bright halos, as immortalized in Vädersolstavlan, seen here. Today, we recognize these atmospheric effects as parhelia (also known as ‘sun dogs’)—an illusion caused by light refracting off crystalline formations in the atmosphere.
  |  By Mike Goldsmith
Real production data tells the story better than I can. Juraci Paixão Kröhling, a friend and fellow observability practitioner at OllyGarden, recently shared an example from an anonymized production environment: 1,830 occurrences of http.url and 23,984 occurrences of url.full in the same dataset. Both attributes describe the same thing. Both are actively being written to the same backend at the same time.
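That kind of semantic-convention drift is easy to detect mechanically. As a minimal sketch (the span shape and helper name here are hypothetical, not from any particular exporter), you can count attribute keys across spans and flag old/new OpenTelemetry name pairs that are being written to the same dataset:

```python
# Hypothetical sketch: scan span attributes for semantic-convention
# duplicates like http.url vs. url.full, which describe the same thing.
from collections import Counter

# Old/new OpenTelemetry attribute names for the same concept.
# (http.url -> url.full is the rename discussed above; the other
# pairs are further examples of the same HTTP semconv migration.)
DUPLICATE_KEYS = [
    ("http.url", "url.full"),
    ("http.method", "http.request.method"),
    ("http.status_code", "http.response.status_code"),
]

def find_convention_drift(spans):
    """Count occurrences of each attribute key and report any
    old/new pairs that coexist in the same batch of spans."""
    counts = Counter(key for span in spans for key in span.get("attributes", {}))
    drift = []
    for old, new in DUPLICATE_KEYS:
        if counts[old] and counts[new]:
            drift.append((old, counts[old], new, counts[new]))
    return drift

spans = [
    {"attributes": {"http.url": "https://example.com/a"}},
    {"attributes": {"url.full": "https://example.com/b"}},
    {"attributes": {"url.full": "https://example.com/c"}},
]
print(find_convention_drift(spans))
# → [('http.url', 1, 'url.full', 2)]
```

Run against a sample of exported spans, a report like this makes the 1,830-vs-23,984 split visible before it fragments your queries.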
  |  By Ken Rimple
Are you writing agentic applications but aren’t sure what the agents are doing? Finding out too late that you've blown the budget on expensive models? Not sure where the agents are failing, and feeling a loss of control? Could they do better? Observability gives you the visibility you need to get the job done. Sending telemetry to Honeycomb shows you what your agents are actually doing.
  |  By Erwin van der Koogh
Last week was a great reminder for me about the challenges of the traditional model of observability defined by the “three pillars” of metrics, logs, and traces. One of the customers I’m currently working with is a large financial institution that has a robust three-pillar implementation. Every critical application ships its telemetry to a cloud-native tool, a central tool, or both.
  |  By Rox Williams
Over the last three months, we’ve been exploring what changes about software development and observability with AI, and what doesn’t. Our conclusion: these five principles will remain true, even when 90% of the code is AI-driven. The agentic AI space is moving fast. Models are improving, context windows are expanding, and the ways people build and operate agents are changing so fast that any thoughts we share could feel dated by the time you read this.
  |  By Alex Boten
Agentic workloads thrive with precision tooling. Just like developers, they need the rich context, high cardinality, and fast feedback loops that allow them to ask exploratory, open-ended questions of their code. But instrumentation is costly, and from the dawn of software, developers have tried to do as much as possible with the fewest resources.
  |  By Austin Parker
On April 1st, I joined Akshay Utture from Augment Code for a webinar on how AI agents use production feedback to improve code.
  |  By Douglas Soo
If you’re like everyone else who works in software development, it’s a good bet that almost every single thing that you thought you knew about your business and engineering has changed as a result of the advent of modern LLMs. How should you respond to these changes? How should you change how you and your team develop software?
  |  By Ken Rimple
The agent era is here. Engineering teams are shipping AI-powered products, deploying multi-agent systems, and trying to figure out what observability even means for non-deterministic systems.
  |  By Honeycomb
See how Honeycomb uses AI in our built-in assistant, Canvas. Then see how your agent can use Honeycomb with our MCP. Both can get from a vague question to the root cause of a latency spike in a few minutes, and the agent with MCP can even fix it!
  |  By Honeycomb
In this video, we take a tour through Honeycomb's Frontend Observability offerings for Web and Mobile. We see how the launchpads can help spot performance issues, how errors that occur in the frontend can be traced all the way to their cause in backend services with the error investigations feature, and how easy it is to find differences between traces across various devices.
  |  By Honeycomb
Empathy is one of the superpowers of modern teams, especially when building tools that interact with humans. This talk by Kesha Mykhailov tells the story of Fin, Intercom's Customer Support agent, and how the team transformed their approach to building it.
  |  By Honeycomb
You can give AI agents everywhere fingers & eyes into your tool or service, by implementing an MCP (Model Context Protocol) server. It’s a great idea! It’s also a new kind of design and engineering. Jessica describes how it’s different from implementing an API or a GUI, and why it’s more exciting than either.
  |  By Honeycomb
Did you miss Honeycomb's Observability Day San Francisco? Here are some highlights of the day.
  |  By Honeycomb
Canvas is an AI-guided workspace inside Honeycomb that combines an AI assistant with an interactive notebook for visualizing query results and traces. You can ask a natural language question about your data and Canvas will immediately start exploring your traces, through multiple queries and other tools, to find the right next steps. Instead of having to write each query yourself, Canvas automatically proposes relevant queries, comparisons, and visualizations that explain why an SLO fired or what changed after a deploy.
  |  By Honeycomb
Modern teams face a persistent challenge: knowing when something goes wrong before their customers do. With architectures sprawling across dozens or hundreds of services, creating comprehensive alerting becomes an overwhelming task. You're left playing whack-a-mole with manual alert configurations, often missing critical issues or drowning in false positives. Today, we're excited to announce our solution to this challenge: Anomaly Detection (currently in alpha), Honeycomb's proactive approach to understanding and acting on service health.
  |  By Honeycomb
In the months since we launched our public beta, we’ve been hard at work making Honeycomb MCP more useful and capable for agents and human operators alike. Our goal with this project has been, from the start, to allow AI to engage in the same kind of investigatory loops that we guide users towards. Many of the new features are designed expressly with this in mind, the most exciting of which is BubbleUp.
  |  By Honeycomb
Did you know you can define a calculated field in your Honeycomb queries? You can, and with the power of Honeycomb AI you can ask it to write the calculated field definition for you. Find out how in this short video.
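For a flavor of what a calculated field definition looks like, here is a sketch that buckets request latency into coarse categories (the `$duration_ms` column name and exact thresholds are hypothetical; check Honeycomb's derived column reference for the supported functions):

```
IF(GTE($duration_ms, 1000), "slow", IF(GTE($duration_ms, 250), "ok", "fast"))
```

Asking Honeycomb AI for "a field that labels requests slow, ok, or fast by duration" should produce an expression along these lines, which you can then group by in a query.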
  |  By Honeycomb
You can use your mobile tools to debug errors, but are you really looking at the root cause? With end-to-end observability, powered by Honeycomb's Mobile Android and iOS SDKs, you can see everything! We'll show you how to start from a mobile launchpad, view the errors, select a trace, and find that root cause.
  |  By Honeycomb
Honeycomb is an event-based observability tool, but you can, and should, use metrics alongside your events. Fortunately, Honeycomb can analyze both types of data at the same time. When maturing from metrics-based application monitoring to an observability-based development practice, there are considerations that can make the transformation easier for you and your team.
  |  By Honeycomb
Evaluating observability tools can be a daunting task when you're unfamiliar with key considerations and possibilities. This guide steps through various capabilities for observability tooling and why they matter.
  |  By Honeycomb
This document discusses the history, concept, goals, and approaches to achieving observability in today's software industry, with an eye to the future benefits and potential evolution of the software development practice as a whole.

Honeycomb is a tool for introspecting and interrogating your production systems. We can gather data from any source—from your clients (mobile, IoT, browsers), vendored software, or your own code. Single-node debugging tools miss crucial details in a world where infrastructure is dynamic and ephemeral. Honeycomb is a new type of tool, designed and evolved to meet the real needs of platforms, microservices, serverless apps, and complex systems.

Honeycomb provides full stack observability—designed for high cardinality data and collaborative problem solving, enabling engineers to deeply understand and debug production software together. Founded on the experience of debugging problems at the scale of millions of apps serving tens of millions of users, we empower every engineer to instrument and query the behavior of their system.