Right from your IDE, find out how long your code took to execute in production. If your AI agent speaks Model Context Protocol, it can query Honeycomb for you!
Last Thursday, our /api/cart endpoint got a lot slower. But we didn't notice, because we hadn't set up a Service Level Objective to guard its reliability and responsiveness. This video shows how to identify and create that SLO in Honeycomb.io.
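In Honeycomb, an SLO is backed by a Service Level Indicator: a derived column that evaluates to true for events that meet your target. A sketch of what an SLI for that endpoint might look like, assuming hypothetical field names (`http.route`, `duration_ms`, `http.status_code`) and a 300 ms latency target:

```
AND(
  EQUALS($http.route, "/api/cart"),
  LT($duration_ms, 300),
  LT($http.status_code, 500)
)
```

Events where this expression is true count as "good"; the SLO then tracks the percentage of good events against your error budget.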
When your software integrates with Generative AI, you need great observability. You need to see everything about the interaction with the LLM. You also need to see everything around it! That's application observability, with distributed tracing.
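The core pattern is to wrap each LLM call in a span and attach the details you'll want later: model, token counts, response size. Here's a minimal stdlib-only sketch of that pattern; the recording context manager is a stand-in for a real OpenTelemetry tracer, and the attribute names are assumptions, not a fixed schema.

```python
import time
from contextlib import contextmanager

SPANS = []  # stand-in for a trace exporter


@contextmanager
def span(name, **attributes):
    """Record a timed span with attributes (a stand-in for a real tracer)."""
    start = time.monotonic()
    record = {"name": name, "attributes": dict(attributes)}
    try:
        yield record
    finally:
        record["duration_ms"] = (time.monotonic() - start) * 1000
        SPANS.append(record)


def call_llm(prompt):
    # Placeholder for the real model call.
    return "Sure, here is your summary."


# Wrap the LLM interaction in a span, recording inputs and outputs.
with span("llm.chat", **{"llm.model": "gpt-4o", "llm.prompt_tokens": 42}) as s:
    reply = call_llm("Summarize this cart")
    s["attributes"]["llm.response_length"] = len(reply)

print(SPANS[0]["name"])  # llm.chat
```

Because the LLM span is just one span in a distributed trace, you see it in context: the database query before it, the retry after it, the whole request around it.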
Charity Majors, CTO and Co-founder at Honeycomb, and Phillip Carter, Principal Product Manager at Honeycomb, recently hosted a webinar with DORA's Nathen Harvey on AI's unrealized potential. We've condensed the conversation into a 3-minute highlight reel you can watch.
Not all telemetry is created equal. Curate the data you save in Honeycomb using Honeycomb Telemetry Pipeline Manager. Then restore it later if you change your mind!
How can we send less data without losing information? Sampling drops redundant, look-alike distributed traces while keeping the interesting ones. See how this works with Honeycomb and Refinery.
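One building block behind trace sampling is a deterministic keep/drop decision: hash the trace ID, so every span in a trace lands on the same side of the decision. A minimal sketch of that idea (this illustrates the concept, not Refinery's actual implementation, which layers dynamic and rules-based sampling on top to keep errors and other interesting traces):

```python
import hashlib


def keep_trace(trace_id: str, sample_rate: int) -> bool:
    """Deterministic head sampling: hash the trace ID so that every span
    in a trace gets the same decision, keeping ~1 in sample_rate traces."""
    digest = hashlib.sha256(trace_id.encode()).digest()
    bucket = int.from_bytes(digest[:8], "big") % sample_rate
    return bucket == 0


# All spans sharing a trace ID agree on the decision.
assert keep_trace("abc123", 10) == keep_trace("abc123", 10)

# Across many traces, roughly 1 in 10 survive at sample_rate=10.
kept = sum(keep_trace(f"trace-{i}", 10) for i in range(10_000))
print(kept)  # close to 1,000
```

Because the decision is a pure function of the trace ID, no coordination is needed between services: they all keep or drop the same traces.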
Honeycomb is an observability platform. What is special about it? Besides first-class support for OpenTelemetry, Honeycomb works with your existing data, especially logs. In this video, experience what working in Honeycomb is like. See you at Honeycomb.io!
Honeycomb released a new feature today: temporary calculated fields. Check conditions, do math, and build complex expressions right in your query. Honeycomb evaluates it all at runtime. Then save the field for posterity if you choose!
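For example, a calculated field that buckets requests by latency might look like this (the field name `duration_ms` and the 500 ms threshold are assumptions for illustration):

```
IF(GT($duration_ms, 500), "slow", "fast")
```

Group your query by this field and you can compare slow and fast requests side by side, without changing any instrumentation.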
With Honeycomb, you can use traces and wide attributes to find out who is spiking your cloud costs. Simply add attributes to your Lambdas and other serverless traces, use triggers to alert your team, and run Honeycomb Queries. This short video shows you a simple query flow.
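The query flow might look like this in Honeycomb's query builder, assuming you've added a hypothetical `app.customer_id` attribute to your Lambda spans (the service name here is made up too):

```
VISUALIZE  SUM(duration_ms)
WHERE      service.name = checkout-lambda
GROUP BY   app.customer_id
ORDER BY   SUM(duration_ms) desc
```

The top of the GROUP BY results points straight at the customer or workload driving the spend.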