
Enabling Design System Observability Using Honeycomb

At Honeycomb, we’re actively growing our design system, Lattice, to ensure accessibility, optimize performance, and establish consistent design patterns across our product. One metric we use to measure Lattice is the adoption of components across the product. Adoption is about understanding how, where, and why those components are being used.

Better CloudWatch Metrics in Honeycomb with the OpenTelemetry Collector

CloudWatch metrics can be a very useful source of information for the many AWS services that don’t produce telemetry the way instrumented code does. There are also a number of useful metrics for work that isn’t tied to web requests, like the number of concurrent database requests. We use them at Honeycomb to get statistics on load balancers and RDS instances. Amazon Data Firehose can also export directly to Honeycomb, which makes getting the data in straightforward.

So, What's the Difference Between Observability and Monitoring?

Observability and monitoring are not about gathering different data: they share the same data but differ in their purpose. Monitoring is focused on notification based on predefined questions, whether that’s through dashboards people watch or push-based alerts sent to notification systems like SMS or purpose-built platforms like PagerDuty.

Generating Calculated Fields From Natural Language

If you’ve been using Honeycomb for a bit, you know that Calculated Fields (otherwise known as derived columns) are a powerful way to transform your events into a format that’s easier to query and understand. However, they use a Lisp-esque language that can be difficult to read and a pain to write. If you dislike writing Calculated Fields and want something a little easier, here’s a generative AI prompt that can generate them from natural language.
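To make that concrete, here is a minimal sketch of how such a prompt might be wired up with the OpenAI Python SDK. This is not the prompt from the article or Honeycomb tooling; the model name, prompt wording, and the example expression in the final comment are assumptions for illustration.

```python
# Sketch: turn a plain-English description into a Honeycomb Calculated Field
# expression by prompting an LLM. Assumes the OpenAI Python SDK (>= 1.0).
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPT = (
    "You translate plain-English descriptions into Honeycomb derived column "
    "expressions. Reply with the expression only, no explanation."
)

def generate_calculated_field(description: str) -> str:
    """Ask the model for a derived column expression matching the description."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # hypothetical choice; any capable model works
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": description},
        ],
    )
    return response.choices[0].message.content.strip()

# Might return something like: IF(EXISTS($error), "error", "success")
print(generate_calculated_field(
    "label events as error or success based on whether the error field is set"
))
```

As with any generated expression, you’d still want to paste the result into Honeycomb and verify it against real events before relying on it in queries.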

Does AI Help Write Better Software, or Just... More Code?

As software teams race to integrate AI into their development workflows, we need to ask ourselves: are AI-powered tools actually making software better? The latest research from DORA confirms what many engineers have long suspected, and what we at Honeycomb have said for a long time: AI tools don’t magically lead to better software. In fact, without careful implementation, AI can introduce a whole slew of challenges, including decreased productivity and unreliable code.

How I Code With LLMs These Days

I first started using AI coding assistants in early 2021, with an invite code from a friend who worked on the original GitHub Copilot team. Back then, the workflow was just single-line tab completion, but you could also guide code generation with comments and it’d try its best to implement what you wanted. Fast forward to 2025: there’s now a wide range of coding assistants packed with features.

AI: Where in the Loop Should Humans Go?

AI is everywhere, and its impressive claims are leading to rapid adoption. At this stage, I’d qualify it as charismatic technology—something that under-delivers on what it promises, but promises so much that the industry still leverages it because we believe it will eventually deliver on these claims. This is a known pattern.