
SigNoz Community Edition now available with SSO (Google OAuth) and API Keys

One of the biggest asks from our open-source community has been to open-source our SSO support, which was previously part of our enterprise offering. Today, we’re thrilled to announce that support for SSO with Google OAuth is now part of our latest release, v0.85.0. Not only that, we've also shipped another highly anticipated feature for our Community Edition: API Keys for comprehensive programmatic access to SigNoz.

Shedding Light on Kafka's Black Box Problem (with OpenTelemetry)

"All language is but a poor translation." — Franz Kafka. This quote reminds me of the time when I used to stare at metrics from Apache Kafka topics, trying to figure out what was causing the huge consumer lags, and manually deleting messages in certain partitions to get rid of polluted ones. Yep, pretty lost in translation. I wasn’t aware of the power of observability for a Kafka producer-topic-consumer system.

SigNoz Launch Week 4.0 - OpenTelemetry Powered Innovations That Redefine Observability

OpenTelemetry is rapidly becoming the backbone of modern observability, but true innovation happens when you build directly on its latest capabilities. For Launch Week 4.0, we’re excited to showcase five powerful features, each crafted to help you get more value from your telemetry, make debugging faster, and deliver a unified observability experience. Here’s a quick look at what’s new, why it matters, and how SigNoz is pushing the boundaries of what’s possible with OTel.

Tracing Funnels - Define funnels between spans in your distributed systems

Distributed tracing has long been the go-to for understanding the performance of microservices and asynchronous systems. But as systems grow in complexity, simply viewing individual traces and spans isn’t enough; teams need to answer questions that individual traces alone can’t. SigNoz Tracing Funnels is here to change that, bringing the clarity of product-analytics-style funnel analysis to backend traces, in a way that hasn’t been available before.

CI/CD Observability Powered by OpenTelemetry

Modern engineering teams spend a lot of time and resources setting up monitoring for their production systems: tracking uptime, catching errors, and responding to incidents before customers ever notice. But what about the journey before code reaches production? For most teams, observing the CI/CD pipeline is either an afterthought or overlooked entirely. We recognize its importance, but do we truly understand how well our CI/CD process is functioning?

Third-party API Monitoring powered by OpenTelemetry semantics

In today’s cloud-native world, third-party APIs are everywhere: payments, notifications, search, AI, analytics. Modern applications are built on a web of external services. But what happens when one of those APIs slows down, starts throwing errors, or gets rate-limited? Suddenly, your users are facing issues, and you’re left scrambling for answers.

Metrics Explorer - Search, Query, and Analyze all your Metrics in one place

If you’ve ever found yourself staring at a dashboard dropdown, wondering, “What metrics am I even sending to my observability tool?”, you’re not alone. For most engineering teams, answering even the most basic telemetry questions is about as hard as catching a Mewtwo: frustratingly elusive and way more complicated than it should be. We built Metrics Explorer to finally answer these questions instantly, and in one place.

Deep Temporal Observability - Correlate Metrics with Logs & Traces

Temporal lets you orchestrate complex, reliable workflows, but when something breaks or slows down, the built-in dashboards only give you a list of events and some basic filters. You can see what happened and filter by attributes like workflow type or namespace, but you can't drill deeper. There's no way to jump straight from a metric spike to the exact trace or log line you care about.

Optimising OpenTelemetry Pipelines to Cut Observability Costs and Data Noise

Oversized bills from observability vendors and heaps of low-signal telemetry data have become a very common problem. This often leaves teams having to explain the lack of clear ROI despite growing costs. If you’re using OpenTelemetry to record your observability data, there are practical methods you can apply to keep those costs from piling up.
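One common shape those methods take is trimming telemetry in the OpenTelemetry Collector before it ever reaches your backend. Below is a minimal sketch, assuming the `probabilistic_sampler`, `filter`, and `batch` processors are included in your Collector distribution; the exporter endpoint and the excluded metric pattern are placeholders, not recommendations from the article:

```yaml
receivers:
  otlp:
    protocols:
      grpc:

processors:
  # Keep only a fraction of traces to cut trace volume
  probabilistic_sampler:
    sampling_percentage: 20
  # Drop metrics you don't look at (placeholder regex)
  filter/metrics:
    metrics:
      exclude:
        match_type: regexp
        metric_names:
          - "system\\.network\\..*"
  # Batch telemetry to reduce export overhead
  batch:
    send_batch_size: 8192
    timeout: 5s

exporters:
  otlp:
    endpoint: my-backend:4317   # placeholder endpoint

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [probabilistic_sampler, batch]
      exporters: [otlp]
    metrics:
      receivers: [otlp]
      processors: [filter/metrics, batch]
      exporters: [otlp]
```

Sampling at the Collector rather than in each SDK keeps the policy in one place, so you can tighten or relax it without redeploying services.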

Why does no one talk about querying across signals in observability?

In today’s complex distributed systems, observability has evolved from a nice-to-have feature to a mission-critical engineering discipline. Engineering teams across organizations depend on robust observability to maintain system reliability and quickly diagnose issues when they inevitably arise. However, current observability tooling significantly lags behind user expectations by failing to support a critical capability: querying across different telemetry signals.