
Latest Posts

4 Strategies to Reduce Observability Costs - Without Sacrificing Visibility

Today’s end users have little to no patience for performance issues. Jitter, slow load times, and full-blown outages can quickly lead to brand damage, lost customers, and diminished revenue. That’s why it’s essential for DevOps teams and engineers to identify and resolve issues before users ever notice them. Doing so requires collecting and analyzing massive amounts of telemetry data – metrics, traces, and logs.

10 Things to Consider before Multicasting Your Observability Data

This article was originally published in APM Digest. Multicasting in this context refers to directing data streams to two or more destinations – for example, sending the same telemetry data to both an on-premises storage system and a cloud-based observability platform concurrently. The two principal benefits of this strategy are cost savings and service redundancy.
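In code, multicasting amounts to a simple fan-out: every record in the stream is written to every configured destination. A minimal sketch of the idea (the sink classes and record format here are illustrative, not any real collector's API):

```python
# Minimal multicast sketch: fan one telemetry stream out to several sinks.
# In practice the sinks would be a local store and a cloud endpoint.

class OnPremSink:
    def __init__(self):
        self.records = []

    def write(self, record):
        self.records.append(record)  # e.g., append to on-premises storage


class CloudSink:
    def __init__(self):
        self.records = []

    def write(self, record):
        self.records.append(record)  # e.g., POST to a SaaS observability platform


def multicast(stream, sinks):
    """Send every record to every destination."""
    for record in stream:
        for sink in sinks:
            sink.write(record)


on_prem, cloud = OnPremSink(), CloudSink()
multicast([{"metric": "cpu", "value": 0.42}], [on_prem, cloud])
```

Because every sink receives the full stream, either destination can serve as a fallback if the other is unavailable – which is where the redundancy benefit comes from.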

More is More - A Case for Dynamic Observability

Dynamic observability is the concept that the amount of data collected should scale based on signals from your environment. Elastic infrastructure is not a new concept. Much of the internet is powered by services that provision more resources based on signals derived from metrics like CPU load, memory utilization, and queue depth. If we can use tools to right-size our infrastructure, why can’t we also use tools to right-size the amount of data we collect?
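The analogy to autoscaling can be made concrete: just as an autoscaler maps a load signal to an instance count, a dynamic collector can map the same signal to a collection frequency. A hypothetical sketch (the thresholds and intervals are made-up defaults, not a recommendation):

```python
# Hypothetical sketch: scale telemetry collection frequency from an
# environment signal, the same way an autoscaler scales instance count.

def collection_interval(cpu_load, base_interval=60, min_interval=5):
    """Return a scrape interval in seconds: the busier the system,
    the more frequently we collect."""
    if cpu_load >= 0.9:           # incident territory: collect densely
        return min_interval
    if cpu_load >= 0.7:           # elevated load: collect more often
        return base_interval // 4
    return base_interval          # steady state: coarse collection is enough
```

During quiet periods the system pays for one sample a minute; during an incident it automatically collects at full resolution, so the extra data arrives exactly when it is worth the cost.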

Auto Optimize Your Observability with a Time-Based Collection Strategy

Observability has become one of the largest line items in the IT budget, second only to cloud costs. A main reason is that teams are often stuck collecting significantly more data than they need. This is where Circonus Passport helps: rather than filtering data after it’s collected, as current observability data pipeline management tools do, Passport filters data before it’s collected.
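The distinction matters because a pre-collection filter avoids ever paying the network and storage cost for discarded data. A small illustrative sketch of the idea – the rule format and metric names here are assumptions for illustration, not Passport’s actual configuration:

```python
# Illustrative pre-collection filtering: decide whether a metric is worth
# collecting *before* scraping it, rather than dropping it downstream.

rules = {"collect": {"http_requests_total", "cpu_seconds_total"}}

def should_collect(metric_name, rules):
    """Consult the allow-list before any data is gathered."""
    return metric_name in rules["collect"]

# Metrics the target could expose; only those passing the filter are fetched.
available = ["http_requests_total", "debug_cache_hits", "cpu_seconds_total"]
collected = [m for m in available if should_collect(m, rules)]
```

A post-collection pipeline would have fetched all three series and discarded `debug_cache_hits` afterward; here it is never collected at all.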

Circonus Launches Open Beta for Passport, Ushering in a New Era of Flexible Observability

Sky-high observability costs or visibility gaps? That’s the unfortunate trade-off many organizations face when deciding how much telemetry data to collect and send to their observability tools. Teams either collect more data than they need and pay the price, or they collect less and suffer visibility gaps. Today, this all changes.

How to Develop a Modern Monitoring & Observability Strategy for Businesses of Any Size

In the dynamic world of IT, the way we monitor systems has seen a remarkable evolution. Gone are the days when monitoring was limited to basic server checks or infrastructure health. With the rise of cloud-native applications, serverless architectures, and container orchestration platforms like Kubernetes, the digital landscape has become a multi-dimensional maze.

4 Ways a Consistent Schema Drives More Value From Your Observability Data

One of the hardest challenges in computer science is deciding what to name things. Adoption of consistent nomenclature is difficult because there is no one right answer. In fact, it’s not uncommon for different teams within organizations to choose different names for the same technologies. In the world of monitoring and observability, this can create quite a lot of confusion – not to mention wasted resources.
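One common remedy is a shared alias table that resolves each team’s local names onto a single canonical schema, so queries and dashboards agree on what a metric is called. A hypothetical example – the names and schema below are invented for illustration:

```python
# Hypothetical example: two teams name the same measurement differently;
# a shared alias table maps both onto one canonical schema.

ALIASES = {
    "cpu_util": "system.cpu.utilization",        # team A's name
    "cpuUsagePercent": "system.cpu.utilization",  # team B's name
    "mem_free": "system.memory.free",
}

def canonical_name(name):
    """Resolve a team-specific metric name to the shared schema,
    passing unknown names through unchanged."""
    return ALIASES.get(name, name)
```

With a mapping like this in the ingest path, both teams’ data lands under one name, and a dashboard built against the canonical schema works for everyone.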

Introducing the Telemetry Cloud: An All-In-One Observability Platform All Enterprises Can Afford

We’re excited to announce that we just released the next generation of our observability platform – the Circonus Telemetry Cloud™. Here’s a closer look at what it is and why we think it’s a standout in the monitoring and observability space.

Domain Driven Design For All

Domain Driven Design (DDD) is usually associated with microservice architectures. As microservice architectures have come to be perceived as burdensome and overly complex, organizations have started to call the relevance of DDD initiatives into question as well. The argument is usually that unless an organization reaches a mega-scale that requires eventing and microservices to scale horizontally, such architectures are overkill.