
Latest Posts

Optimizing APM Costs and Visibility with Cribl Stream and Search

OpenTelemetry is starting to gain critical mass thanks to its vendor neutrality, and having worked in the APM space for the last five years, I can see the appeal. Using OpenTelemetry libraries to instrument your code frees you from embedding vendor libraries in your codebase. The other challenge most customers face is balancing cost against visibility: while effective, most APM solutions are costly.
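One common way to balance APM cost against visibility is head-based trace sampling: keep a fixed fraction of traces and drop the rest before they ever reach a costly backend. The sketch below is illustrative only (the `should_sample` helper is hypothetical, not a Cribl or OpenTelemetry API); it shows why hashing the trace ID, rather than rolling a random number per span, keeps every span of a trace together.

```python
import hashlib

def should_sample(trace_id: str, rate: float = 0.1) -> bool:
    """Deterministic head-based sampling: hash the trace ID and keep
    roughly `rate` of all traces. Because the decision depends only on
    the trace ID, every service in the call chain makes the same
    keep/drop choice, so sampled traces stay complete."""
    digest = hashlib.sha256(trace_id.encode()).digest()
    bucket = int.from_bytes(digest[:8], "big") / 2**64  # uniform in [0, 1)
    return bucket < rate

# Two services seeing the same trace ID always agree on the decision.
trace_id = "4bf92f3577b34da6a3ce929d0e0e4736"
assert should_sample(trace_id) == should_sample(trace_id)
```

Dropping 90% of traces this way cuts ingest cost proportionally, at the price of missing rare errors in unsampled traces, which is exactly the cost-versus-visibility tradeoff the post discusses.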

Securing the Future: The Critical Role of Endpoint Telemetry in Cybersecurity

As IT managers and security practitioners navigate the complex terrain of modern cybersecurity in 2024 and beyond, the importance of endpoint telemetry cannot be overstated. This sophisticated technology involves meticulously gathering and analyzing data from various network endpoints, such as personal computers, mobile devices, and the ever-growing network of IoT devices.

Data Lake Strategy: Implementation Steps, Benefits & Challenges

Data lakes have emerged as a revolutionary solution in the current digital landscape, where data growth is at a 28% CAGR with no signs of slowing. These repositories, capable of storing vast amounts of raw data in their native format in a vendor-neutral way, offer unprecedented flexibility and scalability.

Managing Kubernetes Events with Cribl Edge

When we discuss observability for applications running in Kubernetes, most people immediately default to Metrics, Logs, and Traces, commonly referred to as the “three pillars.” These pillars are just different types of telemetry: signals that can be fed into observability platforms to help understand how an application behaves. But did you know that Kubernetes offers another valuable signal, Events? Combined with the other three, you get MELT: Metrics, Events, Logs, and Traces.
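As a rough illustration (not code from the post; the field names follow the Kubernetes core/v1 Event API), an Event carries a type, reason, and message that can be flattened and shipped alongside the other signals:

```python
import json

# A trimmed Event as returned by `kubectl get events -o json`
# (core/v1 Event fields: type, reason, message, involvedObject).
raw = '''{
  "type": "Warning",
  "reason": "BackOff",
  "message": "Back-off restarting failed container",
  "involvedObject": {"kind": "Pod", "name": "api-7d4b9c"}
}'''

def to_log_line(event: dict) -> str:
    """Flatten a Kubernetes Event into a single log-style line,
    suitable for forwarding to an observability pipeline."""
    obj = event["involvedObject"]
    return f'{event["type"]} {event["reason"]} {obj["kind"]}/{obj["name"]}: {event["message"]}'

print(to_log_line(json.loads(raw)))
# Warning BackOff Pod/api-7d4b9c: Back-off restarting failed container
```

Warning-type Events like the `BackOff` above often surface problems (crash loops, failed scheduling, image pull errors) before they show up in metrics or logs, which is what makes them worth collecting.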

Overcoming Messy Cloud Migrations, Outdated Infrastructures, Syslog, and Other Chaos

As businesses grapple with increasing data volumes, the need for practical tools to manage and use this data has never been greater. High-quality tools are great, but imagine what you could accomplish with one that made all the others in your toolbox even better. That’s exactly how we design every Cribl solution: we exist to help IT and Security teams get more out of their existing infrastructure.

How Cribl Helps the UK Public Sector Manage Challenges Around Growing Data Costs and Complexity

As the Data Engine for IT & Security, Cribl helps organisations overcome several challenges. In this first blog, we will concentrate on how Cribl can help the UK public sector deal with ever-rising data volumes whilst controlling costs.

Make Moves Without Making Your Data Move

How much of the data you collect is actually getting analyzed? Most organizations are focused on trying not to drown in the seas of data generated daily. A small subset gets analyzed, but the rest usually gets dumped into a bucket or blob storage. “Oh, we’ll get back to it,” thinks every well-intentioned analyst as they watch data streams get sent away, never to be seen again.

Security Has a Big Data Problem, and an Even Bigger People Problem

Got cybersecurity problems? Well, the good news is the same as the bad news — you’re not alone. The world of security has a big data problem and an even bigger people problem. Enterprise connectivity has drastically increased in the last decade, meaning every employee, contractor, and vendor has some level of access to corporate networks. To support this growth, companies monitor exponentially increasing infrastructure and traffic, producing a steadily rising volume of data.

How the All-In Comprehensive Design Fits Into the Cribl Stream Reference Architecture

In this livestream, Ahmed Kira and I shared more details about the Cribl Stream Reference Architecture, which is designed to help observability admins achieve faster, more valuable Stream deployments. We explained the guidelines for deploying the comprehensive reference architecture to meet the needs of large customers with diverse, high-volume data flows. Then we walked through different use cases and discussed their pros and cons.