
Latest Posts

Cribl Search Now Supports Email Alerts For Your Critical Notifications!

Cribl Search helps you find and access data regardless of the format it’s in or where it lives. Search provides a federated solution that reaches into existing object stores and explores data without moving it or indexing it first. The same interface can also connect to APIs, databases, or existing tooling, and can even join results from these disparate datasets and display them in comprehensive dashboards.

The Data Lake Dilemma: Why Businesses Need a New Approach

In today’s data-driven landscape, every organization knows the immense value their data holds, but with the explosion of data from diverse sources, traditional data storage and management solutions are proving inadequate. Organizations are urgently seeking new ways to handle their data effectively.

Welcoming Henry the Honey Badger: The New Face of Cribl

At Cribl, we’ve always prided ourselves on solving complex data challenges for our customers, but doing so with a bold spirit and a can-do attitude. Our journey with Ian the Goat as our mascot has been nothing short of incredible. Ian represented our agile and adaptable approach to solving complex data challenges. However, as we pivot towards tackling even bigger data puzzles for our customers, we believe it’s time for our mascot to reflect this evolution.

One Reason Why Your Nodes' Memory Usage Is Running High

When you use Cribl Stream or Cribl Edge to send data to hundreds of Splunk indexers through Load Balancing-enabled Destinations, you sometimes need to analyze memory usage. In this blog post, we delve into buffer management, memory usage calculations, and mitigation strategies to help you optimize your configuration and avoid memory issues.
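As a rough illustration of why load-balanced fan-out adds up, here is a back-of-the-envelope sketch in Python. The function name, parameter names, and default buffer size below are illustrative assumptions, not actual Cribl settings or documented defaults.

```python
# Rough, hypothetical estimate of output-buffer memory for a load-balanced
# destination. Variable names and defaults are illustrative assumptions,
# not actual Cribl configuration keys or documented values.

def estimate_buffer_memory_mb(
    worker_processes: int,
    indexer_count: int,
    buffer_per_connection_kb: int = 256,  # assumed in-flight buffer per connection
) -> float:
    """Assume each worker process holds a connection (and buffer) per indexer."""
    total_kb = worker_processes * indexer_count * buffer_per_connection_kb
    return total_kb / 1024


if __name__ == "__main__":
    # e.g. 16 worker processes fanning out to 300 indexers
    print(f"~{estimate_buffer_memory_mb(16, 300):.0f} MB of buffer memory")
```

The point of the sketch is simply that memory scales with the product of worker processes and destination endpoints, which is why sending to hundreds of indexers can be surprisingly expensive.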

Data Chaos MUST Be Curbed, but How?

My introduction to the world of data science was writing anomaly detection for a SIEM that catered to banks and credit unions. Some of these places were running on 50-year-old IBM core banking servers — meaning that someone trying to turn off a light in a server room could take down an entire bank with a literal flip of the wrong switch. While some companies take their time updating infrastructure, others still embody the move-fast-and-break-things philosophy of the early dot-com era giants.

The Ultimate CPU Alert - Reloaded, Again!

It’s been nearly ten years since “The Ultimate CPU Alert – Reloaded” and its Linux version were shared with the SolarWinds community. At that time, managing CPU data from 11,000 nodes, with updates every five minutes to a central MSSQL database, was a significant challenge. The goal was to develop alerting logic that accurately identified when a server was experiencing high CPU usage.
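As a conceptual illustration of that kind of alerting logic (in Python rather than the original post’s SolarWinds SQL), the sketch below only fires when CPU stays above a threshold for several consecutive polls. The threshold and poll count are assumed values, not the article’s actual settings.

```python
# Conceptual sketch of "sustained high CPU" alert logic. This is plain Python,
# not the SolarWinds SQL from the original post; thresholds and poll counts
# are illustrative assumptions.

from collections import deque
from typing import Deque

CPU_THRESHOLD = 90   # percent, assumed
REQUIRED_POLLS = 3   # consecutive 5-minute polls above threshold, assumed


def should_alert(samples: Deque[float]) -> bool:
    """Alert only when every recent poll exceeded the threshold,
    so a single transient spike does not page anyone."""
    return len(samples) == REQUIRED_POLLS and all(s >= CPU_THRESHOLD for s in samples)


recent: Deque[float] = deque(maxlen=REQUIRED_POLLS)
for reading in [45.0, 97.0, 95.5, 93.2]:  # example poll values
    recent.append(reading)
    if should_alert(recent):
        print(f"High CPU sustained across {REQUIRED_POLLS} polls: {list(recent)}")
```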

Mastering Log Retention Policy: A Guide to Securing Your Data

The strategic implementation of a security log retention policy is critical for safeguarding digital assets and key company data. It is foundational both for detecting and analyzing security threats in real time and for conducting thorough post-event investigations. Because log analytics costs escalate with data volume, driven by the infrastructure needed for storage and processing, cost is a critical consideration in any security log retention policy.
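To make the cost pressure concrete, here is a minimal worked example; the daily volume, retention window, and per-GB rate are made-up illustrative figures, not benchmarks or vendor pricing.

```python
# Illustrative retention-cost arithmetic; all figures are assumed examples.

daily_ingest_gb = 500        # assumed daily log volume
retention_days = 365         # assumed policy window
cost_per_gb_month = 0.10     # assumed blended storage/processing cost (USD)

stored_gb = daily_ingest_gb * retention_days
monthly_cost = stored_gb * cost_per_gb_month
print(f"Steady-state storage: {stored_gb:,} GB, ~${monthly_cost:,.0f}/month")
```

Even with modest assumptions, steady-state storage reaches hundreds of terabytes, which is why retention windows and tiering decisions matter.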

Receive Cribl Notifications on a Distribution List or Group Email Alias

IT and security teams use many products, and each product has its own set of admins. Some admins have broad privileges, while others have focused responsibilities for specific tools and touch points in the IT and security data path. Not every admin has access to every tool, but they are all typically part of a larger group bound by an email alias (aka a distribution list).

Searchception! Iterative Search Through Prior Search Results

An analyst’s process often involves searching through a given set of data many times, refining the question and the analytics with each pass. Cribl Search was originally designed to be stateless, executing every search against the original dataset provider(s). A new feature now allows searching into previously cached results, accelerating the analyst’s workflow for certain types of iterative search development.
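As a conceptual sketch of the idea (plain Python, not Cribl Search query syntax), the snippet below runs one expensive pass against a stand-in data source, caches the results, and then applies successive refinements to the cache instead of re-querying the source.

```python
# Conceptual illustration of iterative search over cached results. This is
# generic Python, not Cribl Search syntax; the data and functions are made up.

from typing import Callable, Dict, List


def expensive_source_scan() -> List[Dict]:
    """Stand-in for a federated search against the original dataset provider."""
    return [
        {"status": 500, "host": "web-1"},
        {"status": 200, "host": "web-2"},
        {"status": 500, "host": "web-3"},
    ]


# First search: pull from the source once and cache the results.
cached_results = expensive_source_scan()


def refine(results: List[Dict], predicate: Callable[[Dict], bool]) -> List[Dict]:
    """Each refinement filters the cached results instead of hitting the source."""
    return [r for r in results if predicate(r)]


errors = refine(cached_results, lambda r: r["status"] == 500)
web1_errors = refine(errors, lambda r: r["host"] == "web-1")
print(web1_errors)
```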

Scanning the Edge: Expand Your Visibility to New Heights

Data is born at the edge, and the traditional approach is to collect it and ingest it into one or more systems of analysis, or at least as much of it as you can afford to. Then the deep-dive analysis begins. That might be the perfect solution for some datasets, but what about all the other data being collected at the edge? All the logs, metrics, and state information you seldom (if ever) retrieve?