
Capturing Security and Observability Data From Oracle Cloud

A couple of years ago, I wrote another blog on how Oracle Cloud Infrastructure (OCI) Object Storage can be used as a data lake since it has an Amazon S3-compliant API. Since then, I’ve also fielded several requests to capture logs from OCI Services and send them through Cribl Stream for optimization and routing to multiple destinations. There are two primary methods to achieve this.
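Since OCI Object Storage exposes an Amazon S3-compliant API, a generic S3 client can be pointed at it directly. Below is a minimal sketch of building the S3-compatibility endpoint URL; the namespace and region values are placeholders, and the commented boto3 call assumes you have created an OCI Customer Secret Key pair for authentication.

```python
# Sketch: pointing a generic S3 client at OCI Object Storage's
# S3-compatible endpoint. "mynamespace" and "us-ashburn-1" are
# placeholder values -- substitute your own tenancy namespace/region.

def oci_s3_endpoint(namespace: str, region: str) -> str:
    """Build the OCI S3-compatibility endpoint URL:
    <namespace>.compat.objectstorage.<region>.oraclecloud.com
    """
    return f"https://{namespace}.compat.objectstorage.{region}.oraclecloud.com"

# With boto3 installed, a client could then be created like this
# (credentials here are OCI Customer Secret Keys, not AWS keys):
#
# import boto3
# s3 = boto3.client(
#     "s3",
#     region_name="us-ashburn-1",
#     endpoint_url=oci_s3_endpoint("mynamespace", "us-ashburn-1"),
#     aws_access_key_id="<customer secret key id>",
#     aws_secret_access_key="<customer secret key>",
# )

print(oci_s3_endpoint("mynamespace", "us-ashburn-1"))
```

The same endpoint URL can be used in Cribl Stream's S3 Source or Destination configuration when reading from or writing to OCI.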

Enhancing Log Analytics in Loki with Cribl Stream

First, when I mention Loki, I’m not talking about one of my favorite TV shows to binge-watch, or its lead character played by Tom Hiddleston, who has arguably become one of my favorite characters in the Marvel universe. I’m talking about Loki, the highly available, cost-effective log aggregation system inspired by Prometheus. While Prometheus focuses on metrics, Loki focuses on collecting logs.

Thou Shall Pass! Troubleshooting Common Amazon S3 Errors in Cribl Stream

Data lakes are everywhere! With data volumes increasing, cost-effective storage is a growing need. With Cribl Stream, you can route data to an Amazon S3 data lake and replay or search that data at rest. But nothing is more frustrating than something not working and those blasted error logs that pop up. This blog highlights some common errors for your S3 Sources or Destinations, along with potential root causes and solutions.

Greater Control Over Windows Events for QRadar: Why Windows Events Matter

Windows events provide a wealth of security-relevant information, especially when they are correlated and analyzed within a SIEM like IBM QRadar. Whether you rely on MITRE ATT&CK, NIST, or another security framework, Windows events are likely among your highest-volume sources (EPS, or events per second) and your largest events by size (gigabytes per day in storage and archive).

Get Swept Off Your Feet by Cribl Stream 4.5: Converting Dimensional Metrics to the OpenTelemetry Protocol Format with the OTLP Metrics Function

In the dynamic world of observability and analytics, everyone’s looking for smarter, more efficient, and interoperable ways to handle their data. That’s where Cribl steps in, bringing you an exciting update to our product lineup. We’re thrilled to introduce the OTLP Metrics Function to Cribl Stream 4.5! This Function converts metrics into the OpenTelemetry Protocol (OTLP) format with ease!

Don't Slow Your Roll: Controlling Your QRadar Data Flow

IBM QRadar is a Security Information and Event Management (SIEM) solution trusted by many organizations to provide threat detection, threat hunting, and alerting capabilities. QRadar SIEM is often integrated with complementary IBM tools or enhanced with extensions to meet the needs of organizations that wish to mitigate their risks.

Aggregate Data in Cribl Stream to Optimize Your SIEM Data and Its Performance

Cribl Stream offers several different ways to optimize data. In this blog, I will focus on the Aggregation use case and how you can practically use the Aggregations Function to format the output in different ways.

Better Practices for Connecting Cribl Stream to Many Splunk Indexers

Cribl Stream and Cribl Edge can send data to Splunk in several different ways. In this blog post, we’ll focus on the common scenario where you want to connect Cribl Stream’s Splunk Load Balanced Destination to many Splunk Indexers at once. (We’ll talk about Cribl Stream, but what we say applies to Cribl Edge, too.) Cribl Destinations settings default to reasonable values. Sometimes Cribl Support recommends changing those values for better results in a given situation.

Taming Tetragon With Cribl.Cloud

Did you know you can deploy Tetragon and parse its high-volume logs with Cribl Edge? It’s true! Tetragon integrates seamlessly with Cribl Edge, and this combination enhances monitoring capabilities in Linux environments. Have your cake and eat it, too: with a combined Cribl and Isovalent solution, you can deliver deep insights into your workloads, optimizing for your specific operational requirements with zero loss of data fidelity.