Operations | Monitoring | ITSM | DevOps | Cloud

Latest News

Elastic Universal Profiling agent, a continuous profiling solution, is now open source

Elastic Universal Profiling™ agent is now open source! The industry’s most advanced fleetwide continuous profiling solution empowers users to identify performance bottlenecks, reduce cloud spend, and minimize their carbon footprint. This post explores the history of the agent, its move to open source, and its future integration with OpenTelemetry.

How to Calculate Log Analytics ROI

Calculating log analytics ROI is often complicated. For many teams, this technology can be a cost center. Depending on your platform, the cost of a log management solution can quickly add up. For example, many organizations use solutions like the ELK stack because the initial startup costs are low. Yet, over time, costs can creep up for many reasons, including the volume of data collected and ingested per day, required retention periods, and the associated personnel needed to manage the deployment.
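
As a rough illustration only (not taken from the post), a back-of-the-envelope cost model can combine those same drivers; every figure and variable name below is a hypothetical placeholder:

    # Hypothetical back-of-the-envelope model of annual log analytics cost and ROI.
    # All inputs are illustrative placeholders, not figures from the article.
    daily_ingest_gb = 500            # volume collected and ingested per day
    retention_days = 90              # required retention period
    cost_per_gb_ingested = 0.10      # ingest/processing cost (USD per GB)
    cost_per_gb_month_stored = 0.03  # storage cost (USD per GB-month)
    annual_personnel_cost = 150_000  # engineers managing the deployment

    annual_ingest_cost = daily_ingest_gb * 365 * cost_per_gb_ingested
    avg_stored_gb = daily_ingest_gb * retention_days
    annual_storage_cost = avg_stored_gb * cost_per_gb_month_stored * 12
    total_cost = annual_ingest_cost + annual_storage_cost + annual_personnel_cost

    # Estimated annual value delivered (e.g., incident hours avoided, faster MTTR).
    annual_value = 400_000
    roi = (annual_value - total_cost) / total_cost
    print(f"Total annual cost: ${total_cost:,.0f}  ROI: {roi:.0%}")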

Mastering CloudTrail Logs, Part 1

CloudTrail logs are generated by Amazon Web Services (AWS) as part of its CloudTrail service. AWS CloudTrail records API calls made within an AWS account, providing a history of activity including actions taken through the AWS Management Console, AWS Command Line Interface (CLI), and AWS SDKs. For example, CloudTrail events are generated for actions such as starting or stopping EC2 instances, reading from or writing to S3 buckets, and creating or deleting IAM users.
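
As a minimal sketch (not from the series itself), recent events for a specific API action can be queried with boto3's CloudTrail client; the region and event name below are only examples, and AWS credentials are assumed to be configured in the environment:

    # Minimal sketch: list recent CloudTrail events for one API action using boto3.
    import boto3

    cloudtrail = boto3.client("cloudtrail", region_name="us-east-1")
    response = cloudtrail.lookup_events(
        LookupAttributes=[
            {"AttributeKey": "EventName", "AttributeValue": "StartInstances"},
        ],
        MaxResults=10,
    )
    for event in response["Events"]:
        print(event["EventTime"], event.get("Username"), event["EventName"])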

Enhancing Data Ingestion: OpenTelemetry & Linux CLI Tools Mastery

While OpenTelemetry (OTel) supports a wide variety of data sources and is constantly evolving to add more, there are still many data sources for which no receiver exists. Thankfully, OTel includes receivers that accept raw data over a TCP or UDP connection. This blog shows how to leverage Linux command-line interface (CLI) tools to build efficient data pipelines that feed data into OTel's TCP receiver.
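
As a minimal sketch of the idea (assuming an OpenTelemetry Collector with a TCP log receiver listening on localhost:54525, a hypothetical endpoint), the output of any line-oriented CLI tool can be streamed to the receiver over a plain TCP socket:

    # Minimal sketch: stream a Linux CLI tool's output to an OTel Collector's
    # TCP log receiver. The command and the listen address/port are assumptions
    # for illustration; match them to your own collector configuration.
    import socket
    import subprocess

    COLLECTOR_ADDR = ("localhost", 54525)  # hypothetical TCP receiver endpoint
    CMD = ["vmstat", "1", "5"]             # any line-oriented CLI tool works

    with socket.create_connection(COLLECTOR_ADDR) as sock:
        proc = subprocess.Popen(CMD, stdout=subprocess.PIPE, text=True)
        for line in proc.stdout:
            sock.sendall(line.encode("utf-8"))  # one log record per line
        proc.wait()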

The Complete Guide to Capacity Management in Kubernetes

In the dynamic world of container orchestration, Kubernetes stands out as the undisputed champion, empowering organizations to scale and deploy applications seamlessly. Yet as the scope of deployments increases, so do the associated Kubernetes workload costs, and effective resource capacity planning becomes more critical than ever. When dealing with containers and Kubernetes, you can find yourself facing multiple challenges that affect cluster stability and business performance.
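
The guide goes much deeper, but as a rough starting point (a sketch under the assumption of a reachable kubeconfig, not the article's own method), requested versus allocatable CPU per node can be compared with the official Kubernetes Python client:

    # Illustrative sketch: compare requested vs. allocatable CPU per node.
    from collections import defaultdict
    from kubernetes import client, config

    def cpu_to_millicores(quantity: str) -> int:
        """Convert a Kubernetes CPU quantity ('250m' or '2') to millicores."""
        if quantity.endswith("m"):
            return int(quantity[:-1])
        return int(float(quantity) * 1000)

    config.load_kube_config()  # use load_incluster_config() when running in a pod
    v1 = client.CoreV1Api()

    requested = defaultdict(int)
    for pod in v1.list_pod_for_all_namespaces().items:
        if pod.spec.node_name is None:
            continue
        for container in pod.spec.containers:
            cpu_request = (container.resources.requests or {}).get("cpu")
            if cpu_request:
                requested[pod.spec.node_name] += cpu_to_millicores(cpu_request)

    for node in v1.list_node().items:
        allocatable = cpu_to_millicores(node.status.allocatable["cpu"])
        used = requested[node.metadata.name]
        print(f"{node.metadata.name}: {used}/{allocatable} millicores "
              f"requested ({used / allocatable:.0%})")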

Loki 3.0 release: Bloom filters, native OpenTelemetry support, and more!

Welcome to the next chapter of Grafana Loki! After five years of dedicated development, countless hours of refining, and the support of an incredible community, we are thrilled to announce that Grafana Loki 3.0 is now generally available. The journey from 2.0 to 3.0 saw a lot of impressive changes to Loki. Loki is now more performant, and it’s capable of handling larger scales — all while remaining true to its roots of efficiency and simplicity.

Find your logs data with Explore Logs: No LogQL required!

We are thrilled to announce the preview of Explore Logs, a new way to browse your logs without writing LogQL. In this post, we’ll cover why we built Explore Logs and we’ll dive deeper into some of its features, including at-a-glance breakdowns by label, detected fields, and our new pattern detection. At the end, we’ll tell you how you can try Explore Logs for yourself today. But let’s start from the beginning — with good old LogQL.

Better, Faster, Stronger Network Monitoring: Cribl and Model Driven Telemetry

New in Cribl 4.5, the Model Driven Telemetry Source enables you to collect, transform, and route Model Driven Telemetry (MDT) data. In this blog, you’ll learn how to explore the YANG Suite to understand the wide variety of datasets available to transmit as well as how to configure the tools to get data flowing from Cisco IOS XE network devices to Cribl Stream.

Crossing the machine learning pilot to product chasm through MLOps

Numerous companies keep launching AI/ML features, specifically "ChatGPT for XYZ"-style productization. Given the buzz around Large Language Models (LLMs), consumers and executives alike increasingly assume that building AI/ML-based products and features is easy. LLMs can appear magical as users experiment with them.