timeShift(GrafanaBuzz, 1w) Issue 72

The Grafana Labs team converged on Seattle this week for KubeCon + CloudNativeCon NA 2018, where we announced a new Prometheus-inspired, open source logging project we’ve been working on named Loki. We’ve been overwhelmed by the positive response and the conversations it’s sparked over the past few days. Please give it a try on-prem or in the cloud and share your feedback. You can read more about the project, our motivations, and check out the presentation in the blog section below.

How to Read Log Files on Windows, Mac, and Linux

Logging is a data collection method that stores information about the events that take place in a computer system. Log files differ based on the kind of information they contain, the events that trigger their creation, and several other factors. This post focuses on the log files created by the three main operating systems (Windows, Mac, and Linux) and on the main differences in how you access and read them on each OS.
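As a quick taste of the Linux and Mac side, here's a minimal Python sketch of a `tail -f`-style log reader. The file paths in the comments are common defaults rather than guarantees, and Windows Event Logs are binary, so they are read through the Event Log API (for example, PowerShell's Get-WinEvent) rather than by opening a text file:

```python
import sys
import time
from pathlib import Path

# Typical plain-text log locations (assumptions; they vary by distro and setup):
#   Linux: /var/log/syslog or /var/log/messages
#   Mac:   /var/log/system.log (recent macOS releases mostly use the unified
#          log, read with the `log show` command instead)

def tail(path: Path, poll_seconds: float = 1.0):
    """Yield new lines appended to a text log file, like `tail -f`."""
    with path.open("r", errors="replace") as f:
        f.seek(0, 2)  # start at the current end of the file
        while True:
            line = f.readline()
            if line:
                yield line.rstrip("\n")
            else:
                time.sleep(poll_seconds)  # nothing new yet; poll again

if __name__ == "__main__":
    log_path = Path(sys.argv[1] if len(sys.argv) > 1 else "/var/log/syslog")
    for entry in tail(log_path):
        print(entry)
```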

Dynamically Provisioning Local Storage in Kubernetes

At LogDNA, we’re all about speed. We need to ingest, parse, index, and archive several terabytes of data per second. To reach these speeds, we need to find and implement innovative solutions for optimizing all steps of our pipeline, especially when it comes to storing data.
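For context, the baseline Kubernetes ships with is static: local volumes use a StorageClass with the `kubernetes.io/no-provisioner` placeholder and delayed binding, and true dynamic provisioning requires an external provisioner, which is the gap the article tackles. Here's a minimal sketch of that static baseline using the official `kubernetes` Python client (the StorageClass name is illustrative, and a reachable cluster with a valid kubeconfig is assumed):

```python
from kubernetes import client, config

# Assumes a valid kubeconfig; the name "local-storage" is illustrative.
config.load_kube_config()

local_sc = client.V1StorageClass(
    api_version="storage.k8s.io/v1",
    kind="StorageClass",
    metadata=client.V1ObjectMeta(name="local-storage"),
    # Kubernetes has no built-in dynamic provisioner for local volumes,
    # hence the "no-provisioner" placeholder; dynamic behavior requires
    # an external provisioner.
    provisioner="kubernetes.io/no-provisioner",
    # Delay binding until a consuming pod is scheduled, so the scheduler
    # can pick a node that actually has the local disk.
    volume_binding_mode="WaitForFirstConsumer",
)

client.StorageV1Api().create_storage_class(local_sc)
print("Created StorageClass 'local-storage'")
```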

Part II: Anomaly detection within monitoring: how can you get started?

In a previous post we introduced anomaly detection as a family of techniques used to identify unusual behavior that does not conform to an expected pattern in the data. In this article, we look at how anomaly detection can be applied within monitoring.
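To get a feel for the simplest possible starting point (a generic sketch, not necessarily the technique the article covers): a rolling z-score flags any sample that drifts too many standard deviations from the mean of the recent window.

```python
import math
import statistics
from collections import deque

def rolling_zscore_anomalies(values, window=30, threshold=3.0):
    """Flag samples more than `threshold` standard deviations away from
    the mean of the preceding `window` samples."""
    history = deque(maxlen=window)
    anomalies = []
    for i, v in enumerate(values):
        if len(history) == window:
            mean = statistics.fmean(history)
            stdev = statistics.pstdev(history)
            if stdev > 0 and abs(v - mean) / stdev > threshold:
                anomalies.append((i, v))
        history.append(v)  # note: flagged samples still enter the window
    return anomalies

# A gently oscillating metric with one injected spike.
series = [100 + 5 * math.sin(i / 3) for i in range(60)]
series[45] = 250.0
print(rolling_zscore_anomalies(series))  # the spike at index 45 is flagged
```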

Log Analysis and the Challenge of Processing Big Data

To stay competitive, companies that want to run an agile business need log analysis to navigate the complex world of Big Data in search of actionable insights. However, scouring seemingly boundless data lakes for meaningful information means treading troubled waters when the appropriate tools are not employed. In the best case, the data amounts to terabytes (hence the name “Big Data”), if not petabytes.