
Latest Posts

Best Practices for Kubernetes Monitoring with Prometheus

Kubernetes has established itself as one of the most influential technologies in the cloud applications and DevOps space. Its flexibility and scalability have made it the most popular container orchestration platform in modern software development, helping teams manage hundreds of containers efficiently.

An Introduction to AWS Monitoring with Prometheus and Logz.io

Prometheus is a widely used time-series database for monitoring the health and performance of AWS infrastructure. With its ecosystem of data collection, storage, alerting, and analysis capabilities, the open source toolset offers a complete monitoring package. Prometheus is ideal for scraping metrics from cloud-native services, storing the data for analysis, and watching it with alerts.
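As a quick illustration of that scrape-store-alert loop, here is a minimal sketch of exposing a custom metric for Prometheus to scrape, using the Python prometheus_client library; the metric name, port, and update interval are illustrative assumptions, not details from the post.

```python
# Minimal sketch (illustrative names): expose a metric endpoint that a
# Prometheus server can scrape, using the prometheus_client library.
import random
import time

from prometheus_client import Gauge, start_http_server

# A gauge Prometheus will scrape from the /metrics endpoint.
QUEUE_DEPTH = Gauge("demo_queue_depth", "Illustrative queue depth gauge")

if __name__ == "__main__":
    start_http_server(8000)  # serve metrics at http://localhost:8000/metrics
    while True:
        QUEUE_DEPTH.set(random.randint(0, 100))  # stand-in for a real reading
        time.sleep(15)
```

A Prometheus server pointed at this endpoint would store the samples and could fire alerts on them via alerting rules.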

Prometheus Roadmap and Latest Updates

We just celebrated Prometheus' 10th birthday last month. Prometheus was the second project to join the Cloud Native Computing Foundation after Kubernetes in 2016, and it has quickly become the de facto way to monitor Kubernetes workloads. The plug-and-play experience of simply deploying a Prometheus server and watching metrics flow in, tagged with Kubernetes labels, was a compelling offer.

Unreadable Metrics: Why You Can't Find Anything in Your Monitoring Dashboards

Dashboards are powerful tools for monitoring and troubleshooting your system. Too often, however, we run into an incident and jump to the dashboard, only to find ourselves drowning in endless data and unable to find what we need. The cause may be not just data overload, but also too many or too few colors, inconsistent conventions, or a lack of visual cues.

Phantom Metrics: Why Your Monitoring Dashboard May Be Lying to You

Whether you’re a DevOps engineer, an SRE, or just a data-driven individual, you’re probably addicted to dashboards and metrics. We look at our metrics to see how our system is doing, whether at the infrastructure, application, or business level. We trust our metrics to show us the status of our system and where it misbehaves. But do our metrics show us what really happened? You’d be surprised how often they don’t.

Tis the Season: 3D Observability for Prometheus + Grafana + Octoprint

You may get lucky this holiday season with a new 3D printer, either as a gift or something you give yourself as a reward for all your hard work this year. Household 3D printers have made tremendous strides in ease of use and affordability over the last decade.

Automate Observability Tasks with Logz.io Machine Learning

As an observability provider, we are always confronted with our clients’ goal of faster problem resolution and better overall system performance. Working on large-scale projects at Logz.io, I see the same main challenge coming up for all of them: extracting valuable insights from the huge volumes of data generated by modern systems and applications.

Product Spotlight: Announcing Power Search for Log Restore

We’re excited to announce significant improvements to our Archive+Restore capabilities, which enable low-cost, long-term log storage in AWS S3 or Azure Blob while letting you ingest those logs back into Logz.io at any time. The first enhancement is Power Search, which makes it faster to restore logs from archived data in AWS S3 (and soon Azure Blob) in our Open 360™ platform.

Product Spotlight: Smart Tiering + LogMetrics to Optimize Costs

Is all observability data worth the same cost? If you guessed no, you’d be correct. Anyone familiar with targeted observability knows that some data points hold more value than others. Yet many observability platforms still treat all log data the same, and as a result, costs remain uniform. One of the most persistent observability challenges today is the cost of indexing log data.

Announcing Logz.io's Data Optimization Hub

To help our customers reduce their overall observability costs, we’re excited to announce the Data Optimization Hub as part of our Open 360™ platform. The new hub inventories all of your incoming telemetry data, while providing simple filters to remove any data you don’t need. Gone are the days of paying for observability data you never use.