Operations | Monitoring | ITSM | DevOps | Cloud

PostgreSQL extensions you need to know in 2025

PostgreSQL is lightweight and unopinionated by design, but its killer feature has long been its extension ecosystem. Extensions adapt and customize how PostgreSQL stores and manipulates data, making it suitable for AI, analytics, document data stores, and more. This flexibility keeps PostgreSQL a viable option for any business or startup; it's hard to 'outgrow' PostgreSQL.

Forecasting with InfluxDB 3 and HuggingFace

Machine learning models must do more than make accurate predictions; they also need to adapt as the world around them changes. In real-world systems, data distributions shift due to seasonality, equipment wear, user behavior changes, or other external forces. If your models can’t keep up, the result is poor predictions. This can lead to outages, inefficiencies, or missed opportunities. That’s why forecasting systems need to be monitored and resilient, not just accurate.
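The monitoring idea above can be made concrete: track a rolling forecast error and flag drift when it crosses a threshold. A minimal sketch in plain Python, assuming a simple mean-absolute-error check (the function names, window size, and threshold are illustrative, not part of InfluxDB or Hugging Face APIs):

```python
from collections import deque

def make_drift_monitor(window=50, mae_threshold=2.0):
    """Track rolling forecast error and flag drift past a threshold.
    Hypothetical helper; not an InfluxDB or Hugging Face API."""
    errors = deque(maxlen=window)  # keep only the most recent errors

    def record(predicted, actual):
        errors.append(abs(predicted - actual))
        mae = sum(errors) / len(errors)  # rolling mean absolute error
        return {"mae": mae, "drift": mae > mae_threshold}

    return record

# Usage: feed each (forecast, observation) pair as it arrives.
monitor = make_drift_monitor(window=3, mae_threshold=2.0)
monitor(10.0, 10.5)           # error 0.5, well under threshold
status = monitor(10.0, 16.0)  # errors [0.5, 6.0], rolling MAE 3.25
print(status["drift"])        # True: the model has drifted
```

In a real pipeline the `drift: True` signal would trigger an alert or a retraining job rather than just a print.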

Real-Time, Automated Resource Optimization for Kubernetes Workloads

Struggling with underutilized Kubernetes resources or rising cloud costs? Learn how Pepperdata Capacity Optimizer delivers real-time, automated resource optimization for Kubernetes and Amazon EMR workloads—helping teams reduce costs and boost performance without manual tuning. In this video, discover how Pepperdata helps DevOps engineers, platform engineers, and FinOps teams.

Pepperdata In Collaboration with AWS | Optimize Utilization and Cost for Kubernetes Workloads

In this AWS Startup Partner Spotlight, discover how Pepperdata empowers cloud-native startups to optimize their Kubernetes and Amazon EMR workloads in real time. With automated resource optimization, companies can reduce costs by an average of 30% while increasing utilization by up to 80%—without any manual tuning. Whether you're scaling rapidly or managing unpredictable workloads, Pepperdata ensures your infrastructure runs efficiently and cost-effectively from day one.

Why Manual Tuning Fails: A Better Way to Optimize Kubernetes Workloads

As a data platform engineer, you’re tasked with running complex workloads—Apache Spark jobs, AI/ML pipelines, batch ETL—across dynamic Kubernetes environments. Performance matters. Time spent tuning matters. And so does cost. But if you’re still relying on manual resource tuning to optimize your workloads, you’re playing a losing game. Sure, you can tweak CPU and memory requests by hand. You can comb through Prometheus metrics, look at job logs, estimate peaks.
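Why is hand-tuning a losing game? Automated optimizers size requests from observed usage rather than guesses, typically taking a high percentile of recent samples plus headroom. A minimal sketch of that idea (the percentile choice and headroom factor are illustrative assumptions, not Pepperdata's actual algorithm):

```python
def recommend_request(usage_samples, percentile=0.95, headroom=1.15):
    """Size a container resource request from observed usage:
    take a high percentile of samples, then add headroom.
    Illustrative sketch only; not Pepperdata's algorithm."""
    if not usage_samples:
        raise ValueError("need at least one usage sample")
    ranked = sorted(usage_samples)
    idx = int(percentile * (len(ranked) - 1))  # nearest-rank percentile
    return ranked[idx] * headroom

# e.g. CPU usage in millicores, sampled from metrics over a day;
# the single 900m spike is ignored by the 95th-percentile cut
samples = [120, 150, 140, 900, 160, 130, 155, 145, 150, 135]
print(round(recommend_request(samples)))  # prints 184
```

The point of automating this is that the percentile window keeps moving with the workload, which a one-off manual tuning pass cannot do.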

Visualize Databricks in Grafana: write custom SQL queries, build interactive dashboards, and more

As part of our big tent philosophy at Grafana Labs, we think you should be able to dig into your data and find meaningful insights — wherever that data happens to live. For many of our users, that data lives in Databricks, the open analytics platform for building, deploying, sharing, and maintaining enterprise-grade data, analytics, and AI solutions at scale.

5 Must-Have Python Plugins for InfluxDB 3 Core & Enterprise

InfluxDB 3 is our latest time series database built for real-time analytics and high-volume data. Its Python Processing Engine lets developers run custom scripts known as plugins to process data, trigger alerts, or integrate with external systems via HTTP web requests. To demonstrate what’s possible, we’ve developed several plugins, all of which are available in the influxdb3_plugins GitHub repository. This public repo is open for anyone to use, modify, and contribute to.
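The alerting logic inside such a plugin can be plain Python. A sketch of the kind of threshold check a plugin might wrap, with the function name, row shape, and field names all assumptions for illustration, not the influxdb3_plugins API:

```python
def check_thresholds(rows, field="temp", limit=30.0):
    """Return alert messages for rows whose field exceeds a limit.
    Row shape and names are illustrative, not the InfluxDB 3 plugin API."""
    alerts = []
    for row in rows:
        value = row.get(field)
        if value is not None and value > limit:
            alerts.append(
                f"{row.get('sensor', 'unknown')}: {field}={value} exceeds {limit}"
            )
    return alerts

# Inside a real plugin this would run against each batch of written
# points; the resulting alerts could be forwarded via an HTTP request.
batch = [
    {"sensor": "s1", "temp": 21.5},
    {"sensor": "s2", "temp": 34.2},
]
for msg in check_thresholds(batch):
    print(msg)  # s2: temp=34.2 exceeds 30.0
```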

How to Visualize and Explore Your Datalake: Databricks Enterprise Data Source for Grafana

Ready to bring your Databricks data lakehouse to life? In this Grafana quick start, Shawn Pitts walks through how to connect Databricks to Grafana Cloud using the official plugin, available on all tiers — including Cloud Free. We’ll cover:
- Setting up the Databricks data source
- Retrieving your Host, HTTP Path, and Token from the Databricks App
- Exploring data with SQL builder and custom queries in Grafana
- Creating a cross-functional dashboard using live Databricks data

Getting Started With Lakehouse: Not Even White Lotus Can Match the Hospitality of Cribl's Lakehouse

Cribl recently introduced Lakehouse, a powerful new feature within Cribl Lake that enables fast queries on the freshest data. But it’s much more than just speedy searches. Lakehouse redefines how organizations collect, store, manage, and analyze telemetry data at scale, ensuring a future-proof, cost-efficient, and flexible approach to data management.