
Not All Telemetry Requires Premium Pricing

Observability in software is often framed as a choice between self-hosted and SaaS: manage it yourself, or pay a vendor to handle your data. Both approaches have their merits, but assuming you must pick one exclusively leads to poor trade-offs: either overcommitting to an all-in-one SaaS despite spiraling costs, or fully self-hosting when it's unnecessary.

VictoriaMetrics at KubeCon Amsterdam: Community Highlights

KubeCon + CloudNativeCon Europe in Amsterdam brought together about 13,500 attendees this year, the largest turnout yet. The size of the event showed just how much the cloud-native space has grown, and how central observability, platform engineering, and cost control have become. For VictoriaMetrics, this year’s event was a mix of talks, booth conversations, and a lot of direct feedback from users.

What's New in VictoriaMetrics Cloud Q1 2026? Logs, MCP Server, Better Alerting, and... a Secret Project

Q1 2026 has been one of our most eventful quarters yet for VictoriaMetrics Cloud. We shipped something we have been building towards for a long time, crossed a few infrastructure milestones, and started clearing the path for what is coming next for the most performant observability stack.

VictoriaMetrics at KubeCon: Optimizing Tail Sampling in OpenTelemetry with Retroactive Sampling

Last month, the VictoriaMetrics team gave a talk on retroactive sampling at KubeCon Europe 2026. This blog post, a transcript of the session, explains how retroactive sampling significantly reduces outbound traffic, CPU, and memory usage in the data collection pipeline compared to tail sampling in OpenTelemetry.

VictoriaMetrics March 2026 Ecosystem Updates

Welcome to the March release roundup of the VictoriaMetrics Stack, covering key enhancements in VictoriaMetrics and VictoriaLogs. These updates deliver improved UI scalability, enhanced authentication flexibility, better query performance, and logging tools that streamline observability workflows in production environments.

Observability Lessons From OpenAI

Writing code is moving from the good old IDE into the realm of autonomous AI agents. One example of this is OpenAI, which has been developing internally with zero lines of manually written code. You can read about their workflow in their engineering blog: Harness engineering: leveraging Codex in an agent-first world. For me, the main takeaway of OpenAI's article is how AI has rewritten the constraints equation.

Benchmarking Kubernetes Log Collectors: vlagent, Vector, Fluent Bit, OpenTelemetry Collector, and more

At VictoriaMetrics, we built vlagent as a high-performance log collector for VictoriaLogs. To validate its performance and correctness under a realistic production-like load, we developed a benchmark suite and ran it against 8 popular log collectors. This post covers the methodology, throughput results, resource usage, and delivery correctness. We've made all benchmark configurations and source code public, so you can reproduce and verify the results independently.

VictoriaMetrics February 2026 Ecosystem Updates

This month, we're thrilled to see OpenAI using the VictoriaMetrics Stack internally — including VictoriaMetrics, VictoriaLogs, and VictoriaTraces — in their Harness engineering experiment, as shown in their architecture diagram. It's a great example of combining observability and AI agents.

VictoriaMetrics at FOSDEM, Cloud Native Days France, and CfgMgmtCamp Ghent

Last week, members of the VictoriaMetrics team, including myself, spoke at three very different but equally important community events: FOSDEM in Brussels, Cloud Native Days France in Paris, and CfgMgmtCamp in Ghent. Each event drew a different crowd with its own expectations, making them a good way to see where open source observability stands today and how VictoriaMetrics is adapting to real-world needs. The talks we gave were snapshots of the problems we are actively working on.