Prometheus has become the de facto standard for getting started with Kubernetes monitoring. There are good reasons for this: it’s open source, freely available, and embraced by the Cloud Native Computing Foundation (CNCF). Prometheus was also designed to handle the highly ephemeral nature of Kubernetes workloads. All of this has made Prometheus the obvious first choice for anyone beginning to monitor Kubernetes.
Distributed tracing is a critical piece of application observability, but instrumenting your applications with traces is not always easy. Whether you are an SRE or a developer, you need application observability, yet you may not want to instrument code yourself. That is where the Wavefront Tracing Agent for Java comes in handy: it provides application observability without requiring any code changes.
External linking helps engineering teams connect Wavefront to logging tools such as vRealize Log Insight, ELK, or Splunk. For example, when you have received alerts and see them in Wavefront, and then want to investigate them further by drilling down into logs, you can quickly do that using the Wavefront External Links feature.
For the VMware Secure State engineering team, metrics have become an integral part of daily life. From monitoring our services to tracking customer success and new feature adoption, every activity is driven by metrics. In this blog, I share my team’s experience transitioning from Prometheus monitoring to Wavefront enterprise observability.
How do you find unknown unknowns? How do you detect silent failures in your cloud services involving hidden dependencies that are flying below your radar? If undetected, they can accumulate and be detrimental to your customers.
Distributed tracing is a critical piece of application observability. But the sheer number of traces, each containing a wealth of information, can be overwhelming. In this blog, I’ll show you concrete examples of how the Wavefront Query Language can be applied to distributed tracing to quickly answer your questions and significantly reduce troubleshooting time.
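To give a flavor of the kind of trace query the post describes, here is a minimal sketch using Wavefront Query Language trace functions. The service and operation names are hypothetical placeholders, not from the original article:

```
# Return up to 50 traces for a hypothetical operation,
# keeping only traces that took longer than 100 ms end to end:
limit(50, highpass(100ms, traces("shopping.orderShirts")))
```

Chaining filtering functions such as `highpass` and `limit` around `traces()` is how WQL narrows a flood of traces down to the slow outliers worth inspecting.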
Back in the good old days of monolithic applications, most developers and application owners relied on tribal knowledge for what performance to expect. Although applications could be incredibly complex, the understanding of their inner workings usually resided within a relative few in the organization. Application performance was managed informally and measured casually.
Wavefront has added a Dynatrace integration to its portfolio of over 200 pre-built integrations. With this integration, Wavefront customers can easily ingest all or selected metrics from the Dynatrace SaaS solution. From Wavefront, customers can easily correlate application metrics from Dynatrace with full-stack metrics from the rest of their environment – e.g. multicloud, Kubernetes, infrastructure – to accelerate incident detection and troubleshooting.
As an SRE deploying Wavefront as Enterprise Monitoring-as-a-Service across your organization, you may be asking: Is there a good way to see which metric time series are writing the most points? How can I check how many data points are emitted per metric into Wavefront? Is my cluster running efficiently?
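As a hedged illustration of the kind of check the questions above suggest, the sketch below queries one of Wavefront’s internal collector metrics; exact internal metric names can vary by cluster and proxy setup, so treat this as an assumption-laden example rather than the article’s method:

```
# Per-second rate of valid points being ingested into the cluster
# (assumes the internal metric ~collector.points.valid is available):
rate(ts("~collector.points.valid"))
```

Charting ingestion-rate metrics like this is one way to spot which sources are writing the most points and whether the cluster is running efficiently.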