
Latest Videos

How to Deploy Grafana on Kubernetes Using Helm | Grafana | Tutorial

How do you deploy Grafana on Kubernetes using Helm charts, customize the default configuration from values.yaml, and debug the logs? Join Senior Developer Advocate Syed Usman Ahmad in this complete hands-on tutorial and learn to easily deploy Grafana into a Kubernetes namespace via Helm.
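A minimal sketch of the kind of customization the tutorial covers, assuming the official grafana/grafana chart. The values shown are illustrative overrides, not the chart's full defaults; check `helm show values grafana/grafana` for the real list.

```yaml
# values.yaml -- example overrides for the grafana/grafana Helm chart
# (illustrative values; the admin password should come from a secret in practice)
adminPassword: change-me
persistence:
  enabled: true
  size: 2Gi
service:
  type: ClusterIP
```

With the chart repository added (`helm repo add grafana https://grafana.github.io/helm-charts`), this file can be applied with `helm install my-grafana grafana/grafana -n <namespace> -f values.yaml`, and the pod logs inspected with `kubectl logs` for debugging.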

Scaling into the unknown: growing your company when there's no clear roadmap ahead

During a recent episode of The Debrief, we spoke with Jeff Forde, Architect on the Platform Engineering team at Collectors, about building an incident management program at various stages of growth. In that episode, we called it growth from zero to one, one to two, and two to three. But what happens once you’ve scaled beyond three, and the answers to the questions you have become that much harder to find?

CSM to ITSM Migration: A Panel Discussion #ITSM #ITSolutions

Three Ivanti customers at various stages of migration – Maxar's Allison Hull, Fareway Stores' Steve Clime, and Memorial Health Ohio's Barbara Munger – give their firsthand perspectives on a successful journey from Cherwell Service Management to Ivanti Neurons for ITSM. Ivanti finds, heals, and protects every device, everywhere – automatically. Whether your team is down the hall or spread around the globe, Ivanti makes it easy and secure for them to do what they do best.

Build to scale with Aiven!

In this session, we will show how to leverage Aiven for Dragonfly and Aiven for AI. First, we’ll discuss how to increase your throughput and reduce memory usage by 25% compared to open-source Redis. Then we’ll explore scalability, efficiency, and advanced capabilities ideal for caching, gaming leaderboards, messaging, AI applications, and more. After that, we’ll jump into Aiven’s latest AI use cases.
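The gaming-leaderboard use case mentioned above is built on the sorted-set pattern that Redis-compatible stores like Dragonfly expose as ZADD/ZREVRANGE. Here is a toy in-memory Python sketch of that pattern, not the Dragonfly client API, just an illustration of the data structure:

```python
# Toy in-memory stand-in for a sorted-set leaderboard.
# (Redis/Dragonfly expose this via ZADD / ZREVRANGE; names here are illustrative.)

class Leaderboard:
    def __init__(self):
        self.scores = {}  # member -> score

    def add(self, member, score):
        """Like ZADD: insert a member or update its score."""
        self.scores[member] = score

    def top(self, n):
        """Like ZREVRANGE 0 n-1 WITHSCORES: highest scores first."""
        return sorted(self.scores.items(), key=lambda kv: -kv[1])[:n]

board = Leaderboard()
board.add("alice", 3200)
board.add("bob", 4100)
board.add("carol", 2800)
print(board.top(2))  # [('bob', 4100), ('alice', 3200)]
```

In a real deployment the store keeps this structure server-side, so ranking stays O(log n) per update and the leaderboard is shared across application instances.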

Aiven workshop: Preparing and Using Data for AI with LangChain and OpenSearch

In this workshop we’ll work together to generate embeddings for podcast transcriptions and load that data into OpenSearch. Then we’ll search the documents using similarity search and use those results to improve our responses from an LLM (Large Language Model). Along the way we’ll explain the Retrieval Augmented Generation (RAG) pattern and show how it’s possible to try different LLMs without having to completely rewrite your code.
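The RAG pattern the workshop walks through can be sketched in a few lines. This is not the workshop's LangChain/OpenSearch code: the embedding model is replaced by a toy bag-of-words vector and the OpenSearch k-NN index by a brute-force cosine scan, but the shape — embed, index, retrieve by similarity, augment the LLM prompt with the results — is the same:

```python
import math
from collections import Counter

# Toy bag-of-words "embedding" standing in for a real embedding model.
def embed(text):
    return Counter(text.lower().split())

# Cosine similarity between two sparse vectors.
def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

documents = [
    "Grafana dashboards visualize metrics from Prometheus",
    "Helm charts package Kubernetes applications",
    "OpenSearch supports vector similarity search",
]
# "Load the data into OpenSearch": here, just pair each doc with its vector.
index = [(doc, embed(doc)) for doc in documents]

def retrieve(query, k=1):
    """Similarity search: return the k documents closest to the query."""
    qv = embed(query)
    ranked = sorted(index, key=lambda d: cosine(qv, d[1]), reverse=True)
    return [doc for doc, _ in ranked[:k]]

def build_prompt(query):
    """The 'augmented' part of RAG: prepend retrieved context to the question."""
    context = "\n".join(retrieve(query))
    return f"Context:\n{context}\n\nQuestion: {query}"

print(build_prompt("How do Helm charts work?"))
```

Because the LLM only ever sees `build_prompt`'s output, swapping in a different model means changing the final call, not the retrieval pipeline — which is the point the workshop makes about trying different LLMs without rewriting your code.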