
Introducing GitKraken MCP: AI Agents Just Got a Power-Up

With the latest iteration of the GitKraken CLI, you can now connect to a local MCP server to deliver more functionality to your agent of choice. Whether you are using GitHub Copilot, Cursor, Windsurf, or any other tool, you can now leverage the power of GitKraken’s MCP server to enhance your workflows.
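Wiring a local MCP server into an agent is usually a one-line config entry. As a rough sketch only (the exact subcommand and config file location vary by client and CLI version — `gk mcp` is an assumption here, so check the GitKraken CLI docs for your setup), a typical `mcpServers` entry looks like:

```json
{
  "mcpServers": {
    "gitkraken": {
      "command": "gk",
      "args": ["mcp"]
    }
  }
}
```

Clients such as Cursor and GitHub Copilot read an entry like this and spawn the server locally over stdio, so the agent can call GitKraken's tools directly.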

DevEx Unpacked 001 - Scaling Secure Software with Alison Sickelka

Episode 001: In this inaugural episode of DevEx Unpacked, host Alan Carson sits down with Alison Sickelka, VP of Product at Cloudsmith, for a deep dive into the evolution of software supply chain security. Alison shares her journey from journalism to product leadership, the unique talent landscape in Belfast, and how Cloudsmith is pioneering secure artifact management. Learn how Cloudsmith's Enterprise Policy Management is shaping compliance strategies, why SBOMs are crucial, and where AI fits in a secure DevOps future.

Introducing Seer: Sentry's AI Debugging Agent

There's a lot more context to an error than the message blinking in red on your screen. Seer understands the context of your application and everything behind that error. Seer collects information from the stack trace, logs, traces and spans, profiles, and the code from your GitHub repo, and uses it to understand what's causing your issues and propose fixes.

Opsgenie Is Shutting Down: Why FireHydrant Is the Natural Evolution

Opsgenie set a high bar. For years, it helped teams respond faster and stay on top of incidents with reliable alerting and on-call management. At FireHydrant, we’ve always admired how Opsgenie modeled incident data and structured its workflows — it was one of the best in the game. But as Atlassian sunsets Opsgenie and teams face the pressure to migrate, there’s a real decision to make: move into Jira Service Management, or find a new solution that fits your team’s needs and scale.

Moving from Relational to Time Series Databases

I’ve been building apps with SQL Server for years. Everything worked well until I started dealing with sensor data, stock trade volume, and IoT telemetry. As the volume of time-stamped records grew into the millions, I saw relational databases struggling with workloads they weren’t designed for. That’s when I explored time series databases. The performance improvements were significant, but what surprised me was the mental shift required.
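Part of that mental shift is moving from row-by-row lookups to time-windowed aggregates: time series databases are built around queries like "average per minute", which relational engines only emulate expensively. As a minimal illustration (hypothetical sensor data, plain Python rather than any particular database), here is the bucketing pattern such databases optimize for:

```python
from collections import defaultdict
from datetime import datetime, timezone

# Hypothetical sensor readings: append-only (timestamp, value) pairs
readings = [
    (datetime(2024, 1, 1, 12, 0, 15, tzinfo=timezone.utc), 21.5),
    (datetime(2024, 1, 1, 12, 0, 45, tzinfo=timezone.utc), 22.1),
    (datetime(2024, 1, 1, 12, 1, 10, tzinfo=timezone.utc), 21.9),
]

def downsample(rows, bucket_seconds=60):
    """Average readings into fixed time buckets -- the workhorse
    query shape that time series databases are designed around."""
    buckets = defaultdict(list)
    for ts, value in rows:
        epoch = int(ts.timestamp())
        # Snap each reading to the start of its bucket
        buckets[epoch - epoch % bucket_seconds].append(value)
    return {
        datetime.fromtimestamp(start, tz=timezone.utc): sum(vals) / len(vals)
        for start, vals in sorted(buckets.items())
    }

for bucket_start, avg in downsample(readings).items():
    print(bucket_start.isoformat(), round(avg, 2))
```

In SQL Server this is a GROUP BY over a computed time column with no index support for the access pattern; a time series engine stores data pre-ordered by time, so the same aggregate becomes a sequential scan.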

Datadog MCP Server: Connect your AI agents to Datadog tools and context

As development teams adopt AI-powered tools and build services that make use of AI agents, they want to extend their AI capabilities to incorporate familiar tools and observability data. However, AI agents struggle with regular API endpoints: they frequently fail to parse complex nested JSON hierarchies or to handle errors correctly. As a result, these agents often fail to retrieve relevant results.

Optimize and troubleshoot AI infrastructure with Datadog GPU Monitoring

As organizations bring more AI and LLM workloads into production, the underlying GPU infrastructure becomes even more critical to keeping those workloads fast, reliable, and scalable. Inefficient GPU resource usage, for instance, can lead to longer runtimes and reduced throughput, negatively impacting overall model performance. Additionally, idle and underutilized GPUs can quickly drive up costs and lead to needless spending.

How to Monitor Kafka Producer Metrics

Your Kafka producer pushed a million messages yesterday. Nice. But can you tell if they all made it? Or why latency spiked at 2 PM? Producer metrics help you answer that. They expose how long messages take to send, whether messages are getting stuck, and whether retries are piling up. Let's go over which ones help when debugging and how to monitor them.
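The three signals above — send latency, stuck (unacknowledged) messages, and retries — boil down to simple bookkeeping around send and acknowledgment events. As a minimal sketch in plain Python (no real Kafka client; the class and method names here are hypothetical, though real producers expose equivalents such as Kafka's request-latency-avg and record-retry-rate metrics):

```python
import time

class ProducerMetrics:
    """Toy bookkeeping for producer-side health signals:
    send latency, in-flight (possibly stuck) messages, and retries."""

    def __init__(self):
        self._inflight = {}   # msg_id -> monotonic send timestamp
        self.latencies = []   # seconds, for completed sends
        self.retries = 0

    def on_send(self, msg_id):
        self._inflight[msg_id] = time.monotonic()

    def on_ack(self, msg_id):
        # Broker acknowledged the message: record its round-trip latency
        start = self._inflight.pop(msg_id)
        self.latencies.append(time.monotonic() - start)

    def on_retry(self, msg_id):
        self.retries += 1

    def snapshot(self):
        lat = sorted(self.latencies)
        p99 = lat[int(0.99 * (len(lat) - 1))] if lat else None
        return {
            "inflight": len(self._inflight),  # sent but never acked
            "p99_latency_s": p99,
            "retries": self.retries,
        }

m = ProducerMetrics()
m.on_send("a"); m.on_ack("a")   # completed send
m.on_send("b")                   # still in flight -- potentially stuck
print(m.snapshot())
```

A rising `inflight` count with flat latencies is the classic "messages getting stuck" signature, while a climbing `retries` counter with healthy latencies usually points at transient broker-side errors rather than slow sends.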