Operations | Monitoring | ITSM | DevOps | Cloud

Modernizing Data Centers for AI: Bridging Observability, Cost Control, and Intelligent Automation

Attend our webinar on April 3 to see our latest innovations live.

IT operations are more complex than ever, with modern data centers spanning on-premises hardware, containers, multi-cloud environments, and AI-powered infrastructure. The rapid expansion of data sources has created an overwhelming volume of information, making manual monitoring across multiple tools impractical. Visibility gaps slow troubleshooting and delay critical decisions, hurting business performance.

Optimizing Script Placement for Web Performance

Master the art of loading JavaScript efficiently in this Concepts of Web Performance tutorial with Todd Gardner from Request Metrics. Aimed at entry-level web developers struggling with slow websites, this video breaks down the critical differences between standard blocking scripts and the async and defer attributes, which dramatically affect your site's performance. Learn when and why to use each loading technique, see how JavaScript execution blocks HTML parsing and CSS rendering through clear waterfall and flame chart visualizations, and discover why defer is the best option for most scenarios.
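As a quick illustration of the three loading strategies the video compares (file names here are hypothetical, not from the video):

```html
<!-- Blocking: HTML parsing pauses while the script downloads and executes -->
<script src="analytics.js"></script>

<!-- async: downloads in parallel, runs as soon as it arrives; execution order is not guaranteed -->
<script async src="ads.js"></script>

<!-- defer: downloads in parallel, runs in document order after HTML parsing finishes -->
<script defer src="app.js"></script>
```

Because deferred scripts never block the parser yet still run in order, defer is the usual default unless a script is truly independent of the page (async) or must run before any content renders (blocking).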

Introducing Charmed PostgreSQL

PostgreSQL, a proven and well-loved database trusted across the IT industry for over three decades, continues to evolve with modern enterprise needs. In this video, we introduce Charmed PostgreSQL, an advanced enterprise solution designed to secure and automate the deployment, maintenance, and upgrades of PostgreSQL databases across private and public clouds. Watch the video to explore Charmed PostgreSQL's features and advantages:
- Security and compliance features
- Support and managed services
- Automation features
- Deployment options
- Pricing

Server Monitoring Explained: How to Outwit Downtime Before It Strikes

Server monitoring is the practice of continuously tracking server health, performance, and resource usage to catch issues before they cause downtime. When a server crashes, it can mean lost revenue, frustrated users, and a mad scramble to fix the problem. The right server monitoring tool helps your IT team stay ahead by providing real-time alerts and visibility into critical metrics. In this guide, we’ll break down how server monitoring works, why it matters, and what to look for in a solution.
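To make the idea concrete, here is a minimal sketch of the core loop any monitoring tool performs: collect metrics, compare them against thresholds, and raise alerts. It is not tied to any particular product; the metric names and threshold values are illustrative assumptions.

```python
import shutil

# Illustrative alert thresholds; real monitoring tools make these configurable.
THRESHOLDS = {"cpu_pct": 90.0, "disk_pct": 85.0}

def check_server(metrics: dict) -> list[str]:
    """Return an alert message for every metric that exceeds its threshold."""
    alerts = []
    for name, limit in THRESHOLDS.items():
        value = metrics.get(name)
        if value is not None and value > limit:
            alerts.append(f"ALERT: {name}={value:.1f} exceeds {limit:.1f}")
    return alerts

def disk_used_pct(path: str = "/") -> float:
    """Percentage of disk space used at the given path (stdlib only)."""
    usage = shutil.disk_usage(path)
    return usage.used / usage.total * 100

if __name__ == "__main__":
    # A hypothetical metrics sample; a real agent would poll these continuously.
    sample = {"cpu_pct": 97.2, "disk_pct": disk_used_pct("/")}
    for alert in check_server(sample):
        print(alert)
```

A production tool layers much more on top of this loop, such as historical baselines, anomaly detection, and alert routing, but the collect-compare-alert cycle is the foundation.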

COREDUMP #005: The Current Realities of Cellular IoT

Join the Founders of Memfault and special guest Fabian Kochem, Director of Product Strategy at 1NCE, as they break down the latest advancements in cellular IoT. This conversation covers key considerations for businesses adopting cellular, common pitfalls, and the best tools to ensure connectivity success.

Why IoT Security Can't Be Left to Users

A web-connected building intercom system is leaving homes across the US and Canada vulnerable to remote attacks—all because of one major security flaw. François Baldassari shares how weak IoT security can put thousands at risk and what manufacturers must do to fix it. Watch to learn why secure by default should be the standard for all connected devices.

Tech Debt as Innovation? How Netflix Turns It Into Opportunity

At Civo Navigate San Francisco 2025, Lisa Smith of Netflix shares a fresh perspective on how tech debt can drive innovation instead of slowing teams down. Learn how to staff legacy systems, handle tricky deprecations, and evaluate the "tech debtiness" of your infrastructure to unlock growth and efficiency. Discover how to turn tech debt into a strategic advantage for your engineering team.

New In Playwright 1.51 - Can AI Fix Failing Tests With The New Error Prompt?

In this episode, Stefan Judis, Playwright ambassador, explores the new 'Copy as prompt' feature in Playwright 1.51. This feature allows you to copy a pre-filled LLM prompt with all the context of a failing test case. Does this mean that AIs can take over and magically fix all the failing tests? Let's find out!

Building optimized LLM chatbots with Canonical and NVIDIA

The landscape of generative AI is rapidly evolving, and building robust, scalable large language model (LLM) applications is becoming a critical need for many organizations. Canonical, in collaboration with NVIDIA, is excited to introduce a reference architecture designed to streamline and optimize the creation of powerful LLM chatbots. This solution leverages the latest NVIDIA AI technology, offering a production-ready AI pipeline built on Kubernetes.