
Apache Kafka Tiered Storage in Depth: How Writes and Metadata Flow

The idea behind KIP-405 is simple: store most of the cluster’s data in another service. As we covered in detail in the last article, it’s a simple-sounding idea that goes a very long way. The external store where the data lands is pluggable: KIP-405 was designed to make Kafka seamlessly extensible, through a solid interface, so it can store its data in any kind of external store.
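As a rough sketch of what "pluggable" looks like in practice, tiered storage in Kafka 3.6+ is switched on per broker and per topic through configuration; the `com.example` class name below is a placeholder for whatever `RemoteStorageManager` implementation you plug in, and retention values are illustrative.

```properties
# Broker: enable the tiered storage subsystem (Kafka 3.6+).
remote.log.storage.system.enable=true
# Plug in an implementation for your external store (placeholder class name).
remote.log.storage.manager.class.name=com.example.MyRemoteStorageManager

# Topic: opt this topic into remote storage and keep only recent
# segments on local disk (values here are illustrative).
remote.storage.enable=true
local.retention.ms=3600000
```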

2025 OneDrive Licensing Changes

Microsoft recently announced significant changes to its OneDrive licensing and storage policies, affecting organizations that heavily rely on cloud storage solutions. Starting January 27, 2025, unlicensed OneDrive accounts—those without assigned user licenses—will be automatically archived after 93 days, rendering them inaccessible unless covered by retention policies or legal holds.

Using Azure Blob Storage for InfluxDB 3 Core and Enterprise

InfluxDB 3 Core and Enterprise introduce a powerful new diskless architecture that lets you store your time series data in cloud object storage while running the database engine locally. This approach offers significant advantages: you get the performance of a local database combined with the durability, scalability, and cost-effectiveness of cloud storage. In this tutorial, I’ll show you how to set up InfluxDB 3 Core or Enterprise with Azure Blob Storage as your object store.
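As a configuration sketch of the setup the tutorial walks through, the command below starts a local InfluxDB 3 engine backed by Azure Blob Storage. The account, container, and node names are placeholders, and the flag names follow the `influxdb3` CLI's Azure object-store options; verify them against your installed version.

```shell
# Placeholder credentials -- substitute your own storage account and key.
export AZURE_STORAGE_ACCOUNT=mystorageacct
export AZURE_STORAGE_ACCESS_KEY='<access-key>'   # keep secrets out of shell history

# Run the database engine locally, with data persisted to a Blob container.
influxdb3 serve \
  --node-id host01 \
  --object-store azure \
  --bucket influxdb-data \
  --azure-storage-account "$AZURE_STORAGE_ACCOUNT" \
  --azure-storage-access-key "$AZURE_STORAGE_ACCESS_KEY"
```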

A Step-by-Step Guide to Choose an ERP System for Construction

In today's competitive construction industry, business owners and managers face numerous challenges, from managing project scope to delivering projects on time and within budget. Effective resource allocation, maintaining control over finances, and improving coordination among teams hold the key to success. One tool that has proven effective at streamlining processes and closing productivity gaps is an Enterprise Resource Planning (ERP) system.

Best Logging Practices: 14 Do's and Don'ts for Better Logging

Ever found yourself drowning in a sea of log data, struggling to make sense of the overwhelming noise? Or perhaps faced a major system breakdown, only to find that your logs didn't provide the answers you needed, leaving you in the dark? Effective logging is a critical yet often overlooked aspect of software development and operations: it's the foundation upon which observability, troubleshooting, and system maintenance are built.
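To make the "do's and don'ts" concrete, here is a minimal Python sketch of one common recommendation: log at an appropriate level with machine-parseable key/value context instead of concatenated free text. The `log_event` helper and the field names are illustrative, not from the article.

```python
import io
import json
import logging

# Capture log output in memory so the example is self-contained;
# a real service would ship this to a file or log aggregator.
stream = io.StringIO()
handler = logging.StreamHandler(stream)
handler.setFormatter(logging.Formatter("%(levelname)s %(message)s"))
logger = logging.getLogger("orders")
logger.addHandler(handler)
logger.setLevel(logging.INFO)

def log_event(level, message, **context):
    # Do: emit a stable event name plus JSON context so aggregators
    # can filter and group on keys like order_id.
    logger.log(level, "%s %s", message, json.dumps(context, sort_keys=True))

# Don't: logger.error("failed " + str(order_id))  -- unsearchable free text.
log_event(logging.ERROR, "payment_failed", order_id=1234, retries=3)
print(stream.getvalue().strip())
# -> ERROR payment_failed {"order_id": 1234, "retries": 3}
```

Structured fields like these are what let a log pipeline answer "show me all `payment_failed` events for this order" without fragile string matching.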

Modernizing Data Centers for AI: Bridging Observability, Cost Control, and Intelligent Automation

IT Operations are more complex than ever, with modern data centers spanning on-premises, containers, multi-cloud environments, and AI-powered infrastructure. The rapid expansion of data sources has created an overwhelming volume of information, making manual monitoring across multiple tools impractical. Visibility gaps slow down troubleshooting and delay critical decisions, impacting business performance.

Server Monitoring Explained: How to Outwit Downtime Before it Strikes

Server monitoring is the practice of continuously tracking server health, performance, and resource usage to catch issues before they cause downtime. When a server crashes, it can mean lost revenue, frustrated users, and a mad scramble to fix the problem. The right server monitoring tool helps your IT team stay ahead by providing real-time alerts and visibility into critical metrics. In this guide, we’ll break down how server monitoring works, why it matters, and what to look for in a solution.
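The core loop the guide describes can be boiled down to: sample metrics, compare them against thresholds, and alert on breaches. Here is a toy Python illustration of that check; the `check_server` function and the metric values are invented for the example, and a real agent would read CPU, memory, and disk counters from the host rather than a dictionary.

```python
def check_server(metrics, thresholds):
    """Return alert messages for any metric that exceeds its threshold."""
    alerts = []
    for name, value in metrics.items():
        limit = thresholds.get(name)
        if limit is not None and value > limit:
            alerts.append(f"{name} at {value}% exceeds {limit}% threshold")
    return alerts

# Simulated utilization snapshot: only CPU is over its limit here.
alerts = check_server(
    {"cpu": 92, "memory": 71, "disk": 48},
    {"cpu": 85, "memory": 90, "disk": 90},
)
print(alerts)
```

A monitoring tool layers the hard parts on top of this loop: continuous scheduling, historical baselines instead of static thresholds, and routing alerts to the right on-call engineer.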

Building optimized LLM chatbots with Canonical and NVIDIA

The landscape of generative AI is rapidly evolving, and building robust, scalable large language model (LLM) applications is becoming a critical need for many organizations. Canonical, in collaboration with NVIDIA, is excited to introduce a reference architecture designed to streamline and optimize the creation of powerful LLM chatbots. This solution leverages the latest NVIDIA AI technology, offering a production-ready AI pipeline built on Kubernetes.