
Building your AI infra, our tips

- Modular architecture: Decouple compute from storage so each can scale independently. This makes it easier to adapt to growing or shifting workloads over time.
- Future-ready hardware: Select GPUs and CPUs not just for current workloads but with an eye on scalability, including support for newer accelerator types.
- Scalable design: Ensure the system allows seamless addition of compute nodes or storage without a full redesign.
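The decoupling idea above can be sketched in a few lines: because compute and storage are separate tiers, each one is evaluated and expanded on its own. This is a minimal illustration, and all thresholds and names are hypothetical, not recommendations from the article.

```python
# Minimal sketch of independent compute/storage scaling decisions.
# Thresholds are hypothetical illustrations, not tuned recommendations.

def scaling_plan(gpu_util: float, storage_util: float,
                 gpu_threshold: float = 0.80,
                 storage_threshold: float = 0.75) -> dict:
    """Decoupled tiers: each is checked and scaled independently,
    with no redesign of the other."""
    return {
        "add_compute_node": gpu_util > gpu_threshold,
        "add_storage_node": storage_util > storage_threshold,
    }

# A storage-heavy workload triggers only a storage expansion:
print(scaling_plan(gpu_util=0.55, storage_util=0.90))
# -> {'add_compute_node': False, 'add_storage_node': True}
```

The point is that a coupled design would force you to buy both together; a modular one lets each answer be "yes" or "no" on its own.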

Running AI without blowing up your storage

- Storage is often underestimated: In infrastructure discussions, compute and networking get most of the attention, while storage is treated as secondary. For AI workloads, that can be a costly oversight.
- Data throughput for specialized hardware: AI infrastructure powered by GPUs can process massive volumes of data at unprecedented speeds. This puts immense pressure on the storage system to keep up.
- Scale-out performance: An on-prem, scale-out, software-defined storage setup allows you to meet high performance demands, grow capacity as needed, and stay in control of infrastructure costs.
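A quick back-of-envelope calculation shows why the storage tier comes under pressure: aggregate GPU ingest demand grows linearly with the fleet, and a scale-out design answers it by adding whole nodes. All figures below are hypothetical examples, not vendor numbers.

```python
import math

# Back-of-envelope check that the storage tier can feed the GPUs.
# Per-GPU ingest rates and per-node bandwidth are hypothetical.

def required_storage_bandwidth(num_gpus: int, gbps_per_gpu: float) -> float:
    """Aggregate read bandwidth (GB/s) the storage system must sustain."""
    return num_gpus * gbps_per_gpu

def storage_nodes_needed(total_gbps: float, gbps_per_node: float) -> int:
    """Scale-out sizing: add whole nodes until demand is covered."""
    return math.ceil(total_gbps / gbps_per_node)

demand = required_storage_bandwidth(num_gpus=16, gbps_per_gpu=2.0)
print(demand)                                      # 32.0 GB/s of demand
print(storage_nodes_needed(demand, gbps_per_node=5.0))  # 7 nodes
```

Doubling the GPU count doubles the bandwidth demand, which is exactly why storage sizing cannot be an afterthought.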

Is on-prem the top choice to run AI?

In this episode, we break down what we’ve learned from teams running AI at scale, and why on-premises infrastructure is making a strong comeback. We’re seeing a shift: performance, cost control, data sovereignty, and platform flexibility are driving conversations about on-prem strategies for AI. There are no one-size-fits-all answers, but if you’re building or scaling AI, this episode might help you think a few steps ahead.

Are you running AI the smart way?

- Data locality: AI models often rely on large datasets. Locating compute close to the data reduces transfer times and improves training performance.
- Latency sensitivity: Real-time AI applications, like recommendation systems or edge analytics, depend on low-latency environments. This can be more easily tuned in private or hybrid setups.
- Hardware specialization: Some AI workloads benefit from custom hardware like GPUs or TPUs. Private cloud allows more control over this, while public cloud offers broader access but less customization.
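The data-locality point above is easy to quantify: moving a training dataset over a remote link can take hours, while reading it over a local fabric takes minutes. The dataset size and link speeds below are hypothetical, chosen only to make the arithmetic concrete.

```python
# Rough illustration of why data locality matters for training:
# time to move a dataset over a WAN link vs. a local fabric.
# All sizes and link speeds are hypothetical.

def transfer_hours(dataset_tb: float, link_gbps: float) -> float:
    """Hours to move dataset_tb terabytes over a link_gbps gigabit/s link
    (decimal units: 1 TB = 8000 gigabits), ignoring protocol overhead."""
    return dataset_tb * 8000 / link_gbps / 3600

# 50 TB over a 10 Gbit/s WAN link vs. a 200 Gbit/s local fabric:
print(round(transfer_hours(50, 10), 1))   # ~11.1 hours
print(round(transfer_hours(50, 200), 1))  # ~0.6 hours
```

At that gap, retraining against remote data dominates the schedule, which is why compute is placed next to the data rather than the other way around.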

The Tech Behind Europe's Space Missions | Canonical x ESA

“Open source software is… the glue for everything that everyone does, from sending an email through to managing critical operations, not just space operations.” The European Space Agency (ESA) runs missions ranging from investigating Earth’s forests, to exploring Jupiter’s moons, to deflecting incoming asteroids.

Automating Linux Disk Expansion with Resolve: Add & Extend VM Disks in Minutes!

Running into disk space issues on your Linux servers or virtual machines? In this step-by-step demo, we show how Resolve’s automation platform can automatically add and expand disk space on Linux systems, eliminating manual processes, reducing human error, and improving operational efficiency. Whether you're a system admin, IT operations engineer, or automation specialist, this demo highlights how to streamline critical disk management tasks that normally require elevated access and technical knowledge.
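The demo covers Resolve's platform; as a hedged illustration of the underlying steps, the sketch below composes the standard LVM commands such an automation typically wraps when a new virtual disk is attached. The device and volume names (/dev/sdb, vg0, lv_data) are hypothetical, and this is not Resolve's API.

```python
# Sketch of the standard LVM steps behind "add and extend a VM disk".
# Device/volume names are hypothetical examples; an ext4 filesystem
# is assumed (xfs would use xfs_growfs instead of resize2fs).

def lvm_expand_commands(new_disk: str, vg: str, lv: str) -> list:
    """Compose the shell commands to fold a new disk into an existing
    logical volume and grow the filesystem online."""
    lv_path = f"/dev/{vg}/{lv}"
    return [
        f"pvcreate {new_disk}",              # register the disk as a physical volume
        f"vgextend {vg} {new_disk}",         # add it to the volume group
        f"lvextend -l +100%FREE {lv_path}",  # give the LV all the new space
        f"resize2fs {lv_path}",              # grow the ext4 filesystem online
    ]

for cmd in lvm_expand_commands("/dev/sdb", "vg0", "lv_data"):
    print(cmd)
```

An automation platform adds value around exactly these steps: discovering the new device, validating free space, and running the sequence with the elevated privileges it requires.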

What is Linux Support?

In the world of enterprise IT, “support” can mean many things. For some, it’s a safety net – insurance for the day something breaks. For others, it’s the difference between a minor hiccup and a full-scale outage. At Canonical, it means a single, comprehensive subscription that takes care of all of this, so that everything you build works the way you want it to, for all the people who love to use it.

Deploying secure AI: Canonical + SpectroCloud for federal missions

As mission requirements evolve, federal agencies and defense teams need infrastructure that supports AI/ML workloads anywhere, from secure cloud environments to disconnected edge locations. In this fireside chat, Mark Lewis (VP, Application Services at Canonical) and William Crum (Senior Defense Success Engineer at SpectroCloud) discuss how their organizations are helping federal customers deploy secure, scalable, and consistent Kubernetes and AI infrastructure across hybrid and edge environments.