
Episode 15: Optimizing AI Workloads: Balancing Cost, Performance, and Scalability with Bijit Ghosh

In this episode, Andrew Hillier and Bijit Ghosh discuss the evolving landscape of AI, covering the growing prominence of inference over training, hybrid cloud strategies, the balance between cost and performance, and the orchestration of complex hardware environments. The conversation also touches on emerging concepts like AI factories, the challenges of sovereign cloud, and how enterprises are navigating data gravity and regulatory constraints. It's a deep dive into optimizing AI infrastructure, managing costs, and the disruptive changes transforming both technology and business outcomes.

Kubex Named a 2026 Leader by GigaOm

Industry analyst recognition means something different from an award. GigaOm does not hand out trophies. They evaluate products against a defined capability framework and tell the market where vendors actually stand. By that measure, Kubex has been named a Leader in two of GigaOm’s 2026 Radar Reports: Kubernetes Resource Management and Cloud Resource Optimization. In the Kubernetes report, we are positioned as an Outperformer. In Cloud Resource Optimization, a Fast Mover.

AI Factories Will Be Won on Efficiency: Why the Kubex + Rafay Partnership Matters

The early era of AI was defined by experimentation: standing up isolated environments and finding the first practical use cases. Today, the conversation is different. Enterprises are no longer asking whether AI matters. They are asking how to scale it sustainably, securely, and economically. That shift is giving rise to the AI factory: a repeatable, governed, production-ready environment where data scientists, platform teams, and application teams can build, train, deploy, and operate AI at scale.

Kubernetes GPU Resource Optimization: Top 10 Solutions in 2026

TL;DR: Most Kubernetes clusters waste GPU compute through over-provisioned pod requests and suboptimal node selection. This guide covers 10 tools that fix this across four layers: resource lifecycle (Kubex, ScaleOps, Cast.ai), hardware partitioning (GPU Operator, MIG, time-slicing), inference serving (Triton, KServe), and observability (DCGM Exporter, NFD). For most teams, the biggest gains are at the resource lifecycle layer: no model changes required.

Agentic AI at Scale: Building the Kubex Agentic AI Platform

In the modern cloud infrastructure landscape, we don’t have a data problem; we have an actionable interpretation gap. Engineering teams are often drowning in metrics that describe a crisis without providing a clear path to remediation. Traditional FinOps, SRE, and DevOps work has become a reactive loop of dashboard-watching and manual firefighting.

GPU Fragmentation Is Killing AI Economics

By 2026, the GPU shortage isn't a supply-chain hiccup anymore. It's baked into the system. Even after pouring billions into CapEx, most enterprises still need roughly 40% more GPU capacity than they have. And it's not because they're chasing moonshots. Technology companies are training foundation models while serving inference for millions of users on the same clusters. AI labs are juggling fine-tuning, evaluation, and real-time experimentation side by side.
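To see why fragmentation, not raw capacity, is often the binding constraint, here is a minimal sketch with assumed toy numbers (three 8-GPU nodes and a made-up job mix, not figures from the article): after first-fit scheduling, nine GPUs sit idle cluster-wide, yet no single node can host a job that needs more than three.

```python
# Toy model of GPU fragmentation: free capacity exists in aggregate,
# but it is stranded in slivers too small for the next large job.
nodes = {"node-a": 8, "node-b": 8, "node-c": 8}  # GPUs per node (assumed)
jobs = [5, 5, 5]  # GPUs requested per job, in arrival order (assumed)

free = dict(nodes)
for job in jobs:
    # First-fit: place each job on the first node with enough free GPUs.
    target = next((n for n, f in free.items() if f >= job), None)
    if target is not None:
        free[target] -= job

total_free = sum(free.values())      # GPUs idle across the cluster
largest_slot = max(free.values())    # biggest job any one node can still take
print(f"free GPUs: {total_free}, largest schedulable request: {largest_slot}")
```

Running this prints `free GPUs: 9, largest schedulable request: 3`: a 4-GPU job is unschedulable even though more than a node's worth of capacity is idle, which is exactly the gap that partitioning (MIG, time-slicing) and smarter resource lifecycle management aim to close.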