
AI Tags: Why Cloud Tagging Breaks Down For AI Workloads (And What To Use Instead)

Tags have long been the backbone of cloud cost visibility and governance. They help teams understand who owns what, where spend comes from, and how infrastructure maps back to the value the business delivers. However, AI workloads have altered that model, and exposed the limitations of traditional tagging in the process. In fact, many of the most expensive AI operations don’t run on taggable cloud resources at all.

How To Calculate Customer Retention Cost in 2026: The Hidden SaaS Metric

You may have heard that keeping an existing customer is five times cheaper than acquiring a new one. But that isn’t always true. “Hidden costs” often accompany customer retention, loyalty, and increasing “share of customer”. Could you be spending more on customer retention than on winning new customers? This quick guide will walk you through the meaning of Customer Retention Cost (CRC), why it’s important to calculate it, and how to calculate it.
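The core calculation the guide walks through can be sketched in a few lines. This is a minimal illustration of the commonly used CRC formula (total retention spend divided by retained customers over the same period); the spend categories and figures below are hypothetical examples, not prescriptions from the guide.

```python
def customer_retention_cost(total_retention_spend: float, retained_customers: int) -> float:
    """Customer Retention Cost (CRC): everything spent keeping existing
    customers (success teams, loyalty programs, renewal marketing, etc.)
    divided by the number of customers retained in the same period."""
    if retained_customers <= 0:
        raise ValueError("retained_customers must be positive")
    return total_retention_spend / retained_customers

# Hypothetical example: $120,000 of annual retention spend, 400 retained customers
crc = customer_retention_cost(120_000, 400)
print(crc)  # 300.0 per retained customer per year
```

Comparing that per-customer figure against your customer acquisition cost is what reveals whether retention is actually cheaper than acquisition for your business.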

Track OpenAI Spend: Explain Where Your OpenAI Budget Goes

The inevitable happened. A while back, Gartner projected that in 2026, 30–50% of all new SaaS product features would use LLM inference. That meant OpenAI-style costs would become a standard part of SaaS COGS. Today, OpenAI has become one of the most operationally significant line items for SaaS companies. But for many teams, this creates an uncomfortable gap. Engineering sees OpenAI as a fast path to innovation.

Oracle Cloud Pricing: A Comprehensive Guide To Oracle Cloud Costs

In 2025, Oracle shocked the market. Its cloud growth was so aggressive that Oracle’s stock surged, briefly making founder Larry Ellison the world’s richest person. That didn’t happen by accident. Oracle closed fiscal 2025 with $57.4 billion in revenue, mainly driven by cloud services. Oracle Cloud Infrastructure (OCI) grew roughly 50% year over year, driven by enterprise databases, AI workloads, and network-intensive applications migrating from more expensive platforms.

Webinar Recap: What It Really Takes To Make AI Profitable

Right now, 48% of organizations say they’re being asked to measure or report on AI-related costs. The problem is that they’re still figuring out how to do it. That was a very telling stat from a recent CloudZero webinar on AI and profitability, and it speaks loudly to the reality that many organizations are still struggling to get a grasp on AI spend, which our data shows has been rising sharply as a share of total spend in recent months.

Stateful Vs. Stateless Applications: What's The Difference (And Why It Matters)

Think of a stateful application like a conversation with a barista who remembers your order every time you walk in. They know what you had yesterday, how you like it prepared, and what you’ll probably want next. That memory makes the experience smoother, but it also means that if that barista isn’t around, your experience can break down entirely. A stateless application, on the other hand, is similar to ordering from a self-service kiosk.

Gemini Cost Per API Call in 2026: What You'll Actually Pay (And How to Control It)

On paper, Gemini pricing looks straightforward. You pay per token. Input tokens cost one amount, output tokens cost another, and different models come with different rates. But once Gemini is wired into a production SaaS product, that simplicity disappears. Fast. That’s because token usage compounds across context, retrieval, and output — not across requests. The same “API call” can cost pennies in one feature and dollars in another.
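That compounding effect is easy to see with a back-of-the-envelope model: conversation context and retrieved documents are billed as input tokens on every call that carries them. The sketch below uses placeholder per-token rates (not actual Gemini prices) and hypothetical token counts purely to illustrate the dynamic.

```python
# Illustrative per-call cost model for a token-priced LLM API.
# Rates are placeholders (USD per token), NOT real Gemini pricing.
INPUT_RATE = 0.30 / 1_000_000   # e.g. $0.30 per 1M input tokens
OUTPUT_RATE = 2.50 / 1_000_000  # e.g. $2.50 per 1M output tokens

def call_cost(prompt_tokens: int, context_tokens: int,
              retrieval_tokens: int, output_tokens: int) -> float:
    # Conversation history and retrieved documents are billed as input,
    # so they compound the cost of every request that includes them.
    input_tokens = prompt_tokens + context_tokens + retrieval_tokens
    return input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE

# The same user-visible "API call", two very different bills:
lean = call_cost(prompt_tokens=200, context_tokens=0,
                 retrieval_tokens=0, output_tokens=300)
heavy = call_cost(prompt_tokens=200, context_tokens=30_000,
                  retrieval_tokens=15_000, output_tokens=2_000)
print(f"lean: ${lean:.5f}  heavy: ${heavy:.5f}")
```

Under these assumed rates, the lean call costs a fraction of a cent while the context-heavy call costs roughly 20x more, even though both look like a single request from the product’s point of view.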

From Trough to Traction: 10 Real-World Lessons in Cloud and AI Efficiency

When CloudZero CTO Erik Peterson joined the SourceForge podcast in January 2026, he didn’t just talk about cloud costs. He reframed them as a launchpad for innovation, survival, and competitive advantage. Whether he was describing the “trough of lost innovation,” the “freemium tax,” or why efficiency is the next frontier of engineering culture, Erik’s expert insights go beyond FinOps hygiene.

AI Anomaly Detection: Catch AI Cost Surprises Before They Kill Margins

Consider this: traditional cloud cost monitoring was like checking your fuel gauge once a month — after the trip was already over. That model worked when infrastructure scaled slowly. You provisioned resources predictably and paid for stable, linear usage. AI breaks that model. Today, AI costs behave like a high-performance engine with a hypersensitive throttle. A small input, like a prompt change or a single power user, can dramatically increase your fuel burn in seconds.

AI In 2026: Autonomous, Invisible, Expensive

With all we’ve seen from AI in the last several years, it can be easy to forget that it’s still in its very early days. As torrid as its evolution has been thus far, it will only intensify. As SVP of Engineering at a B2B SaaS company, I’ve had a front-row seat for much of this evolution. Here are three ways I see AI heading in 2026.