
Data governance frameworks for distributed microservices applications

Implementing robust data governance in microservices architectures presents unique challenges and opportunities. As organizations decompose monolithic applications into distributed services, traditional centralized data management approaches no longer suffice. Each microservice may manage its own data store, creating potential inconsistencies, compliance risks, and security challenges.

Microservices versus monoliths

Monolithic and microservices architectures represent two fundamentally different approaches to software design. By understanding the benefits and drawbacks of each architectural style, developers can make informed decisions about which approach best fits their application needs. While monolithic architecture bundles all application functionality into a single deployable unit, microservices architecture breaks the application into smaller, independently deployable services.

Measuring success in microservices migration projects

Microservices migrations represent significant investments for organizations seeking greater agility, scalability, and development velocity. Yet without clear metrics to guide the journey and measure outcomes, these initiatives risk delivering technical change without meaningful business impact. Establishing appropriate success measures ensures that migration efforts stay aligned with organizational goals while providing visibility into progress and value delivery.

Strangler pattern implementation for safe microservices transition

Moving from monolithic applications to microservices represents a significant architectural transformation. The Strangler Pattern offers a controlled, incremental approach to this migration, enabling organizations to gradually replace functionality while keeping systems operational throughout the transition. This methodology substantially reduces risk compared to complete rewrites, making it an invaluable strategy for organizations with business-critical applications.

Find and fix CI build errors with AI

Software teams rely on CI/CD pipelines to build, test, and deploy code quickly. But when a build fails, it can disrupt the entire workflow. Digging through logs, chasing down errors, and switching between dashboards takes time you don’t want to waste. In this tutorial, you’ll learn how to use your AI coding assistant — powered by structured data from your CI system — to diagnose and fix build failures faster.

The value of product thinking for platform teams | webinar

Platform engineering can drive velocity, reduce risk, and increase value — but only if it's built with a product mindset. In this live event, Rob Zuber, CTO of CircleCI, hosts a panel of experts to explore how treating developers as customers helps platform teams deliver better outcomes. Featuring Camille Fournier, Randy Shoup, Raju Gandhi, and Teresa Torres, this webinar covers practical strategies for building internal platforms that earn trust, abstract complexity, and fuel developer productivity.

Build a scalable internal developer portal with Backstage and CircleCI

Internal developer portals (IDPs) have become essential tools in platform engineering, helping standardize developer workflows and reduce friction by providing self-service access to tools, APIs, and infrastructure. During my time on a platform team, I experienced firsthand the transformative power of IDPs. Our team implemented custom solutions that significantly reduced load on developers, allowing them to focus on writing code rather than navigating complex infrastructure.

Preventing harmful LLM output with automated moderation

Large Language Models (LLMs) can produce impressive text responses, but they’re not immune to generating harmful or disallowed content. If you’re developing an LLM-powered application, you need a reliable way to detect and block risky outputs. Disallowed content – hate speech, explicit descriptions, harmful instructions – can damage your product’s reputation, endanger user safety, and potentially violate legal or platform guidelines.
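The detect-and-block flow described above can be sketched in a few lines. This is a minimal illustration only: the blocklist terms are hypothetical placeholders, and a production system would call a dedicated moderation model or API rather than matching phrases.

```python
# Minimal output-moderation sketch. BLOCKED_TERMS is an illustrative
# placeholder; a real system would use a trained moderation classifier.
BLOCKED_TERMS = {"make a weapon", "credit card dump"}

def moderate(text: str) -> tuple[bool, str]:
    """Return (allowed, response), withholding output that matches a blocked phrase."""
    lowered = text.lower()
    for term in BLOCKED_TERMS:
        if term in lowered:
            return False, "Response withheld: content policy violation."
    return True, text

allowed, response = moderate("Here is a recipe for banana bread.")
```

The same gate can run in CI against a suite of known-risky prompts, failing the build if any disallowed output slips through.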

Automating vulnerability scanning for Gradle dependencies with CircleCI

Detecting dependency vulnerabilities in a Gradle-based project is crucial because it prevents applications from relying on libraries (dependencies) with security holes. Imagine an application as a house. Each dependency, or library used in the project, is like a building material (such as wood, glass, or bricks). If any material is flawed or easily penetrated, the house becomes unsafe: more vulnerable to break-ins, or liable to collapse in an earthquake.

CI/CD preprocessing pipelines in LLM applications

In Large Language Model (LLM) applications, the quality of the training data is paramount in determining final model performance. One of the most important steps in preparing datasets is cleaning and transforming raw data into a consistent, usable format. However, this process can be tedious and time-consuming when done manually. Automating these data cleaning workflows is essential to improve efficiency and maintain consistency across multiple datasets.
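A cleaning step of the kind described, normalizing raw records into a consistent shape before they enter a training set, might look like this minimal sketch. The rules here (stripping control characters, collapsing whitespace, dropping empty records) are illustrative assumptions, not the article's actual pipeline.

```python
import re

def clean_record(raw: str) -> str:
    """Strip control characters and collapse whitespace in one raw text record."""
    text = re.sub(r"[\x00-\x1f]", " ", raw)   # replace control characters with spaces
    return re.sub(r"\s+", " ", text).strip()  # collapse runs of whitespace

def clean_dataset(records: list[str]) -> list[str]:
    """Clean every record and drop any that end up empty."""
    cleaned = [clean_record(r) for r in records]
    return [r for r in cleaned if r]
```

Running such functions as a pipeline step on every commit is what keeps multiple datasets consistent without manual effort.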

Creating and testing a RAG-powered AI app with Gemini and CircleCI

Have you ever asked an AI model a question and received an outdated or completely off-base response? I’ve been there too. The problem is that most AI models rely solely on their pre-trained knowledge, which becomes obsolete over time. This is where RAG can help: RAG is a hybrid AI technique that combines the advantages of retrieval systems and generative models. It bridges the gap by bringing in real-time information from external knowledge sources to improve the generation quality.
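The retrieve-then-generate loop at the heart of RAG can be illustrated with stand-ins: a naive keyword-overlap retriever replaces the embedding search, and a string template replaces the Gemini call. Everything here is a toy assumption meant only to show the shape of the technique.

```python
# Toy RAG sketch: retrieve relevant documents, then fold them into generation.
def retrieve(query: str, documents: list[str], k: int = 2) -> list[str]:
    """Rank documents by naive keyword overlap with the query (stand-in for vector search)."""
    terms = set(query.lower().split())
    scored = sorted(documents,
                    key=lambda d: len(terms & set(d.lower().split())),
                    reverse=True)
    return scored[:k]

def generate(query: str, context: list[str]) -> str:
    """Stand-in for an LLM call: fold the retrieved context into the prompt."""
    return f"Answer to {query!r} using context: {' | '.join(context)}"

docs = ["CircleCI runs CI pipelines",
        "Gemini is a generative model",
        "RAG retrieves documents"]
answer = generate("What does RAG do?", retrieve("What does RAG do?", docs))
```

Swapping the retriever for an embedding index and the template for a real model call is what turns this skeleton into the app the article builds.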

Managing EKS deployments with CircleCI deploys

Development teams managing Kubernetes-based applications face challenges in maintaining visibility and control over their deployment processes. Without a centralized interface, teams struggle to track, monitor, and manage releases across their Kubernetes clusters, leading to potential deployment errors and difficulty maintaining consistent deployment workflows.

7 tips for effective system prompting

Looking to get the most out of AI tools? In this video, we walk through 7 practical tips for writing effective system prompts that lead to more accurate, helpful, and context-aware responses. Whether you're building with LLMs or just refining your workflows, these tips will help you structure your prompts for success. Watch the full walkthrough and start improving your prompting strategy today.

CircleCI MCP server: Natural language CI for AI-driven workflows

The pace of software development has changed. With AI coding assistants now embedded into engineering workflows, developers are building faster, shipping sooner, and writing more code than ever before. But as velocity increases, so does the complexity of keeping that code running. When builds fail, developers need answers fast. They need clarity, context, and actionable feedback right where they’re working.

How to use LLMs to generate test data (and why it matters more than ever)

The way software is written is changing fast. In the past few years, AI coding assistants and large language models (LLMs) have gone from novelty to necessity for many developers. Tools like Cursor, ChatGPT, and custom in-house models are helping teams generate boilerplate, scaffold features, and even build entire apps within minutes. It’s exciting. But it also raises the stakes. When code is written faster, it’s deployed faster.

CircleCI deploys: Enterprise-scale deployment automation with zero downtime

Discover how CircleCI enables enterprises to safely manage thousands of daily deployments at scale. In this short demo, you'll learn how CircleCI Deploys eliminates manual intervention while ensuring production stability. Perfect for DevOps teams looking to automate deployment workflows and implement progressive delivery strategies in enterprise environments.

Benchmarking Kotlin Coroutines performance with CircleCI

A benchmark is a standard of comparison used to assess performance. In everyday life, for example, when we want to buy a new cellphone and want to know which one is faster, we can look at a speed test (benchmark) measuring how quickly each phone opens applications or runs games. From the numbers produced, we can compare which phone performs better.
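The idea of benchmarking by repeated timed runs can be sketched in a few lines. This toy example times Python workloads rather than Kotlin coroutines, purely to illustrate the average-over-runs comparison the article applies.

```python
import timeit

def benchmark(fn, runs: int = 5) -> float:
    """Return the average wall-clock seconds per invocation over `runs` runs."""
    return timeit.timeit(fn, number=runs) / runs

# Two placeholder workloads with a 10x difference in work.
fast = lambda: sum(i * i for i in range(5_000))
slow = lambda: sum(i * i for i in range(50_000))
```

Comparing `benchmark(fast)` against `benchmark(slow)` gives the kind of numbers-based verdict the cellphone analogy describes; in the article, the workloads are coroutine-based implementations run on CircleCI.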