
Enforce type safety with TypeScript checks before deployments

TypeScript introduces the benefits of static typing to JavaScript, allowing developers to identify bugs at an earlier stage. However, relying solely on developers to run type checks locally isn’t enough. If tsc is never run, invalid code can slip through unnoticed and make its way to production. This tutorial will show you how to set up CircleCI to automatically run TypeScript type checks on each push.

Integrate CircleCI with Railway for automated deployments

Deploying backend and full-stack applications quickly and reliably is a common concern for development teams. Fortunately, Railway is a developer-friendly platform that lets you deploy apps with minimal configuration. It is also quick, easy to use, and has reasonable defaults. Now, imagine pairing that with CircleCI, one of the strongest continuous integration platforms available.

Validate CDC data in your CI/CD pipeline using CircleCI

Change Data Capture (CDC) is a technique used to identify and capture changes, such as inserts, updates, and deletes, in a source database so they can be replicated to another system in real-time. This approach is crucial in modern data pipelines, especially for powering data lakes, analytics platforms, and event-driven applications that depend on up-to-date information. Setting up a CDC pipeline is only the first step.
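The kinds of events a CDC pipeline produces can be illustrated with a minimal snapshot-diff sketch: compare two snapshots of a table keyed by primary key and emit insert, update, and delete events. This is only an illustration; production CDC tools read the database's transaction log rather than diffing snapshots, and the function and field names below are made up for the example.

```python
# Snapshot-diff sketch of Change Data Capture (illustrative only):
# compare two snapshots of a table, keyed by primary key, and emit
# the insert/update/delete events a CDC pipeline would replicate.

def diff_snapshots(before: dict, after: dict) -> list[dict]:
    """Return insert/update/delete events between two table snapshots."""
    events = []
    for key, row in after.items():
        if key not in before:
            events.append({"op": "insert", "key": key, "row": row})
        elif before[key] != row:
            events.append({"op": "update", "key": key, "row": row})
    for key in before:
        if key not in after:
            events.append({"op": "delete", "key": key})
    return events

before = {1: {"name": "Ada"}, 2: {"name": "Lin"}}
after = {1: {"name": "Ada L."}, 3: {"name": "Grace"}}
events = diff_snapshots(before, after)
```

A downstream consumer (a data lake loader, an event bus) would apply these events in order to keep the replica in sync.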

Fix flaky tests in your sleep with Chunk by CircleCI

A test fails. You rerun it and it passes. You shrug and move on. This is how most teams deal with flaky tests. The “rerun until green” approach works in the moment, and rerunning from failed tests is a useful way to confirm whether a failure is real. But reruns don’t fix the underlying issue. Over time, they burn CI resources and can hide real instability in your code. On the other hand, fixing flaky tests can mean hours of work.
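The "rerun until green" pattern is easy to sketch, and the sketch shows exactly why it masks rather than fixes flakiness: a nondeterministic test can pass on a later attempt while the underlying instability remains. The `flaky_test` function below is hypothetical, standing in for any test with a nondeterministic outcome.

```python
import random

# Sketch of "rerun until green": the hypothetical flaky_test fails
# nondeterministically, and retrying it eventually reports success
# without touching the underlying instability.

def flaky_test(rng: random.Random) -> bool:
    return rng.random() > 0.5   # passes only some of the time

def rerun_until_green(test, rng, max_attempts: int = 5):
    """Rerun a test until it passes; return (passed, attempts_used)."""
    for attempt in range(1, max_attempts + 1):
        if test(rng):
            return True, attempt
    return False, max_attempts

rng = random.Random(1)          # seeded so the flake is reproducible here
passed, attempts = rerun_until_green(flaky_test, rng)
```

With this seed the test fails once and then passes, so CI goes green after two attempts; the extra attempt is exactly the wasted CI time the paragraph describes.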

DORA is right: AI is an amplifier, for better or worse

The 2025 DORA report just surveyed nearly 5,000 technology professionals and delivered a verdict that should reshape how you think about AI investment: AI doesn’t create organizational excellence; it amplifies what already exists. For teams with solid foundations, AI is a force multiplier. For teams with broken processes and dysfunctional systems, AI magnifies the chaos.

Set up a live code editor in Next.js with CircleCI

Interactive playgrounds have changed the way developers learn and experiment with code. Instead of copying and pasting code into a separate Read–Eval–Print Loop (REPL) or local environment, users can write, edit, and run code directly within the tutorial or application interface. Adding this type of editor to a Next.js app makes it more engaging and helps users learn more effectively by removing the need to switch between tools.

What is autonomous validation? The future of CI/CD in the AI era

Over the past decade, CI/CD has redefined how modern software is built and shipped. CircleCI has been a leader in that transformation, working alongside the world’s best engineering teams to build a reliable foundation for continuous delivery at scale. Today, those foundations are under new pressure as AI reshapes every aspect of the delivery cycle. Developers are producing more change with less certainty about what those changes touch.

Implementing image recognition with React and continuous deployment

Integrating artificial intelligence (AI) into web applications can significantly enhance user experience. AI offers features like image recognition to process and analyze user-uploaded images. Combining this with a robust continuous integration and continuous deployment (CI/CD) pipeline using CircleCI ensures seamless updates and reliable delivery. In this article, you will learn how to build a React app that uses TensorFlow.js for client-side image recognition and set up automated testing with CircleCI.

Building LLM agents to validate LangGraph tool use and structured API responses

Transitioning LLM agents from intriguing prototypes to reliable, production-grade solutions introduces a unique and significant challenge: the inherent stochasticity of LLMs. Unlike conventional software, where inputs predictably yield precise outputs, an LLM’s response can exhibit variability even when presented with identical prompts. To ensure the dependability of your LLM agent, you will need a rigorous validation strategy.
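One building block of such a validation strategy is checking that an agent's structured output matches an expected shape before acting on it, regardless of how the LLM phrased the rest of its response. The sketch below is a minimal, framework-free illustration of that idea; the schema and field names (`tool`, `arguments`) are assumptions for the example, not LangGraph's actual wire format.

```python
import json

# Hedged sketch: validate that an agent's structured response parses as
# JSON and carries the required fields with the right types. The schema
# here is illustrative, not from any particular agent framework.

EXPECTED = {"tool": str, "arguments": dict}

def validate_response(raw: str) -> tuple[bool, list[str]]:
    """Parse a JSON response and check required fields and types."""
    try:
        data = json.loads(raw)
    except json.JSONDecodeError as exc:
        return False, [f"invalid JSON: {exc}"]
    errors = []
    for field, expected_type in EXPECTED.items():
        if field not in data:
            errors.append(f"missing field: {field}")
        elif not isinstance(data[field], expected_type):
            errors.append(f"wrong type for {field}")
    return not errors, errors

ok, errs = validate_response('{"tool": "search", "arguments": {"q": "cdc"}}')
bad, bad_errs = validate_response('{"tool": 42}')
```

Because the LLM's output varies run to run, checks like this belong in the pipeline itself, run against many sampled responses rather than a single happy-path example.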

Navigating AI transformation ft. Meg Adams, Senior Director of Engineering at The New York Times

In this episode of The Confident Commit, Rob Zuber sits down with Meg Adams, Senior Director of Engineering at The New York Times, for a deep dive into leading engineering teams through the AI revolution while staying true to organizational mission. Meg shares how the Times approaches AI adoption with a "measured but focused" strategy, emphasizing experimentation and opinion-formation over mandates, and why she believes AI serves as a force multiplier for what already exists in your organization and workflows.

The new AI-driven SDLC

For decades, the software development life cycle (SDLC) has been the framework teams use to understand how software moves from idea to production. It breaks complex work into familiar phases: planning, design, development, testing, deployment, and maintenance. This structure gave organizations a shared way to coordinate teams, track progress, and build with confidence.

Automating Expo app build delivery to QA with CircleCI and EAS webhooks

Manually sharing mobile app builds with Quality Assurance (QA) engineers can be a tedious and error-prone process. Developers often find themselves exporting .apk or .ipa files, uploading them to Google Drive or Dropbox, and then pinging the QA team on Slack to announce the upload, all while juggling deadlines and code reviews. This manual process not only slows down feedback cycles but also leaves room for human error, miscommunication, or outdated builds being tested.
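The automated alternative boils down to turning a build-finished webhook payload into a QA notification. The sketch below assumes payload fields in the spirit of EAS webhooks (`status`, `platform`, `artifacts.buildUrl`); treat those field names as assumptions and verify them against the payloads your project actually receives.

```python
import json

# Hedged sketch: format a QA notification from a build webhook payload.
# Field names (status, platform, artifacts.buildUrl) are assumptions
# modeled on EAS webhook payloads; check them against real payloads.

def format_qa_message(raw: str) -> str:
    event = json.loads(raw)
    if event.get("status") != "finished":
        return f"Build {event.get('id', '?')} did not finish cleanly"
    url = event.get("artifacts", {}).get("buildUrl", "unavailable")
    return f"New {event.get('platform', '?')} build ready for QA: {url}"

sample = json.dumps({
    "id": "abc123",
    "status": "finished",
    "platform": "android",
    "artifacts": {"buildUrl": "https://example.com/build.apk"},
})
message = format_qa_message(sample)
```

A small webhook receiver running this logic and posting the message to a QA channel replaces the export-upload-ping routine entirely.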

Building and deploying a Python MCP server with FastMCP and CircleCI

Extending Large Language Models (LLMs) with custom tools has become increasingly valuable in today’s AI landscape. Model Context Protocol (MCP) servers provide a standardized way to connect external tools and resources to LLMs. This can enhance their capabilities beyond basic text generation. While thousands of pre-built MCP servers exist, creating your own allows you to address specific workflows. You can implement use cases that off-the-shelf solutions cannot handle.
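At its core, an MCP server is a registry of named tools plus a dispatcher that routes incoming calls by name; FastMCP wraps that idea behind decorators and the MCP wire protocol. The dependency-free sketch below shows only the registration-and-dispatch pattern, with made-up names, so it should not be read as FastMCP's actual API.

```python
# Dependency-free sketch of the tool-registration pattern an MCP server
# is built around: functions register under a name with a description,
# and calls are dispatched by name. Names here are illustrative.

class ToolRegistry:
    def __init__(self):
        self._tools = {}

    def tool(self, description: str):
        """Decorator that registers a function as a callable tool."""
        def register(fn):
            self._tools[fn.__name__] = {"fn": fn, "description": description}
            return fn
        return register

    def call(self, name: str, **kwargs):
        if name not in self._tools:
            raise KeyError(f"unknown tool: {name}")
        return self._tools[name]["fn"](**kwargs)

server = ToolRegistry()

@server.tool("Add two numbers")
def add(a: int, b: int) -> int:
    return a + b

result = server.call("add", a=2, b=3)
```

An LLM client sees the registered names and descriptions, decides which tool to invoke, and the server runs the matching function; the protocol layer FastMCP adds is what makes this registry reachable from any MCP-aware model.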

Automated RAG pipeline evaluation and benchmarking with RAGAS

Retrieval-Augmented Generation (RAG) pipelines have become an integral part of how Large Language Models (LLMs) access information beyond their training cutoff. These pipelines enable LLMs to deliver current, accurate, and grounded responses. By fetching relevant external documents, RAG mitigates common LLM challenges like factual inaccuracies and hallucinations. However, this methodology introduces a new complexity: evaluating RAG pipeline performance is particularly challenging.
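To make the shape of such an evaluation concrete, here is a deliberately toy grounding metric: the fraction of answer tokens that also appear in the retrieved context. This is not how RAGAS computes its metrics; its faithfulness and relevancy scores use LLM judges, while this stdlib sketch only illustrates the idea of scoring an answer against its context.

```python
# Toy RAG-evaluation metric (illustrative only, not RAGAS's formula):
# score how much of an answer is grounded in the retrieved context
# by simple token overlap.

def grounding_score(answer: str, context: str) -> float:
    """Fraction of answer tokens that also appear in the context."""
    answer_tokens = set(answer.lower().split())
    context_tokens = set(context.lower().split())
    if not answer_tokens:
        return 0.0
    return len(answer_tokens & context_tokens) / len(answer_tokens)

context = "circleci runs pipelines in the cloud"
grounded = grounding_score("circleci runs pipelines", context)
ungrounded = grounding_score("kubernetes manages clusters", context)
```

A fully grounded answer scores 1.0 and an answer with no support in the context scores 0.0; real metrics add semantics and LLM judgment, but the pipeline plumbing, score each answer against its retrieved context and fail the build below a threshold, is the same.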

7 ways AI agents are transforming software delivery

For most teams, the slowest part of delivery isn’t writing code; it’s everything that happens after: automated tests, manual reviews, bug fixes, final approvals, and the long wait for deployment. The longer these phases run, the more expensive and painful late fixes become. As AI makes it easier to generate code at scale, those bottlenecks only get bigger.

Code coverage standards for a Next.js project using CircleCI and Coveralls

An essential part of software development, testing helps catch bugs and errors early, improves software quality, and ultimately prevents costly issues from being deployed to production. The effectiveness of software testing remains uncertain until it can be measured, and that is where code coverage comes in. Code coverage is a metric that tells developers what portion of their codebase is executed when specific tests are run.
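What a coverage tool actually measures can be shown in a few lines: record which lines of a function execute during a test, then report executed statements over total statements. The sketch below uses Python's `sys.settrace` purely as an illustration; real tools like coverage.py and Coveralls do this per file across an entire suite and handle branches, not just lines.

```python
import sys

# Minimal sketch of what a coverage tool measures: trace which lines of
# a function run during a "test", then report executed / total statements.

def classify(n: int) -> str:
    if n < 0:
        return "negative"        # this branch is never exercised below
    return "non-negative"

executed = set()

def tracer(frame, event, arg):
    if event == "line" and frame.f_code is classify.__code__:
        executed.add(frame.f_lineno)
    return tracer

sys.settrace(tracer)
classify(5)                      # only the non-negative path runs
sys.settrace(None)

TOTAL_STATEMENTS = 3             # if, return "negative", return "non-negative"
coverage_pct = 100 * len(executed) / TOTAL_STATEMENTS
```

Here two of three statements run, so coverage is about 67%; the untested `negative` branch is precisely the kind of gap a coverage report surfaces and a tool like Coveralls can enforce a threshold against.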