This tutorial shows you how to use the Amazon SageMaker Orb to orchestrate model deployment to endpoints across different environments, and how to use the CircleCI platform to monitor and manage promotions and rollbacks. Using an example project repository, it walks you through every step, from training a new model package version to deploying your model across multiple environments.
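For orientation, here is a minimal sketch of what promoting an approved model package version to an endpoint looks like at the API level with boto3. The ARNs, names, and instance type are placeholders, not values from the example project; in the tutorial itself, the SageMaker Orb handles this orchestration for you.

```python
# Illustrative sketch (not the example project's code): promoting an approved
# SageMaker model package version to an endpoint using boto3.
import time

import boto3

sm = boto3.client("sagemaker")

# Placeholder values -- substitute your own ARNs, names, and instance types.
model_package_arn = "arn:aws:sagemaker:us-east-1:123456789012:model-package/abalone/3"
execution_role_arn = "arn:aws:iam::123456789012:role/SageMakerExecutionRole"
model_name = f"abalone-model-{int(time.time())}"
endpoint_config_name = f"abalone-config-{int(time.time())}"
endpoint_name = "abalone-staging-endpoint"

# Register a deployable model backed by the approved model package version.
sm.create_model(
    ModelName=model_name,
    PrimaryContainer={"ModelPackageName": model_package_arn},
    ExecutionRoleArn=execution_role_arn,
)

# Describe how the endpoint should serve the model.
sm.create_endpoint_config(
    EndpointConfigName=endpoint_config_name,
    ProductionVariants=[{
        "VariantName": "AllTraffic",
        "ModelName": model_name,
        "InstanceType": "ml.m5.large",
        "InitialInstanceCount": 1,
    }],
)

# Point the existing endpoint at the new configuration (a promotion);
# switching back to a previous configuration is effectively a rollback.
sm.update_endpoint(EndpointName=endpoint_name, EndpointConfigName=endpoint_config_name)
```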
Microsoft Azure provides a comprehensive set of services that let you host Docker container images in the Azure Container Registry (ACR), deploy to a production-ready Kubernetes cluster with Azure Kubernetes Service (AKS), and more. Using CircleCI, you can automatically deploy updates to your application, making your CI/CD process for managing software safer and more efficient. This article shows you how to automate deployments for a .NET application to Azure Kubernetes Service.
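As a rough illustration of the final rollout step (the article drives this from a CircleCI pipeline; the registry, deployment, and namespace names below are placeholders), updating a running AKS deployment to a new image tag pushed to ACR can look like this with the official Kubernetes Python client:

```python
# Illustrative sketch only: rolling an AKS deployment to a new image tag
# from ACR using the official Kubernetes Python client (pip install kubernetes).
# The registry, deployment, and namespace names are placeholders.
from kubernetes import client, config


def deploy_new_image(tag: str) -> None:
    # Assumes kubeconfig for the AKS cluster is already available
    # (e.g. fetched with `az aks get-credentials` earlier in the pipeline).
    config.load_kube_config()
    apps = client.AppsV1Api()

    # Patch only the container image; Kubernetes performs a rolling update.
    patch = {
        "spec": {
            "template": {
                "spec": {
                    "containers": [
                        {"name": "dotnet-api",
                         "image": f"myregistry.azurecr.io/dotnet-api:{tag}"}
                    ]
                }
            }
        }
    }
    apps.patch_namespaced_deployment(name="dotnet-api", namespace="default", body=patch)


if __name__ == "__main__":
    deploy_new_image("1.0.42")
```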
The cognitive bias known as the streetlight effect describes our desire as humans to look for clues where it’s easiest to search, regardless of whether that’s where the answers are. For decades in the software industry, we’ve focused on testing our applications under the reassuring streetlight of GitOps. It made sense in theory: wait for changes to the codebase made by engineers, then trigger a re-test of your code. If your tests pass, you’re good to go.
With automation and CI/CD practices, the entire AI workflow can be run and monitored efficiently, often by a single expert. Still, running AI/ML on GPU instances has its challenges. This tutorial shows you how to meet those challenges using the control and flexibility of CircleCI runners combined with Scaleway, a powerful cloud ecosystem for building, training, and deploying applications at scale.
In a traditional DevOps implementation, you automate the build, test, release, and deploy process by setting up a CI/CD workflow that runs whenever a change is committed to a code repository. This approach is also useful in MLOps: if you change the machine learning logic in your code, that commit can trigger your workflow. But what about changes that happen outside of your code repository?
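One way to handle those out-of-band changes, such as a new batch of training data arriving, is to trigger the pipeline yourself through the CircleCI API. Here is a minimal sketch, assuming a hypothetical project slug and pipeline parameter:

```python
# Illustrative sketch: kicking off a CircleCI pipeline from outside the repo
# (e.g. when new training data arrives) via the CircleCI API v2.
# The project slug, branch, and parameter name are placeholders.
import os

import requests

PROJECT_SLUG = "gh/my-org/my-ml-project"  # vcs/org/repo
url = f"https://circleci.com/api/v2/project/{PROJECT_SLUG}/pipeline"

response = requests.post(
    url,
    headers={"Circle-Token": os.environ["CIRCLECI_API_TOKEN"]},
    json={
        "branch": "main",
        # Pipeline parameters let the config decide which workflow to run,
        # e.g. a retraining workflow instead of the normal build.
        "parameters": {"retrain-model": True},
    },
)
response.raise_for_status()
print("Triggered pipeline number:", response.json()["number"])
```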
Amazon Web Services (AWS) provides a vast ecosystem of products that make DevOps an absolute dream. Products like AWS Elastic Beanstalk have ready-made services for autoscaling, deployment, and logging (to name a few). However, teams may prefer to take a barebones approach and build incrementally, in which case AWS Elastic Compute Cloud (EC2) would be the preferred option.
In part 1 of this tutorial, we showed you how to build a large language model (LLM) application that uses retrieval-augmented generation (RAG) to query your own documentation and then test it using a CircleCI continuous integration (CI) pipeline.
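As a quick refresher on the pattern (a toy sketch for illustration, not the application code from part 1), RAG boils down to retrieving the documents most relevant to a question and packing them into the prompt sent to the LLM:

```python
# Toy refresher on the RAG pattern, not the code from part 1:
# retrieve the documents most relevant to a question, then build a prompt
# that grounds the LLM's answer in those documents.

DOCS = [
    "CircleCI orbs are shareable packages of configuration.",
    "A workflow defines the order in which jobs run.",
    "Contexts store environment variables shared across projects.",
]


def retrieve(question: str, docs: list[str], k: int = 2) -> list[str]:
    # Naive keyword-overlap scoring stands in for real embedding search.
    q_words = set(question.lower().split())
    scored = sorted(docs, key=lambda d: len(q_words & set(d.lower().split())), reverse=True)
    return scored[:k]


def build_prompt(question: str, docs: list[str]) -> str:
    context = "\n".join(f"- {d}" for d in retrieve(question, docs))
    return (
        "Answer using only the context below.\n"
        f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"
    )


if __name__ == "__main__":
    # The assembled prompt is what you would send to your LLM of choice.
    print(build_prompt("What does a workflow do in CircleCI?", DOCS))
```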