
Latest Posts

From MLOps to LLMOps: The evolution of automation for AI-powered applications

Machine learning operations (MLOps) has become the backbone of efficient artificial intelligence (AI) development. Blending ML with development and operations best practices, MLOps streamlines deploying ML models via continuous testing, updating, and monitoring. But as ML and AI use cases continue to expand, so does the need for specialized tools and best practices suited to complex AI apps, such as those built on large language models (LLMs).

What is an IDE?

An IDE (integrated development environment) is software that combines all the functions needed for development in one place. Without an IDE, developers would need to use both a text editor to enter code and a separate compiler to make the program understandable to the computer. An IDE combines these features into one tool, making development more efficient.

Splitting and parallelizing Android UI tests with Espresso and CircleCI

For Android developers, test automation on CI/CD platforms such as CircleCI has become an indispensable part of the development workflow. But merely implementing automated testing is no longer enough to remain competitive and keep developing at speed. Developers must also continuously monitor, maintain, and improve their test automation. As an application grows in complexity, the scale of development grows, and so does the number of automated tests.

What is iteration?

In Agile development, where work is repeated in short cycles, the basic unit of the development cycle is called an iteration. An iteration, consisting of Design, Development, Testing, and Improvement, is usually set at 1 to 4 weeks and is characterized by completing a full cycle of system development. After the first cycle is completed and released (Iteration 1), the process repeats with Iteration 2, Iteration 3, and so on.

Build and test LLM applications with AIConfig and CircleCI

The power of LLMs to solve real-world problems is undeniable, but in some cases it unfortunately remains only theoretical. What’s stopping us from getting the most out of OpenAI’s text completion capabilities in production apps? One common problem is the inability to confidently guard against bad outputs in production the way we’re used to doing with non-AI test suites. Let’s go one step deeper: there is no equivalent of code coverage for an LLM.
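
To make that gap concrete, here is a minimal sketch of a conventional assertion-style test pointed at an LLM output. The model name, prompt, and assertions are illustrative assumptions, not taken from the post, and this is not the AIConfig approach the article covers; it only shows why plain string assertions fall short for nondeterministic model output.

# A deliberately naive, conventional-style test for an LLM output.
# Model, prompt, and thresholds are illustrative assumptions only.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def test_summary_mentions_product():
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user",
                   "content": "Summarize: CircleCI is a CI/CD platform."}],
    )
    output = response.choices[0].message.content

    # String-matching assertions like these are brittle for LLMs:
    # the same prompt can yield differently worded (but still valid)
    # answers, and a single passing run says nothing akin to "coverage".
    assert "CircleCI" in output
    assert len(output) < 500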