
Latest News

Four Challenges for ML Data Pipelines

Data pipelines are the backbone of machine learning projects. They collect, store, and process the data used to train and deploy machine learning models. Without a data pipeline, managing the large volumes of data these projects require would be very difficult.
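The collect, store, and process stages described above can be sketched as a minimal pipeline. This is an illustrative toy, not code from any project covered here; all function names and data are made up, and a real pipeline would use a framework such as Airflow or Kubeflow Pipelines.

```python
# Minimal ML data pipeline sketch: collect -> store -> process.
# All names and data are illustrative.
import csv
import io

def collect():
    """Simulate pulling raw records from a source system."""
    return [
        {"id": "1", "value": "3.5"},
        {"id": "2", "value": "bad"},   # malformed record
        {"id": "3", "value": "7.0"},
    ]

def store(records):
    """Persist raw records (here: an in-memory CSV 'file')."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=["id", "value"])
    writer.writeheader()
    writer.writerows(records)
    buf.seek(0)
    return buf

def process(stored):
    """Clean stored data into training-ready numeric features."""
    features = []
    for row in csv.DictReader(stored):
        try:
            features.append(float(row["value"]))
        except ValueError:
            continue  # drop rows that fail validation
    return features

features = process(store(collect()))
print(features)  # [3.5, 7.0]
```

Each stage only consumes the previous stage's output, which is what lets real pipelines scale, retry, and monitor the steps independently.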

Adaptive AI in 2023: Components, Use Cases, Ethics & Potential

AI is no longer optional for most businesses — and, on its own, it is no longer a differentiator. In fact, researchers found that over 95% of companies have AI initiatives underway. To get ahead of the competition, leaders are turning to adaptive artificial intelligence (AI), the next generation of AI systems: software that can adjust its own behavior in response to real-world changes, even changes its developers did not anticipate when they wrote the code.

Charmed Kubeflow 1.7 is now available

Canonical, the publisher of Ubuntu, announced today the general availability of Charmed Kubeflow 1.7. Charmed Kubeflow is an open-source, end-to-end MLOps platform that can run on any cloud, including hybrid cloud or multi-cloud scenarios. This latest release offers the ability to run serverless machine learning workloads and perform model serving, regardless of the framework that professionals use.

Charmed Kubeflow 1.7 Beta is here. Try it now!

Canonical is happy to announce that Charmed Kubeflow 1.7 is now available in Beta. Kubeflow is a foundational part of the MLOps ecosystem that has been evolving over the years. With Charmed Kubeflow 1.7, users benefit from the ability to run serverless workloads and perform model inference regardless of the machine learning framework they use.

Sponsored Post

Machine-Learning Automation: Processing, Storing, & Analyzing Data in the Digital Age

The world of software is growing more complex and, at the same time, changing faster than ever before. The simple monolithic applications of the recent past are being replaced by horizontal, cloud-native applications. Unsurprisingly, such applications are more complex and can fail in far more ways — and in ever-new ways. They also generate far more data to keep track of. The pressure to move fast means software release cycles have shrunk drastically from months to hours, with constant change being the new normal.

TensorFlow Inference of Visual Images, Orchestrated by Cloudify, with Intel Optimizations

The following blog was written together with Petar Torre, Solutions Architect at Intel. It describes how Cloudify automates the deployment and monitoring of machine learning systems by orchestrating an Intel-optimized TensorFlow workload that runs inference with a pre-trained ResNet-50 model from the Intel Model Zoo. In a nutshell, a container running a Jupyter Notebook with the Intel-optimized TensorFlow model is scheduled as a Kubernetes pod on K3s on AWS EC2.

Unlocking the Potential of Machine Learning on the Cloud

Nowadays, when most people think about the term “machine learning,” they think of advanced, refined applications such as ChatGPT, the deep-learning-based chatbot and text generator, or AlphaGo, the computer program that famously defeated the world’s top players of the board game Go.

What is the difference between unsupervised and supervised learning in machine learning?

Machine learning affects nearly every aspect of our daily lives. To understand how this technology works and how you can use it, you need to know the difference between unsupervised and supervised machine learning. The following points cover the essentials of each approach.
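The distinction can be shown with a toy, pure-Python example (all data and helper names here are illustrative): supervised learning fits a rule from labeled examples, while unsupervised learning discovers structure in unlabeled data.

```python
# Supervised vs. unsupervised learning on toy 1-D data.

# Supervised: labeled (feature, label) pairs -> learn a decision rule.
labeled = [(1.0, "small"), (2.0, "small"), (8.0, "large"), (9.0, "large")]

def fit_threshold(examples):
    """Learn the midpoint threshold between the two labeled groups."""
    small = [x for x, y in examples if y == "small"]
    large = [x for x, y in examples if y == "large"]
    return (max(small) + min(large)) / 2

def predict(threshold, x):
    return "small" if x < threshold else "large"

t = fit_threshold(labeled)        # 5.0
print(predict(t, 3.0))            # "small"

# Unsupervised: unlabeled points -> discover two clusters (1-D k-means).
unlabeled = [1.0, 1.5, 2.0, 8.0, 8.5, 9.0]

def kmeans_1d(points, iters=10):
    """Two-cluster k-means on a list of floats."""
    c1, c2 = min(points), max(points)          # initialize centroids
    for _ in range(iters):
        g1 = [p for p in points if abs(p - c1) <= abs(p - c2)]
        g2 = [p for p in points if abs(p - c1) > abs(p - c2)]
        c1, c2 = sum(g1) / len(g1), sum(g2) / len(g2)
    return c1, c2

print(kmeans_1d(unlabeled))       # centroids (1.5, 8.5)
```

The supervised half needs the labels to learn anything; the unsupervised half never sees a label and still recovers the two groups, which is exactly the trade-off the article's headline asks about.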