
5 UX Best Practices for Resilient & High-Performing Mobile Apps

What keeps users coming back to an app? Speed helps. Stability matters. But more than anything, people return to tools that feel easy to use, even under pressure. When an application responds clearly and behaves as expected, users are more likely to stick around. UX design plays a quiet but powerful role in this. It's not just about how something looks; it's about how it works. The small details in navigation, layout, and screen flow all contribute to whether someone keeps using an app or closes it within seconds.

Bunnyshell Named Startup of the Year 2024 in Palo Alto by HackerNoon

"If AI is writing the code, we make sure it runs," says Alin Dobra, founder of Bunnyshell. We're proud to announce that Bunnyshell has been named Startup of the Year 2024 in Palo Alto by HackerNoon! This recognition reflects the work we've done to build the Software Delivery Platform for a new era, where code is written by AI but validated by real environments.

3 Reasons Why You Should Use Custom Playwright Fixtures

In this video, Stefan Judis, Playwright ambassador, explains the power of Playwright fixtures while running tests in JavaScript or TypeScript. Learn how to streamline your test setup, remove repeated code, and leverage custom fixtures for cleaner and more efficient end-to-end tests. By the end of this video, you'll have a clear understanding of why you should use Playwright's native architecture to structure your testing project.
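Fixtures follow a simple contract: set up a resource, hand the ready object to the test body, then tear it down afterward, so the repeated setup code disappears from individual tests. A dependency-free sketch of that contract is below; the names `todoListFixture` and `testWithTodoList` are illustrative stand-ins, and in real Playwright you would declare fixtures with `test.extend()` from `@playwright/test`:

```typescript
// Dependency-free sketch of the fixture idea behind Playwright's test.extend():
// a fixture performs setup, hands the ready value to the test body, then tears down.
type Fixture<T> = (use: (value: T) => void) => void;

const events: string[] = []; // records lifecycle order, purely for illustration

// Hypothetical "todoList" fixture (in Playwright this might be a logged-in page object).
const todoListFixture: Fixture<string[]> = (use) => {
  events.push("setup");      // e.g. create a browser context, navigate, seed data
  const todos = ["buy milk"];
  use(todos);                // the test body runs here with the ready object
  events.push("teardown");   // cleanup runs even though the test never wrote it
};

// A tiny runner that injects the fixture, mirroring test('…', ({ todoList }) => …)
function testWithTodoList(name: string, body: (todos: string[]) => void) {
  todoListFixture((todos) => {
    body(todos);
    events.push(`test "${name}" done`);
  });
}

testWithTodoList("adds an item", (todos) => {
  todos.push("write tests");
});
// events is now ["setup", 'test "adds an item" done', "teardown"]
```

The payoff is that setup and teardown are declared once and injected into every test that names the fixture, which is exactly what Playwright's native fixture architecture gives you for pages, contexts, and custom objects.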

AI in Action with Kunal Kushwaha: 2 Demo Showcase. See What's Possible!

Join Kunal Kushwaha, Field CTO at Civo, for two demos using relaxAI. In the first demo, we'll show you how to deploy your own Large Language Model (LLM) inference engine using Ollama, giving you full control over your AI model. In the second, we'll demonstrate how to build custom AI integrations with the relaxAI API, making it easy to add AI features to your existing applications. Whether you're an AI developer, part of an MLOps team, or just curious about AI, this video is for you.

Working with GPUs on Kubernetes and making them observable

GPUs are everywhere, powering LLM inference, model training, video processing, and more. Kubernetes is often where these workloads run. But using GPUs in Kubernetes isn't as simple as using CPUs. You need the right setup. You need efficient scheduling. And, most importantly, you need visibility. This post walks through how to run GPU workloads on Kubernetes, how to virtualize them efficiently, and how Coroot helps you monitor everything with zero instrumentation or config.
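To give a sense of the "right setup" step: once a device plugin (such as NVIDIA's) is installed on the nodes, GPUs surface as an extended resource that pods request like any other resource. A minimal sketch, assuming the NVIDIA device plugin is running and a compatible driver is present (the image tag and pod name below are illustrative choices, not requirements):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: gpu-demo            # hypothetical name
spec:
  restartPolicy: Never
  containers:
    - name: cuda
      image: nvidia/cuda:12.4.1-base-ubuntu22.04   # pick a tag matching your driver
      command: ["nvidia-smi"]                      # prints the GPUs visible to the container
      resources:
        limits:
          nvidia.com/gpu: 1   # extended resource advertised by the device plugin
```

The scheduler only places this pod on a node that advertises a free `nvidia.com/gpu`, which is why scheduling and visibility matter so much once GPU demand outgrows supply.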

Hyperparameter tuning for LLMs using CircleCI matrix workflows

Hyperparameter tuning is a critical step in optimizing large language models (LLMs). Parameters such as learning rate, batch size, weight decay, and number of training epochs can significantly affect convergence behavior and final model performance. While approaches like grid search and random search are widely used, executing them manually is inefficient, especially when each training run is compute-intensive.
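The matrix idea can be sketched in a CircleCI config: the `matrix` key fans one parameterized job out into a job per combination of values, so a grid search runs in parallel instead of by hand. The `train.py` script, parameter names, and values below are hypothetical placeholders:

```yaml
version: 2.1

jobs:
  train:
    docker:
      - image: cimg/python:3.11
    parameters:
      learning-rate:
        type: string
      batch-size:
        type: string
    steps:
      - checkout
      - run:
          name: Train one hyperparameter combination
          command: |
            pip install -r requirements.txt
            python train.py --lr << parameters.learning-rate >> \
                            --batch-size << parameters.batch-size >>

workflows:
  tune:
    jobs:
      - train:
          matrix:
            parameters:
              learning-rate: ["1e-5", "3e-5", "5e-5"]
              batch-size: ["8", "16"]
```

This matrix expands to six independent `train` jobs (3 learning rates × 2 batch sizes), each running concurrently on its own executor.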

Announcing Go tracer v2.0.0

Datadog has long supported monitoring instrumented Go applications through our Go tracer v1. As the Go ecosystem has matured, we've been hard at work collecting feedback and improving the tracer's capabilities and usability. We are now thrilled to announce the release of our Go tracer v2.0.0. This major update brings better security and stability, along with a new, simplified API.