
Latest Posts

Get in front of delivery risks by managing work in progress

Sleuth’s product team is pleased to announce an exciting new feature that provides early, actionable visibility into emerging work-in-progress risk! It extends Sleuth's deploy-centric tracking upstream in the developer workflow, giving customers real-time visibility into in-flight work and the risks it accumulates. Here's how it works.

Announcing issue-initiated Change Lead Time

Sleuth is pleased to announce a new option to start your Change Lead Time clock based on state transitions in your issue tracker! In our ongoing effort to meet customers where they are, we heard from many of you that you’d like Sleuth to account for and provide visibility into your pre-commit coding time. With this new option, you can tell Sleuth exactly which state transitions in your issue tracker should start your Change Lead Time clock.
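
For a concrete sense of what moving the clock does, here is a minimal sketch of the arithmetic for a single change. The timestamps and names below are hypothetical illustrations, not Sleuth's API:

```python
from datetime import datetime

# Hypothetical timestamps for one change (illustration only, not Sleuth's API).
in_progress_at  = datetime(2023, 3, 1, 9, 0)   # issue moved to "In Progress"
first_commit_at = datetime(2023, 3, 1, 14, 0)  # first commit on the branch
deployed_at     = datetime(2023, 3, 3, 11, 0)  # change reaches production

# Commit-initiated clock: pre-commit coding time is invisible.
commit_lead_time = deployed_at - first_commit_at

# Issue-initiated clock: the same change, now including coding time.
issue_lead_time = deployed_at - in_progress_at

print(f"Commit-initiated: {commit_lead_time}")  # 1 day, 21:00:00
print(f"Issue-initiated:  {issue_lead_time}")   # 2 days, 2:00:00
```

The difference (five hours here) is exactly the pre-commit coding time that the issue-initiated clock makes visible.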

Measuring Developer Productivity: Can, How, and Should You Do It?

Productivity is a big topic. We all want to be more productive, and software developers in particular get put under the microscope. Interestingly, developer work is also particularly difficult to measure; it's hard to even pin down what “productive” means for software development. But we need to do it, because we want developers to be more productive (and happier), and because we want to achieve business goals together.

Improving Software Failure: Measure, Change, Learn

How do you treat software development failure? Do you take time to measure it and learn from it, or do you only fix it quickly after your customers complain? Failure can be an opportunity to learn and get better. So how can you measure and learn from software failure, and turn it into at least a partially positive experience? Failure happens all the time, but if you're not measuring it, how do you know what you’re missing?

The DORA metrics backstory

DORA metrics are becoming the industry standard for measuring engineering efficiency, but where did they come from? We talk a lot about DORA metrics here at Sleuth: what they are and how to measure them. But we haven’t shared much context about them: their history and why we use them. This article provides that backstory.

Mean Time to Recovery (MTTR) explained

It's Friday afternoon, and you have mail. Apparently, a user received a 500 error when attempting to sign in. She contacted Customer Service. They didn't know what to do, so they forwarded the email to your engineering team. A close look at the email thread reveals that Customer Service received it... on Tuesday. And they sat on it until today. Hopefully, it was just this one user. You open your browser, navigate to the web application, and attempt to sign in. You also get a 500 error.
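
As the title suggests, the post goes on to define the metric itself. As a rough sketch of the arithmetic (the incidents below are invented for illustration), MTTR averages the time from when an incident begins to when service is restored:

```python
from datetime import datetime, timedelta

# Invented incidents: (began, resolved). Note that in the story above,
# the clock arguably starts on Tuesday, when the failure was first reported.
incidents = [
    (datetime(2023, 5, 2, 10, 0), datetime(2023, 5, 2, 13, 0)),  # 3h to recover
    (datetime(2023, 5, 9, 8, 0),  datetime(2023, 5, 9, 9, 30)),  # 1.5h to recover
]

total_recovery = sum((resolved - began for began, resolved in incidents), timedelta())
mttr = total_recovery / len(incidents)

print(f"MTTR: {mttr}")  # 2:15:00
```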

Change Failure Rate explained

This post is the third in a series of deeper-dive articles on the DORA metrics. In previous articles, we looked at Deployment Frequency and Change Lead Time. The third metric we’ll examine, Change Failure Rate, is a lagging indicator that helps teams and organizations understand the quality of the software they’ve shipped, providing guidance on what the team can do to improve in the future.
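
As a rough illustration of the arithmetic behind the metric (a sketch with made-up deploy records, not Sleuth's implementation), Change Failure Rate is commonly computed as the share of deployments that caused a failure in production:

```python
# Made-up deploy records for illustration; not Sleuth data or Sleuth's API.
deploys = [
    {"id": "d1", "caused_failure": False},
    {"id": "d2", "caused_failure": True},   # e.g. triggered a rollback
    {"id": "d3", "caused_failure": False},
    {"id": "d4", "caused_failure": False},
    {"id": "d5", "caused_failure": True},   # e.g. paged the on-call
]

failed = sum(d["caused_failure"] for d in deploys)
change_failure_rate = failed / len(deploys)

print(f"Change Failure Rate: {change_failure_rate:.0%}")  # 40%
```

A lower rate over time suggests healthier review, testing, and deploy practices; a rising one is a signal to dig in.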