On March 16th, we announced our Customer Care Program, including four no-charge emergency response apps, and we’ve already seen tremendous traction: as of March 25, nearly 1,000 organizations have downloaded them. In addition, our amazing community has generated many new ideas, resources, tools, and stories. Some are best practices, some are specific to customers of the Now Platform®, and some come from ServiceNow partners who are helping with new apps, services, and strategy.
There are 1.3 billion websites out there in the great unknown, and it’s hard not to wonder what makes them different from one another. Why do users flock to one website and ignore another completely? One major differentiator is, of course, content; I’m not going to dwell on what type of content is better. Another reason users stick with one website over another is the user experience. Today we’ll be looking at a third major differentiator: website performance.
Longhorn is cloud-native distributed block storage for Kubernetes that is easy to deploy and upgrade, 100 percent open source and persistent. Longhorn’s built-in incremental snapshot and backup features keep volume data safe, while its intuitive UI makes scheduling backups of persistent volumes easy to manage. Using Longhorn, you get maximum granularity and control, and can easily create a disaster recovery volume in another Kubernetes cluster and fail over to it in the event of an emergency.
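As a rough sketch of what using Longhorn looks like in practice (assuming a default install, which registers a `longhorn` StorageClass, and a hypothetical claim name), a workload requests a Longhorn-backed volume with an ordinary PersistentVolumeClaim:

```yaml
# Hypothetical example: requesting a Longhorn-backed persistent volume.
# Assumes Longhorn is installed and its default "longhorn" StorageClass exists.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-volume            # hypothetical name for illustration
spec:
  accessModes:
    - ReadWriteOnce            # block volumes are typically mounted by one node
  storageClassName: longhorn   # hand provisioning to Longhorn
  resources:
    requests:
      storage: 2Gi
```

From there, snapshot and backup schedules for the volume can be managed through Longhorn’s UI, as described above.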
The global corporate landscape is on the brink of a complete premises lockdown in light of the COVID-19 crisis. Service disruption is inevitable, and enterprises’ business continuity plans are being put to the test. Despite this challenge, it’s heartening to see companies across nations take quick steps to ensure the health and safety of their employees during these trying times.
We’ve been working on something big. We’re building Request Metrics, a new service for web performance monitoring. TrackJS is a fantastic tool to understand web page errors, but what if your pages aren’t broken, just slow? What if the checkout page takes 10 seconds to load? What if that user API is slowing down from your recent database change? What pages have the worst user experience? Request Metrics will tell you that.
A lot of teams are asking us how to do incident management when you’re suddenly remote. We understand. Going remote can be scary, and few things are scarier than having a service outage you aren’t prepared for. Nobody wants to be in a situation where an important service goes down and the engineer who can help isn’t answering on Slack. And if your company isn’t used to working remotely, it can be harder than ever to stay on the same page during an incident.
Monitoring has been around since the dawn of computing. Recently, however, there’s been a revolution in this field. Cloud native monitoring has introduced new challenges to an old task, rendering former solutions unsuitable for the job. When working with cloud native platforms such as Kubernetes, resources are volatile. Services come and go by design, and that’s fine—as long as the system as a whole keeps behaving normally.
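Dynamic service discovery is one way monitoring tools cope with that volatility. As an illustrative sketch (not tied to any particular setup described above), Prometheus can discover its scrape targets from the Kubernetes API instead of a static host list:

```yaml
# Illustrative Prometheus scrape config using Kubernetes service discovery.
# Targets are discovered from the API server, so pods that come and go
# are picked up and dropped automatically -- no static host list required.
scrape_configs:
  - job_name: kubernetes-pods    # hypothetical job name
    kubernetes_sd_configs:
      - role: pod                # discover every pod in the cluster
    relabel_configs:
      # Keep only pods that opt in via the conventional scrape annotation.
      - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
        action: keep
        regex: "true"
```

The design point is that the monitoring system tracks the cluster’s desired state rather than a fixed inventory, which is exactly what volatile, by-design churn requires.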
In today’s era of microservices, containers, and containerized applications, software architecture is more complex than ever. Kubernetes is king here, orchestrating armies of Docker containers across increasingly distributed environments.