The world runs on data processing. Humans process data constantly – every sound we hear, every picture we see is input for our brains. The same goes for modern applications and algorithms – data is the fuel that allows them to function and provide useful features. While this way of thinking is not new, what has changed in recent years is the requirement for near-real-time processing of the large volumes of events our systems handle.
"The cloud" started as a term used mainly by tech industry insiders but quickly entered everyday use over the past several years. As more computing processes moved into off-site data centers and more organizations turned to cloud storage, the masses began to talk about migrating to "the cloud." Today, the cloud is the accepted term for this type of computing and is used widely in discussions of IT infrastructure, data storage, and certain types of software.
Technology is now everywhere. All enterprises have to step up and be tech-savvy, not merely for a competitive edge but as a prerequisite for joining the digital-first majority. As a result, IT infrastructure has become one of the fundamental building blocks of every enterprise, enabling a modern, 21st-century experience that is “always on”.
The term "open source" was coined by Christine Peterson and proposed to a working group dedicated to promoting open-source software practices in the broader marketplace. The working group valued sharing software to improve how it is used, lower its cost, and prevent vendor lock-in.
Our Analytics & ML lead Andrew Maguire recently had a chance to share our new Anomaly Advisor feature with the wider CNCF community. In his demonstration he did some light chaos engineering (using Gremlin and stress-ng) to generate real anomalies on his infrastructure and watched how it all played out in the Anomaly Advisor in Netdata Cloud. There were also some great questions and discussion from the audience around ML in general and in the observability space.
Server load can tell you a lot about your day-to-day user traffic. A sudden spike in server traffic can indicate an attack, but that’s not always the case. As website and performance monitoring become more mainstream, and you add a wider variety of backend testing and web monitoring checks to your infrastructure, you have to ask the question: is that spike in server traffic a DDoS attack? Or is it me?
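One simple way to separate ordinary traffic fluctuations from a genuine spike is to compare each new measurement against a rolling baseline. Below is a minimal sketch of that idea in Python; the window size, threshold, and sample data are illustrative assumptions, not tied to any particular monitoring product.

```python
from statistics import mean, stdev

def spike_indices(samples, window=10, threshold=3.0):
    """Flag samples that exceed the rolling mean by `threshold` standard deviations."""
    flagged = []
    for i in range(window, len(samples)):
        history = samples[i - window:i]
        mu, sigma = mean(history), stdev(history)
        if sigma > 0 and (samples[i] - mu) / sigma > threshold:
            flagged.append(i)
    return flagged

# Steady traffic around 100 req/s, then a sudden burst at index 15
load = [100, 102, 98, 101, 99, 100, 103, 97, 101, 100, 99, 102, 100, 98, 101, 950]
print(spike_indices(load))  # -> [15]
```

A flagged index only tells you the load is anomalous, not why; correlating it with request logs or geographic source data is what distinguishes a DDoS from, say, a legitimate traffic surge or your own load tests.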
I’ve worked in IT for over 20 years, and specifically in End User Computing (EUC) for the last 10, notably at Citrix and Dell Technologies. I want to share some of the key differences between a Unified Endpoint Management (UEM) platform and a Digital Employee Experience (DEX) platform (such as Nexthink Experience), how they complement one another, and where they overlap.
Inventory management can be a challenge for any organization. With effective inventory control, companies can make significant improvements by avoiding wastage, enhancing traceability, and maintaining compliance. In this blog, we will look at how to control inventory and why it is important. So, let us begin!