
Adobe meets huge requirements with a lean approach: 200B requests per day, 6 data centres, 5 people

How do you support a diverse infrastructure that spans six data centres, three continents and the public cloud? In this talk, given at the Ubuntu Masters Conference, Joe Sandoval, SRE Manager at Adobe Ad Platform, explains how Adobe uses open-source technologies, including Ubuntu, Kubernetes and OpenStack, to craft a feature-rich platform that developers can build on to best serve their customers.

Lessons in Building Well-Formed Scrum and Kanban Teams

In the early days of Amazon, Jeff Bezos set a rule: teams shouldn’t be larger than what two pizzas can feed, no matter how large the company gets. This rule of small teams meant individuals spent less time providing status updates to each other and more time actually getting stuff done. It also freed team members to focus on continuous improvement. PagerDuty, like Amazon, has a strong culture of continuous improvement.

Control the phase transition timings in ILM using the origination date

As part of Elasticsearch 7.5.0, we introduced a couple of ways to control the index age math that index lifecycle management (ILM) uses for phase timing calculations, via the origination_date index lifecycle settings. This means you can now tell Elasticsearch how old your data actually is, which is pretty handy if you’re indexing data that’s older than the day it was indexed.
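For example, the origination date can be set explicitly through the index settings API. A minimal sketch, assuming a hypothetical index named my-old-index and an epoch-millisecond timestamp for when the data actually originated (ILM then computes phase ages from this date rather than the index creation date):

```json
PUT my-old-index/_settings
{
  "index.lifecycle.origination_date": 1546300800000
}
```

Alternatively, the related index.lifecycle.parse_origination_date setting tells Elasticsearch to parse the origination date out of the index name itself.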

Kafka Data Pipelines for Machine Learning Enterprise Applications

Traditional enterprise application platforms are usually built with Java Enterprise technologies, and OpsRamp is no exception. In the machine learning (ML) world, however, Python is the most commonly used language, with Java rarely used. To develop ML components within enterprise platforms, such as the AIOps capabilities in OpsRamp, we have to run ML components as Python microservices that communicate with Java microservices in the platform.
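A common way to bridge the two runtimes is to pass language-neutral messages (e.g. JSON) over Kafka topics. A minimal sketch of the Python side, under assumed names (an anomaly-score message shape and an "anomaly-scores" topic are illustrative, not OpsRamp's actual schema); the encoded bytes are what a Kafka producer would send and what a Java consumer would deserialize with a library like Jackson:

```python
import json
from datetime import datetime, timezone

def encode_score_message(resource_id: str, score: float) -> bytes:
    """Serialize an ML result as language-neutral JSON bytes, suitable
    as the value of a Kafka record consumed by Java services."""
    payload = {
        "resourceId": resource_id,
        "anomalyScore": score,
        "producedAt": datetime.now(timezone.utc).isoformat(),
    }
    return json.dumps(payload).encode("utf-8")

def decode_score_message(raw: bytes) -> dict:
    """Inverse operation, as either a Python or a Java consumer would apply."""
    return json.loads(raw.decode("utf-8"))

# A kafka-python producer would publish the encoded bytes, e.g.:
#   producer.send("anomaly-scores", encode_score_message("host-42", 0.97))
msg = encode_score_message("host-42", 0.97)
print(decode_score_message(msg)["anomalyScore"])
```

Keeping the wire format to plain JSON (or a schema-managed format such as Avro) is what lets the Python ML services and the Java platform services evolve independently.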

Can You Tell Debug Data and BI Data Apart?

A few blog posts ago I wrote about new BI for digital companies, and in that post I suggested that quite a bit of that BI is based on log data. I wanted to follow up on the topic of logs: why they exist, and why they contain so much data that is relevant to BI. As I said in that post, logs are an artifact of software development and are not premeditated; developers generate them almost exclusively to debug pre-production code. So how is it that logs are so valuable for BI?