A while ago, we covered the invocation (trigger) methods supported by Lambda and the integrations available with the AWS catalog. Now we’re launching a series of articles to correlate these integration possibilities with common serverless architectural patterns (covered by this literature review). In Part I, we will cover the Orchestration & Aggregation category. Subscribe to our newsletter and stay tuned for the next parts of the series.
The pay-per-use economics of serverless is slowly but consistently pushing computing into the commodity space on Wardley Maps. Unsurprisingly (to me at least), this movement is not driven solely by cloud vendors, but also by tech giants such as Twilio and Atlassian as they re-package FaaS for the needs of their customers and charge them on a pay-per-use basis. The latest prominent move in this direction came from Salesforce last week, when they introduced serverless functions on their platform.
This was originally posted on The New Stack. Once upon a time, log management was relatively straightforward. The volume, types, and structures of logs were simple and manageable. However, over the past few years, all of this simplicity has gone out the window. Thanks to the shift toward cloud native technologies—such as loosely coupled services, microservices architectures, and technologies like containers and Kubernetes—the log management strategies of the past no longer suffice.
From the Cloud First policy established in 2010 to last year’s Cloud Smart update, it’s clear the government is driving federal agencies toward cloud computing. The strategy makes sense, but for migrating agencies the decision is less clear-cut: there are several options to choose from, depending on each agency’s individual needs.
At Lumigo, we recently ran into some issues with a service we built on top of our Node.js AWS Lambda handler. These issues were the result of Lambda execution leaks from within our serverless code. In this article, I’ll explain what Node.js Lambda execution leaks are and how to avoid them.
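The linked article details the specific leaks Lumigo hit. As a generic sketch of one common variety (not necessarily the one from the article), module-scope state in a Node.js handler survives across warm invocations of the same container; the handler names and event shape below are hypothetical:

```javascript
// Sketch of a common Lambda execution leak: variables declared at
// module scope persist for the lifetime of the container, not the
// lifetime of a single invocation.

let cache = {}; // shared across all warm invocations of this container

// Leaky: every invocation adds an entry and nothing ever evicts it,
// so memory grows until Lambda recycles the container.
const leakyHandler = async (event) => {
  cache[event.id] = event.payload;
  return Object.keys(cache).length;
};

// Safer: keep per-request state local to the invocation, so it is
// garbage-collected when the handler returns.
const safeHandler = async (event) => {
  const local = { [event.id]: event.payload };
  return Object.keys(local).length;
};
```

Calling `leakyHandler` twice in the same warm container returns 2, because the first invocation's entry is still in `cache`; `safeHandler` always returns 1, since its state dies with the invocation.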
The development of container-based microservice architectures is being accelerated in the cloud, as leading cloud service platforms are delivering targeted solutions for these workloads. One such solution is Azure Kubernetes Service (AKS), which offers the most popular container orchestration platform, Kubernetes, in a managed-service model.
The FinOps journey’s third phase, “Operate”, is the last step in the FinOps cycle. But it is by no means the end. The first phase of the FinOps journey, Inform, is about gaining visibility into your cloud operations and creating accountability. Next, the Optimize phase focuses on discovering ways to optimize cloud services and resources, and creating frameworks designed to make spend more efficient.
It’s a day for celebration! Our migration is complete, and our applications are now running in the cloud environment best suited to their needs. The rest of our application inventory, the ones not cut out for the cloud, remain on-premises where they belong. Actually…we’re not done yet. We still have some work to do to make sure our hybrid environment runs smoothly and delivers the business value we expect. Fortunately, we aren’t the first ones to travel this path.