In the previous blog in this series, we delved into the redesigned architecture of Amazon Prime Video and how the team combined different architectural styles for better performance and cost efficiency. We also discussed what Amazon’s decision means for the “serverless-first” mindset, highlighting the importance of weighing alternative architectural approaches against specific use cases and requirements.
This post gives an overview of how to build applications using the updated Docker + WASM technical preview, along with some observability best practices.
In this post, we will compare two of Amazon Web Services’ (AWS) most popular compute services: AWS Lambda and Amazon EC2. Each offers distinct advantages and suits different workloads.
Lambda allows you to allocate memory for your functions in 1 MB increments, from a minimum of 128 MB to a maximum of 10,240 MB (10 GB). When you specify the memory size for a Lambda function, AWS allocates CPU power proportionally. For example, a 256 MB function receives twice the processing power of a 128 MB function.
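To make this concrete, here is a minimal sketch of adjusting a function’s memory allocation with boto3. The function name is a hypothetical placeholder, and the snippet assumes AWS credentials are already configured; raising the memory setting also raises the CPU share proportionally.

```python
import boto3

# Assumes AWS credentials are configured and a function named
# "my-function" already exists (hypothetical name for illustration).
lambda_client = boto3.client("lambda")

# Raise the memory allocation from the 128 MB default to 256 MB.
# CPU is allocated proportionally, so this roughly doubles the compute power.
response = lambda_client.update_function_configuration(
    FunctionName="my-function",
    MemorySize=256,  # any value from 128 to 10,240, in 1 MB increments
)

print(response["MemorySize"])
```

Because cost is billed per GB-second, a higher memory setting can sometimes pay for itself if the extra CPU shortens execution time, so it is worth benchmarking a few sizes rather than defaulting to the minimum.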
This July, the community spirit was profoundly vibrant in the scenic city of Munich, as Kubernetes Community Day (KCD) Munich brought minds together and inspired the open-source collaboration we all know and love. The event was a testament to the strength and vitality of the Kubernetes community, which pulsed with an energy of shared intellectual curiosity and passion for all things Kubernetes.
AJ Stuyvenberg is a Staff Engineer at Datadog and an AWS Serverless Hero. A version of this post was originally published on his blog. In AWS Lambda, a cold start occurs when a function is invoked and no idle, pre-initialized sandbox is available to serve the request. Features like Provisioned Concurrency and SnapStart are designed to reduce cold starts by pre-initializing execution environments.
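As a quick illustration, here is a minimal sketch of enabling Provisioned Concurrency with boto3. The function name and alias are hypothetical placeholders; note that Provisioned Concurrency must target a published version or alias rather than $LATEST.

```python
import boto3

# Minimal sketch: keep a pool of pre-initialized sandboxes warm
# for the "live" alias of "my-function" (hypothetical names).
lambda_client = boto3.client("lambda")

lambda_client.put_provisioned_concurrency_config(
    FunctionName="my-function",
    Qualifier="live",                    # a version or alias, not $LATEST
    ProvisionedConcurrentExecutions=5,   # number of sandboxes to keep initialized
)
```

Requests that land on one of these pre-initialized environments skip the cold-start initialization phase entirely; traffic beyond the provisioned pool still goes through the normal on-demand lifecycle.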
This is the second blog in our deep dive series on serverless architectures. In the first installment, we explored the benefits and trade-offs of microservices and serverless architectures, highlighting the case of Amazon Prime Video's architectural redesign for cost optimization.
At Lumigo, we see ourselves as your reliable ally in the noble mission of detecting and vanquishing troublesome issues that lurk within your serverless and container applications. Our secret sauce? Equipping you with a wealth of detailed trace data, ensuring you’re always well-lit and ready for battle when the nefarious ‘bugs’ make their unsolicited appearances.
Serverless computing, often delivered as Functions as a Service (FaaS), has taken the world of cloud computing by storm. By abstracting away the underlying infrastructure layer, it has transformed the way developers approach and design their applications. But what makes it a powerful paradigm shift?
Health checks are essential when working with containerized applications in the cloud, and they often serve as the source of truth for an application’s running status. In the context of Amazon Elastic Container Service (ECS), a health check is a periodic probe that assesses whether a container is functioning correctly. In this blog, we will explore how Lumigo, a troubleshooting platform built for microservices, can help provide insights into container crashes and failed health checks.
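For reference, here is a minimal sketch of registering an ECS task definition that includes a container-level health check, using boto3. The family name, image, and health endpoint are hypothetical placeholders for illustration.

```python
import boto3

# Minimal sketch of an ECS task definition with a container health check.
# "demo-api", the image, and the /health endpoint are placeholders.
ecs = boto3.client("ecs")

ecs.register_task_definition(
    family="demo-api",
    containerDefinitions=[
        {
            "name": "api",
            "image": "my-registry/demo-api:latest",
            "memory": 512,
            "essential": True,
            # ECS runs this command inside the container on a schedule;
            # a non-zero exit code counts as a failed probe.
            "healthCheck": {
                "command": ["CMD-SHELL", "curl -f http://localhost:8080/health || exit 1"],
                "interval": 30,    # seconds between probes
                "timeout": 5,      # seconds before a probe counts as failed
                "retries": 3,      # consecutive failures before UNHEALTHY
                "startPeriod": 60, # grace period after the container starts
            },
        }
    ],
)
```

When the probe fails the configured number of times, ECS marks the container UNHEALTHY and, for essential containers, stops the task so the service scheduler can replace it; correlating those stop events with trace data is where a tool like Lumigo helps.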