The latest news and information on cloud monitoring, security, and related technologies.
Redis is an in-memory data store. It’s predominantly a key/value store, so it lacks many of the features found in relational databases. It can be used as a simple database, a cache, or a pub/sub system. Because it keeps everything in memory, it is very fast, but it also requires a lot of memory. Amazon Web Services, Microsoft Azure, and Google Cloud Platform all provide their own managed Redis services, and the available versions and features vary from provider to provider. Let’s take a closer look.
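To make the key/value model concrete, here is a toy in-process stand-in that mimics Redis-style SET/GET with an optional time-to-live, the pattern behind using Redis as a cache. This is illustrative only: the class name `TinyKV` is made up, and real code would use a Redis client (such as redis-py) against an actual server.

```python
import time

class TinyKV:
    """Toy stand-in for Redis's key/value + TTL model (illustrative only)."""

    def __init__(self):
        self._data = {}      # key -> value
        self._expires = {}   # key -> absolute expiry time, if a TTL was set

    def set(self, key, value, ttl=None):
        """Store a value, optionally expiring after `ttl` seconds."""
        self._data[key] = value
        if ttl is not None:
            self._expires[key] = time.monotonic() + ttl

    def get(self, key):
        """Return the value, or None if missing or expired (cache-miss)."""
        exp = self._expires.get(key)
        if exp is not None and time.monotonic() >= exp:
            self._data.pop(key, None)
            self._expires.pop(key, None)
            return None
        return self._data.get(key)
```

On a cache miss (`get` returns `None`), an application would fall back to the slower source of truth and re-`set` the result, which is the essence of the caching use case mentioned above.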
AWS Lambda has become the most widely used deployment pattern for serverless applications. It allows developers to set aside worrying about server provisioning, maintenance, idle capacity management, and scaling, and instead focus solely on writing business logic. But that’s not entirely true: while Lambda is a fully managed AWS service, it still requires careful design to get the best performance out of the compute capacity it provides.
Amazon recently announced the rollout of AWS Savings Plans, a new way to reduce your cloud compute costs. Savings Plans let you achieve the discounts associated with the popular Reserved Instances (up to 72% off on-demand pricing) without the headache of managing them. Under the new plans, you commit to a particular hourly spend of your choosing over either a one-year or three-year fixed term.
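The arithmetic behind that commitment is straightforward. The sketch below uses a hypothetical on-demand rate (the $0.10/hour figure is made up for illustration); only the 72% maximum discount comes from AWS’s stated pricing above.

```python
# Hypothetical comparison of on-demand spend vs. a best-case Savings Plan.
on_demand_rate = 0.10        # $/hour -- hypothetical on-demand price
max_discount = 0.72          # "up to 72% off" per AWS's stated maximum
hours_per_year = 24 * 365    # 8,760 hours

on_demand_annual = on_demand_rate * hours_per_year
best_case_annual = on_demand_annual * (1 - max_discount)

print(f"On-demand: ${on_demand_annual:,.2f}/yr")
print(f"Best case: ${best_case_annual:,.2f}/yr")
```

In practice the discount depends on the term length, payment option, and plan type, so real savings land somewhere below that best case.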
Serverless is a quickly maturing technology, and if you’ve followed its evolution over the past several years, you’ve likely seen a host of great (and not-so-great) documentation of its technology and practices. Now that serverless is mainstream enough for reputable outfits to report on, it has become easier to separate serverless fact from fiction in the media.
At Skeddly, we’re focused on bringing you the best in AWS tutorials, AWS scheduler services, and AWS backup services. However, from time to time we like to reach out to other leaders in the AWS space to help you, our blog readers, stay on top of the latest developments and news in the AWS ecosystem.
In my previous post on new approaches to managing hybrid cloud environments, I discussed the issues that commonly arise for IT operations teams. While hybrid cloud gives IT great flexibility to design infrastructure that’s uniquely suited to diverse business and user requirements, it also brings more complexity. Hybrid and multi-cloud businesses generate significantly more IT event data, and it’s coming from many more places now.
In the second of our new series of posts, Yan Cui highlights the key insights from the Amazon Builders’ Library article “Using load shedding to avoid overload,” by AWS Principal Engineer (AWS Lambda) David Yanacek.
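The core idea of load shedding can be sketched in a few lines: once the number of in-flight requests reaches a fixed limit, reject new work immediately rather than queueing it, so the requests you do accept keep predictable latency. This is a minimal sketch of the general technique, not code from the article; the names `LoadShedder` and `max_in_flight` are illustrative.

```python
class LoadShedder:
    """Admit a request only while the in-flight count is below the limit."""

    def __init__(self, max_in_flight):
        self.max_in_flight = max_in_flight
        self.in_flight = 0

    def try_begin(self):
        """Return True if the request is admitted; False means shed it
        (the caller should respond immediately, e.g. with a 503)."""
        if self.in_flight >= self.max_in_flight:
            return False
        self.in_flight += 1
        return True

    def end(self):
        """Call when an admitted request finishes, freeing a slot."""
        self.in_flight -= 1
```

Rejecting fast is the point: a shed request costs almost nothing, while an over-admitted one degrades latency for every request already in flight.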
The popularity of serverless infrastructure like AWS Lambda is on the rise, which is easy to understand given its promise of a lower price tag and less maintenance. However, as companies lift and shift apps into Lambda, many are discovering that it’s not that simple. As with any shift, such as moving from on-prem to the cloud, the reality is that applications need to be designed a certain way for you to reap the cost and efficiency benefits.