Serverless Framework: Warming up AWS Lambda to avoid “cold start”

Michele Riso · Published in ITNEXT · Feb 18, 2020

Goals

In my previous tutorials, we started learning about the Serverless Framework.

Today we are going to learn:

  • What the AWS Lambda “cold start” issue is
  • How to avoid the “cold start” issue using a plugin made available for the Serverless Framework

What’s the “cold start” issue?

Typical cold start issue

To understand the “cold start” issue, we need to take a closer look at how AWS Lambda functions work behind the scenes. AWS Lambda lets us deploy and run code, providing us with a highly scalable architecture and removing the burden of provisioning a server.

Although that is true from the developer’s point of view, AWS still needs to provision a server on the fly once the Lambda function is invoked. In particular, it provisions a container in which the code is executed.

While running in the container, the function is considered to be active (a.k.a. hot). Once it has been inactive for a certain period, the container is terminated and the function is considered to be cold.

Once cold, the function can experience a delay in its execution time of up to 5–10s, hence the “cold start” issue.

“In a nutshell, a cold start is a latency experienced when you trigger a function”

Hang on, why does it take so long?

When a Lambda function is triggered for the “first time”, there is a whole request lifecycle that AWS has to perform. AWS, in fact, needs some time to download your code, start a new container, deploy the code, bootstrap the runtime and eventually run it. All those operations delay the overall execution time.

The responsibility for optimizing the request lifecycle is shared almost 50:50 between AWS and the developer.

AWS Lambda lifecycle

From its side, AWS constantly tries to optimize its part of the request lifecycle by reducing the container’s spin-up time.

From the developer side, the lambda execution time depends on a number of factors:

  • Language/runtime used: each language has a different bootstrap time. For instance, Java can take up to 300ms whereas Node.js takes much less
  • Code size and code implementation: e.g. the number of node modules used in a Node.js project
  • The amount of memory dedicated to the function (MB)

Cold start issue for Java and NodeJS

In order to optimize it, we can act on some of the factors that cause the delay (e.g. optimizing the code, reducing the bundle size, increasing the memory assigned, or using a different language).
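
For instance, this is where the runtime and the memory allocation live in serverless.yml (a minimal sketch reusing the app.server handler shown later in this tutorial; the values are only illustrative):

provider:
  name: aws
  runtime: nodejs12.x   # the runtime choice affects the bootstrap time
functions:
  app:
    handler: app.server # handler taken from the example further down
    memorySize: 1024    # in MB; more memory also means more CPU, which shortens the bootstrap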

How to reduce the “cold start” issue?

Although code optimization might help to reduce the bootstrap time, it won’t remove the time AWS needs to spin up a new container when the function is cold.

On the contrary, if the function is already warm, it is available straight away with no noticeable delay.

We, therefore, need to find a way to almost always have warm lambda functions.

One way to keep Lambda functions warm is to invoke them on a regular schedule.

In order to achieve this, we can configure a CloudWatch event that invokes a new Lambda function on a regular schedule (e.g. every 5 minutes). This Lambda then triggers async calls to any Lambdas we need to warm up.

Cold start reduction pattern
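
Under the hood, this pattern could be wired up by hand in serverless.yml along these lines (just a sketch; warmer.handler is a hypothetical function that asynchronously invokes the target Lambdas through the AWS SDK):

functions:
  warmer:
    handler: warmer.handler        # hypothetical handler that fires async invocations at the target Lambdas
    events:
      - schedule: rate(5 minutes)  # CloudWatch event rule running on a regular schedule

The plugin described in the next section generates an equivalent warmer function and schedule for us.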

How to warm up lambda functions using Serverless Framework

Luckily for us, there is a handy plugin, called WarmUp, available for the Serverless Framework that implements the above pattern with only one line of configuration.

Let’s start by installing the plugin in our project directory

$ npm install serverless-plugin-warmup --save-dev

and adding the plugin in the plugins section of the serverless.yml file

plugins:
  - serverless-plugin-warmup

After that, we need to enable the plugin on any function we want to be warmed up, using the “warmup: true” parameter

functions:
  app:
    warmup: true
    handler: app.server
    events:
      - http:
          path: /
          method: ANY
          cors: true
      - http:
          path: /{proxy+}
          method: ANY
          cors: true
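
The plugin’s behaviour (for instance the warm-up schedule) can also be tweaked through a custom section in serverless.yml. The exact keys depend on the plugin version, so treat the following as an illustrative sketch and double-check the plugin’s README:

custom:
  warmup:
    enabled: true                  # warm up every function by default
    events:
      - schedule: rate(5 minutes)  # how often the generated warmer runs
    prewarm: true                  # invoke the warmer right after each deployment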

Deploy!

Let’s deploy it again with the already well-known command sls deploy. This time Serverless has deployed one more function, named after our Lambda plus a “warmup-plugin” prefix.

AWS Lambda console

Using the AWS console, we can see that the WarmUp plugin configured a CloudWatch event, running every 5 minutes, that triggers this new Lambda function.

Provisioned Concurrency

At AWS re:Invent in December 2019, AWS introduced the concept of “provisioned concurrency”, which allows the developer to remove the cold start issue by specifying the number of Lambda workers that should always be warm.

The configuration is straightforward from both the AWS console and the Serverless Framework.

In the first case, the developer opens the function in the Lambda console, scrolls all the way down and sets the desired provisioned concurrency.

In the second case, we need to add the “provisionedConcurrency: 5” parameter, where 5 is the desired number of concurrent warm instances, to the function configuration in the serverless.yml file

app:
  handler: app.server
  provisionedConcurrency: 5
  ...

Conclusion

In this tutorial, we’ve learnt

  • what the “cold start” issue is for AWS Lambda functions
  • how we can optimize the function bootstrap time
  • how we can warm functions up with a scheduled CloudWatch event
  • how we can simplify the above using the WarmUp plugin for the Serverless Framework

Stay tuned for other tutorials about Serverless Framework!

Here is the link to the Bitbucket repo
