Update: Before you continue reading, learn from my experience and be prepared!

I've just finished migrating my static Jekyll website from my classic LEMP (Linux, Nginx, MySQL and PHP) stack to a serverless stack using AWS (Amazon Web Services). In this blog post I'll describe my experience with it, and I might later write another post where I go into detail on how you can do the same.

What is serverless and why use it?

Serverless is not really serverless: there are still servers, you're just not managing them. With serverless you're basically picking the services you need and connecting them together.

Amazon is one of the bigger players in this area with Amazon Web Services, but also Google and its Google Cloud and Microsoft with Azure have a lot of nice things to offer and are worth looking at. I've only tried AWS myself and will only focus on them for now.

As with all things, there are advantages and disadvantages, and it's up to you to determine which is the right tool (stack?) for the job.


  • No more managing servers. Unless you like messing around with config files and keeping servers up to date with security patches, this is a good thing. Serverless also requires some configuration, but it's a lot simpler.
  • It scales by default. No more worries: you're always prepared for that small chance something goes viral and hits your website/app with thousands of users at a time.
  • Pay for what you use. In many cases this is an advantage, but it can also be a disadvantage: sometimes it's cheaper to have a VPS or a dedicated server running. I personally had a VPS running at DigitalOcean that only cost me $5/month, but I don't get much traffic so it was idle most of the time.
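
To make the pay-for-what-you-use trade-off concrete, here's a back-of-the-envelope sketch. The rates below are made-up placeholders, not actual AWS pricing; plug in the current numbers from the vendor's pricing page before drawing conclusions.

```javascript
// Rough cost comparison: fixed VPS vs pay-per-use serverless.
// NOTE: requestRate and transferRate are illustrative placeholders,
// not real AWS prices.
const vpsMonthly = 5.00;            // fixed VPS cost in $/month
const requestRate = 0.0000004;      // assumed $ per request
const transferRate = 0.09;          // assumed $ per GB transferred

function serverlessMonthlyCost(requests, gbOut) {
    return requests * requestRate + gbOut * transferRate;
}

// A quiet month on a small blog: 100k requests, 2 GB out.
console.log(serverlessMonthlyCost(100000, 2).toFixed(2)); // "0.22", far below $5

// A viral month: 10M requests, 200 GB out. Now the flat-rate VPS would win.
console.log(serverlessMonthlyCost(10000000, 200) > vpsMonthly); // true
```

The crossover point depends entirely on your traffic pattern, which is why an idle personal site is such a good fit for pay-per-use.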



Latency

This could be an issue. In my case it stayed the same and even improved globally thanks to Amazon's globally-distributed network, but that also depends on your use case. Basically, latency might increase because you don't have an idle system ready to handle your requests, but rather a service that may have to spawn your language runtime and load your code from somewhere.

Vendor lock-in

With containers or a plain old LAMP stack, you can move from one host to another with minimal effort; with provisioning software it's only a matter of running some scripts. With serverless you're somewhat tied to the vendor, AWS in my case, and it's hard to move away if you're using a lot of their services.

While all the current big vendors offer similar services, they might need different configuration, handle events differently, and your code might be very specific to the vendor. For example, I needed cache headers for my assets and had to write a Lambda function that is triggered each time a file is added to S3 and stores the headers as metadata on my asset files. It's a very specific solution that I'd have to port over to Google Cloud, for example, or think of a completely different solution there because they handle files in a different way.

While I don't think these vendors will just disappear or suddenly stop offering these services, it's still something to keep in mind. If Amazon went bankrupt tomorrow, you'd have a lot of work porting all your applications over. A more likely case is that they suspend your account for some reason; then you'll have to take your business elsewhere.

How I migrated

I bumped into the Serverless Framework and experimented with it prior to migrating. I had a lot of fun playing around with it, but wasn't thinking about migrating my static Jekyll website at the time due to some limitations in AWS.

I looked at the [Serverless Framework examples](https://github.com/serverless/examples) to get an idea of the things I had to do to get it working.


I had spent a lot of time optimizing my website and would not migrate unless all of the optimizations would be possible on the new platform.

This is what I thought it would look like:

  • An S3 bucket set up for website hosting for the assets. It's possible to skip website hosting on the bucket if you don't mind the index.html in the URLs.
  • A CloudFront distribution pointing to the S3 bucket's website endpoint, so I could make use of SSL.
  • A Lambda@Edge function that adds custom headers, most of which are security headers.

What I ended up with:

The above, but also the following:

  • A second S3 bucket that contains nothing and only serves as a redirect from the www domain to the root domain.
  • A second Cloudfront distribution to handle SSL on the second bucket.
  • A Lambda function that adds cache-control metadata to assets so it's served as a header when they are requested. I initially wanted to do this with the Lambda@Edge function, but I'd have had to use the same max-age for all of them; I could not find a way to detect the file type to determine the cache max-age.
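
The two buckets above map onto raw CloudFormation resources in serverless.yml. A rough sketch, assuming the site lives at example.com (bucket names and the domain are placeholders, and the CloudFront distributions and bucket policies are omitted):

```yaml
resources:
  Resources:
    SiteBucket:
      Type: AWS::S3::Bucket
      Properties:
        BucketName: example.com            # placeholder: your root domain
        WebsiteConfiguration:
          IndexDocument: index.html
          ErrorDocument: 404.html
    RedirectBucket:
      Type: AWS::S3::Bucket
      Properties:
        BucketName: www.example.com        # empty bucket, redirect only
        WebsiteConfiguration:
          RedirectAllRequestsTo:
            HostName: example.com
            Protocol: https
```

The `RedirectAllRequestsTo` website configuration is what makes the second bucket a pure www-to-root redirect without holding any objects.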

Custom headers

One limitation was that there was no way to add custom response headers, at least not until Amazon released Lambda@Edge, which they did very recently.

Lambda@Edge lets you run Lambda functions at AWS Regions and Amazon CloudFront edge locations in response to CloudFront events.

Amazon CloudFront has a few triggers for requests and responses that let me run a Lambda@Edge function and modify that request/response.

For example, this Lambda@Edge function is triggered by a CloudFront response event and adds an X-Powered-By header containing "Amazon Web Services" to the response. It's written in JavaScript, but other languages are also supported!

'use strict';

exports.handler = (event, context, callback) => {
    const response = event.Records[0].cf.response;
    const headers = response.headers;

    headers['x-powered-by'] = [
        { key: 'X-Powered-By', value: 'Amazon Web Services' }
    ];

    callback(null, response);
};

This can't be done with regular Lambda functions because they live in a single region, while CloudFront delivers from several regions (based on the user's location). What happens with Lambda@Edge is that the function gets replicated to several regions, and when CloudFront delivers content in region X, the replica in region X is called. You can read more about it in the Lambda@Edge docs.

Cache headers

Cache headers were a different story, as I could only get the filename in the CloudFront request event, not in the response, and I was trying to set the cache headers based on the file extension.

I ended up making a Lambda function that is triggered when a file is created in the S3 bucket. It does some basic regex checks on the file extension (I already had those available from my Nginx config files) and saves a `Cache-Control: max-age=3600` header in the file's metadata. Amazon adds that metadata as headers when the file is requested.
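
Wiring that trigger up in serverless.yml looks roughly like this (the handler path and bucket name are placeholders; as far as I know the framework typically creates the bucket it attaches the event to):

```yaml
functions:
  cacheControl:
    handler: handler.cacheControl      # placeholder module.function
    events:
      - s3:
          bucket: my-site-bucket       # placeholder bucket name
          event: s3:ObjectCreated:*
```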

While writing this post, I've realized I could have done it in the Lambda@Edge function after all, since the Content-Type header is present. Oh well, maybe I'll change it later!

Simplified (and incomplete) example of the event handler in JavaScript:

'use strict';

let aws = require('aws-sdk');
let s3 = new aws.S3({apiVersion: '2006-03-01'});

module.exports.cacheControl = (event) => {
    // Bucket name & file path
    const bucket = event.Records[0].s3.bucket.name;
    const key = decodeURIComponent(event.Records[0].s3.object.key).replace(/\+/g, ' ');

    let cacheDuration = 0;

    // Set image cache for 1 month
    if (key.match(/\.(?:jpg|jpeg|gif|png|ico|cur|gz|svg|svgz|mp4|ogg|ogv|webm|htc)$/)) {
        cacheDuration = 6696000;
    }

    // More checks for other file types (js, css, fonts, ..) here
    // ...

    // No cache needed, do nothing.
    if (cacheDuration <= 0) {
        return;
    }

    let cacheControlHeader = 'max-age=' + cacheDuration;

    // Get the file so we can check the metadata.
    s3.getObject({Bucket: bucket, Key: key}, (err, data) => {
        // Something went wrong or the file already has the cache control metadata, don't do anything.
        if (err || data.CacheControl == cacheControlHeader) {
            return;
        }

        let params = {
            Bucket: bucket,
            Key: key,
            CopySource: encodeURIComponent(bucket + '/' + key),
            ContentType: data.ContentType,
            CacheControl: cacheControlHeader,
            Metadata: {},
            MetadataDirective: 'REPLACE'
        };

        // Overwrite the file with itself to replace the metadata.
        s3.copyObject(params, (err, data) => {
            // Something went wrong, don't do anything.
            if (err) {
                return;
            }

            console.log('Metadata updated successfully!');
        });
    });
};
Based on this one.


Automation is very important. I already had Ansible roles to set up my VPS on DigitalOcean, and I would not have done this migration if I hadn't found out about the Serverless Framework.

It did take me a while to configure, as it was my first time with the framework and I had to dig through CloudFormation quite a lot to do what I needed. Luckily, both AWS and the Serverless Framework have a fair amount of docs and you'll find everything there, eventually.

A couple of side notes: there are three steps I have to do manually on the initial deploy:

  • Adding the CORS configuration to the bucket, which isn't supported yet.
  • For custom headers, I need to set up the Lambda@Edge function (Serverless currently does not support Edge functions due to AWS limitations) and add it as a CloudFront Origin Response trigger.
  • Set up a CNAME/A record (as an alias) for the CloudFront endpoint in the domain's DNS settings. (I may actually be able to automate that, but haven't looked into it yet.)


I've had a lot of fun, but also frustrating moments. It's nice to play around with new stuff and I'm certain I'll go serverless more in the future... just not for static websites.

Some things I had to do felt rather hacky, such as creating an empty S3 bucket just to redirect from www to non-www. The Lambda@Edge function to set headers I could live with; it has a lot of potential and this was a good use case for me.

I'd certainly use it again if I had to make an app from the ground up that would benefit from being serverless.