Update: Before you continue reading, learn from my experience and be prepared!
I've just finished migrating my static Jekyll website from my classic LEMP (Linux, Nginx, MySQL and PHP) stack to a serverless stack using AWS (Amazon Web Services). In this blog post I'll describe my experience with it, and I might later write another post where I go into detail on how you can do the same.
Serverless is not really serverless: there are still servers, you're just not managing them. With serverless you're basically picking the services you need and connecting them together.
Amazon is one of the bigger players in this area with Amazon Web Services, but Google with Google Cloud and Microsoft with Azure also have a lot of nice things to offer and are worth looking at. I've only tried AWS myself, so I'll focus on it for now.
As with all things, there are advantages and disadvantages, and it's up to you to determine which is the right tool (stack?) for the job.
Latency could be an issue. In my case it stayed the same and even improved globally thanks to Amazon's globally-distributed network, but it's something that depends on your use case. Basically, latency might increase because you no longer have an idle system ready to handle your requests, but rather a service that may have to spawn your language runtime and load your code from somewhere.
With containers or a plain old LAMP stack, you can just move from one host to another with minimal effort; with provisioning software it's only a matter of running some scripts. With serverless you're somewhat tied to the vendor, AWS in my case, and it's hard to move away if you're using a lot of their services.
While all the current big vendors offer similar services, they might need different configuration, handle events differently, and your code might end up very vendor-specific. I, for example, needed cache headers for my assets and had to write a Lambda function that is triggered each time a file is added to S3 and stores the headers on my asset files as metadata. It's a very specific solution that I'd have to port over to Google Cloud, for example, or come up with a completely different solution there because they handle files in a different way.
While I don't think these vendors will just disappear or suddenly stop offering these services, it's still something to keep in mind. If Amazon went bankrupt tomorrow, you'd have a lot of work porting all your applications over. A more likely scenario is that they suspend your account for some reason and you have to take your business elsewhere.
I had bumped into the Serverless Framework and experimented with it prior to migrating. I had a lot of fun playing around with it, but wasn't thinking about migrating my static Jekyll website at the time due to some limitations in AWS.
I looked at the [Serverless Framework examples](https://github.com/serverless/examples) to get an idea of the things I had to do to get it working.
I had spent a lot of time optimizing my website and would not migrate unless all of the optimizations would be possible on the new platform.
This is what I thought it would look like:
What I ended up with:
The above, but also the following:
One limitation was that there was no way to add custom headers, at least not until Amazon released Lambda@Edge, which they did very recently.
Lambda@Edge lets you run Lambda functions at AWS Regions and Amazon CloudFront edge locations in response to CloudFront events.
Amazon CloudFront has a few triggers for requests and responses that allow me to run a Lambda@Edge function and modify that request/response.
For example, a Lambda@Edge function triggered by a CloudFront response event can modify that response, for instance to add an extra header.
This can't be done with regular Lambda functions because they live in a single region, while CloudFront delivers from several regions (based on the user's location). What happens with Lambda@Edge is that the function gets replicated to several regions, and when CloudFront delivers content from region X, the copy of the function replicated to region X is called. You can read more about it in the Lambda@Edge docs.
Cache headers were a different story, as I could only get the filename in the CloudFront request event, not in the response, and I wanted to set the cache headers based on the file extension.
I ended up making a Lambda function that is triggered when a file is created in the S3 bucket. It does some basic regex checks on the file extension (I already had those available from my nginx config files) and saves a `Cache-Control: max-age=3600` header in the file's metadata. Amazon adds that metadata as headers when the file is requested.
While writing this post, I've realized I could have done it in the Lambda@Edge function after all, since the Content-Type header is present in the response. Oh well, maybe I'll change it later!
Automation is very important. I already had Ansible roles to set up my VPS on Digital Ocean, and I would not have done this migration if I had not found out about the Serverless Framework.
It did take me a while to configure, as it was my first time using the framework and I had to dig through CloudFormation quite a lot to do what I needed. Luckily, both AWS and the Serverless Framework have a fair amount of docs, and you'll find everything there, eventually.
A couple of side notes: there are three steps I have to do manually on the initial deploy:
I've had a lot of fun, but also frustrating moments. It's nice to play around with new stuff and I'm certain I'll go serverless more in the future, except... not for static websites.
Some things I had to do felt rather hacky, such as creating an empty S3 bucket just to redirect from www to non-www. The Lambda@Edge function to set headers I could live with; it has a lot of potential, and this was a good use case for me.
I'd certainly use it again if I had to make an app from the ground up that would benefit from being serverless.