My previous post has gotten a lot of attention over the last few days, so I figured I'd do a follow-up on it.
First, I want to thank everyone all around the world for the comments, feedback and suggestions! There has been positive as well as negative feedback, but I understand everyone's views and perspectives; it's all valuable information. The post got far more attention than I expected!
Second, some Amazon employees have reached out to me through social media as well as in the comments of my blog post. I really appreciate it and hope this issue can be resolved in some way!
Another thing I wanted to clarify is that this was just a side project: my personal website, the one you're on right now. I wanted to play around with this whole serverless thing I've seen so much about, and my personal website was the ideal candidate. Yes, you caught me, I like to hop on the bandwagon of shiny new tech and try out new things. But I think most of us do, right? I like to learn and try new things; if I didn't, I wouldn't be a developer.
Lastly, this was my mistake; it's not Amazon's fault, nor an issue with serverless. I should have been more careful and thought about the consequences of something going wrong.
Because I was testing and prototyping, I was a bit careless in the setup and didn't pay any attention to unit or integration tests. As many people have pointed out, tests would have reduced the chances of this whole thing turning out the way it did! I'll definitely keep this in mind in the future when working with pay-per-use services and the like.
CloudWatch alarm on Lambda invocations
Another thing that could have saved me is an alarm on the number of Lambda invocations, or on their total duration; this can be set up with CloudWatch. This was suggested by a Reddit user named karlw00t, thanks for that!
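To make the idea concrete, here's a minimal sketch of what such an alarm could look like with boto3. The function name, threshold and SNS topic are my own placeholder assumptions, not anything from the original setup; the parameters go to CloudWatch's `put_metric_alarm` call.

```python
def invocation_alarm_params(function_name, threshold, sns_topic_arn):
    """Build the kwargs for cloudwatch.put_metric_alarm(): fire when the
    function is invoked more than `threshold` times in a 5-minute window."""
    return {
        "AlarmName": f"{function_name}-invocation-spike",
        "Namespace": "AWS/Lambda",
        "MetricName": "Invocations",
        "Dimensions": [{"Name": "FunctionName", "Value": function_name}],
        "Statistic": "Sum",
        "Period": 300,  # seconds per evaluation window
        "EvaluationPeriods": 1,
        "Threshold": threshold,
        "ComparisonOperator": "GreaterThanThreshold",
        "AlarmActions": [sns_topic_arn],  # e.g. an SNS topic that emails you
    }

# Usage (requires AWS credentials; names here are hypothetical):
#   import boto3
#   boto3.client("cloudwatch").put_metric_alarm(
#       **invocation_alarm_params("my-resize-fn", 1000, "arn:aws:sns:...")
#   )
```

The same shape works for an alarm on the `Duration` metric, just swap the metric name and pick a threshold in milliseconds.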
Multiple S3 buckets
Another good piece of advice came from Benjamin Kitt, who suggested using a second bucket: pick up the files from one and save the new ones to another. This prevents the Lambda from re-triggering and possibly ending up in a loop, as it did in my case.
A lesson I learned early on in serverless is never to perform an action that could re-trigger the same function. If I'm processing a file, I always pick up from one bucket and write to another so there is no risk of the write re-triggering the fn. It's an incredibly easy mistake to make and I totally feel for you.
~ Benjamin Kitt
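A rough sketch of that two-bucket pattern, assuming the kind of S3-triggered processing function I had. The bucket names and the uppercase "processing" step are stand-ins for illustration, and the tiny in-memory `FakeS3` is just there so the pattern can be exercised without real AWS credentials; against AWS you'd pass a boto3 S3 client instead.

```python
SOURCE_BUCKET = "uploads"    # S3 event notifications fire only on this bucket
DEST_BUCKET = "processed"    # no trigger configured here, so writes can't loop

def handler(event, s3):
    """Lambda-style handler; `s3` is a boto3-like client, injected for testing."""
    assert SOURCE_BUCKET != DEST_BUCKET, "writing to the trigger bucket risks a loop"
    for record in event["Records"]:
        key = record["s3"]["object"]["key"]
        body = s3.get_object(Bucket=SOURCE_BUCKET, Key=key)["Body"]
        processed = body.upper()  # stand-in for the real processing step
        # The crucial part: write to a bucket that has no trigger attached.
        s3.put_object(Bucket=DEST_BUCKET, Key=key, Body=processed)

class FakeS3:
    """Minimal in-memory stand-in for an S3 client, for local demonstration."""
    def __init__(self, buckets):
        self.buckets = buckets
    def get_object(self, Bucket, Key):
        return {"Body": self.buckets[Bucket][Key]}
    def put_object(self, Bucket, Key, Body):
        self.buckets.setdefault(Bucket, {})[Key] = Body

s3 = FakeS3({"uploads": {"photo.txt": "hello"}})
handler({"Records": [{"s3": {"object": {"key": "photo.txt"}}}]}, s3)
print(s3.buckets["processed"]["photo.txt"])  # -> HELLO
```

The output lands in `processed`, which fires no event, so the function runs exactly once per upload instead of feeding itself.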
Staying with Digital Ocean
As pointed out by many people, paying a fixed price for a dedicated server, VPS or even just shared hosting is indeed a wise decision. While serverless architecture and other auto-scaling services have their use cases, a fixed-price solution will always be better for those on a budget. Auto scaling can absorb peaks, if you go viral for example, but may leave you with a large bill, whereas a fixed-price VPS would instead suffer degraded performance and possibly go down. It depends on how important that traffic is to you.
In my case, it was more about experimentation, as I said earlier. I did, however, switch back to Digital Ocean, and it didn't budge when ~500 concurrent users were reading my previous post yesterday. My $5 VPS handled it perfectly without using much of its resources.
My post was shared mainly on Reddit and Twitter. I posted it on Hacker News and /r/programminghorror myself, and there have been a lot of comments; I recommend reading them all! Some share their own stories, some suggest solutions and some criticize me, cloud services and/or serverless. It's great!
Also take a look at the Disqus comments on the post itself! Thanks for reading!