The Shortcomings of Serverless Design in AWS

Brian Hoehne
Senior Software Engineer

Serverless is coming.  My team recently deployed our first serverless application, and I’m ready to swear by that statement.

It took only two weeks to refactor our web application to run on AWS's serverless infrastructure, and the immediate price reduction was staggering.  Using API Gateway, seven Lambda functions for business logic, and S3 to store our UI and user data, we can run Prod, Dev, and multiple sandbox environments for our developers for a fraction of what a single Heroku environment used to cost.
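To give a sense of scale, each of those business-logic functions is just a small handler sitting behind API Gateway.  Here is a minimal sketch of what one might look like; the bucket name, route, and handler are hypothetical, not our actual code:

```python
import os

import boto3

s3 = boto3.client("s3")
# Hypothetical bucket holding per-user JSON documents.
BUCKET = os.environ.get("USER_DATA_BUCKET", "my-app-user-data")


def handler(event, context):
    """Serve GET /users/{id} via an API Gateway proxy integration."""
    user_id = event["pathParameters"]["id"]
    obj = s3.get_object(Bucket=BUCKET, Key=f"users/{user_id}.json")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": obj["Body"].read().decode("utf-8"),
    }
```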

Before you ditch those servers, though, there are a few shortcomings to this application model.  To start, the developer experience suffers because it is hard to recreate the AWS backend locally.  Standing up a local version of a server or a microservices design is usually easy with a VM or a containerized application, but there is a whole new set of challenges when you are consuming a managed API gateway and relying on AWS to run your Lambda code.  There are options for mocking out AWS services, like Atlassian's Localstack, but the setup is large and complicated, and it ultimately slowed down our progress.  Instead, we started making changes through the AWS console and eventually automated the creation of a custom stack for each feature branch so developers could test their Lambda code.  That approach worked well, but it took a long time to set up and required developers to have internet access while developing.
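The per-branch automation itself is not much more than packaging the SAM template and deploying it under a stack name derived from the branch.  A rough sketch of that kind of script, with placeholder bucket, stack, and branch names:

```python
import re
import subprocess


def deploy_branch_stack(branch: str, artifact_bucket: str = "my-sam-artifacts") -> None:
    # CloudFormation stack names only allow alphanumerics and hyphens.
    stack_name = "myapp-" + re.sub(r"[^A-Za-z0-9-]", "-", branch)

    # Upload the Lambda code to S3 and rewrite CodeUri references in the template.
    subprocess.run(
        ["aws", "cloudformation", "package",
         "--template-file", "template.yaml",
         "--s3-bucket", artifact_bucket,
         "--output-template-file", "packaged.yaml"],
        check=True,
    )

    # Create or update the branch-specific stack.
    subprocess.run(
        ["aws", "cloudformation", "deploy",
         "--template-file", "packaged.yaml",
         "--stack-name", stack_name,
         "--capabilities", "CAPABILITY_IAM"],
        check=True,
    )


if __name__ == "__main__":
    deploy_branch_stack("feature/login-redesign")
```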

That also brings up the limitations of CloudFormation's Serverless Application Model (SAM).  The templates provide a powerful, but buggy, way to deploy and update serverless stacks with the AWS CLI.  There are workarounds for those bugs, like using hashed Lambda deployments and creatively naming functions in the template, but those adjustments ultimately make the deployment pipeline more complicated than it needs to be.
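To make the hashed-deployment idea concrete: name the uploaded Lambda artifact after its content hash, so CloudFormation sees a changed code location whenever the code actually changes.  A rough sketch, with placeholder bucket and key names:

```python
import hashlib

import boto3

s3 = boto3.client("s3")


def upload_hashed_artifact(zip_path: str, bucket: str = "my-sam-artifacts") -> str:
    """Upload a Lambda zip under a content-hashed key and return its S3 URI."""
    with open(zip_path, "rb") as f:
        payload = f.read()
    digest = hashlib.sha256(payload).hexdigest()[:16]
    key = f"lambdas/my-function-{digest}.zip"
    s3.put_object(Bucket=bucket, Key=key, Body=payload)
    # Feed this URI into the template (e.g. as a CodeUri parameter) at deploy time.
    return f"s3://{bucket}/{key}"
```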

And then there is performance.  Your backend takes a latency hit when it triggers an “inactive” Lambda, the familiar cold-start problem.  We were willing to live with the occasional slower response, but some applications don’t have that luxury.  We have noticed no performance issues with “active” (warm) Lambdas.
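If you want to know how often that hit actually happens, one cheap approach is to log whether each invocation is the first one for its container, since module-level code runs only once per container.  A minimal sketch, not our production handler:

```python
import time

# Module-level code runs once per container, so this is True only on a cold start.
_COLD_START = True


def handler(event, context):
    global _COLD_START
    was_cold, _COLD_START = _COLD_START, False
    start = time.perf_counter()

    # ... business logic would go here ...

    elapsed_ms = (time.perf_counter() - start) * 1000
    print(f"cold_start={was_cold} handler_ms={elapsed_ms:.1f}")
    return {"statusCode": 200, "body": "ok"}
```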

The nice thing is, if you can manage those shortcomings, there are many benefits to adopting the serverless model.  I’ll cover those benefits another time.

So yes, serverless is coming.  I’ve only dipped my toes in, but I’m ready to see what other problems it can solve, and what the other cloud providers bring to the table with their offerings.