Here at Expero, we often need to deliver UI Single Page Applications (SPAs) to clients. The traditional ways to deliver these demos are either to:
- Send them the assets and tell them to double-click the index.html file
- Deploy the SPA to a locked down web server, then give the client the URL + credentials
These methods are not ideal. The first method is error prone and a version management nightmare. The second requires us to host a bunch of web servers (or a multi-tenant web server) as well as manage a bunch of credentials to support our different applications.
Expero’s preferred distribution method is to serve the assets via an AWS S3 Bucket. This gives us the simplicity of a “web server” without having to stand one up. This saves time and money typically spent on setting up and maintaining a server. However, we lose the ability to control access, unless we want to hand out AWS credentials to our clients.
Enter AWS Lambda@Edge. This service allows you to distribute functions across the AWS Cloudfront edge network that can then be triggered to execute on any request to your private resources to perform custom logic. Unlike regular AWS Lambda Functions, these Lambda@Edge functions are running on the edge servers and allow you to continue to leverage Cloudfront’s content caching capabilities.
When we combine Lambda@Edge with an identity service like Auth0, we can keep our simple distribution method and protect our applications with serverless authentication & authorization. This technique gives Expero a unique, cost-effective way to securely deploy front-end assets.
How does it work?
The system architecture for the completed system looks like this:
To make this work, we’ve written a Lambda@Edge function that will be triggered on every Viewer Request. This function will check the request for a cookie containing a valid JSON Web Token (JWT). If the token does not exist, or it is not valid, we respond to the request with a redirect that sends the user to the login page hosted by the identity provider. When the user successfully logs in, they are redirected back to our site via a special callback URL. Our Lambda@Edge function is invoked again, processes the login response, retrieves a JWT from the identity provider, sets this JWT as a cookie and finally redirects the user to their original request URL. At this point, the user's requests will have a valid cookie, so our Lambda function will allow the request to be processed by Cloudfront, which retrieves the asset from the S3 bucket.
As I was writing this authenticator function (in Node.js), I learned a few things:
Lambda@Edge functions have to be 1MB or less for the entire package (including dependencies)
If you’ve ever written a Node.js app, then you know it is very easy to go over this limit by adding a single dependency, which itself has dozens of MB of transitive dependencies. Because of this, I avoided all of the libraries that make it easy to work with REST APIs and instead relied upon the built-in Node.js http module. My only npm dependencies for this function are:
I wanted to use the same function to protect multiple websites
I didn’t want to deploy a new function for each website - I wanted to deploy 1 function and then control which websites were protected by it via configuration. Thus I introduced a special S3 bucket which could hold the authentication configuration for each protected website. This configuration includes:
- Information about the identity provider (i.e. url to login page + any options to pass)
- Information required to validate JWTs (i.e. the audience, issuer, public key)
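A per-site configuration file covering those two pieces of information might look something like this (the field names and values are illustrative, not the exact schema we use):

```json
{
  "loginUrl": "https://example.auth0.com/authorize",
  "loginOptions": {
    "scope": "openid email"
  },
  "jwt": {
    "audience": "https://demo.example.com",
    "issuer": "https://example.auth0.com/",
    "publicKey": "-----BEGIN PUBLIC KEY-----\n...\n-----END PUBLIC KEY-----"
  }
}
```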
Now, when the lambda function is executed, it first retrieves (and caches) the configuration for the requested website, and then uses this information to validate the JWT or redirect the user to the correct login page.
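The retrieve-and-cache step can take advantage of the fact that module-scope state survives between invocations of a warm Lambda container. A minimal sketch (the loader is passed in here for clarity; the real function reads the JSON file from the configuration bucket, e.g. via the AWS SDK's `s3.getObject`):

```javascript
'use strict';

// Module-scope cache: persists across invocations while the
// Lambda container stays warm, so S3 is only hit on a cold start
// or for a site we haven't seen yet.
const configCache = new Map();

async function getSiteConfig(host, loadFromS3) {
  if (configCache.has(host)) {
    return configCache.get(host);
  }
  // Cache miss: fetch the site's config (e.g. `${host}.json`
  // from the auth configuration bucket) and remember it.
  const config = await loadFromS3(host);
  configCache.set(host, config);
  return config;
}
```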
The final interaction diagram looks like this:
Since I’m already familiar with it, I used Auth0 as my identity provider. Auth0 gives me:
- Hosted login page
- Ability to enable dozens of 3rd party identity providers through configuration
- Ability to enable username/password authentication through configuration
We usually set up Auth0 rules that allow anyone from Expero to log in to the demo with their Expero Google account, and then create custom username/passwords for our clients.
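An Auth0 rule along those lines could be sketched as below. This is an assumed, simplified version (the connection name and domain check are illustrative): it admits Google logins only from the Expero domain, while database (username/password) users created for clients pass through unchanged.

```javascript
'use strict';

// Illustrative Auth0-style rule: rules receive (user, context, callback).
function emailDomainRule(user, context, callback) {
  const isGoogle = context.connection === 'google-oauth2';
  const isExpero = (user.email || '').endsWith('@expero.com');

  // Reject Google accounts outside the company domain.
  if (isGoogle && !isExpero) {
    return callback(new Error('Access denied'));
  }

  // Expero Google accounts and client username/password accounts proceed.
  return callback(null, user, context);
}
```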
With this setup, all that is required to deploy a new website is:
- Create a new private S3 bucket (with this configuration you do not make the S3 bucket public; you force all requests to come through Cloudfront and the Lambda function)
- Configure new Auth0 App in their dashboard
- Create new Cloudfront distribution from S3 bucket
- Add auth config json file to the S3 auth configuration bucket
- Add a Viewer Request trigger to the new Cloudfront distribution to run the already-deployed Lambda function
- Upload assets to the S3 bucket whenever they change
Benefits of this solution
- Simplified deployment, no server provisioning or maintenance
- Only pay for resources actually used
- For an SPA with no backend components, this is very cheap
- Compatible with any OAuth provider
- UI assets protected
Leverages Cloudfront Caching
Since we are still serving UI assets through Cloudfront, we are still taking full advantage of its caching. This means your users' page load times are not compromised by this solution.
Compatible with Backend Services
Even though I’ve only been discussing SPAs thus far, there’s no reason this technique can’t also be used for an application that needs to have authorized communications with backend services. The UI has access to the JWT that resulted from the authentication, and it is free to pass this token to any backend services to apply further authorization rules.
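On the UI side, forwarding the token is just a matter of reading the cookie and attaching it to API calls. A small sketch (the cookie name and endpoint are assumptions for illustration):

```javascript
'use strict';

// Read a named value out of a cookie string such as document.cookie.
function getJwtFromCookie(cookieString, name = 'access_token') {
  for (const pair of cookieString.split(';')) {
    const [key, ...value] = pair.trim().split('=');
    if (key === name) return value.join('=');
  }
  return null;
}

// Usage in the browser (endpoint is illustrative):
// fetch('/api/data', {
//   headers: { Authorization: `Bearer ${getJwtFromCookie(document.cookie)}` },
// });
```

The backend can then validate the same JWT (same issuer, audience and public key as the Lambda@Edge function) before applying its own authorization rules.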