A guide to localstack (part 1) - How to mock Amazon Web Services locally

Localstack logo

Introduction

This article was updated in early 2020 to use localstack version 0.10.7.

At commonledger, we have been developing software using Vagrant as our development environment for a while. For a long time we used very few Amazon services in production: Cache Cluster (redis) and RDS (mysql). We could easily run these locally within Vagrant. Then we started to use S3 buckets. We quickly found workarounds to keep a sane dev environment by creating “local buckets” (which were actually online buckets named “local”). Then we leveraged lambda functions, followed by DynamoDB, SNS and SES.

As our production stack increased in complexity, our ability to release code quickly and with confidence decreased. We soon found ourselves stuck with our local environment, having a hard time keeping pace with all these new services and being forced to mock AWS API calls within our code. We eventually got to a point where a lambda function had to connect to a MySQL database, and we couldn't find any workaround to make it work with our database running under Vagrant. We thought about moving our dev environments to the cloud, but this would have come with a cost. The reverse question was: how could we run lambda locally?

This article will present how we solved these issues using localstack. It is technical and assumes knowledge of Docker and docker-compose.

To illustrate what we did, I created a very simple web application which presents all the challenges we had to tackle. I will explain step by step the issues we encountered and how we overcame them to get to a dev environment we are happy to work with.

In this first post I will simply present localstack and how to interact with it the same way you would with real AWS resources.

You can get all files used in this tutorial from this github repository.

Setup

First, let’s create a docker-compose.yml file that will compose our stack:

version: '3.3'

networks:

  default:
    external:
      name: localstack-tutorial

volumes:

  localstack:

services:

  localstack:
    image: localstack/localstack:0.10.7
    ports:
      - 8080:8080
      - 4569:4569 # dynamodb
      - 4574:4574 # lambda
    environment:
      - DATA_DIR=/tmp/localstack/data
      - DEBUG=1
      - DEFAULT_REGION=ap-southeast-2
      - DOCKER_HOST=unix:///var/run/docker.sock
      - LAMBDA_EXECUTOR=docker-reuse
      - PORT_WEB_UI=8080
      - SERVICES=lambda,dynamodb
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - localstack:/tmp/localstack/data

Take some time to read the configuration. In this scenario, the docker network needs to be created externally beforehand. If you prefer not to, docker-compose can create one by default, named after the folder the docker-compose.yml file sits in. If you are working in a team where each person may have a different workspace, I would advise against relying on that default: it can lead to surprises as well as an unnecessary layer of configuration (your code will likely need to know the network name in order to reach the localstack resources).

To create the network, just run:

docker network create localstack-tutorial

Let’s go through the docker-compose file:

  • Port 8080 exposes a web UI displaying our local resources
  • The other ports expose the local services on the host. The list of all services can be found here: https://github.com/localstack/localstack#overview
  • Persistent data will be stored in a named volume mapped to /tmp/localstack/data inside the localstack container
  • LAMBDA_EXECUTOR is set to docker-reuse. This way one Docker container per function will be created and reused across invocations, which speeds things up a little. If a lambda is not called for 10 minutes, its container gets killed by localstack.

The rest is self explanatory.

Now run docker-compose up -d followed by docker-compose logs -f localstack and wait for localstack to be ready. You can then access the web UI at http://localhost:8080.
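
That is:

docker-compose up -d
docker-compose logs -f localstack   # follow the logs until localstack is ready, then Ctrl+C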

Deploying a DynamoDB table

Time to deploy our first resource. First, you will need the AWS CLI installed on your machine. See the docs at https://docs.aws.amazon.com/cli/latest/userguide/install-cliv1.html

After the installation, run aws configure (or use the scripted alternative shown after this list):

  • Set the AWS Access Key ID to anything you want (fake will do the trick)
  • Set the AWS Secret Access Key to anything you want as well
  • Set the Default region name to ap-southeast-2
  • The Default output format can be left empty
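
If you prefer to script this step, the same values can be set non-interactively with the aws configure set subcommand (a sketch; the key values are fake since localstack does not check credentials):

aws configure set aws_access_key_id fake
aws configure set aws_secret_access_key fake
aws configure set region ap-southeast-2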

Let’s create a simple DynamoDB table. It will be named table_1 and will have one required attribute - id - which will serve as the hash key:

aws dynamodb create-table \
  --endpoint-url http://localhost:4569 \
  --table-name table_1 \
  --attribute-definitions AttributeName=id,AttributeType=S \
  --key-schema AttributeName=id,KeyType=HASH \
  --provisioned-throughput ReadCapacityUnits=20,WriteCapacityUnits=20

See how we specify the endpoint of our local service, reachable at localhost:4569. Hit http://localhost:8080 again: you should see the newly created table. Even better, use the CLI exactly as you would against real AWS resources:

aws dynamodb list-tables --endpoint-url http://localhost:4569

This will output:

{
    "TableNames": [
        "table_1"
    ]
}
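
You can also write and read an item directly from the CLI, just as you would against the real service. A quick sanity check (the item id manual-test is made up; the delete at the end keeps the table empty for the rest of the tutorial):

aws dynamodb put-item --endpoint-url http://localhost:4569 --table-name table_1 \
  --item '{"id": {"S": "manual-test"}, "count": {"N": "0"}}'
aws dynamodb get-item --endpoint-url http://localhost:4569 --table-name table_1 \
  --key '{"id": {"S": "manual-test"}}'
aws dynamodb delete-item --endpoint-url http://localhost:4569 --table-name table_1 \
  --key '{"id": {"S": "manual-test"}}'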

Deploying a lambda function

A good way to illustrate how localstack can act as a true local AWS environment is to have its resources communicate with each other. What if we have a lambda function that needs to read data from a DynamoDB table?

Create a lambda/ folder containing a main.js file:

mkdir lambda
touch lambda/main.js

Then paste the following content:

'use strict';

const AWS = require('aws-sdk');

AWS.config.update({
    region: 'ap-southeast-2',
    endpoint: 'http://localstack:4569'
});

class DynamoDBService {
    constructor() {
        this.docClient = new AWS.DynamoDB.DocumentClient({ apiVersion: '2012-08-10' });
    }

    async increment(id) {
        // Read the current count, then write it back incremented by one
        const count = await this.getCount(id);
        const params = {
            TableName: 'table_1',
            Item: {
                count: count + 1,
                id: id
            }
        };

        return this.docClient.put(params).promise();
    }

    async getCount(id) {
        const params = {
            TableName: 'table_1',
            Key: {id}
        };

        // Resolves to 0 when the item does not exist yet
        const data = await this.docClient.get(params).promise();
        return data.Item ? data.Item.count : 0;
    }
}

exports.handler = async (event, context, callback) => {
    try {
        const dynamoDBService = new DynamoDBService();
        await dynamoDBService.increment(event.id);
        callback(null, {});
    } catch (error) {
        callback(error);
    }
}

This lambda is pretty dumb. The function will update an item in the table table_1. If an item with the given id exists, its count value will be incremented. If not, the item will be created with a count of 1.

Notice the endpoint set to http://localstack:4569 which matches the docker-compose service name. This is how the lambda container can communicate with the localstack one. The docker documentation states that “a container created from a docker-compose service will be both reachable by other containers sharing a network in common, and discoverable by them at a hostname identical to the service name”.
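
If you want to see this name resolution in action, you can start a throwaway container attached to the same network and ping the localstack service by name (alpine is just an example image that ships with ping):

docker run --rm --network localstack-tutorial alpine ping -c 1 localstack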

Create an archive out of this file, then deploy it:

cd lambda
zip -r ../lambda.zip .
cd ..
aws lambda create-function \
  --function-name counter \
  --runtime nodejs8.10 \
  --role fake_role \
  --handler main.handler \
  --endpoint-url http://localhost:4574 \
  --zip-file fileb://$PWD/lambda.zip
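
As with DynamoDB, you can check that the function was registered by listing the functions on the local endpoint:

aws lambda list-functions --endpoint-url http://localhost:4574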

Let’s invoke it and check what’s happening:

aws lambda invoke --function-name counter --endpoint-url=http://localhost:4574 --payload '{"id": "test"}' output.txt
# in another terminal (be patient it could take a while)
docker-compose logs -f localstack

If everything works as expected… you should see the following error buried in the logs:

Inaccessible host: localstack. This service may not be available in the ap-southeast-2 region.

Why is that? It’s simply because localstack spun up a container for our function which is not attached to our docker-compose network, so the two containers cannot communicate with each other. Fortunately, since November 2018 we can specify the LAMBDA_DOCKER_NETWORK environment variable to make it work.

The container will attempt to run the function 3 times before giving up, just as it would in AWS.
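
Before fixing it, you can confirm the diagnosis. With docker-reuse, the container localstack spun up for the function stays around for a while, so it should show up in docker ps, but it should not appear among the containers attached to our network:

docker ps
docker network inspect localstack-tutorial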

Let’s update the docker-compose.yml file and add a new environment variable to the localstack service: LAMBDA_DOCKER_NETWORK=localstack-tutorial
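
For clarity, the environment section of the localstack service now reads:

    environment:
      - DATA_DIR=/tmp/localstack/data
      - DEBUG=1
      - DEFAULT_REGION=ap-southeast-2
      - DOCKER_HOST=unix:///var/run/docker.sock
      - LAMBDA_DOCKER_NETWORK=localstack-tutorial
      - LAMBDA_EXECUTOR=docker-reuse
      - PORT_WEB_UI=8080
      - SERVICES=lambda,dynamodb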

Run docker-compose up -d to apply your changes and wait for localstack to be ready by watching the logs. Unfortunately, lambdas are not persisted by localstack, which means they need to be deployed again on every restart. Run the create-function command again to deploy the function, then invoke it a second time and check the logs just as before. No more errors this time! You can even scan the DynamoDB table to check that everything is in order.

aws dynamodb scan --endpoint-url http://localhost:4569 --table-name table_1                                         

It should output:

{
    "Count": 1, 
    "Items": [
        {
            "count": {
                "N": "1"
            }, 
            "id": {
                "S": "test"
            }
        }
    ], 
    "ScannedCount": 1, 
    "ConsumedCapacity": null
}

Invoke the lambda as many times as you wish to see the count being incremented.
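
For example, three more invocations in a quick shell loop, followed by another scan (after these, the count for the test item should read 4, assuming you have invoked the function exactly once so far):

for i in 1 2 3; do
  aws lambda invoke --function-name counter --endpoint-url=http://localhost:4574 --payload '{"id": "test"}' output.txt
done
aws dynamodb scan --endpoint-url http://localhost:4569 --table-name table_1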

What’s next?

You should have enough for now to understand the capabilities of localstack. There are however a few hiccups in my opinion:

  • It is painful to create resources. We don’t have a nice web interface to create resources as we would in AWS. You might find some cool tools around each service (have a look at https://github.com/aaronshaf/dynamodb-admin for a nice example).
  • Not all resources are persisted. This is the case for lambda functions, which you need to deploy every time you start your project.
  • Running a lambda for the first time is slow. When a lambda function gets called synchronously while an HTTP request is pending, the web server will usually time out before everything is bootstrapped and run for the first time. The idle-container behaviour, which kills a function’s container after 10 minutes of inactivity, does not help either.

You might have a few ideas on how to address these issues. Follow part 2 of this tutorial if you want to know more!

If you found this tutorial helpful, star this repo as a thank you! ⭐