A guide to localstack (part 3) - Automatic provisioning
10 Feb 2020

Requirements
It’s been a long time since I wrote the two previous articles on localstack. Some things were fixed in localstack in the meantime and I updated the previous posts to reflect those changes. If you ended up here without reading them, I strongly advise you to do so before coming back to this tutorial.
To follow it you will need:
- the configuration files from the second part (lambda.zip file included)
- docker and docker-compose installed
Also make sure the docker network localstack-tutorial is present. If not, create it with docker network create localstack-tutorial.
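If you want the check and the creation in one go, a small convenience one-liner (my own addition, not part of the original setup) works too:

```shell
# Create the localstack-tutorial network only if it does not exist yet
docker network inspect localstack-tutorial >/dev/null 2>&1 \
  || docker network create localstack-tutorial
```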
You don’t even need to install Terraform or the AWS CLI anymore; we can run everything in docker.
Those in a hurry can get all the final files used in this tutorial from this github repository.
Introduction
What do we want to achieve? As seen previously, even if deploying resources efficiently can be solved with Terraform, there are still some negative aspects to localstack:
- Resources are not persisted, so you have to apply the terraform configuration every time localstack restarts.
- Running a lambda for the first time is slow. This is not an issue for asynchronous lambdas. However, for lambda functions that are triggered by http requests and called synchronously, it is not as simple: the browser will be waiting for an http response, and the request might time out before the lambda gets bootstrapped and runs for the first time. The idle containers behaviour, which kills lambda containers after 10 minutes of inactivity, does not help either.
To work around these issues I came up with 2 solutions:
- Using docker events, we can provision localstack automatically by running terraform as soon as localstack is ready. As a developer you don’t even have to take care of initializing localstack anymore.
- We will use our own localstack container that prevents lambda containers from being destroyed. (03/03/2020 update: a pull request I opened on the localstack repository was approved. Building a custom image is not needed anymore. More on that further in this post.)
Let’s work on it.
Creating the docker event listener container
What we want is a container that listens to docker events. As soon as localstack starts, this container will run the terraform apply command to provision localstack.
The docker-compose service
In the docker-compose.yml file, add the service docker-events-listener.
services:
  localstack:
    ...
    container_name: localstack # 1
    depends_on:
      - docker-events-listener # 2

  docker-events-listener:
    build:
      context: docker-events-listener-build # 3
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock # 4
      - ./terraform:/opt/terraform/ # 5
1. We set a fixed name for the localstack container. When listening to docker events we need to know, in a predictable manner, which events involve the localstack container. Setting the container name is the easiest way to do this.
2. The docker-events-listener service should be started before localstack (to be sure it can react to its events).
3. We create a folder named docker-events-listener-build. It will be the context of the build, i.e. it will contain all files required to build the image.
4. The container requires access to the docker socket to listen to events.
5. The container needs access to the terraform resources.
Terraform files
As defined in the docker-compose file, create a folder named terraform, and move the localstack.tf and lambda.zip files into it.
mkdir terraform
mv lambda.zip terraform
mv localstack.tf terraform
The build files
The container will run AWS CLI and Terraform commands, so we need to install them. The AWS CLI can look up its configuration from files.
If not done already, create the docker-events-listener-build folder. Then create the following files inside.
Add an aws_config.txt file with the following content:
[default]
output = json
region = ap-southeast-2
Then an aws_credentials.txt file with the following content:
[default]
aws_secret_access_key = fake
aws_access_key_id = fake
Now the crucial part: the bash script that will be the container’s main process. This script listens to docker events and takes action accordingly; in this case it runs terraform apply as soon as localstack starts.
Below is the content of the script, named listen-docker-events.sh:
#!/bin/bash

docker events --filter 'event=create' --filter 'event=start' --filter 'type=container' --format '{{.Actor.Attributes.name}} {{.Status}}' | while read event_info
do
  event_infos=($event_info)
  container_name=${event_infos[0]}
  event=${event_infos[1]}

  echo "$container_name: status = ${event}"

  if [[ $container_name = "localstack" ]] && [[ $event == "start" ]]; then
    sleep 20 # give localstack some time to start
    terraform init
    terraform apply --auto-approve
    echo "The terraform configuration has been applied."
  fi
done
Thanks to the --filter options we only listen to events related to containers being created or started, and we format the output to display only the container name followed by the event name. The rest is pretty self-explanatory.
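To make the word-splitting explicit, here is a minimal sketch of how one formatted event line is parsed, using a hard-coded sample line instead of a live docker events stream:

```shell
#!/bin/bash
# Sample line in the same "name status" format produced by the --format flag above
event_info="localstack start"

# Unquoted expansion splits the line on whitespace into an array:
# first word is the container name, second word is the event status
event_infos=($event_info)
container_name=${event_infos[0]}
event=${event_infos[1]}

echo "$container_name: status = ${event}"   # prints "localstack: status = start"
```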
Finally, here is the content of the Dockerfile to glue everything together.
FROM docker:19.03.5

RUN apk update && \
    apk upgrade && \
    apk add --no-cache bash wget unzip

# Install AWS CLI
RUN echo -e 'http://dl-cdn.alpinelinux.org/alpine/edge/main\nhttp://dl-cdn.alpinelinux.org/alpine/edge/community\nhttp://dl-cdn.alpinelinux.org/alpine/edge/testing' > /etc/apk/repositories && \
    wget "s3.amazonaws.com/aws-cli/awscli-bundle.zip" -O "awscli-bundle.zip" && \
    unzip awscli-bundle.zip && \
    apk add --update groff less python curl && \
    rm /var/cache/apk/* && \
    ./awscli-bundle/install -i /usr/local/aws -b /usr/local/bin/aws && \
    rm awscli-bundle.zip && \
    rm -rf awscli-bundle

COPY aws_credentials.txt /root/.aws/credentials
COPY aws_config.txt /root/.aws/config

# Install terraform
RUN wget https://releases.hashicorp.com/terraform/0.12.20/terraform_0.12.20_linux_amd64.zip \
    && unzip terraform_0.12.20_linux_amd64.zip \
    && mv terraform /usr/local/bin/terraform \
    && chmod +x /usr/local/bin/terraform

RUN mkdir -p /opt/terraform
WORKDIR /opt/terraform

COPY listen-docker-events.sh /var/listen-docker-events.sh
CMD ["/bin/bash", "/var/listen-docker-events.sh"]
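If you want to check that the image builds before wiring it into docker-compose, you can build it directly; the tag name here is just an illustration, docker-compose will assign its own name later:

```shell
# Build the listener image from its build context folder
docker build -t docker-events-listener ./docker-events-listener-build
```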
Once these 4 files are created in the docker-events-listener-build folder, keep reading.
Docker events in action
Let’s build this new image and run the containers. To be sure everything is cleaned up beforehand, you can run docker-compose down -v.
docker-compose build
docker-compose up -d
docker-compose logs -f docker-events-listener
If you didn’t touch the terraform files, you should see the following error after a few seconds:
What’s wrong? If you read the localstack.tf file carefully, you will notice that terraform is configured to reach the dynamodb service on localhost which, from inside the docker-events-listener container, is the container itself. Of course dynamodb is not available there; it is accessible in the localstack container.
To fix this, let’s update the localstack.tf file and replace every occurrence of localhost with localstack. Why localstack? Because it matches the docker-compose service name.
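You can of course do the replacement by hand in your editor; the one-liner below is just a convenience, assuming GNU sed and the terraform folder layout created earlier:

```shell
# Replace every occurrence of localhost with localstack, editing the file in place
sed -i 's/localhost/localstack/g' terraform/localstack.tf
```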
As explained in the first tutorial, the docker documentation states that “a container created from a docker-compose service will be both reachable by other containers sharing a network in common, and discoverable by them at a hostname identical to the service name”.
If the value of container_name in the docker-compose configuration had been my_localstack, we would have replaced localhost with my_localstack. Docker resolves the docker-compose service name to the IP of the container.
Let’s try again once localstack.tf has been edited:
docker-compose down -v
docker-compose build
docker-compose up -d
docker-compose logs -f docker-events-listener
If you get the following error: