How to use Terraform via Docker Compose for Pros
This topic is covered in-depth in our 14 hour DevOps course.
In this tutorial, I’ll show you how professional developers should use Terraform on a project.
If you just want to see the finished sample code, you can find that here: github.com/LondonAppDeveloper/tf-docker-compose-starter-code
Terraform is a fantastic tool for defining your deployment infrastructure as code.
Following the Infrastructure as Code (IaC) practice brings so many benefits, such as:
- Saving money by easily stopping infrastructure that’s not being used
- Smoother disaster recovery process
- Including infrastructure changes in the peer review process
- Saving time when creating new development environments
However, one common challenge developers face is: How do you manage Terraform versions across different projects?
Many developers use Terraform by installing it directly on their machine.
However, in my experience this is a risky way to use Terraform, especially when working in a professional environment.
The problem with running Terraform directly on your machine
Terraform works by maintaining a “state file” which describes the current status of the infrastructure for your project.
If you work in a team, then this state file will need to be centralised so it is accessible by all developers that need to run Terraform code.
Unfortunately, once the state file has been written by a newer version of Terraform, it generally can’t be read by older versions. This can cause conflicts if developers are using inconsistent versions of Terraform on their machines.
Imagine the following scenario:
- Developer A creates a project using Terraform v0.11.
- Developer B works on the project using v0.12, so the state file is automatically upgraded to work with 0.12.
- Developer A tries to work on the project but no longer can, because they still have v0.11 on their machine.
You may be thinking: Why can’t Developer A get with the times and upgrade to version 0.12 and… problem solved?
It’s not quite as simple as that.
What happens if v0.12 introduces breaking changes which require the project code to be updated to use new syntax?
Additionally, what if there is an automated pipeline which is using v0.11?
This could leave you in a situation where you’re unable to deploy anything until a DevOps engineer upgrades your entire project to use v0.12.
How to manage Terraform versions on your projects?
Whenever I use Terraform, I always run it via Docker containers that are configured using a docker-compose.yml file inside the project.
There are many benefits to working this way, for example:
- You can pin a specific version of Terraform to your project
- It standardises the Terraform deployment, so it’s the same on your local machine or CI/CD pipeline
- Developers don’t need to install Terraform locally at all
- Everyone is using the same version
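Incidentally, Docker isn’t the only guard rail available: Terraform itself can refuse to run under the wrong version if you add a required_version constraint to your configuration. Here’s a minimal sketch that writes one into the deploy/ directory (where this guide keeps its Terraform files); the exact constraint is up to you:

# Pin the Terraform version from inside the code as well (a sketch)
cat > deploy/versions.tf <<'EOF'
terraform {
  # Refuse to run under anything outside the 0.14.x series (>= 0.14.10)
  required_version = "~> 0.14.10"
}
EOF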
I’m going to show you how to configure a project to use Terraform via Docker Compose.
Specifically, we’ll do the following:
- Clone a sample project which we set up specifically for this guide.
- Create a docker-compose.yml configuration for Terraform that handles credential management using aws-vault.
- Create a sample EC2 instance.
- Run plan, apply and destroy jobs through Docker Compose.
Requirements
Before you get started, you’ll need the following:
- Windows, macOS or Linux machine
- Docker Desktop must be installed and working (if you’re using Linux, then ensure Docker and Docker Compose are installed separately)
- The aws-vault tool should be set up and configured for your AWS IAM account (ideally using MFA)
- You’ll need a code editor (like VSCode)
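If you haven’t used aws-vault before, configuring it is a one-off step. As a quick sanity check (assuming aws-vault is already on your PATH, and using a hypothetical profile name my-account):

# Store an access key pair for a profile (prompts for the key and secret)
aws-vault add my-account

# Confirm the profile is registered
aws-vault list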
Create Docker Compose configuration
Before we get started, ensure you’ve cloned the sample project to your local machine.
Inside the project, add a new file at deploy/docker-compose.yml with the contents (each line is explained below):
version: '3.8'

services:
  terraform:
    image: hashicorp/terraform:0.14.10
    volumes:
      - .:/infra
    working_dir: /infra
    environment:
      - AWS_ACCESS_KEY_ID=${AWS_ACCESS_KEY_ID}
      - AWS_SECRET_ACCESS_KEY=${AWS_SECRET_ACCESS_KEY}
      - AWS_SESSION_TOKEN=${AWS_SESSION_TOKEN}
The version: '3.8' line specifies the version of the Docker Compose syntax we want to use. This is optional, but it’s a good idea to specify it just in case developers have different versions of Docker Compose installed (the whole point of this work is to standardise versions, after all…)
Then we have the services: block, which is used to define the various services that will be managed through Docker Compose.
After that we have terraform:. This is the name of the service that will run Terraform. We’ll need to use it for each Terraform command we run, so feel free to give it a shorter name (like tf) if you prefer.
Next we define image: hashicorp/terraform:0.14.10, which sets the Docker image we want to use for our service. In this example, we’re using the hashicorp/terraform image, which is publicly available on Docker Hub, and we’re pinning it to version 0.14.10 using the provided Docker tag. If you wanted to use a different version for your project, this is where you would set it.
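At this point you can already sanity-check the pin without touching AWS: asking the service for its version should pull the image and print Terraform v0.14.10. (Docker Compose may warn that the AWS_* variables are unset; that’s safe to ignore for this command.)

# Confirm the service resolves to the pinned Terraform version
docker-compose -f deploy/docker-compose.yml run --rm terraform version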
After that we define the volumes for our service as follows:
volumes:
  - .:/infra
Docker containers run in an environment that’s isolated from the host machine. As a result, containers don’t have access to the host’s file system by default. This is an issue, because Terraform needs to be able to access the code in our project in order to work.
To get around this, we have - .:/infra, which creates a volume mapping from . (shorthand for the current project directory) to /infra inside the Docker container. This way, our project code will be accessible to the executable running inside Docker.
After that we set working_dir: /infra, which tells our Docker container to work from the /infra directory that we are mapping our code to.
Finally we have this block:
environment:
  - AWS_ACCESS_KEY_ID=${AWS_ACCESS_KEY_ID}
  - AWS_SECRET_ACCESS_KEY=${AWS_SECRET_ACCESS_KEY}
  - AWS_SESSION_TOKEN=${AWS_SESSION_TOKEN}
In order to authenticate with AWS, we’ll be using aws-vault to generate environment variables for our AWS credentials.
These credentials need to be made available to our container for authentication to work.
We do this by using the environment: block to map variables from our host machine to the running Docker container.
Take the first example: AWS_ACCESS_KEY_ID=${AWS_ACCESS_KEY_ID}
The ${} syntax tells Docker Compose to retrieve environment variables from the host machine.
So in the above snippet, we are saying: take the value of AWS_ACCESS_KEY_ID from the host machine, and map it to AWS_ACCESS_KEY_ID on the container.
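If you ever want to confirm the mapping is working, one option is to override the image’s entrypoint and print the container’s environment. This is purely a diagnostic sketch, and it assumes you’ve already exported credentials into your shell (we’ll do that with aws-vault in the next section):

# Print the container environment and filter for the AWS variables
docker-compose -f deploy/docker-compose.yml run --rm --entrypoint env terraform | grep AWS_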
Running Terraform through Docker Compose
Next we’ll go ahead and initialise our Terraform via Docker Compose.
Start by authenticating with your AWS IAM account using aws-vault: open up your Terminal (macOS/Linux) or Command Prompt (Windows), and run the following:
aws-vault exec --duration=12h account-name
(Note: replace account-name with the name of the account you configured in aws-vault.)
This should set your AWS credentials in your current Terminal session.
It’s important that you run the docker-compose command below inside this same Terminal, because aws-vault only sets the credentials for your current session. If you close the terminal or reboot, you’ll need to run the above command again to generate new credentials.
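As an alternative to a long-lived session, aws-vault can also wrap a single command, injecting temporary credentials for just that one invocation (again, replace account-name with your own profile):

# One-shot style: credentials exist only for the duration of this command
aws-vault exec account-name -- docker-compose -f deploy/docker-compose.yml run --rm terraform init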
Next we’ll initialise Terraform in our project by running the below command:
docker-compose -f deploy/docker-compose.yml run --rm terraform init
This command does the following:
- docker-compose is the name of the Docker Compose command.
- -f deploy/docker-compose.yml references the Docker Compose config file we want to run the command on. The command above assumes you’re running it from the root of your project; if you change to the deploy/ directory, you can omit this part of the command (see the example after this list).
- run is the Docker Compose command for running a container.
- --rm removes the container after it has finished running. This is optional, but helps to avoid a build-up of lingering containers on your system.
- terraform is the name of the service. This should match the service name defined in deploy/docker-compose.yml.
- init is the command we want to pass to the Terraform CLI inside the container.
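For example, running the same initialisation from inside the deploy/ directory looks like this (relative volume paths in a Compose file resolve against the file’s own directory, so the behaviour is unchanged):

cd deploy
docker-compose run --rm terraform init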
After running the command, you should see something like this:
Note that the first part (pulling the image) only happens the first time you run the command. Each subsequent run will use a cached image, so the process will be a lot faster.
When Terraform initialises, it creates some working files like .terraform and .terraform.lock.hcl in your project:
Ensure these are excluded from your Git project by adding them to the .gitignore file.
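For example, appending something like this to your .gitignore would cover both (adjust to taste if your .gitignore is organised differently):

cat >> .gitignore <<'EOF'
# Terraform working files created by "terraform init"
.terraform/
.terraform.lock.hcl
EOF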
Creating an EC2 Instance
Next we are going to add some Terraform code that deploys an EC2 instance so we can test running Terraform on AWS.
Create a new file in deploy/bastion.tf and add the following contents:
data "aws_ami" "amazon_linux" {
most_recent = true
filter {
name = "name"
values = ["amzn2-ami-hvm-2.0.*-x86_64-gp2"]
}
owners = ["amazon"]
}
resource "aws_instance" "bastion" {
ami = data.aws_ami.amazon_linux.id
instance_type = "t2.micro"
}
The first block will retrieve the latest AMI for Amazon Linux 2, which we’ll be using for our EC2 instance.
The second block is a resource which creates a new t2.micro instance in EC2.
Important: Your AWS account may charge you for creating this instance. Please review the Amazon EC2 Pricing documentation to confirm you’re happy with the costs before continuing.
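If you’d like to double-check what the data block will resolve to before running Terraform, you can reproduce the lookup with the AWS CLI. This is an optional sketch: it assumes the aws CLI is installed, that you run it inside the same aws-vault session, and it uses us-east-1 to match the region in the sample project’s main.tf:

# Find the newest Amazon Linux 2 AMI matching the same name filter
aws ec2 describe-images \
  --region us-east-1 \
  --owners amazon \
  --filters 'Name=name,Values=amzn2-ami-hvm-2.0.*-x86_64-gp2' \
  --query 'sort_by(Images, &CreationDate)[-1].[ImageId,Name]' \
  --output text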
Running Terraform through Docker
Now that we’ve added our Terraform code, we’ll go ahead and run the fmt command to format it.
Open Terminal or Command Prompt, and run the following:
docker-compose -f deploy/docker-compose.yml run --rm terraform fmt
This should format our code by lining everything up, making it neat and tidy:
Next we’ll validate our Terraform by running the following command:
docker-compose -f deploy/docker-compose.yml run --rm terraform validate
If everything is correct, we should see a success message like this:
If there are issues with your code, it should be explained in the output of the command.
Now we’ll run the plan command, which will tell us what Terraform intends to do:
docker-compose -f deploy/docker-compose.yml run --rm terraform plan
This should return a long output that details everything Terraform will do if you apply the changes. This is a “non-destructive” command, meaning that it doesn’t actually make the changes, but just summarises the changes it plans to make.
It’s best practice to run this every time you make changes, so you reduce the risk of deleting or changing an important resource by mistake.
The output should look something like this:
Creating deploy_terraform_run ... done

An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # aws_instance.bastion will be created
  + resource "aws_instance" "bastion" {
      + ami                          = "ami-0742b4e673072066f"
      + arn                          = (known after apply)
      + associate_public_ip_address  = (known after apply)
      + availability_zone            = (known after apply)
      + cpu_core_count               = (known after apply)
      + cpu_threads_per_core         = (known after apply)
      + get_password_data            = false
      + host_id                      = (known after apply)
      + id                           = (known after apply)
      + instance_state               = (known after apply)
      + instance_type                = "t2.micro"
      + ipv6_address_count           = (known after apply)
      + ipv6_addresses               = (known after apply)
      + key_name                     = (known after apply)
      + outpost_arn                  = (known after apply)
      + password_data                = (known after apply)
      + placement_group              = (known after apply)
      + primary_network_interface_id = (known after apply)
      + private_dns                  = (known after apply)
      + private_ip                   = (known after apply)
      + public_dns                   = (known after apply)
      + public_ip                    = (known after apply)
      + secondary_private_ips        = (known after apply)
      + security_groups              = (known after apply)
      + source_dest_check            = true
      + subnet_id                    = (known after apply)
      + tenancy                      = (known after apply)
      + vpc_security_group_ids       = (known after apply)

      + ebs_block_device {
          + delete_on_termination = (known after apply)
          + device_name           = (known after apply)
          + encrypted             = (known after apply)
          + iops                  = (known after apply)
          + kms_key_id            = (known after apply)
          + snapshot_id           = (known after apply)
          + tags                  = (known after apply)
          + throughput            = (known after apply)
          + volume_id             = (known after apply)
          + volume_size           = (known after apply)
          + volume_type           = (known after apply)
        }

      + enclave_options {
          + enabled = (known after apply)
        }

      + ephemeral_block_device {
          + device_name  = (known after apply)
          + no_device    = (known after apply)
          + virtual_name = (known after apply)
        }

      + metadata_options {
          + http_endpoint               = (known after apply)
          + http_put_response_hop_limit = (known after apply)
          + http_tokens                 = (known after apply)
        }

      + network_interface {
          + delete_on_termination = (known after apply)
          + device_index          = (known after apply)
          + network_interface_id  = (known after apply)
        }

      + root_block_device {
          + delete_on_termination = (known after apply)
          + device_name           = (known after apply)
          + encrypted             = (known after apply)
          + iops                  = (known after apply)
          + kms_key_id            = (known after apply)
          + tags                  = (known after apply)
          + throughput            = (known after apply)
          + volume_id             = (known after apply)
          + volume_size           = (known after apply)
          + volume_type           = (known after apply)
        }
    }

Plan: 1 to add, 0 to change, 0 to destroy.

------------------------------------------------------------------------

Note: You didn't specify an "-out" parameter to save this plan, so Terraform
can't guarantee that exactly these actions will be performed if
"terraform apply" is subsequently run.
This says that Terraform plans to add 1 new resource to our AWS account.
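The note at the bottom of that output is worth acting on in automated pipelines: saving the plan to a file and then applying that exact file guarantees apply does precisely what plan showed. A sketch of that workflow (the tfplan filename is arbitrary, and because of the volume mapping the file lands next to your Terraform code, so consider git-ignoring it too):

# Save the plan to a file...
docker-compose -f deploy/docker-compose.yml run --rm terraform plan -out=tfplan

# ...then apply exactly that saved plan (no confirmation prompt is shown)
docker-compose -f deploy/docker-compose.yml run --rm terraform apply tfplan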
Now we can actually make the change by running the apply command as follows:
docker-compose -f deploy/docker-compose.yml run --rm terraform apply
You should then be presented with the same plan again, and be prompted to confirm that you want to perform these actions.
If you’re happy with the plan, type yes and hit enter:
Terraform will then connect to your AWS account and create the instance you specified.
This can take a few minutes:
Once it’s done, you’ll see a success message like this:
If you head over to your AWS account, you should see the instance running in US East (N. Virginia):
Note: The region (us-east-1) is defined within main.tf in the sample Terraform code.
Now that we’ve tested our Terraform running through Docker Compose, we can destroy this instance by running:
docker-compose -f deploy/docker-compose.yml run --rm terraform destroy
You should see the destroy plan. If you’re happy with it, type yes and hit enter.
Terraform will now destroy the instance:
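A quick aside for CI/CD: both apply and destroy normally stop and wait for a yes, which won’t work in a non-interactive pipeline. Terraform’s -auto-approve flag skips the prompt; use it with care, since nothing will ask before changes are made:

# Skip the interactive confirmation (useful in pipelines, dangerous locally)
docker-compose -f deploy/docker-compose.yml run --rm terraform apply -auto-approve
docker-compose -f deploy/docker-compose.yml run --rm terraform destroy -auto-approve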
Shortening the Command
One of the drawbacks of this approach is having to type docker-compose -f deploy/docker-compose.yml run every time you use Terraform.
I’m not too worried about this because it helps me memorise the commands easily, and I use Zsh which has some excellent command recall features.
However, if you want to shorten the commands, you can create a Makefile in your project containing something like this (note that Makefile recipes must be indented with a tab):
.PHONY: tf-init
tf-init:
	docker-compose -f deploy/docker-compose.yml run --rm terraform init

.PHONY: tf-fmt
tf-fmt:
	docker-compose -f deploy/docker-compose.yml run --rm terraform fmt

.PHONY: tf-validate
tf-validate:
	docker-compose -f deploy/docker-compose.yml run --rm terraform validate

.PHONY: tf-plan
tf-plan:
	docker-compose -f deploy/docker-compose.yml run --rm terraform plan

.PHONY: tf-apply
tf-apply:
	docker-compose -f deploy/docker-compose.yml run --rm terraform apply

.PHONY: tf-destroy
tf-destroy:
	docker-compose -f deploy/docker-compose.yml run --rm terraform destroy
This way, instead of running the full command, you can run the shortcuts like:
make tf-fmt
Note: If you’re using Windows you may need to install some tools for running make commands.
You can find the finished source code on the finished-code-blog branch of: github.com/LondonAppDeveloper/tf-docker-compose-starter-code
That’s it for our tutorial on managing Terraform versions using Docker Compose. If you have any thoughts or opinions on this approach, or if you’ve found a more effective approach, please share it in the comments below.
This topic is covered in-depth in our 14 hour DevOps course.