Using GitHub Actions to Automate Terraform Deployments to AWS

If you're relatively new to cloud infrastructure, you've probably heard how important automation and CI/CD are. I used GitHub Actions on a client project but wanted to do some additional learning and deploy some pipelines of my own, so in this post I'll take you through the process of automating Terraform deployments to AWS.

What Is GitHub Actions?

GitHub Actions is a feature on GitHub that automates tasks and workflows within repositories. It allows you to define custom workflows in YAML files and supports a range of triggers, parallel jobs for faster execution, and reusable actions from the GitHub Marketplace. You can define workflows in a .github/workflows directory in your repository. These workflows are triggered by events such as code pushes, pull requests, or other custom events.
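As a minimal sketch of what such a file looks like (the file name, workflow name, and job are hypothetical, not part of this project):

```yaml
# .github/workflows/hello.yml -- hypothetical minimal workflow
name: hello
on:
    push:             # runs on every push
    pull_request:     # and on every pull request

jobs:
    greet:
        runs-on: ubuntu-latest
        steps:
            - name: Say hello
              run: echo "Hello from GitHub Actions"
```

Once a file like this is committed, GitHub lists the workflow under the repository's Actions tab and runs it whenever one of the listed events fires.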

Writing the Terraform

Most projects deploy their infrastructure as code using pipelines, so it's really important to learn at least one IaC tool (I'll be doing a post about this soon).

Here I've set up an S3 backend and will be deploying a basic EC2 instance to test my pipeline. Once you've got the pipeline working you can always deploy more advanced infrastructure.

terraform {
  backend "s3" {
    bucket = "remote-state-bucket-example-1"    # pre-existing S3 bucket holding the remote state
    key    = "github-actions/terraform.tfstate" # path to the state file within the bucket
    region = "eu-west-2"
  }
}

resource "aws_instance" "test-machine" {
  ami           = "ami-04fb7beeed4da358b" # region-specific AMI ID
  instance_type = "t2.micro"

  tags = {
    Name = "Github-test"
  }
}

Creating the Workflows

I created two separate pipelines to apply and destroy the infrastructure. For the terraform-apply.yml file I set the event to trigger on push to the main branch whilst the terraform-destroy.yml file is set to trigger on the workflow_dispatch event which allows you to trigger a workflow manually.

You will need to configure an IAM role in your AWS account that your GitHub repository can assume to make changes. This can be done by creating an OpenID Connect (OIDC) trust relationship between GitHub and AWS.
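A sketch of that trust relationship in Terraform might look like the following. The role name and `repo:your-org/your-repo` value are placeholders you'd replace with your own, and the thumbprint shown is one GitHub has published, so check the current AWS and GitHub documentation before relying on it:

```hcl
# Sketch only -- names, repo path, and thumbprint are placeholders/assumptions.
resource "aws_iam_openid_connect_provider" "github" {
  url             = "https://token.actions.githubusercontent.com"
  client_id_list  = ["sts.amazonaws.com"]
  thumbprint_list = ["6938fd4d98bab03faadb97b34396831e3780aea1"]
}

data "aws_iam_policy_document" "github_trust" {
  statement {
    actions = ["sts:AssumeRoleWithWebIdentity"]

    principals {
      type        = "Federated"
      identifiers = [aws_iam_openid_connect_provider.github.arn]
    }

    condition {
      test     = "StringEquals"
      variable = "token.actions.githubusercontent.com:aud"
      values   = ["sts.amazonaws.com"]
    }

    condition {
      test     = "StringLike"
      variable = "token.actions.githubusercontent.com:sub"
      values   = ["repo:your-org/your-repo:*"] # restrict to your repository
    }
  }
}

resource "aws_iam_role" "github_actions" {
  name               = "github-actions-terraform"
  assume_role_policy = data.aws_iam_policy_document.github_trust.json
}
```

The role's ARN is what you'd pass to `role-to-assume` in the workflow, and you'd still need to attach a permissions policy granting whatever access your Terraform code requires.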

You can follow a similar approach to the one below to create your Terraform destroy pipeline.

name: pipeline-name
on:
    push:
        branches:
            - main

env:
    AWS_REGION: "eu-west-2" 

permissions:
    id-token: write
    contents: read

jobs: 
    terraform: 
        name: terraform
        runs-on: ubuntu-latest

        steps: 
            - name: Checkout
              uses: actions/checkout@v4

            - name: Configure AWS Credentials
              uses: aws-actions/configure-aws-credentials@v4
              with:
                role-to-assume: your-role-here
                role-session-name: your-session-name-here
                aws-region: ${{ env.AWS_REGION }}

            - name: Use Terraform
              uses: hashicorp/setup-terraform@v3

            - name: Initialise and Plan
              run: terraform init && terraform plan
              continue-on-error: false
              working-directory: 'your-dir-here'

            - name: Terraform Apply 
              run: terraform apply -auto-approve
              continue-on-error: false
              working-directory: 'your-dir-here'
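For the destroy pipeline, the structure stays largely the same; the main differences are the trigger and the final command. A sketch (file name, role, session name, and directory are placeholders):

```yaml
# terraform-destroy.yml -- sketch of the manually triggered destroy workflow
name: terraform-destroy
on:
    workflow_dispatch:    # run manually from the Actions tab

env:
    AWS_REGION: "eu-west-2"

permissions:
    id-token: write
    contents: read

jobs:
    terraform:
        runs-on: ubuntu-latest
        steps:
            - name: Checkout
              uses: actions/checkout@v4

            - name: Configure AWS Credentials
              uses: aws-actions/configure-aws-credentials@v4
              with:
                role-to-assume: your-role-here
                role-session-name: your-session-name-here
                aws-region: ${{ env.AWS_REGION }}

            - name: Use Terraform
              uses: hashicorp/setup-terraform@v3

            - name: Terraform Destroy
              run: terraform init && terraform destroy -auto-approve
              working-directory: 'your-dir-here'
```

If you have the GitHub CLI installed, you can also trigger this from the terminal with `gh workflow run terraform-destroy.yml` instead of clicking through the Actions tab.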

Once my workflows were created, my apply pipeline ran (after some troubleshooting with the IAM role and incorrect permissions) and my EC2 instance was deployed. I was then able to delete it by manually triggering my destroy pipeline.

Now that I have this template, I'll be reusing these pipelines in other repos to deploy more complex projects rather than running everything manually through the CLI.