
Ace Terraform Deployments with Best Practices

By Sachin Arora - July 14, 2021

Learn the Best Practices of Terraform

Terraform is a tool for building, changing, and versioning infrastructure safely and efficiently. Terraform can manage existing and popular service providers as well as custom in-house solutions.

The key features of Terraform are:

  • Infrastructure as Code: Infrastructure is described using a high-level configuration syntax. This allows a blueprint of your data center to be versioned and treated as you would any other code. Additionally, infrastructure can be shared and re-used.

  • Execution Plans: Terraform has a “planning” step where it generates an execution plan. The execution plan shows what Terraform will do when you call apply. This lets you avoid any surprises when Terraform manipulates infrastructure.

  • Resource Graph: Terraform builds a graph of all your resources, and parallelizes the creation and modification of any non-dependent resources. Because of this, Terraform builds infrastructure as efficiently as possible, and operators get insight into dependencies in their infrastructure.

  • Change Automation: With minimal human interaction, complex changesets can be applied to your infrastructure. With the previously mentioned execution plan and resource graph, you know exactly what Terraform will change and in what order, avoiding many possible human errors.

If you have started using Terraform, you should adopt the best practices for better production infrastructure provisioning:

1) Structuring the Project

When you are working on a large production infrastructure project with Terraform, you should follow a proper directory structure to manage the complexity that builds up in the project. Keep separate directories for separate purposes.

For example, if you are using Terraform in development, staging, and production environments, have a separate directory for each of them:

├── dev
│   ├── main.tf
│   ├── outputs.tf
│   └── variables.tf
├── modules
│   ├── emr
│   │   ├── emr.tf
│   │   └── main.tf
│   └── vpc
│       ├── main.tf
│       └── vpc.tf
├── prod
│   ├── main.tf
│   ├── outputs.tf
│   └── variables.tf
└── stg
    ├── main.tf
    ├── outputs.tf
    └── variables.tf

The Terraform configuration itself should also be split into separate files, because as the infrastructure grows, a single configuration file becomes complex.

For example, you could write all your Terraform code (modules, resources, variables, outputs) inside the main.tf file itself, but keeping variables and outputs in separate files makes the configuration more readable and easier to understand.
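As a minimal sketch of that split (the AMI ID, resource names, and variable names here are illustrative, not from the original article), the same configuration spread across the three files from the tree above might look like:

```hcl
# variables.tf -- input variables, kept out of main.tf
variable "instance_type" {
  description = "EC2 instance type for the web server"
  type        = string
  default     = "t3.micro"
}

# main.tf -- resources only
resource "aws_instance" "web" {
  ami           = "ami-0abcdef1234567890" # hypothetical AMI ID
  instance_type = var.instance_type
}

# outputs.tf -- values exposed to callers and other configurations
output "web_public_ip" {
  value = aws_instance.web.public_ip
}
```

Terraform loads every .tf file in the directory, so the split is purely organizational and does not change behavior.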

2) Good Naming Convention

Naming conventions are used in Terraform to make things easily understandable.

For example, let’s say you want to make three different workspaces for different environments in a project. So, rather than naming them as env1, env2, env3, you should call them dev, stage, prod. It becomes pretty clear from the name itself that there are three different workspaces for each environment.

Similar conventions should also be followed for resources, variables, modules, etc.
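With clearly named workspaces, the workspace name itself can be reused inside the configuration, for example to size and tag resources per environment. A sketch, assuming the dev/stage/prod workspaces above (the AMI ID and tag keys are illustrative):

```hcl
# terraform.workspace resolves to "dev", "stage", or "prod"
resource "aws_instance" "app" {
  ami           = "ami-0abcdef1234567890" # hypothetical AMI ID
  instance_type = terraform.workspace == "prod" ? "t3.large" : "t3.micro"

  tags = {
    Name        = "app-${terraform.workspace}"
    Environment = terraform.workspace
  }
}
```

This only reads well because the workspace names are meaningful; with env1/env2/env3 the conditional above would tell you nothing.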

3) Usage of Shared Modules

It is strongly suggested to use official Terraform modules. No need to reinvent a module that already exists. It saves a lot of time and pain. The Terraform Registry has plenty of modules readily available. If needed, you can modify and enhance the existing modules as per your needs.

Also, each module should concentrate on only one aspect of the infrastructure, such as creating an AWS EC2 instance, creating MySQL databases, etc.

For example, if you want an AWS VPC in your Terraform code, you can use the simple-vpc example from the official AWS VPC module:

module "vpc_example_simple-vpc" {
  source  = "terraform-aws-modules/vpc/aws//examples/simple-vpc"
  version = "3.4"
}

4) Backup System State

Always backup the state files of Terraform.

This file, called terraform.tfstate, is stored locally inside the workspace directory by default.

Without this file, Terraform will not be able to figure out which resources are deployed on the infrastructure. So, it is essential to have a backup of the state file. By default, a file named terraform.tfstate.backup is created to keep a backup of the previous state.

├── awsec2.tf
├── terraform.tfstate
└── terraform.tfstate.backup

If you want to store the backup state file at another location, use the -backup flag of the terraform command and give the location path.

There will be multiple developers working on a project. So, to give them all access to the state, it should be stored at a remote location such as an S3 bucket; other configurations can then read it through the terraform_remote_state data source.

The following example reads the VPC state stored in S3:

data "terraform_remote_state" "vpc" {
  backend = "s3"
  config = {
    bucket = "s3-terraform-bucket"
    key    = "vpc/terraform.tfstate"
    region = "us-east-1"
  }
}
5) Lock State File

There can be multiple scenarios where more than one developer tries to run the terraform configuration at the same time. This can lead to the corruption of the terraform state file or even data loss. The locking mechanism helps to prevent such scenarios. It makes sure that at a time, only one person is running the terraform configurations, and there is no conflict.

Here is an example of locking a state file that is stored at a remote location (S3), using DynamoDB.

resource "aws_dynamodb_table" "terraform_state_lock" {
  name           = "terraform-locking"
  read_capacity  = 3
  write_capacity = 3
  hash_key       = "LockingID"

  attribute {
    name = "LockingID"
    type = "S"
  }
}

terraform {
  backend "s3" {
    bucket         = "s3-terraform-bucket"
    key            = "vpc/terraform.tfstate"
    region         = "us-east-2"
    dynamodb_table = "terraform-locking"
  }
}

6) Use var-file

In Terraform, you can create a file with the extension .tfvars and pass it to the terraform apply command using the -var-file flag. This lets you supply variables you don't want to put in the Terraform configuration code.

It is always suggested to pass sensitive variables such as passwords and secret keys locally through -var-file rather than saving them inside the Terraform configuration or in a remote version control system.

For example, if you want to launch an EC2 instance using Terraform, you can pass the access key and secret key using -var-file.

Create a file terraform.tfvars and put the keys in this file:

gedit terraform.tfvars

access_key = "XXXXXXXXXXXXXXX"
secret_key = "XXXXXXXXXXXXXXX"

Then pass the file while applying:

terraform apply -var-file=/home/clairvoyant/terraform.tfvars
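A sketch of the matching variable declarations (the provider wiring and region are illustrative; the sensitive flag, available since Terraform 0.14, additionally redacts the values from plan and apply output):

```hcl
variable "access_key" {
  type      = string
  sensitive = true # hide the value in plan/apply output
}

variable "secret_key" {
  type      = string
  sensitive = true
}

provider "aws" {
  region     = "us-east-1" # illustrative region
  access_key = var.access_key
  secret_key = var.secret_key
}
```

The variable names mirror the terraform.tfvars file above, so -var-file populates them directly.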

7) Use Docker

When running CI/CD pipeline build jobs, it is recommended to use Docker containers. Terraform provides official Docker images that can be used. If you change the CI/CD server, you can easily move the build environment along, since it is packaged inside a container.
Before deploying infrastructure to the production environment, you can also test it from Docker containers, which are very easy to spin up.

By combining Terraform and Docker, you get portable, reusable, repeatable infrastructure.
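As a sketch, a CI job could run Terraform from the official hashicorp/terraform image (the mount path and version tag here are illustrative; the image's entrypoint is the terraform binary, so the command name follows directly):

```shell
# Run terraform plan from the official image,
# mounting the current config directory into the container
docker run --rm \
  -v "$(pwd)":/workspace \
  -w /workspace \
  hashicorp/terraform:1.0.0 plan
```

Pinning the image tag keeps every pipeline run on the same Terraform version, regardless of which CI server executes the job.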

Reach out to us at Clairvoyant for all your Cloud based services requirements and experience the best business solutions. Also, check out our blog AWS CloudShell and Terraform to learn more.


Tags: Cloud Services
