Infrastructure as Code with Terraform

Why Infrastructure as Code (IaC)?

Not long ago, system administrators created infrastructure manually or with one-off scripts.

This forced companies to employ separate admins to manage and maintain data centers, networking, storage, and hardware, alongside their software engineers. But is operational overhead the only reason to adopt Infrastructure as Code (IaC)? Not really!

One of the main reasons is the difficulty of managing and provisioning infrastructure by hand. IaC lets you set up your complete infrastructure through code in minutes and eliminates the manual hand-offs between different groups, such as software engineers and IT admins.

In this tutorial, I will go through the basics of Terraform as an Infrastructure as Code tool.

What is Terraform?

Terraform is one of the most popular Infrastructure as Code tools created by HashiCorp.

It gives you the flexibility to define and provision a complete infrastructure using the human-friendly HashiCorp Configuration Language (HCL) or JSON.

Unlike many vendor-specific IaC tools, Terraform supports most major cloud providers, such as Amazon Web Services, Google Cloud Platform, Microsoft Azure, Oracle Cloud Infrastructure, and IBM Cloud.
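Switching clouds is mostly a matter of declaring a different provider. As a sketch, this is roughly what the same boilerplate looks like for AWS (the version constraint and region value here are illustrative, not part of this tutorial's example):

```
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 3.0"
    }
  }
}

# Configure the AWS provider; the region is just an example
provider "aws" {
  region = "eu-central-1"
}
```

The rest of the workflow (init, plan, apply) stays exactly the same regardless of which provider you declare.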

Install Terraform

First, install the HashiCorp tap, a repository of all HashiCorp packages, and then install Terraform (the commands below use Homebrew on macOS).

$ brew tap hashicorp/tap
$ brew install hashicorp/tap/terraform

Verify the installation in a new terminal window:

$ terraform -help
Usage: terraform [global options] <subcommand> [args]

The available commands for execution are listed below.
The primary workflow commands are given first, followed by
less common or more advanced commands.

Main commands:
  init          Prepare your working directory for other commands
  validate      Check whether the configuration is valid
  plan          Show changes required by the current configuration
  apply         Create or update infrastructure
  destroy       Destroy previously-created infrastructure



Terraform Docker Provider Elasticsearch Example

The Docker provider helps you create and manage Docker containers through the Docker API. In this example, we will use Terraform to create an Elasticsearch container locally.

Once the Terraform installation is done, create a directory named my-terraform-repo and navigate into it.

$ mkdir my-terraform-repo
$ cd my-terraform-repo

Paste the following Terraform configuration into a file with a .tf extension (main.tf is the conventional name).

terraform {
  required_providers {
    docker = {
      source  = "kreuzwerker/docker"
      version = "~> 2.13.0"
    }
  }
}

provider "docker" {}

resource "docker_image" "elastic" {
  name = "t0t0/docker-alpine-elasticsearch:latest"
}

resource "docker_container" "elasticsearch" {
  image = docker_image.elastic.latest
  name  = "elasticsearch"
  env   = ["discovery.type=single-node"]
  ports {
    internal = 9200
    external = 9200
  }
}

Initialize the working directory for the Terraform configuration files.

$ terraform init
Initializing the backend...

Initializing provider plugins...
- Reusing previous version of kreuzwerker/docker from the dependency lock file
- Using previously-installed kreuzwerker/docker v2.13.0

Terraform has been successfully initialized!
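Before applying, you can preview the changes Terraform would make; terraform plan prints the same resource diff you will see during apply without touching anything:

```
$ terraform plan
```

This is a safe step to run as often as you like, since it only reads state and computes the diff.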


Now it is time to apply the configuration defined in the file against your infrastructure.

$ terraform apply

Terraform will perform the following actions:

  # docker_container.elasticsearch will be created
  + resource "docker_container" "elasticsearch" {
      + attach           = false
      + bridge           = (known after apply)
      + command          = (known after apply)
      + container_logs   = (known after apply)
      + entrypoint       = (known after apply)
      + env              = [
          + "discovery.type=single-node",
        ]
      + exit_code        = (known after apply)
      + gateway          = (known after apply)
      ...
    }

Apply complete! Resources: 1 added, 0 changed, 0 destroyed.
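Since the container runs in your local Docker daemon, you can also confirm it is up directly with docker ps (the filter is optional; it just narrows the listing to our container):

```
$ docker ps --filter name=elasticsearch
```

You should see the elasticsearch container with port 9200 published.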


Once the apply is complete, validate the Elasticsearch cluster with the following command:

$ curl -X GET "localhost:9200/_cat/nodes?v=true&pretty"
host ip heap.percent ram.percent load node.role master name
                4          71 0.21 d         *      Cordelia Frost


or check the cluster details in your browser at http://localhost:9200/

{
  "name" : "Cordelia Frost",
  "cluster_name" : "elasticsearch",
  "version" : {
    "number" : "2.3.2",
    "build_hash" : "b9e4a6acad4008027e4038f6abed7f7dba346f94",
    "build_timestamp" : "2016-04-21T16:03:47Z",
    "build_snapshot" : false,
    "lucene_version" : "5.5.0"
  },
  "tagline" : "You Know, for Search"
}
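Another quick check is the cluster health API, which for a single-node setup like this one should report a status of green or yellow:

```
$ curl -X GET "localhost:9200/_cluster/health?pretty"
```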


Enable Terraform debug logging in case you want to see verbose logs:

$ export TF_LOG=DEBUG
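TF_LOG accepts the levels TRACE, DEBUG, INFO, WARN, and ERROR, and the companion variable TF_LOG_PATH writes the log to a file instead of stderr. A small sketch (the log file path is just an example):

```shell
# TRACE is the most verbose level; DEBUG, INFO, WARN, ERROR are also valid
export TF_LOG=TRACE
# Persist the log to a file instead of printing it to the terminal
export TF_LOG_PATH=./terraform-debug.log
```

With both variables set, the next terraform command you run will append its full log to that file.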

Terraform’s official website has excellent documentation for Terraform Providers and the Terraform Language. The best way to learn a tool or a language is to try it yourself, so feel free to explore the official documentation with different providers.
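When you are done with the example, Terraform can clean up everything it created in one step:

```
$ terraform destroy
```

This removes the Elasticsearch container (and any other resources in the state) after asking for confirmation.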

Ready with Infrastructure and need to develop a program to run on this platform? 

Check out my other posts to find the best option for you!
