Terraform: Provision and Deploy Basic Three-Tier Web Application to AWS

In this blog post, I'm going to share how I provisioned and deployed a basic three-tier web application to AWS using Terraform. The application setup includes:

  • An Application Load Balancer (ALB) for distributing traffic.
  • ECS Fargate for running containerized workloads.
  • An Aurora RDS database for persistent data storage.

Prerequisites

I'm assuming you have the following prerequisites:

  • An AWS CLI profile configured with the necessary permissions.
  • Terraform installed on your local machine.

Step 1: Create GitHub Repository and Initialize Terraform

I created a new private GitHub repository to store the Terraform configuration files. To start, I added a .gitignore file like the one below:

```
# Local .terraform directories
**/.terraform/*

# .tfstate files
*.tfstate
*.tfstate.*

# Crash log files
crash.log
crash.*.log

# Exclude all .tfvars files, which are likely to contain sensitive data, such as
# password, private keys, and other secrets. These should not be part of version
# control as they are data points which are potentially sensitive and subject
# to change depending on the environment.
*.tfvars
*.tfvars.json

# Ignore override files as they are usually used to override resources locally and so
# are not checked in
override.tf
override.tf.json
*_override.tf
*_override.tf.json

# Ignore transient lock info files created by terraform apply
.terraform.tfstate.lock.info

# Include override files you do wish to add to version control using negated pattern
# !example_override.tf

# Include tfplan files to ignore the plan output of command: terraform plan -out=tfplan
# example: *tfplan*

# Ignore CLI configuration files
.terraformrc
terraform.rc
```



.tfvars files work much like environment variables, but for Terraform: they supply values for input variables. It's good practice to keep sensitive information such as access keys, secret keys, and other secrets in .tfvars files and exclude them from version control.
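
As a quick illustration of that pairing (the variable name here is just a made-up example, not one used in this project), a declaration lives in a .tf file while the actual secret stays in a git-ignored .tfvars file:

```hcl
# variables.tf -- declare the input variable; no secret value is stored here
variable "api_key" {
  description = "Example secret supplied outside version control"
  type        = string
  sensitive   = true
}

# terraform.tfvars (git-ignored) -- provide the actual value:
# api_key = "super-secret-value"
```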

Then I created an S3 bucket and a DynamoDB table to store the Terraform state file and the state lock entries. I used the AWS CLI with the following commands (your bucket and table names will differ) to create them:

```bash
aws s3api create-bucket --bucket {REPLACE_WITH_YOUR_S3_BUCKET_NAME} --region ap-northeast-2 --create-bucket-configuration LocationConstraint=ap-northeast-2

# Enable versioning on the bucket (optional but recommended)

aws s3api put-bucket-versioning --bucket {REPLACE_WITH_YOUR_S3_BUCKET_NAME} --versioning-configuration Status=Enabled

# Create a DynamoDB table for state locking

aws dynamodb create-table --table-name {REPLACE_WITH_YOUR_DYNAMODB_TABLE_NAME} --attribute-definitions AttributeName=LockID,AttributeType=S \
--key-schema AttributeName=LockID,KeyType=HASH --billing-mode PAY_PER_REQUEST
```

After creating the bucket and table, I added a backend.tf file in the ./mock-web-app directory, which contains the configuration for remote state management using the S3 bucket and DynamoDB table:

```hcl
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
    cloudflare = {
      source  = "cloudflare/cloudflare"
      version = "~> 3.0"
    }
  }

  backend "s3" {
    bucket         = "{REPLACE_WITH_YOUR_S3_BUCKET_NAME}"    # S3 bucket name
    key            = "mock-web-app/terraform.tfstate" # Folder/Key structure in S3
    region         = "ap-northeast-2"
    dynamodb_table = "{REPLACE_WITH_YOUR_DYNAMODB_TABLE_NAME}" # State locking table
    encrypt        = true                   # Encrypt the state file
  }
}
```

Here’s how my project directory looked:

```
root/
├── mock-web-app/               # Main folder for the application
│   ├── alb.tf                  # ALB resources (HTTP/HTTPS listeners, Target Groups)
│   ├── backend.tf              # Remote state management
│   ├── database.tf             # Aurora database resources
│   ├── ecs.tf                  # ECS cluster, task definition, and service
│   ├── ecr.tf                  # ECR repository
│   ├── main.tf                 # Main Terraform configuration
│   ├── outputs.tf              # Outputs for ALB DNS, database endpoints, etc.
│   ├── provider.tf             # AWS provider configuration
│   ├── variables.tf            # Input variables for resources
│   └── terraform.tfvars        # Values for variables
├── .gitignore                  # Git ignore file
└── README.md                   # Project README
```

Step 2: Import Existing VPC and Subnets

I already had a VPC and subnets in my AWS account that I wanted to use for the application. So instead of creating the VPC and other networking resources with Terraform, I referenced the existing VPC through a data source and imported the subnets into the Terraform state.

First, I created a new file called main.tf in the mock-web-app directory and added the following code, which looks up the VPC and defines the subnets to be imported:

provider "aws" {
  region = "ap-northeast-2" # Ensure it matches your backend region
}

data "aws_vpc" "practice" {
  filter {
    name   = "tag:Name"       # Use the tag "Name" to identify the VPC
    values = ["practice-vpc"] # The name of your VPC
  }

}

resource "aws_subnet" "public_subnet_a" {
  vpc_id                  = data.aws_vpc.practice.id
  cidr_block              = "172.0.32.0/24"
  availability_zone       = "ap-northeast-2a"
  map_public_ip_on_launch = true

  tags = {
    Name  = "PublicSubnetA"
    Stage = "prod"
  }
  lifecycle {
    prevent_destroy = true
  }
}


resource "aws_subnet" "public_subnet_b" {
  vpc_id                  = data.aws_vpc.practice.id
  cidr_block              = "172.0.33.0/24"
  availability_zone       = "ap-northeast-2b"
  map_public_ip_on_launch = true

  tags = {
    Name  = "PublicSubnetB"
    Stage = "prod"
  }
  lifecycle {
    prevent_destroy = true
  }
}


resource "aws_subnet" "private_subnet_a" {
  vpc_id                  = data.aws_vpc.practice.id
  cidr_block              = "172.0.0.0/20"
  availability_zone       = "ap-northeast-2a"
  map_public_ip_on_launch = false

  tags = {
    Name  = "PrivateSubnetA"
    Stage = "prod"
  }
  lifecycle {
    prevent_destroy = true
  }
}

resource "aws_subnet" "private_subnet_b" {
  vpc_id                  = data.aws_vpc.practice.id
  cidr_block              = "172.0.16.0/20"
  availability_zone       = "ap-northeast-2b"
  map_public_ip_on_launch = false

  tags = {
    Name  = "PrivateSubnetB"
    Stage = "prod"
  }
  lifecycle {
    prevent_destroy = true
  }
}
```

Then I ran the following commands to import the existing subnets into the Terraform state:

```bash
terraform import aws_subnet.public_subnet_a {public-subnet-a-id}
terraform import aws_subnet.public_subnet_b {public-subnet-b-id}
terraform import aws_subnet.private_subnet_a {private-subnet-a-id}
terraform import aws_subnet.private_subnet_b {private-subnet-b-id}
```

public-subnet-a-id, public-subnet-b-id, private-subnet-a-id, and private-subnet-b-id are the actual subnet IDs in your AWS account.
When running the import command, Terraform maps each existing resource to the corresponding resource block in the configuration.

For example, the public_subnet_a resource in the Terraform configuration is mapped to the existing public subnet with the ID public-subnet-a-id.
I defined both the public and private subnets this way, and guarded them against accidental deletion with the prevent_destroy = true lifecycle rule.
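
As an aside, on Terraform 1.5 or later the same imports can also be expressed declaratively with import blocks instead of the CLI command. A minimal sketch (the subnet IDs are placeholders for the real IDs in your account):

```hcl
# Hypothetical alternative to `terraform import` (requires Terraform >= 1.5).
# Replace the IDs with the real subnet IDs from your account.
import {
  to = aws_subnet.public_subnet_a
  id = "{public-subnet-a-id}"
}

import {
  to = aws_subnet.private_subnet_a
  id = "{private-subnet-a-id}"
}
```

With import blocks, the import happens as part of the normal terraform plan / terraform apply workflow instead of a separate command.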

Step 3: Create ECR Repository and Aurora Database

Next, I created an ECR repository to store the Docker image for the application and an Aurora RDS database for persistent data storage.

Here's the code I added to ecr.tf:

resource "aws_ecr_repository" "mock_web_app_repo" {
  name                 = "mock-web-app"
  image_tag_mutability = "MUTABLE" # Allows overwriting tags (default)
  image_scanning_configuration {
    scan_on_push = true
  }

  tags = {
    Name  = "mock-web-app"
    Stage = "prod"
  }
}
```

Pushing the image to the ECR repository still has to be done manually, so I ran the following commands (the same commands are shown in the AWS ECR console under "View push commands"):

```bash
aws ecr get-login-password --region ap-northeast-2 | docker login --username AWS --password-stdin {REPLACE_WITH_YOUR_AWS_ACCOUNT_ID}.dkr.ecr.ap-northeast-2.amazonaws.com
docker build -t mock-web-app .
docker tag mock-web-app:latest {REPLACE_WITH_YOUR_AWS_ACCOUNT_ID}.dkr.ecr.ap-northeast-2.amazonaws.com/mock-web-app:latest
docker push {REPLACE_WITH_YOUR_AWS_ACCOUNT_ID}.dkr.ecr.ap-northeast-2.amazonaws.com/mock-web-app:latest
```

Then I added the following code to database.tf to create an Aurora RDS database:

module "aurora_postgresql_v2" {
  source  = "terraform-aws-modules/rds-aurora/aws"
  version = "~> 7.0"

  name              = "mock-web-app-aurora-postgresql"
  engine            = "aurora-postgresql"
  engine_mode       = "provisioned"
  engine_version    = "16.1"
  storage_encrypted = true
  master_username   = "mock_web_app_admin" # RDS master usernames can't contain hyphens
  master_password   = random_password.master.result

  vpc_id               = data.aws_vpc.practice.id
  db_subnet_group_name = aws_db_subnet_group.aurora_subnets.name
  publicly_accessible  = false
  # Disable creation of subnet group in the module
  create_db_subnet_group = false

  vpc_security_group_ids = [aws_security_group.mock_web_app_db_sg.id]
  create_security_group  = false

  # Serverless v2 configuration
  instances = {
    1 = {
      instance_class      = "db.serverless"
      publicly_accessible = false
    }
  }


  serverlessv2_scaling_configuration = {
    min_capacity = 0.5
    max_capacity = 10
  }

  db_cluster_parameter_group_name = aws_rds_cluster_parameter_group.aurora_pg_params.name

  apply_immediately   = true
  skip_final_snapshot = true

  tags = {
    Name  = "mock-web-app-aurora"
    Stage = "prod"
  }
}


resource "aws_db_subnet_group" "aurora_subnets" {
  name       = "aurora-subnet-group"
  subnet_ids = [aws_subnet.private_subnet_a.id, aws_subnet.private_subnet_b.id]

  tags = {
    Name = "aurora-subnet-group"
  }
}

resource "aws_security_group" "mock_web_app_db_sg" {
  name        = "mock_web_app-db_sg"
  description = "Allow Aurora DB connections"
  vpc_id      = data.aws_vpc.practice.id
  # Allow inbound traffic from ECS Task Security Group

  ingress {
    from_port       = 5432
    to_port         = 5432
    protocol        = "tcp"
    security_groups = [aws_security_group.mock_web_app_api_sg.id] # Only allow traffic from ECS Task SG
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}


resource "random_password" "master" {
  length  = 20
  special = true
}

resource "aws_rds_cluster_parameter_group" "aurora_pg_params" {
  name        = "aurora-postgresql-params"
  family      = "aurora-postgresql16"
  description = "Custom parameter group for Aurora PostgreSQL 16"

  parameter {
    name  = "log_statement"
    value = "all"
  }

  parameter {
    name  = "log_min_duration_statement"
    value = "500"
  }

  tags = {
    Name = "aurora-parameter-group"
  }
}
```

I used the official Terraform AWS Aurora module to create the Aurora RDS database. The module supports both provisioned and serverless Aurora configurations.
I first tried engine_mode = "serverless", but it returned an error; it turned out that mode belongs to the deprecated Aurora Serverless v1, while Aurora Serverless v2 uses engine_mode = "provisioned" together with db.serverless instances. The module's official GitHub repository uses engine_mode = "provisioned" in its Serverless v2 example, so I followed the same configuration.

The mock_web_app-db_sg security group allows inbound traffic from the ECS task security group on port 5432 (the PostgreSQL port) and allows all egress traffic.
To make sure the database lives in the private subnets, I created a DB subnet group containing the private subnets and set publicly_accessible = false in the instance configuration.

Step 4: Create ECS components

After creating the ECR repository and Aurora RDS database, I created the ECS components to deploy the application.

Here's the code I added to ecs.tf to create the ECS cluster, task definition, and service:

resource "aws_ecs_cluster" "mock_web_app_cluster" {
  name = "prod-mock-web-app-cluster"

  tags = {
    Name  = "mock-web-app-cluster"
    Stage = "prod"
  }
}

resource "aws_ecs_service" "mock_web_app_service" {
  name            = "mock-web-app-service"
  cluster         = aws_ecs_cluster.mock_web_app_cluster.id
  task_definition = aws_ecs_task_definition.mock_web_app_task.arn
  desired_count   = 1
  launch_type     = "FARGATE"

  network_configuration {
    subnets          = [aws_subnet.public_subnet_a.id, aws_subnet.public_subnet_b.id]
    security_groups  = [aws_security_group.mock_web_app_api_sg.id]
    assign_public_ip = true # if not enabled, this gave an error when deployed
  }

  load_balancer {
    target_group_arn = aws_lb_target_group.mock_web_app_tg.arn
    container_name   = "mock-web-app-api"
    container_port   = 8080
  }

  tags = {
    Name  = "mock-web-app-service"
    Stage = "prod"
  }
}


resource "aws_iam_role" "ecs_task_execution_role" {
  name = "ecs-task-execution-role"

  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Effect    = "Allow"
        Principal = { Service = "ecs-tasks.amazonaws.com" }
        Action    = "sts:AssumeRole"
      }
    ]
  })
}

resource "aws_iam_policy_attachment" "ecs_task_execution_role_policy" {
  name       = "ecs-task-execution-policy-attach"
  roles      = [aws_iam_role.ecs_task_execution_role.name]
  policy_arn = "arn:aws:iam::aws:policy/service-role/AmazonECSTaskExecutionRolePolicy"
}

resource "aws_iam_policy_attachment" "ecs_task_execution_role_ecr_policy" {
  name       = "ecs-task-ecr-policy-attach"
  roles      = [aws_iam_role.ecs_task_execution_role.name]
  policy_arn = "arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly"
}

resource "aws_ecs_task_definition" "mock_web_app_task" {
  family                   = "mock-web-app-api"
  cpu                      = "512"  # 0.5 vCPU
  memory                   = "1024" # 1 GB
  network_mode             = "awsvpc"
  requires_compatibilities = ["FARGATE"]
  execution_role_arn       = aws_iam_role.ecs_task_execution_role.arn

  container_definitions = jsonencode([
    {
      name  = "mock-web-app-api"
      image = "${aws_ecr_repository.mock_web_app_api_repo.repository_url}:latest"
      portMappings = [
        {
          containerPort = 8080
          hostPort      = 8080
          protocol      = "tcp"
        }
      ]
      environment = [
        {
          name  = "POSTGRES_USERNAME"
          value = var.db_username
        },
        {
          name  = "POSTGRES_PASSWORD",
          value = var.db_password

        },
        {
          name  = "POSTGRES_HOST",
          value = module.aurora_postgresql_v2.cluster_endpoint
        }

      ]
      logConfiguration = {
        logDriver = "awslogs"
        options = {
          awslogs-group         = aws_cloudwatch_log_group.ecs_log_group.name
          awslogs-region        = "ap-northeast-2"
          awslogs-stream-prefix = "ecs"
        }
      }
    }
  ])
}

resource "aws_security_group" "mock_web_app_api_sg" {
  name        = "mock-web-app-api-sg"
  description = "Security group for ECS tasks"
  vpc_id      = data.aws_vpc.practice.id

  ingress {
    from_port   = 8080
    to_port     = 8080
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }

  tags = {
    Name  = "mock-web-app-api-sg"
    Stage = "prod"
  }
}

resource "aws_cloudwatch_log_group" "ecs_log_group" {
  name              = "/ecs/mock-web-app"
  retention_in_days = 7 # Optional: Adjust log retention period as needed

  tags = {
    Name  = "mock-web-app-logs"
    Stage = "prod"
  }
}
```

I ran into an error where the ECS task couldn't pull the image from the ECR repository. I found that, starting with Fargate platform version 1.3.0, the assign_public_ip setting matters when using the Fargate launch type.
If assign_public_ip is not enabled in the network_configuration block, a NAT Gateway is needed to give the Fargate tasks outbound internet access (and thus a path to ECR).
I decided that enabling assign_public_ip was simpler than setting up a NAT Gateway, so I turned it on and gave the ECS task a public IP.
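
For reference, the alternative I skipped would look roughly like this: run the tasks in the private subnets and route their outbound traffic through a NAT Gateway. This is only a sketch under my setup's assumptions (a single NAT Gateway in public_subnet_a, route tables managed here), not something I actually deployed:

```hcl
# Hypothetical sketch: NAT Gateway path for tasks in private subnets (not deployed here).
resource "aws_eip" "nat" {
  domain = "vpc" # Elastic IP for the NAT Gateway
}

resource "aws_nat_gateway" "nat" {
  allocation_id = aws_eip.nat.id
  subnet_id     = aws_subnet.public_subnet_a.id # NAT Gateway lives in a public subnet
}

resource "aws_route_table" "private" {
  vpc_id = data.aws_vpc.practice.id

  route {
    cidr_block     = "0.0.0.0/0"
    nat_gateway_id = aws_nat_gateway.nat.id # default route out through the NAT Gateway
  }
}

resource "aws_route_table_association" "private_a" {
  subnet_id      = aws_subnet.private_subnet_a.id
  route_table_id = aws_route_table.private.id
}

resource "aws_route_table_association" "private_b" {
  subnet_id      = aws_subnet.private_subnet_b.id
  route_table_id = aws_route_table.private.id
}
```

With that in place, the ECS service's network_configuration would point at the private subnets with assign_public_ip = false.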

I also created a variables.tf file in the mock-web-app directory to define input variables for the Terraform configuration:

variable "db_username" {
  description = "The username for the database"
  type        = string
}
variable "db_password" {
  description = "The password for the database"
  type        = string
  sensitive   = true
}
```

Variables declared in variables.tf can be used throughout the Terraform configuration files, and they can be populated with a tfvars file like the one below:

terraform.tfvars

```hcl
db_username = "{REPLACE_WITH_YOUR_DB_USERNAME}"
db_password = "{REPLACE_WITH_YOUR_DB_PASSWORD}"
```

When a terraform.tfvars file is present, Terraform automatically loads the values from it.
Otherwise, you can pass values with the -var option when running Terraform commands, or point at a specific tfvars file with -var-file, like below:

```bash
terraform apply -var="db_username={REPLACE_WITH_YOUR_DB_USERNAME}" -var="db_password={REPLACE_WITH_YOUR_DB_PASSWORD}"

terraform apply -var-file="development.tfvars" # the name of the file can be different
```


Step 5: Create Application Load Balancer

Finally, I created the Application Load Balancer (ALB) to distribute traffic to the ECS service. Here's the code I added to alb.tf:

```hcl

resource "aws_lb" "mock_web_app_alb" {
  name               = "mock-web-alb"
  load_balancer_type = "application"
  security_groups    = [aws_security_group.mock_web_app_alb_sg.id] # Use ALB SG
  subnets            = [aws_subnet.public_subnet_a.id, aws_subnet.public_subnet_b.id]

  tags = {
    Name  = "prod-mock-web-alb"
    Stage = "prod"
  }
}

resource "aws_security_group" "mock_web_app_alb_sg" {
  name        = "mock-web-app-alb-sg"
  description = "Security group for ALB allowing HTTP and HTTPS traffic"
  vpc_id      = data.aws_vpc.practice.id

  # Allow inbound HTTP traffic
  ingress {
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  # Allow inbound HTTPS traffic
  ingress {
    from_port   = 443
    to_port     = 443
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  # Allow all outbound traffic
  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }

  tags = {
    Name  = "mock-web-app-alb-sg"
    Stage = "prod"
  }
}

resource "aws_lb_target_group" "mock_web_app_tg" {
  name        = "mock-web-app-tg"
  port        = 8080
  protocol    = "HTTP"
  vpc_id      = data.aws_vpc.practice.id
  target_type = "ip" # Specify target type as IP for Fargate

  health_check {
    path                = "/"
    interval            = 30
    timeout             = 5
    healthy_threshold   = 2
    unhealthy_threshold = 2
  }

  tags = {
    Name  = "mock-web-app-tg"
    Stage = "prod"
  }
}

resource "aws_lb_listener" "http" {
  load_balancer_arn = aws_lb.mock_web_app_alb.arn
  port              = 80
  protocol          = "HTTP"

  default_action {
    type = "redirect"
    redirect {
      port        = "443"
      protocol    = "HTTPS"
      status_code = "HTTP_301"
    }
  }
}


resource "aws_lb_listener" "https" {
  load_balancer_arn = aws_lb.mock_web_app_alb.arn
  port              = 443
  protocol          = "HTTPS"
  ssl_policy        = "ELBSecurityPolicy-2016-08"
  certificate_arn   = "{REPLACE_WITH_YOUR_ACM_CERTIFICATE_ARN}" # ACM certificate ARN

  default_action {
    type             = "forward"
    target_group_arn = aws_lb_target_group.mock_web_app_tg.arn
  }
}
```

After creating the ALB, I added the following code to outputs.tf for debugging purposes:

output "vpc_id" {
  value = data.aws_vpc.practice.id
}

output "ecr_repository_uri" {
  value = aws_ecr_repository.mock_web_app_repo.repository_url
}

# Optional: Output the cluster endpoint
output "db_cluster_endpoint" {
  description = "Writer endpoint for the cluster"
  value       = module.aurora_postgresql_v2.cluster_endpoint
}

output "db_cluster_reader_endpoint" {
  description = "Reader endpoint for the cluster"
  value       = module.aurora_postgresql_v2.cluster_reader_endpoint
}

output "db_cluster_master_password" {
  description = "Master password"
  value       = random_password.master.result
  sensitive   = true
}

output "ecs_sg_id" {
  value = aws_security_group.mock_web_app_api_sg.id
}

output "db_sg_id" {
  value = aws_security_group.mock_web_app_db_sg.id
}

output "alb_dns_name" {
  value = aws_lb.mock_web_app_alb.dns_name
}
```

Terraform outputs do not affect the infrastructure, but they are useful for debugging and for sharing information about the resources Terraform creates.
When running terraform apply, Terraform displays the outputs in the console.
Those outputs can also be referenced from other Terraform code.
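
For example, another Terraform configuration could read this stack's outputs through the same S3 backend with a terraform_remote_state data source. A minimal sketch (the bucket name is the same placeholder used throughout this post):

```hcl
# Hypothetical sketch: consuming this stack's outputs from another configuration.
data "terraform_remote_state" "mock_web_app" {
  backend = "s3"

  config = {
    bucket = "{REPLACE_WITH_YOUR_S3_BUCKET_NAME}"
    key    = "mock-web-app/terraform.tfstate"
    region = "ap-northeast-2"
  }
}

# Then, e.g., reference the ALB DNS name elsewhere:
# data.terraform_remote_state.mock_web_app.outputs.alb_dns_name
```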

Running terraform plan will show the changes that Terraform will apply to the infrastructure. To avoid accidental changes, it's a recommended practice to review the plan output before applying the changes.

Now I can run terraform apply to create the resources in the AWS account:

```bash
terraform init # Initialize the Terraform configuration
terraform plan # Review the changes to be applied
terraform apply # Apply the changes to create the resources
```

This will create the ALB, ECS Fargate service, and Aurora RDS database in the AWS account.
After the resources are created, I can access the application using the ALB DNS name.
I also had a domain purchased through Cloudflare, so I configured a DNS record in Cloudflare to point to the ALB DNS name.
After running terraform apply and configuring DNS in Cloudflare, I was able to access the application using the domain name.
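
I did this step in the Cloudflare dashboard, but since the cloudflare provider is already declared in backend.tf, the record could also be managed with Terraform. A rough sketch under my assumptions (the zone ID and record name are placeholders, and the provider's own configuration isn't shown):

```hcl
# Hypothetical sketch: a Cloudflare DNS record pointing at the ALB (provider ~> 3.0).
resource "cloudflare_record" "mock_web_app" {
  zone_id = "{REPLACE_WITH_YOUR_CLOUDFLARE_ZONE_ID}"
  name    = "app" # subdomain, e.g. app.example.com
  type    = "CNAME"
  value   = aws_lb.mock_web_app_alb.dns_name # the ALB created above
  proxied = true
}
```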

For next steps, I plan to write about how I organized the Terraform code to manage multiple environments.

To be continued...