
Development Tool: Terraform
Type: Pattern
License: Apache 2.0 (https://github.com/aws-ia/ecs-blueprints/blob/main/LICENSE)

Load balanced public service with Terraform

Use Terraform to deploy a public-facing load balanced service.

About

Terraform by HashiCorp is an infrastructure automation tool that can be used to provision and manage resources on AWS.

This pattern will show how to deploy a load balanced web service using Amazon ECS and Terraform. It builds on top of the pattern “Create an Amazon ECS cluster with Terraform”.

Dependencies

  • Terraform (tested version v1.2.5 on darwin_amd64)
  • Git (tested version 2.27.0)
  • AWS CLI
  • AWS test account with administrator role access
  • Configure AWS credentials
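A quick preflight check (a convenience script of our own, not part of the pattern) confirms the tools listed above are on PATH before you begin. It does not verify versions or AWS credentials:

```shell
# Check each required tool; report anything missing.
missing=""
for tool in terraform git aws; do
  if command -v "$tool" > /dev/null 2>&1; then
    echo "found: $tool"
  else
    echo "missing: $tool"
    missing="$missing $tool"
  fi
done
if [ -z "$missing" ]; then
  echo "all dependencies present"
else
  echo "install before continuing:$missing"
fi
```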

Architecture

This pattern will create the following AWS resources:

  • ALB: We are using Application Load Balancer for this service. Note the following key attributes for ALB:
    • ALB security group - allows ingress from any IP address to port 80 and allows all egress
    • ALB subnet - ALB is created in a public subnet
    • Listener - listens on port 80 for protocol HTTP
    • Target group - Because this service uses the Fargate launch type, the target type is IP: each Fargate task gets its own ENI and IP address. The target group uses the container port (3000) and protocol (HTTP) on which the application container serves requests. The ALB runs health checks against all registered targets; in this example it sends an HTTP GET request to the path “/” on container port 3000. This pattern uses the target group's default health check settings. You can tune these settings to adjust the interval and frequency of checks, which affects how quickly tasks become available to serve traffic. (See the ALB target health check documentation to learn more.)
  • ECR registry for the container image. We are using only one container image for the task in this example.
  • ECS service definition:
    • Task security group: allows TCP ingress from the ALB security group to the container service port (3000 in this example), and allows all egress.
    • Service discovery: you can register the service in an AWS Cloud Map registry. You only need to provide the namespace, but make sure the namespace was created in the core-infra step.
    • Tasks for this service will be deployed in the private subnets.
    • The service definition takes the load balancer target group created above as input.
    • The task definition consists of the task vCPU size, the task memory, and the container information, including the ECR repository URL created above.
    • The task definition also takes the task execution role ARN, which the ECS agent uses to fetch ECR images and send logs to AWS CloudWatch on behalf of the task.
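As noted above, the target group health check can be tuned so that new tasks start receiving traffic sooner. A sketch of non-default settings (the field names follow the terraform-aws-modules/alb module used in main.tf below; the specific values are illustrative assumptions, not recommendations):

```tf
# Hypothetical health check tuning for the target group. Shorter intervals
# and a lower healthy_threshold register new tasks faster, at the cost of
# more frequent health check traffic.
health_check = {
  path                = "/"
  port                = local.container_port
  matcher             = "200-299"
  interval            = 15 # seconds between checks (default is 30)
  timeout             = 5  # seconds to wait for a response
  healthy_threshold   = 2  # consecutive successes before a target is healthy
  unhealthy_threshold = 2  # consecutive failures before a target is unhealthy
}
```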

Deploy the core infrastructure

If you have not already done so, follow the instructions in “Create an Amazon ECS cluster with Terraform” to set up the required underlying infrastructure that will support the ECS service.

ℹ️ Info: This pattern and the core infrastructure pattern are designed to be decoupled and deployed into two different Terraform workspaces. The core infrastructure pattern creates underlying resources with a specific tag, and this pattern uses Terraform data lookups to locate those resources by that tag. If you see an error message about not finding data, ensure that you deployed the core infrastructure in the same AWS account and region, with the same core-infra tag that this pattern expects.

Define the architecture

Download the following three files that define the load balanced service:

File: main.tf Language: tf
provider "aws" {
  region = local.region
}

locals {
  name   = "ecsdemo-frontend"
  region = "us-east-2"

  container_image = "public.ecr.aws/aws-containers/ecsdemo-frontend"
  container_port  = 3000 # Container port is specific to this app example
  container_name  = "ecsdemo-frontend"

  tags = {
    Blueprint  = local.name
    GithubRepo = "github.com/aws-ia/ecs-blueprints"
  }
}

################################################################################
# ECS Blueprint
################################################################################

module "service_alb" {
  source  = "terraform-aws-modules/alb/aws"
  version = "~> 8.3"

  name = "${local.name}-alb"

  load_balancer_type = "application"

  vpc_id  = data.aws_vpc.vpc.id
  subnets = data.aws_subnets.public.ids
  security_group_rules = {
    ingress_all_http = {
      type        = "ingress"
      from_port   = 80
      to_port     = 80
      protocol    = "tcp"
      description = "HTTP web traffic"
      cidr_blocks = ["0.0.0.0/0"]
    }
    egress_all = {
      type        = "egress"
      from_port   = 0
      to_port     = 0
      protocol    = "-1"
      cidr_blocks = [for s in data.aws_subnet.private_cidr : s.cidr_block]
    }
  }

  http_tcp_listeners = [
    {
      port               = "80"
      protocol           = "HTTP"
      target_group_index = 0
    },
  ]

  target_groups = [
    {
      name             = "${local.name}-tg"
      backend_protocol = "HTTP"
      backend_port     = local.container_port
      target_type      = "ip"
      health_check = {
        path    = "/"
        port    = local.container_port
        matcher = "200-299"
      }
    },
  ]

  tags = local.tags
}

resource "aws_service_discovery_service" "this" {
  name = local.name

  dns_config {
    namespace_id = data.aws_service_discovery_dns_namespace.this.id

    dns_records {
      ttl  = 10
      type = "A"
    }

    routing_policy = "MULTIVALUE"
  }

  health_check_custom_config {
    failure_threshold = 1
  }
}

module "ecs_service_definition" {
  source  = "terraform-aws-modules/ecs/aws//modules/service"
  version = "~> 5.0"

  name               = local.name
  desired_count      = 3
  cluster_arn        = data.aws_ecs_cluster.core_infra.arn
  enable_autoscaling = false

  subnet_ids = data.aws_subnets.private.ids
  security_group_rules = {
    ingress_alb_service = {
      type                     = "ingress"
      from_port                = local.container_port
      to_port                  = local.container_port
      protocol                 = "tcp"
      description              = "Service port"
      source_security_group_id = module.service_alb.security_group_id
    }
    egress_all = {
      type        = "egress"
      from_port   = 0
      to_port     = 0
      protocol    = "-1"
      cidr_blocks = ["0.0.0.0/0"]
    }
  }

  load_balancer = [{
    container_name   = local.container_name
    container_port   = local.container_port
    target_group_arn = element(module.service_alb.target_group_arns, 0)
  }]

  service_registries = {
    registry_arn = aws_service_discovery_service.this.arn
  }

  # service_connect_configuration = {
  #   enabled = false
  # }

  # Task Definition
  create_iam_role        = false
  task_exec_iam_role_arn = one(data.aws_iam_roles.ecs_core_infra_exec_role.arns)
  enable_execute_command = true

  container_definitions = {
    main_container = {
      name                     = local.container_name
      image                    = local.container_image
      readonly_root_filesystem = false

      port_mappings = [{
        protocol      = "tcp"
        containerPort = local.container_port
        hostPort      = local.container_port
      }]
      environment = [{
        name  = "NODEJS_URL"
        value = "http://ecsdemo-backend.default.core-infra.local:3000"
      }]
    }
  }

  ignore_task_definition_changes = false

  tags = local.tags
}

################################################################################
# Supporting Resources
################################################################################

data "aws_vpc" "vpc" {
  filter {
    name   = "tag:Name"
    values = ["core-infra"]
  }
}

data "aws_subnets" "public" {
  filter {
    name   = "tag:Name"
    values = ["core-infra-public-*"]
  }
}

data "aws_subnets" "private" {
  filter {
    name   = "tag:Name"
    values = ["core-infra-private-*"]
  }
}

data "aws_subnet" "private_cidr" {
  for_each = toset(data.aws_subnets.private.ids)
  id       = each.value
}

data "aws_ecs_cluster" "core_infra" {
  cluster_name = "core-infra"
}

data "aws_iam_roles" "ecs_core_infra_exec_role" {
  name_regex = "core-infra-*"
}

data "aws_service_discovery_dns_namespace" "this" {
  name = "default.${data.aws_ecs_cluster.core_infra.cluster_name}.local"
  type = "DNS_PRIVATE"
}
File: outputs.tf Language: tf
output "application_url" {
  value       = "http://${module.service_alb.lb_dns_name}"
  description = "Copy this value in your browser in order to access the deployed app"
}
File: versions.tf Language: tf
terraform {
  required_version = ">= 1.1"

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = ">= 4.43"
    }
  }
}

You should have three files:

  • main.tf - The main file that defines the service architecture to create
  • outputs.tf - A list of output variables that will be passed to other Terraform modules you may wish to deploy
  • versions.tf - A definition of the underlying requirements for this module

Deploy it

First we need to download all the dependency providers and modules that this pattern relies on:

terraform init

Next we can review the deployment plan, and then deploy it:

terraform plan
terraform apply --auto-approve

When the Terraform apply is complete, you will see output similar to this:

Apply complete! Resources: 17 added, 0 changed, 0 destroyed.

Outputs:
application_url = "http://ecsdemo-frontend-alb-748205711.us-east-2.elb.amazonaws.com"

Test it out

Load up the application URL in your browser. You should see a page similar to this:

The page will automatically refresh itself so that you can see traffic going to different instances of the backend container.

ℹ️ Info: You may initially see a 503 Service Unavailable message for about 30 seconds. This is because Terraform does not wait for the service to be fully healthy before reaching the “Apply complete” stage. This makes the Terraform apply faster, but means that application startup continues in the background for about half a minute.
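If you are scripting the deployment, you can wait out that startup window by polling the application URL until it responds. A minimal sketch (the `wait_for` helper and the retry budget are our own, not part of the pattern):

```shell
# Retry a command until it succeeds or the attempt budget runs out.
wait_for() {
  attempts=$1; shift
  i=1
  while [ "$i" -le "$attempts" ]; do
    if "$@"; then
      echo "ready after $i attempt(s)"
      return 0
    fi
    sleep 1
    i=$((i + 1))
  done
  echo "timed out after $attempts attempt(s)" >&2
  return 1
}

# Usage against the deployed service (reads the URL from the apply output):
# wait_for 30 curl -sf "$(terraform output -raw application_url)" > /dev/null
```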

Tear it down

You can use the following command to tear down the infrastructure that was created:

terraform destroy

See also