Serverless public facing website hosted on AWS Fargate

Nathan Peck
Senior Developer Advocate at AWS

About

A public facing web service is one of the most common architecture patterns for deploying containers on AWS. It is well suited for:

  • A static HTML website, perhaps hosted by NGINX or Apache
  • A dynamically generated web app, perhaps served by a Node.js process
  • An API service intended for the public to access
  • An edge service which needs to make outbound connections to other services on the internet

With this pattern you will deploy a serverless container through Amazon ECS, which is hosted on AWS Fargate capacity.

WARNING

This pattern is not well suited for:

  • A private internal API service
  • An application that has very strict networking security requirements

For the above use cases, consider instead using the private subnet version of this pattern, which is designed for private API services.

Architecture

The architecture is as follows:

Diagram: A VPC with public subnets across two Availability Zones. An internet-facing Application Load Balancer, with nodes in each Availability Zone, receives traffic from the internet gateway and forwards it on port 80 to containers running on AWS Fargate.

All resources run in the public subnets of the VPC. This means there is no NAT gateway charge, and no additional networking resources are required. The public facing ALB is hosted in the public subnets, as are the container tasks that run on AWS Fargate.

AWS Fargate provisions an Elastic Network Interface (ENI) for each task. The ENI has both a public and a private IP address, which allows the task to initiate outbound communications directly to the internet via the VPC's internet gateway.
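
If you want to verify this behavior after deployment, you can look up a running task's ENI and confirm that it has a public IP address. The commands below are a sketch using the AWS CLI; <cluster-name> is a placeholder for the generated cluster name, which you can find in the ECS console or in the ClusterStack outputs.

Language: shell
# Look up one running task in the cluster (replace <cluster-name>)
TASK_ARN=$(aws ecs list-tasks --cluster <cluster-name> \
  --query 'taskArns[0]' --output text)

# Find the ENI that Fargate attached to the task
ENI_ID=$(aws ecs describe-tasks --cluster <cluster-name> --tasks "$TASK_ARN" \
  --query "tasks[0].attachments[0].details[?name=='networkInterfaceId'].value | [0]" \
  --output text)

# Confirm that the ENI has a public IP address
aws ec2 describe-network-interfaces --network-interface-ids "$ENI_ID" \
  --query 'NetworkInterfaces[0].Association.PublicIp' --output text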

To protect the task from unauthorized inbound access, it has a security group that rejects all inbound traffic that did not originate from the security group of the load balancer.
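
You can inspect this rule after deployment. The following sketch assumes the security group keeps the description "Security group for service" from the template below; the output should contain a single ingress rule whose source is the load balancer's security group.

Language: shell
# Show the ingress rules of the service security group. The only rule should
# reference the load balancer's security group as its source.
aws ec2 describe-security-groups \
  --filters "Name=description,Values=Security group for service" \
  --query 'SecurityGroups[0].IpPermissions'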

TIP

Application Load Balancer has a constant hourly charge, which gives it a baseline cost, even if your application receives no traffic.

If you expect your website to receive very low or intermittent traffic, then you may prefer to use the API Gateway pattern for AWS Fargate. Amazon API Gateway charges per request, with no minimum fee. However, per request charges can add up to a higher overall cost if you do receive large amounts of traffic.

Dependencies

This pattern requires an AWS account and the AWS Serverless Application Model (SAM) CLI. If you do not already have it, install the SAM CLI for your system.
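
You can confirm that the SAM CLI is installed and on your PATH with a quick version check. The optional verification commands shown later in this pattern also assume the AWS CLI is installed.

Language: shell
# Verify that the SAM CLI and AWS CLI are installed
sam --version
aws --version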

Choose a VPC

This pattern can be deployed on top of either of the following VPC patterns:

  • Low cost VPC (public subnets only)
  • Large sized VPC (public and private subnets)

Which one you choose depends on your goals for this deployment. You can start with the low cost VPC and upgrade to the large sized VPC later on if you have additional private services or private database servers that you wish to deploy in the VPC.

Download the vpc.yml file from your chosen pattern, but do not deploy it yet. Deployment will be done later in the process.

Define the Amazon ECS cluster

The following AWS CloudFormation template creates a simple Amazon ECS cluster that is set up for serverless usage with AWS Fargate.

File: cluster.yml (Language: yml)
AWSTemplateFormatVersion: '2010-09-09'
Description: Empty ECS cluster that has no EC2 instances. It is designed
             to be used with AWS Fargate serverless capacity

Resources:
  # Cluster that keeps track of container deployments
  ECSCluster:
    Type: AWS::ECS::Cluster
    Properties:
      ClusterSettings:
        - Name: containerInsights
          Value: enabled

  # This is a role which is used within Fargate to allow the Fargate agent
  # to download images, and upload logs.
  ECSTaskExecutionRole:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Statement:
          - Effect: Allow
            Principal:
              Service: [ecs-tasks.amazonaws.com]
            Action: ['sts:AssumeRole']
            Condition:
              ArnLike:
                aws:SourceArn: !Sub arn:aws:ecs:${AWS::Region}:${AWS::AccountId}:*
              StringEquals:
                aws:SourceAccount: !Ref AWS::AccountId
      Path: /

      # This role enables basic features of ECS. See reference:
      # https://docs.aws.amazon.com/AmazonECS/latest/developerguide/security-iam-awsmanpol.html#security-iam-awsmanpol-AmazonECSTaskExecutionRolePolicy
      ManagedPolicyArns:
        - arn:aws:iam::aws:policy/service-role/AmazonECSTaskExecutionRolePolicy

Outputs:
  ClusterName:
    Description: The ECS cluster into which to launch resources
    Value: !Ref ECSCluster
  ECSTaskExecutionRole:
    Description: The role used to start up a task
    Value: !Ref ECSTaskExecutionRole

You will notice that this cluster is extremely minimal. It has no EC2 capacity and no EC2 capacity provider, because it is intended to be used exclusively with AWS Fargate capacity. If you wish to deploy EC2 instances, take a look at the similar pattern for an ECS managed web service on EC2 instances.
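
Before wiring this template into the parent stack, you can optionally sanity check it locally. This sketch assumes a reasonably recent SAM CLI release (older versions do not support the --lint flag).

Language: shell
# Optional: validate and lint the cluster template locally
sam validate --template-file cluster.yml --lint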

Define the service deployment

Next we need to define an Amazon ECS service that deploys a container using AWS Fargate:

File: service.yml (Language: yml)
AWSTemplateFormatVersion: '2010-09-09'
Description: An example service that deploys in AWS VPC networking mode
             on AWS Fargate. Service runs with networking in public
             subnets and public IP addresses

Parameters:
  VpcId:
    Type: String
    Description: The VPC that the service is running inside of
  PublicSubnetIds:
    Type: List<AWS::EC2::Subnet::Id>
    Description: List of public subnet IDs to put the load balancer and tasks in
  ClusterName:
    Type: String
    Description: The name of the ECS cluster into which to launch capacity.
  ECSTaskExecutionRole:
    Type: String
    Description: The role used to start up an ECS task
  ServiceName:
    Type: String
    Default: web
    Description: A name for the service
  ImageUrl:
    Type: String
    Default: public.ecr.aws/docker/library/nginx:latest
    Description: The url of a docker image that contains the application process that
                 will handle the traffic for this service
  ContainerCpu:
    Type: Number
    Default: 256
    Description: How much CPU to give the container. 1024 is 1 CPU
  ContainerMemory:
    Type: Number
    Default: 512
    Description: How much memory in megabytes to give the container
  ContainerPort:
    Type: Number
    Default: 80
    Description: What port that the application expects traffic on
  DesiredCount:
    Type: Number
    Default: 2
    Description: How many copies of the service task to run

Resources:

  # The task definition. This is a simple metadata description of what
  # container to run, and what resource requirements it has.
  TaskDefinition:
    Type: AWS::ECS::TaskDefinition
    Properties:
      Family: !Ref ServiceName
      Cpu: !Ref ContainerCpu
      Memory: !Ref ContainerMemory
      NetworkMode: awsvpc
      RequiresCompatibilities:
        - FARGATE
      ExecutionRoleArn: !Ref ECSTaskExecutionRole
      ContainerDefinitions:
        - Name: !Ref ServiceName
          Cpu: !Ref ContainerCpu
          Memory: !Ref ContainerMemory
          Image: !Ref ImageUrl
          PortMappings:
            - ContainerPort: !Ref ContainerPort
              HostPort: !Ref ContainerPort
          LogConfiguration:
            LogDriver: 'awslogs'
            Options:
              mode: non-blocking
              max-buffer-size: 25m
              awslogs-group: !Ref LogGroup
              awslogs-region: !Ref AWS::Region
              awslogs-stream-prefix: !Ref ServiceName

  # The service. The service is a resource which allows you to run multiple
  # copies of a type of task, and gather up their logs and metrics, as well
  # as monitor the number of running tasks and replace any that have crashed
  Service:
    Type: AWS::ECS::Service
    # Avoid race condition between ECS service creation and associating
    # the target group with the LB
    DependsOn: PublicLoadBalancerListener
    Properties:
      ServiceName: !Ref ServiceName
      Cluster: !Ref ClusterName
      LaunchType: FARGATE
      NetworkConfiguration:
        AwsvpcConfiguration:
          AssignPublicIp: ENABLED
          SecurityGroups:
            - !Ref ServiceSecurityGroup
          Subnets: !Ref PublicSubnetIds
      DeploymentConfiguration:
        MaximumPercent: 200
        MinimumHealthyPercent: 75
      DesiredCount: !Ref DesiredCount
      TaskDefinition: !Ref TaskDefinition
      LoadBalancers:
        - ContainerName: !Ref ServiceName
          ContainerPort: !Ref ContainerPort
          TargetGroupArn: !Ref ServiceTargetGroup

  # Security group that limits network access
  # to the task
  ServiceSecurityGroup:
    Type: AWS::EC2::SecurityGroup
    Properties:
      GroupDescription: Security group for service
      VpcId: !Ref VpcId

  # Keeps track of the list of tasks for the service
  ServiceTargetGroup:
    Type: AWS::ElasticLoadBalancingV2::TargetGroup
    Properties:
      HealthCheckIntervalSeconds: 6
      HealthCheckPath: /
      HealthCheckProtocol: HTTP
      HealthCheckTimeoutSeconds: 5
      HealthyThresholdCount: 2
      TargetType: ip
      Port: !Ref ContainerPort
      Protocol: HTTP
      UnhealthyThresholdCount: 10
      VpcId: !Ref VpcId
      TargetGroupAttributes:
        - Key: deregistration_delay.timeout_seconds
          Value: 0

  # A public facing load balancer, this is used as ingress for
  # public facing internet traffic.
  PublicLoadBalancerSG:
    Type: AWS::EC2::SecurityGroup
    Properties:
      GroupDescription: Access to the public facing load balancer
      VpcId: !Ref VpcId
      SecurityGroupIngress:
        # Allow access to public facing ALB from any IP address
        - CidrIp: 0.0.0.0/0
          IpProtocol: -1
  PublicLoadBalancer:
    Type: AWS::ElasticLoadBalancingV2::LoadBalancer
    Properties:
      Scheme: internet-facing
      LoadBalancerAttributes:
      - Key: idle_timeout.timeout_seconds
        Value: '30'
      Subnets: !Ref PublicSubnetIds
      SecurityGroups:
        - !Ref PublicLoadBalancerSG
  PublicLoadBalancerListener:
    Type: AWS::ElasticLoadBalancingV2::Listener
    Properties:
      DefaultActions:
        - Type: 'forward'
          ForwardConfig:
            TargetGroups:
              - TargetGroupArn: !Ref ServiceTargetGroup
                Weight: 100
      LoadBalancerArn: !Ref 'PublicLoadBalancer'
      Port: 80
      Protocol: HTTP

  # Open up the service's security group to traffic originating
  # from the security group of the load balancer.
  ServiceIngressfromLoadBalancer:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      Description: Ingress from the public ALB
      GroupId: !Ref ServiceSecurityGroup
      IpProtocol: -1
      SourceSecurityGroupId: !Ref 'PublicLoadBalancerSG'

  # This log group stores the stdout logs from this service's containers
  LogGroup:
    Type: AWS::Logs::LogGroup

Deploy it all

You should have the following three files:

  • vpc.yml - Template for the base VPC that you wish to host resources in
  • cluster.yml - Template for the serverless ECS cluster that hosts tasks on AWS Fargate
  • service.yml - Template for the web service that will be deployed on the cluster

Use the following parent stack to deploy all three stacks:

File: parent.yml (Language: yml)
AWSTemplateFormatVersion: "2010-09-09"
Transform: AWS::Serverless-2016-10-31
Description: Parent stack that deploys VPC, Amazon ECS cluster for AWS Fargate,
             and a serverless Amazon ECS service deployment that hosts
             the task containers on AWS Fargate

Resources:

  # The networking configuration. This creates an isolated
  # network specific to this particular environment
  VpcStack:
    Type: AWS::Serverless::Application
    Properties:
      Location: vpc.yml

  # This stack contains the Amazon ECS cluster itself
  ClusterStack:
    Type: AWS::Serverless::Application
    Properties:
      Location: cluster.yml

  # This stack contains the container deployment
  ServiceStack:
    Type: AWS::Serverless::Application
    Properties:
      Location: service.yml
      Parameters:
        VpcId: !GetAtt VpcStack.Outputs.VpcId
        PublicSubnetIds: !GetAtt VpcStack.Outputs.PublicSubnetIds
        ClusterName: !GetAtt ClusterStack.Outputs.ClusterName
        ECSTaskExecutionRole: !GetAtt ClusterStack.Outputs.ECSTaskExecutionRole

Use the following command to deploy all three stacks:

Language: shell
sam deploy \
  --template-file parent.yml \
  --stack-name serverless-web-environment \
  --resolve-s3 \
  --capabilities CAPABILITY_IAM
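
Once the command finishes, you can optionally confirm that the parent stack reached the CREATE_COMPLETE state.

Language: shell
# Optional: check the status of the parent stack
aws cloudformation describe-stacks \
  --stack-name serverless-web-environment \
  --query 'Stacks[0].StackStatus' --output text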

Test it Out

Open the Amazon ECS cluster in the web console and verify that the service has been created with a desired count of two. You will observe the service create tasks that launch into AWS Fargate. Last but not least, ECS registers the tasks' IP addresses with the Application Load Balancer so that it can send them traffic.
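
If you prefer the command line over the console, the following sketch checks the same thing; <cluster-name> is a placeholder for the generated cluster name, and web is the default ServiceName from the template.

Language: shell
# Compare the desired and running task counts for the service
aws ecs describe-services \
  --cluster <cluster-name> --services web \
  --query 'services[0].{desired: desiredCount, running: runningCount}'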

On the "Health & Metrics" tab of the service details page you can click on the load balancer name to navigate to the load balancer in the EC2 console. This will give you the load balancer's public facing CNAME that you can copy and paste into your browser to verify that the sample NGINX webserver is up and running.

Tear it down

You can tear down the entire stack with the following command:

Language: shell
sam delete --stack-name serverless-web-environment
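
If you want to block until the deletion has fully finished, you can optionally wait on the underlying CloudFormation stack.

Language: shell
# Optional: wait for the stack deletion to complete
aws cloudformation wait stack-delete-complete \
  --stack-name serverless-web-environment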

See Also

Alternative Patterns

Not quite right for you? Try another way to do this:

  • API: Serverless public facing API hosted on AWS Fargate - An API service with private networking configured.
  • EC2 Instances: Public facing website hosted on EC2 instances - Consider EC2 instances for very large deployments.
  • Website: Serverless API Gateway Ingress for AWS Fargate, in CloudFormation - Application Load Balancer has a constant hourly charge that gives it a relatively higher baseline cost. If you have a very low traffic service, you may prefer to use API Gateway as a serverless ingress with no baseline cost, just per request, pay as you go pricing.