Public Service, Private Network

Sometimes you want to create a public facing service, but you want stricter control over its networking. This pattern is suitable for many of the same use cases as the basic public facing service, but it is especially useful in the following cases:

  • A service which is public facing but needs an extra layer of security hardening: it does not even have a public IP address that an attacker could send a request to directly.
  • A service which needs to scale horizontally to a massive size without being constrained by the number of available public IP addresses.
  • A service which initiates outbound connections that, from the public's perspective, should originate from a specific, limited set of IP addresses that can be whitelisted.

At a high level the architecture looks like this:

(Architecture diagram: a public load balancer in the public subnet, in front of containers in the private subnet)

Everything is deployed in an Amazon Virtual Private Cloud (VPC) which has two subnets:

  • Public subnet: Has an attached internet gateway to allow resources launched in that subnet to accept connections from the internet, and initiate connections to the internet. Resources in this subnet have public IP addresses.
  • Private subnet: For internal resources. Instances in this subnet have no direct internet access, and only have private IP addresses that are internal to the VPC, not directly accessible by the public.
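As a sketch, the VPC and its two subnets could be declared in a CloudFormation template roughly like this (the CIDR ranges and logical names are illustrative, not taken from the linked templates):

```yaml
Resources:
  VPC:
    Type: AWS::EC2::VPC
    Properties:
      CidrBlock: 10.0.0.0/16
      EnableDnsSupport: true
      EnableDnsHostnames: true

  # Public subnet: resources launched here get public IP addresses
  PublicSubnet:
    Type: AWS::EC2::Subnet
    Properties:
      VpcId: !Ref VPC
      CidrBlock: 10.0.0.0/24
      MapPublicIpOnLaunch: true

  # Private subnet: private IP addresses only, internal to the VPC
  PrivateSubnet:
    Type: AWS::EC2::Subnet
    Properties:
      VpcId: !Ref VPC
      CidrBlock: 10.0.1.0/24
      MapPublicIpOnLaunch: false

  # Internet gateway attached to the VPC gives the public
  # subnet its path to and from the internet
  InternetGateway:
    Type: AWS::EC2::InternetGateway
  GatewayAttachment:
    Type: AWS::EC2::VPCGatewayAttachment
    Properties:
      VpcId: !Ref VPC
      InternetGatewayId: !Ref InternetGateway
```

A production setup would normally have one public and one private subnet per Availability Zone; a single pair is shown here for brevity.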

The public facing subnet hosts a couple of resources:

  • Public facing load balancer: Accepts inbound connections on specific ports, and forwards acceptable traffic to resources inside the private subnet.
  • NAT gateway: A networking bridge to allow resources inside the private subnet to initiate outbound communications to the internet, while not allowing inbound connections.
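These two public subnet resources might look like the following sketch. The Elastic IP is what gives the NAT gateway its fixed, whitelistable outbound address; note that a real internet facing ALB requires subnets in at least two Availability Zones, while only one is referenced here for brevity (the `PublicLoadBalancerSG` security group is a placeholder):

```yaml
Resources:
  # Elastic IP: the fixed public address that outbound
  # traffic from the private subnet appears to come from
  NatEIP:
    Type: AWS::EC2::EIP
    Properties:
      Domain: vpc

  # NAT gateway lives in the public subnet
  NatGateway:
    Type: AWS::EC2::NatGateway
    Properties:
      AllocationId: !GetAtt NatEIP.AllocationId
      SubnetId: !Ref PublicSubnet

  # Internet facing load balancer that accepts inbound traffic
  # and forwards it to targets in the private subnet
  PublicLoadBalancer:
    Type: AWS::ElasticLoadBalancingV2::LoadBalancer
    Properties:
      Scheme: internet-facing
      Subnets:
        - !Ref PublicSubnet
      SecurityGroups:
        - !Ref PublicLoadBalancerSG
```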

The private subnet is used to run your application containers. The EC2 instances hosting the containers do not have a public IP address, only a private IP address internal to the VPC. As a result, if your application initiates an outbound connection, the connection is routed through the NAT gateway in the public subnet. Additionally, there is no way for traffic to reach your container directly. Instead, all inbound connections must go to the load balancer, which decides whether to pass each connection on to the protected container inside the private VPC subnet.
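The outbound routing behavior comes from the private subnet's route table. A minimal sketch, using the same illustrative logical names as above, is a default route that sends all internet-bound traffic to the NAT gateway rather than an internet gateway:

```yaml
Resources:
  PrivateRouteTable:
    Type: AWS::EC2::RouteTable
    Properties:
      VpcId: !Ref VPC

  # Default route: all internet-bound traffic from the private
  # subnet is sent through the NAT gateway. Traffic between
  # resources inside the VPC uses the implicit local route.
  PrivateDefaultRoute:
    Type: AWS::EC2::Route
    Properties:
      RouteTableId: !Ref PrivateRouteTable
      DestinationCidrBlock: 0.0.0.0/0
      NatGatewayId: !Ref NatGateway

  PrivateSubnetRouteTableAssociation:
    Type: AWS::EC2::SubnetRouteTableAssociation
    Properties:
      RouteTableId: !Ref PrivateRouteTable
      SubnetId: !Ref PrivateSubnet
```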

Deploy in a self managed EC2 cluster

Use these templates:
  • Launch a custom EC2 cluster in a private VPC with a NAT gateway
  • Add an external, public ALB ingress
  • Deploy a public facing, privately networked EC2 service

Deploy in AWS Fargate

Use these templates:
  • Launch an AWS Fargate cluster in a private VPC with a NAT gateway
  • Add an external, public ALB ingress
  • Deploy a public facing, privately networked Fargate service
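For Fargate, the key part of the service definition is the `awsvpc` network configuration, which places each task in the private subnet with no public IP. The sketch below assumes placeholder resources (`Cluster`, `TaskDefinition`, `TargetGroup`, `ServiceSecurityGroup`) that the linked templates would define:

```yaml
Resources:
  Service:
    Type: AWS::ECS::Service
    Properties:
      Cluster: !Ref Cluster
      LaunchType: FARGATE
      TaskDefinition: !Ref TaskDefinition
      DesiredCount: 2
      NetworkConfiguration:
        AwsvpcConfiguration:
          # Tasks get only private IPs; outbound traffic
          # goes out through the NAT gateway
          AssignPublicIp: DISABLED
          Subnets:
            - !Ref PrivateSubnet
          SecurityGroups:
            - !Ref ServiceSecurityGroup
      # Inbound traffic arrives only via the public ALB's target group
      LoadBalancers:
        - ContainerName: app
          ContainerPort: 80
          TargetGroupArn: !Ref TargetGroup
```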

Check the CloudFormation outputs of the ALB ingress template to get the public facing URL of your service. You can add your own custom hostname by using Route53 to create a CNAME record for this address.
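If you manage your hosted zone in the same CloudFormation stack, the CNAME can be created alongside the other resources. A minimal sketch, assuming a hypothetical `example.com` hosted zone and the `PublicLoadBalancer` logical name from earlier:

```yaml
Resources:
  ServiceDNSRecord:
    Type: AWS::Route53::RecordSet
    Properties:
      HostedZoneName: example.com.       # your hosted zone (illustrative)
      Name: service.example.com.         # custom hostname for the service
      Type: CNAME
      TTL: '300'
      ResourceRecords:
        # Points at the ALB's public DNS name
        - !GetAtt PublicLoadBalancer.DNSName
```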