Consul Connect Service Mesh

Consul Connect is a service mesh built into Consul, one of the most popular service registry solutions. With Consul Connect, the same software that keeps track of all your services can also serve as a layer 4 proxy that securely routes traffic from one service to another.

This architecture is particularly well suited to applications with strong network security requirements. With Consul Connect you get an extra layer of security on top of the built-in security of an Amazon VPC: Consul Connect uses mutual TLS to automatically encrypt communications between containers.

At a high level the architecture looks like this:

consul connect service mesh

The cluster consists of multiple EC2 instances, each running a Consul agent connected to a central Consul server. The Consul server tracks which tasks are running in the cluster and where they are located.

Each task running in the cluster is made up of an application container and a Consul Connect sidecar container. On startup, the Consul Connect sidecar registers the application container's IP address into Consul via the Consul agent.
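The templates in this tutorial handle this registration for you, but conceptually it boils down to a service definition sent to the local agent. Here is a minimal sketch using the Consul agent's HTTP API; the service name, port, and address are placeholders, not values from the templates:

# Register a service plus a Connect sidecar with the local Consul agent
# (placeholder name, port, and address; the sidecar normally does this for you)
curl -X PUT http://localhost:8500/v1/agent/service/register -d '{
  "Name": "my-app",
  "Port": 3000,
  "Address": "10.0.1.15",
  "Connect": { "SidecarService": {} }
}'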

The Consul Connect sidecar can also be configured to provide a local proxy that serves as a secure network channel to another application container. For example if Container A wants to talk to Container B then the Consul Connect listens for traffic on a local port. When Container A opens a connection to that local port Consul Connect looks up the location of Container B and its sidecar proxy, then opens a secured TLS connection to Container B via its Consul Connect sidecar.
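You can try this pattern in miniature with the Consul CLI: the connect proxy subcommand stands in for Container A's sidecar, binds a local port, and forwards anything sent to it over mutual TLS to the named upstream service (we use the same subcommand later in this tutorial). The service name and port below are placeholders:

# Stand in for Container A's sidecar: listen on a local port and forward
# traffic over mutual TLS to the "container-b" service (placeholder names/port)
consul connect proxy -service container-a -upstream container-b:9191 &

# Container A then simply talks to localhost
curl localhost:9191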

Deploy Consul Connect

The first step is to deploy a cluster of EC2 instances for ECS to use, set up a Consul server, and install the Consul agent on each EC2 instance so that Consul Connect can reach Consul.

Use these templates:
Launch an EC2 cluster Launch Download
Add the Consul server, and consul-agent daemon Launch Download

Launching the EC2 cluster requires an SSH key. Make sure that you have created an EC2 key pair to use for launching the cluster.
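If you don't already have a key pair, you can create one with the AWS CLI; the key name here is just an example:

# Create an EC2 key pair and save the private key locally (example key name)
aws ec2 create-key-pair --key-name consul-connect-demo \
  --query 'KeyMaterial' --output text > ~/.ssh/consul-connect-demo.pem
chmod 400 ~/.ssh/consul-connect-demo.pem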

At this point what we have looks like this:

consul connect cluster and agent

We have some EC2 instances, each running a Consul agent connected to a central Consul server. Let's verify this by accessing the Consul dashboard. To do so, open an SSH tunnel from your local machine to the remote cluster, which lets you access the Consul admin dashboard locally at http://localhost:8500. Look at the outputs tab of the second template you deployed above to get the exact SSH command for your cluster. It will be something like this:

ssh -i "~/.ssh/<key name>.pem" -L 127.0.0.1:8500:<consul instance host name>:8500 ec2-user@<consul instance host name>

As long as this SSH session stays open you will be able to access the Consul dashboard on your local machine. Navigate to the Consul nodes tab (http://localhost:8500/ui/dc1/nodes) to verify that the EC2 hosts are all registered. You should see something like this:

consul dashboard nodes

It may take a couple of minutes after the CloudFormation template finishes deploying for Consul to elect a leader and become healthy. Don't worry if the dashboard doesn't respond immediately; just wait a couple of minutes and try again.
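If you prefer the command line, you can run the same checks against the Consul HTTP API through the SSH tunnel; an empty leader response means the cluster is still electing:

# Check whether a leader has been elected (returns an IP:port when healthy)
curl http://localhost:8500/v1/status/leader

# List the registered nodes, the same data as the nodes tab in the dashboard
curl http://localhost:8500/v1/catalog/nodes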

Launch some services in the cluster

Now that we have a Consul enabled ECS cluster, let's launch some Consul Connect enabled tasks in the cluster:

Use these templates:
Launch the greeting service Launch Download
Launch the name service Launch Download
Launch the greeter service Launch Download

Once these templates have all deployed, open the Consul services tab (http://localhost:8500/ui/dc1/services).

You should see something like this:

consul dashboard services
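The same information is available from the catalog API through the SSH tunnel if you want to script the check; the sidecar proxies show up alongside the services they front:

# List all registered services and their Connect sidecar proxies
curl http://localhost:8500/v1/catalog/services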

We have deployed three microservices, each with a Consul Connect sidecar proxy that has registered the task into Consul. At a high level the microservices communicate with each other like this:

consul greeter microservices

The greeter microservice needs to fetch from both the greeting and the name services. It talks to the local Consul Connect proxy running inside its task, and that proxy looks up the locations of the other services' sidecar proxies and opens connections to them. The application containers don't (and in fact can't) talk directly to each other. All communication goes via the Consul Connect proxies, which encrypt it with TLS.
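Under the hood this is expressed as upstreams on the greeter task's sidecar proxy. The sketch below extends the earlier registration example to show what that configuration looks like; the local bind ports are placeholders and the actual templates may wire this up differently:

# Sketch of a sidecar registration with upstreams (placeholder ports;
# the tutorial's templates configure this for you)
curl -X PUT http://localhost:8500/v1/agent/service/register -d '{
  "Name": "greeter",
  "Port": 3000,
  "Connect": {
    "SidecarService": {
      "Proxy": {
        "Upstreams": [
          { "DestinationName": "greeting", "LocalBindPort": 3001 },
          { "DestinationName": "name", "LocalBindPort": 3002 }
        ]
      }
    }
  }
}'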

Let’s test out communications between these services. In the SSH session that we opened earlier run the following commands:

sudo yum install -y wget unzip
wget https://releases.hashicorp.com/consul/1.3.0/consul_1.3.0_linux_amd64.zip
unzip consul_1.3.0_linux_amd64.zip
./consul connect proxy -service test -upstream greeter:8080 &

These commands start a local Consul Connect proxy on the instance, listening on port 8080 and forwarding traffic to the greeter service. Now we can verify that we can communicate with the greeter service via the Consul Connect proxy:

curl localhost:8080

Run the command a few times. You should see output like:

From ip-10-0-1-99.ec2.internal: Hi (ip-10-0-0-19.ec2.internal) Jackson (ip-10-0-0-190.ec2.internal)
From ip-10-0-1-89.ec2.internal: Hello (ip-10-0-1-58.ec2.internal) Alexandrea (ip-10-0-1-37.ec2.internal)
From ip-10-0-1-99.ec2.internal: Hey (ip-10-0-0-19.ec2.internal) Miguel (ip-10-0-0-190.ec2.internal)
From ip-10-0-1-89.ec2.internal: Greetings (ip-10-0-1-58.ec2.internal) Vesta (ip-10-0-1-37.ec2.internal)
From ip-10-0-1-89.ec2.internal: Greetings (ip-10-0-1-58.ec2.internal) Sherwood (ip-10-0-1-37.ec2.internal)

So we have verified that we can use Consul Connect to communicate with one of our services, and that that service can use a Consul Connect proxy to talk to its two backing services. But we also want to enable public access to the front facing greeter service.

Create a public facing ingress

To get connections from the public into the cluster we need an ingress, since the public can't connect directly to a Consul Connect proxy. We will use an Application Load Balancer that talks to an Nginx reverse proxy, which in turn uses a local Consul Connect sidecar as a gateway into the cluster to reach the greeter service.

Use this template:
Launch the load balancer and ingress service Launch Download

Once this template has launched, our architecture looks like this:

consul connect nginx ingress

Traffic from the public enters the cluster via an Application Load Balancer, which routes it to an Nginx proxy. The Nginx proxy uses a local Consul Connect sidecar as a gateway into the cluster, and through that gateway it can communicate with the greeter service, which behind the scenes is also using Consul Connect to fetch from the greeting and name services.

Check the outputs tab of the template we just launched for the public facing address of the load balancer:

consul connect public address
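You can verify the whole path end to end by sending a request to that address; you should see the same kind of greeter output as before:

curl http://<load balancer address>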

Conclusion

You have launched a small microservices deployment in an ECS cluster. Each microservice task communicates via a Consul Connect service mesh, which automatically uses TLS to secure service-to-service communication. You also exposed one of the services to the public via an Nginx reverse proxy behind an Application Load Balancer.

To clean up the environment, delete the CloudFormation stacks in the reverse of the order they were created. You can delete the ingress and the three service stacks in parallel. Then delete the daemon stack, and finally the cluster stack.
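If you prefer the command line, the equivalent cleanup with the AWS CLI looks like this; the stack names are placeholders for whatever names you chose when launching the templates:

# Delete the ingress and service stacks first (placeholder stack names)
aws cloudformation delete-stack --stack-name <ingress stack>
aws cloudformation delete-stack --stack-name <greeter stack>
aws cloudformation delete-stack --stack-name <name stack>
aws cloudformation delete-stack --stack-name <greeting stack>

# delete-stack returns immediately; block until a deletion finishes
# (repeat for each of the service stacks before moving on)
aws cloudformation wait stack-delete-complete --stack-name <ingress stack>

# Then delete the Consul daemon stack, and finally the cluster stack
aws cloudformation delete-stack --stack-name <consul daemon stack>
aws cloudformation wait stack-delete-complete --stack-name <consul daemon stack>
aws cloudformation delete-stack --stack-name <cluster stack>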