Developer Automation Tools Bootcamp

Learning about development tools and automation


Introduction to Consul

The team from HashiCorp strikes again.
At its core, Consul tackles what is, in my opinion, one of the hardest problems facing IT departments today: Service Discovery. But Consul is also great for Health Checks and a Key/Value Store, and it is Multi-Datacenter aware.

Official Intro to Consul

HTTP API for Consul

Consul's primary interface to its functionality is a RESTful HTTP API.

Using Consul's HTTP API you can register and deregister nodes and services, query the catalog, inspect health-check status, and read and write the key/value store.

Why expose all this info via APIs? Because external tools like Jenkins can then interact with Consul without installing anything extra. Just make the necessary HTTP call, get a list of servers in JSON format, loop over that list, and deploy code to every active node in the cluster.
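That deploy loop can be sketched in a few lines of shell. This is a minimal sketch, assuming a Consul agent is reachable on localhost:8500; the node names and addresses below are an illustrative sample so the snippet runs without a live cluster, and the deploy step is a placeholder echo.

```shell
# /v1/catalog/nodes returns a JSON array of {"Node": ..., "Address": ...}.
# Illustrative sample; against a live agent you would instead use:
#   NODES=$(curl -s http://localhost:8500/v1/catalog/nodes)
NODES='[{"Node":"han","Address":"172.17.0.2"},{"Node":"leia","Address":"172.17.0.3"}]'

# Extract each node's address and "deploy" to it (placeholder echo):
echo "$NODES" \
  | python3 -c 'import sys, json; [print(n["Address"]) for n in json.load(sys.stdin)]' \
  | while read -r addr; do
      echo "deploying to $addr"    # e.g. scp your release and restart here
    done
```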

While you're at it, make sure you secure those HTTP API endpoints using Consul's Encryption.

Consul is a DNS server

Did you know that Consul can be your DNS server?
What better way to look up the IP address of a node in your cluster than to ask your Service Discovery tool, Consul, which learns about the machines you maintain as soon as they come and go.
Whether you want the IP of another node in your application, or the name and address of the healthy message queue server in your West data center, Consul can be the DNS server that knows and serves this information.
You no longer need to run a separate DNS server or service; the functionality comes bundled with your Consul nodes. So feel free to take advantage of it.
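Here is a quick sketch of what those lookups look like. The agent address, DNS port 8600, and the "webapp" service name are assumptions for illustration, not from the original post.

```shell
# Consul serves DNS names of the form <service>.service.consul and
# <node>.node.consul, answering only with healthy instances.
# Against a live agent (DNS on port 8600 here) you would run:
#   dig @127.0.0.1 -p 8600 webapp.service.consul    # healthy service instances
#   dig @127.0.0.1 -p 8600 han.node.consul          # a specific node's address
SERVICE="webapp"                     # hypothetical service name
echo "${SERVICE}.service.consul"     # the DNS name Consul serves for it
```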

Service Definitions and Checks

Beyond just knowing what machines you have, Consul can be configured to watch over the Services you run and monitor them with health checks.
These Services and Checks can be used to pull nodes out of API or DNS responses when a node becomes unhealthy. There are three types of Checks you can write:

  1. A Script check
  2. An HTTP check
  3. A TTL check

These checks are expected to run frequently. The examples use every 10 seconds, but you can tune that interval based on how often machines come and go in your app.
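As a concrete sketch, here is what a service definition with an HTTP check might look like. The "webapp" name, port, and health endpoint are made-up examples; in practice the file would go in the agent's config directory (e.g. /etc/consul.d/) rather than the working directory.

```shell
# Write a hypothetical service definition with an HTTP check that runs
# every 10 seconds (drop a file like this in the agent's config dir):
cat > webapp.json <<'EOF'
{
  "service": {
    "name": "webapp",
    "port": 80,
    "check": {
      "http": "http://localhost:80/health",
      "interval": "10s"
    }
  }
}
EOF
```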


Watches and Handlers

Watches are how you can script Consul to react to changes in your infrastructure. The scripts they trigger are called Handlers.

Imagine you have an HAProxy load balancer sitting in front of your web application cluster, and you have AWS Auto Scaling configured on the web cluster as well. New machines come and go as needed, but if HAProxy doesn't know about them, what's the point?
Use a Consul Watch to fire off a Handler that updates the HAProxy configuration file whenever a new host is detected in the web application cluster.
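A minimal sketch of such a watch definition follows; the "webapp" service name and the handler path are hypothetical, and the handler script itself (which would rewrite haproxy.cfg and reload HAProxy) is left to you.

```shell
# A watch that fires a handler script whenever the set of healthy
# "webapp" instances changes (hypothetical names and paths):
cat > webapp-watch.json <<'EOF'
{
  "watches": [
    {
      "type": "service",
      "service": "webapp",
      "handler": "/usr/local/bin/update-haproxy.sh"
    }
  ]
}
EOF
```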

Alternatively, check out Consul Template for a great example of this functionality with Consul.


We are going to complete the following actions, following along with the example provided by Jeff Lindsay and his docker-consul container.

  1. Go to the consul directory of the Automation Tools git repo in your terminal.
  2. vagrant up the VM that already has Docker installed and vagrant ssh into it. If prompted for Vagrant's password, it is vagrant.
  3. Start a series of Consul agents in Docker containers
    docker run -d --name han -h han progrium/consul -server -bootstrap-expect 3
    JOIN_IP="$(docker inspect -f '{{.NetworkSettings.IPAddress}}' han)"
    echo $JOIN_IP
    docker run -d --name chewbacca -h chewbacca progrium/consul -server -join $JOIN_IP
    docker run -d --name leia -h leia progrium/consul -server -join $JOIN_IP
  4. We now have three Consul machines in a cluster (which we can see via docker ps). Let's add a fourth: the one that will expose the ports used to communicate with the Consul cluster.
    docker run -d -p 8400:8400 -p 8500:8500 -p 8600:53/udp --name luke -h luke progrium/consul -join $JOIN_IP
  5. View the Consul WebUI from the fourth container. (Watch that port number in the URL in case Vagrant had a conflict and changed it for you.)
  6. View the HTTP API to list Consul nodes.
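For step 6, the catalog endpoint exposed by step 4's port mapping can be queried with a plain curl. The sample response below is illustrative; the real addresses come from Docker's bridge network.

```shell
# With port 8500 published in step 4, list the cluster's nodes:
#   curl -s http://localhost:8500/v1/catalog/nodes
# The response is a JSON array naming each node, e.g. (illustrative):
SAMPLE='[{"Node":"han","Address":"172.17.0.2"},{"Node":"chewbacca","Address":"172.17.0.3"},{"Node":"leia","Address":"172.17.0.4"},{"Node":"luke","Address":"172.17.0.5"}]'
echo "$SAMPLE"
```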