Friday, May 11, 2018

Moving to the Medium

I am a very fortunate person. I have a great wife, great family, great friends, and a job doing things that I love. Unfortunately, I have a very small window of free time, so blogging about the stuff I get to work on has taken a backseat. Because I am fortunate enough to do such cool work, I feel an obligation to share the problems I am solving with the world. So many of the problems that I solve are solved by reading others' work, and I would like to do a better job of contributing back to the community that makes that possible. I have decided to move my writing from Blogger to Medium, since I find so much of my time being spent there anyway, and the built-in features let me write much faster in the small amount of time I have. I would also like to spend some time writing about things other than tech, and Medium seems like a better place for more diverse blogging.

You can catch my Medium stories here: https://medium.com/@wbassler23

I plan to keep this domain as well as my older posts on Blogger, but anything new will be going to Medium.

I hope that I am able to better keep up and share with the community the problems that I get to solve. Hopefully I will be able to help someone out as I have been helped out by so many others. We are better together. 

Cheers. 

Wednesday, April 19, 2017

Installing OpenCV on MacOS Sierra 10.12.4

Had some issues today getting the OpenCV library rolling on macOS (using Python 3). I was getting an error after the initial install:

pip install opencv-python

This code:

import cv2

img = cv2.imread('test.jpg', cv2.IMREAD_GRAYSCALE)  # load the test image as grayscale
cv2.imshow('image', img)                             # display it in a window
cv2.waitKey(0)                                       # wait for a key press
cv2.destroyAllWindows()


failed with this error:

The function is not implemented. Rebuild the library with Windows, GTK+ 2.x or
Carbon support. If you are on Ubuntu or Debian, install libgtk2.0-dev and
pkg-config, then re-run cmake or configure script.


I was fortunate enough to find the following article: https://solarianprogrammer.com/2016/11/29/install-opencv-3-with-python-3-on-macos/

I use Anaconda to manage my Python environments. Here is what I did to resolve the issue. NOTE: I could not get it to work on a Python version newer than 3.5.2. I also use PyCharm, and I had to refresh the project's interpreter for the changes to take effect.

Switch to your desired Python Environment. 



[Wed Apr 19 15:29:14] ~
 wb@Westons-MBP > source activate py3

(py3)
[Wed Apr 19 15:29:59] ~
 wb@Westons-MBP > conda install --channel https://conda.anaconda.org/menpo opencv3
Fetching package metadata ...........
Solving package specifications: .

Package plan for installation in environment /Users/wb/anaconda/envs/py3:

The following NEW packages will be INSTALLED:

  hdf5: 1.8.17-1
  mkl: 2017.0.1-0
  numpy: 1.12.1-py35_0
  opencv3: 3.1.0-py35_0 menpo
  tbb: 4.3_20141023-0 menpo

Proceed ([y]/n)? y

hdf5-1.8.17-1. 100% |#####################################| Time: 0:00:00 5.67 MB/s
numpy-1.12.1-p 100% |#####################################| Time: 0:00:00 13.01 MB/s
opencv3-3.1.0- 100% |#####################################| Time: 0:00:03 11.73 MB/s

(py3)
[Wed Apr 19 15:30:35] ~
 wb@Westons-MBP > python --version
Python 3.5.2 :: Continuum Analytics, Inc.
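
To double-check that the install took, a one-liner from the same activated environment should print the version that conda pulled down (3.1.0 here):

python -c "import cv2; print(cv2.__version__)"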

Friday, September 30, 2016

Mesos-DNS as your upstream DNS or alongside your Enterprise DNS

In this post I will show you how to get going with your own external Mesos-DNS that you can either A) use as an upstream DNS server or B) incorporate into / forward from your upstream DNS server. The reason for using either one of these methods is so that machines outside of the Mesos cluster can discover the DNS of Mesos tasks running internal to the Mesos cluster and vice versa. Using either method provides a way for all DNS entries on your network to query one another. We are currently using Method B in our enterprise so that our Mesos tasks are able to communicate with other services running outside of the cluster, such as our Gitlab server.

We will be using docker to run our Mesos-DNS in both methods running on our Bootstrap server.
https://mesosphere.github.io/mesos-dns/

NOTE: This is currently being used with DCOS Open & Enterprise 1.7.x and Mesos-DNS version 0.5.2. It has not been tested or used with the latest releases of DCOS or Mesos-DNS. Read the release notes of the latest DCOS with regard to VIPs and DNS; I will be testing that functionality in the near future. This setup also means that you have exposed your private agents to routing from outside the Mesos network, not just through the public agent, which we are hoping to change in the future as well.

I would like to send a shout out to Mesosphere for continuing to make an incredible product and opening up DCOS to the community. What a powerful and fun community to work with! I have been fortunate enough to have been involved in Apache Mesos for the past year and a half and have watched this project grow rapidly. Mesosphere is doing some amazing things that are changing the way that we treat Data Centers and Development. Looking forward to continuing the journey with them!


Method A: Using Mesos-DNS as an Upstream DNS Server for ALL your DNS

In this method you are able to use your Mesos-DNS as the DNS server for all DNS on your network. You can put the IP of your Mesos-DNS server in your resolv.conf file, or you can run a dig against it. This will give you the IP address of the Mesos agent where the service is running. You can also get the port by querying the SRV records.

1) Create and edit the config.json for Mesos-DNS. See the parameter documentation for explanations: "resolvers" is very important here. Also, you can make the domain whatever you please. The default is "mesos".
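
Something along these lines works as a starting point (a minimal sketch; the master addresses, resolver IPs and the /etc/mesos-dns path are placeholders, so swap in your own values and check the Mesos-DNS docs for the full parameter list):

# cat /etc/mesos-dns/config.json
{
  "masters": ["10.x.x.1:5050", "10.x.x.2:5050", "10.x.x.3:5050"],
  "refreshSeconds": 60,
  "ttl": 60,
  "domain": "pick.your.domain.com",
  "port": 53,
  "resolvers": ["10.x.x.53", "8.8.8.8"],
  "timeout": 5,
  "listener": "0.0.0.0",
  "httpon": true,
  "dnson": true,
  "httpport": 8123,
  "externalon": true
}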

2) Run it in docker:
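
For example (the image name, tag and in-container binary path are assumptions here -- adjust them to whatever Mesos-DNS image or build you actually use, and point the volume at the config you created above):

# docker run -d --name mesos-dns --restart=always --net=host \
    -v /etc/mesos-dns/config.json:/config.json \
    mesosphere/mesos-dns:v0.5.2 /mesos-dns -config=/config.json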

3) You can now use this Mesos-DNS as your DNS server. Place the IP of Mesos-DNS in your resolv.conf or dig against its IP.

From the example config.json, your services will run under <service>.pick.your.domain.com. Some examples would be:
"leader.pick.your.domain.com" for the leader, 
"master.pick.your.domain.com" for a list of your mesos master nodes,
"agent.pick.your.domain.com" for a list of your mesos agents,
"marathon.pick.your.domain.com" for marathon and
"nginx.marathon.pick.your.domain.com" for a service named "nginx" running on marathon root.

Be sure to check out the Mesos-DNS documentation on naming conventions.

You will also be able to query all DNS from all DNS servers defined in "resolvers". This is what provides you the ability to query both internal Mesos and external DNS.
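
For example, digging against the Mesos-DNS IP from any machine on the network (the "nginx" name is just the example service from the list above, and 10.x.x.10 is a placeholder for wherever Mesos-DNS is running; the SRV form follows the naming documentation):

# dig @10.x.x.10 nginx.marathon.pick.your.domain.com +short
# dig @10.x.x.10 _nginx._tcp.marathon.pick.your.domain.com SRV +short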


Method B: Incorporating Mesos-DNS with your Enterprise upstream DNS Server

This method provides you with the same capabilities, only it uses sub zones or sub domains on your upstream DNS server. The biggest benefit of this method is that it doesn't require any changes to the DNS configuration on your servers. Nobody has to know there is an external Mesos-DNS server out there that Mesos task DNS is being forwarded to.

I haven't personally set up a sub domain on a DNS server before, but there are several good references out there on how to do it for your specific DNS. From the example, you would create "pick.your.domain.com" as the sub domain on your DNS server.

1) Create the sub domain on your specific DNS provider (see the sketch below for one way to do it with BIND). This is the only additional step needed beyond Method A.

2) Follow steps 1-3 above from Method A.
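
If your upstream DNS happens to be BIND, step 1 can be as simple as a forward zone that hands the sub domain to the Mesos-DNS host (a sketch; the zone name matches the example config and 10.x.x.10 is a placeholder for the Mesos-DNS host):

# cat /etc/named.conf    ## excerpt
zone "pick.your.domain.com" IN {
    type forward;
    forward only;
    forwarders { 10.x.x.10; };    // placeholder: IP where Mesos-DNS is listening
};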

Done.



Friday, May 13, 2016

Orchestrating Communications to Docker as you Scale Like a Boss

One of the more difficult things to manage as you begin to scale and deploy containers en masse is managing communications and access to your services. Not only is managing communications to your services difficult, but so is doing it in a way that makes sense, is consistent, and feels normal to your users. With an orchestration tool such as Mesos, your containers will most likely move from host to host quite often. This is exactly how it should be for environments running large amounts of containers. It shouldn't matter where your container lives, and you should not have to search for it as it moves around, nor should you have to manage ports as your infrastructure grows and your apps scale. I believe this to be one of the major pieces to consider when planning your container-based environment. Think about the reason you are considering containers, then think about how you plan to orchestrate access to them, and assume that your services will not be running in the same place tomorrow as they were today. We achieve this through a simple combination of service discovery and load balancing.

In this post, I'll describe the tools that I have chosen to use in my docker-based PaaS solution backed by Apache Mesos. I went with a solution that would not only suit the needs of our Mesos-based services but would also work alongside any docker container deployed in the environment. Simply set up a node with load balancing and access to your service discovery, and have users route through this node to access their service.

Demonstrations will be done using Marathon, Consul, consul-template and HAProxy, but as I said, there are a ton of projects out there that can be used to help solve this issue.

Components used:
  - Apache Mesos + Marathon
  - Consul
  - Registrator
  - consul-template
  - HAProxy

Workflow:
  1. Docker service deployed with Marathon to Mesos
  2. Registrator running on Mesos Agents registers the service to Consul
  3. consul-template updates HAProxy with port mappings of service(s) and reloads config
  4. ACCESS TO SERVICE(S)!!!!

Getting Started. Note: You will need a running Mesos cluster with Marathon and also a running Consul cluster. See my post "Setting up Consul Service Discovery for Mesos in 10 Minutes" (further down this page) for getting a Consul cluster up quickly.

1) On a server that you would like to use to proxy traffic, install HAProxy and consul-template

# yum install -y haproxy unzip && cd /usr/local/bin/ && wget -O consul-template.zip https://releases.hashicorp.com/consul-template/0.14.0/consul-template_0.14.0_linux_amd64.zip
   
# unzip consul-template.zip 

2) Configure consul-template for HAProxy. It will reload the config each time there is a change with the service such as a scale up, down or a failure. 

# mkdir -pv /etc/consul-template/ && cd /etc/consul-template

Create a new file, /etc/consul-template/consul-haproxy.json, which will be the configuration file that manages reloading haproxy any time there is a change in service discovery.

# cat /etc/consul-template/consul-haproxy.json
consul = "$CONSUL:$PORT"

template {
  source = "/etc/haproxy/haproxy.template"
  destination = "/etc/haproxy/haproxy.cfg"
  command = "systemctl reload haproxy"
}

Create the source and destination files for haproxy based on the config above.

# cat /etc/haproxy/haproxy.template
global
  daemon
  log 127.0.0.1 local0
  log 127.0.0.1 local1 notice
  maxconn 4096

defaults
  log            global
  retries             3
  maxconn          2000
  timeout connect  5000
  timeout client  50000
  timeout server  50000

listen http-in
  bind *:80
  mode tcp
  option tcplog
  balance leastconn{{range service "$SERVICE"}}
  server {{.Node}} {{.Address}}:{{.Port}} check {{end}}

$SERVICE in the template file above is the service name that you will put as part of an ENV parameter in your Marathon json when you launch. The app will register itself in Consul under that service name, and any time there is a change, it will be reflected in haproxy.

# cat /etc/haproxy/haproxy.cfg
global
  daemon
  log 127.0.0.1 local0
  log 127.0.0.1 local1 notice
  maxconn 4096

defaults
  log            global
  retries             3
  maxconn          2000
  timeout connect  5000
  timeout client  50000
  timeout server  50000

listen http-in
  bind *:80
  mode tcp
  option tcplog
  balance leastconn 


3) We can go ahead and start consul-template at this point. Run it from the command line, or from a systemd unit to make it permanent.

# consul-template -config /etc/consul-template/consul-haproxy.json

OR

# cat /etc/systemd/system/consul-template.service
[Unit]
Description=Consul Template HA Proxy
After=network.target

[Service]
User=root
Group=root
Environment="GOMAXPROCS=2"
ExecStart=/usr/local/bin/consul-template -config /etc/consul-template/consul-haproxy.json
ExecReload=/bin/kill -9 $MAINPID
KillSignal=SIGINT
Restart=on-failure

[Install]
WantedBy=multi-user.target


# systemctl enable consul-template && systemctl start consul-template


4) Registrator must be running on any host in the cluster that will need to have docker containers registered to consul; those hosts also need to be running a consul agent. Registrator watches the docker socket on the host and, any time there is a change, registers or deregisters the container from Consul.

On each agent:

# docker run -d --name=registrator --net=host --volume=/var/run/docker.sock:/tmp/docker.sock gliderlabs/registrator:latest consul://$IP:$PORT

Make it persistent after reboot with a unit file (if using systemd):

# cat /etc/systemd/system/registrator.service
[Unit]
Description=Registrator Container
After=docker.service
Requires=docker.service

[Service]
TimeoutStartSec=0
Restart=on-failure
ExecStart=/usr/bin/docker start registrator

[Install]
WantedBy=multi-user.target


# systemctl enable registrator



5) Now this is where the magic begins. Let's create a json for the Marathon service that will be launched. You are required to set the service name in the env object. Launching an nginx app with an alpine base below:

Name: alpine-nginx.json
{
  "container": {
    "type": "DOCKER",
    "docker": {
      "image": "docker-registry:5000/alpine-nginx",
      "network": "BRIDGE",
      "portMappings": [
        { "containerPort": 8050, "hostPort": 0, "servicePort": 8050, "protocol": "tcp" }
      ]
    }
  },
  "id": "alpine-nginx",
  "instances": 1,
  "env": { "SERVICE_NAME": "alpine", "SERVICE_TAGS": "alpine" },
  "cpus": 0.5,
  "mem": 100,
  "uris": []
}
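
Submit it to Marathon with a POST to the apps API (assuming Marathon is on its default port 8080; swap in your own Marathon host):

# curl -X POST http://marathon.host:8080/v2/apps \
    -H "Content-Type: application/json" \
    -d @alpine-nginx.json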


6) After you launch the app and it starts on Marathon, check Consul to see if the service is registered.



7) Now go back to the consul-template server and check out the haproxy.cfg file. Your service, along with its port mapping on Mesos, will be there.

# cat /etc/haproxy/haproxy.cfg
global
  daemon
  log 127.0.0.1 local0
  log 127.0.0.1 local1 notice
  maxconn 4096

defaults
  log            global
  retries             3
  maxconn          2000
  timeout connect  5000
  timeout client  50000
  timeout server  50000

listen http-in
  bind *:80
  mode tcp
  option tcplog
  balance leastconn

  server mesos-agent01 10.x.x.x:31239 check


Hit the consul-template server at port 80 and you will be routed to your nginx app.

# curl localhost:80
<!DOCTYPE html>
<html>
<body>
<h3>This container is actually running at: </h3>
<p id="demo"> </p>

<script>
var x = location.host;
document.getElementById("demo").innerHTML= x;

</script>

</body>
</html>


8) Scale the app in Marathon to 3 instances and watch consul-template automatically update your HAProxy config. The original entry (port 31239) is still there, and the two new instances have been added.

# cat /etc/haproxy/haproxy.cfg
global
  daemon
  log 127.0.0.1 local0
  log 127.0.0.1 local1 notice
  maxconn 4096

defaults
  log            global
  retries             3
  maxconn          2000
  timeout connect  5000
  timeout client  50000
  timeout server  50000

listen http-in
  bind *:80
  mode tcp
  option tcplog
  balance leastconn
  server mesos-agent01 10.x.x.x:31743 check
  server mesos-agent01 10.x.x.x:31239 check
  server mesos-agent01 10.x.x.x:31577 check

9) Now kill one of the instances from Marathon; this simulates a failure scenario. Consul-template will swap out the entry for the failed instance with the new one. The entries for ports 31239 and 31577 already existed; the entry for port 31835 is the new one.

# cat /etc/haproxy/haproxy.cfg
global
  daemon
  log 127.0.0.1 local0
  log 127.0.0.1 local1 notice
  maxconn 4096

defaults
  log            global
  retries             3
  maxconn          2000
  timeout connect  5000
  timeout client  50000
  timeout server  50000

listen http-in
  bind *:80
  mode tcp
  option tcplog
  balance leastconn
  server mesos-agent01 10.x.x.x:31835 check
  server mesos-agent01 10.x.x.x:31239 check

  server mesos-agent01 10.x.x.x:31577 check


Feel free to use your consul-template server for as many other services as you need. All you need to do is add additional service blocks in your template file, as before, each on a different port.

We are calling our consul-template servers "Edge Nodes" as they actually sit outside of our infrastructure and route to the inside. These can live anywhere on your network, as the only thing they need is access to read your service discovery. You should be able to dedicate very few resources to these machines, possibly as little as 1 GB of memory and 1 CPU. With the correct setup, you can also run these Edge Nodes in docker containers; you will just need statically assigned IPs (Flannel, Weave, Calico, etc.) and port mappings for those containers.


Friday, May 6, 2016

Consul Server and Consul Agent Systemd Units

Consul Server and Consul Agent Systemd Units for RHEL/CentOS 7


Consul Server ->/etc/systemd/system/consul-server.service

[Unit]
Description=Consul Server
After=network.target

[Service]
User=root
Group=root
Environment="GOMAXPROCS=2"
ExecStart=/usr/local/bin/consul agent -config-dir /etc/consul.d/server
ExecReload=/bin/kill -9 $MAINPID
KillSignal=SIGINT
Restart=on-failure
RestartSec=1

[Install]
WantedBy=default.target


Consul Agent -> /etc/systemd/system/consul-client.service

[Unit]
Description=Consul Agent
After=network.target

[Service]
User=root
Group=root
Environment="GOMAXPROCS=2"
ExecStart=/usr/local/bin/consul agent -config-dir /etc/consul.d/client
ExecReload=/bin/kill -9 $MAINPID
KillSignal=SIGINT
Restart=on-failure


[Install]
WantedBy=multi-user.target
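
Reload systemd and enable whichever unit belongs on the node (paths as above):

# systemctl daemon-reload
# systemctl enable consul-server && systemctl start consul-server    ## on server nodes
# systemctl enable consul-client && systemctl start consul-client    ## on agent/client nodes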

Ultimate Container Sandbox | Isolating Containers in Containers

This was something fun I worked on for a while to show how to give users a safe development box to learn, play, or test with docker. It's an extremely ephemeral environment and can be rebuilt in seconds. It has been sitting in my drafts for a bit, but I wanted to write about it.

Anyone that has been involved in the docker ecosystem over the past several years has more than likely seen the classic "Docker inside Docker" inception image.

Running docker inside of docker. This is nothing new, and in fact if you are using Docker universally to run virtually everything, such as monitoring or service discovery, chances are you are already mounting the docker socket inside your containers. I personally do the same thing, using docker in docker to build and push images.

This is where it gets hairy and you get into the inception aspect of this whole mess.

The cool thing with running docker in docker is that you get a nice little test bed with no worries of destroying containers that are already running, while still being able to use the docker command line, build and push new images, etc. The only issue is that you are mounting the docker socket within the container itself, so you are exposing the host's images and containers to the docker in docker. If you run a 'docker images' inside the docker container, you are seeing the host's images. If you run a 'docker rm|rmi' you will wipe the host you are running on. There is NO isolation in this. Not only could you wipe the host, but anyone else running docker in docker on that host could do the same thing.

One way I have figured out to isolate docker instances running on the same host is to utilize docker's father project, LXC. By running docker inside of LXC, each LXC instance is completely isolated from the others and you are safely able to utilize docker without affecting anyone else. As with docker, LXC containers can also be spun up in a matter of seconds, so in the event that you do something in LXC that you don't like, blow it away and spin up a new one. A good read and another instance of this being used: Rackspace's Carina project.

Docker on LXC on Linux


Image provided by yours truly... You're Welcome!

Let's get this going. I'm using Ubuntu as the underlying host OS, as I am starting to go back to my original Linux roots.

1) Install LXC:
apt-get update && apt-get install lxc

2) Create the LXC container and add the following lines to each container's config, /var/lib/lxc/$LXC_NAME/config:
lxc-create -t download -n meh-01 -- -d ubuntu -r trusty -a amd64      
  
Add below lines to /var/lib/lxc/meh-01/config  
lxc.aa_profile = unconfined
lxc.cgroup.devices.allow = a
lxc.cap.drop =


3) Start the LXC container, attach and install the needful to get docker installed in LXC:
# lxc-start -n meh-01 -d 
# lxc-attach -n meh-01

Inside LXC:
# apt-get update && apt-get install wget apparmor docker.io -y

4) Check it out!!! 

FROM LXC:
root@meh-01:~# docker version
Client version: 1.6.2
Client API version: 1.18
Go version (client): go1.2.1
Git commit (client): 7c8fca2
OS/Arch (client): linux/amd64
Server version: 1.6.2
Server API version: 1.18
Go version (server): go1.2.1
Git commit (server): 7c8fca2
OS/Arch (server): linux/amd64

root@meh-01:~# docker images
REPOSITORY                       TAG                 IMAGE ID            CREATED             VIRTUAL SIZE
golang                           latest              471e087e791d        2 weeks ago         744 MB


root@meh-01:~# docker run -it golang echo hello world
hello world


FROM HOST:
root@docker-builder:~# docker images
The program 'docker' is currently not installed. You can install it by typing:
apt-get install docker


Docker isn't even installed on the host, so the host is not being affected... ***Modify your docker options within LXC if you would like to add things like a private registry, etc.

Next: Create another LXC container, repeat the above steps, and notice you get complete isolation and separate development environments with LXC. Add things into the LXC containers such as ssh and port forwarding on the host so you can SSH to them (see the sketch below).
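
One way to do the SSH piece (a sketch; the 2222 port and the 10.0.3.115 address are examples, and this assumes the default lxcbr0 NAT networking):

# lxc-attach -n meh-01 -- apt-get install -y openssh-server
# lxc-info -n meh-01 -i    ## grab the container's IP, e.g. 10.0.3.115
# iptables -t nat -A PREROUTING -p tcp --dport 2222 -j DNAT --to-destination 10.0.3.115:22
# ssh -p 2222 ubuntu@<host-ip>    ## then reach the container from anywhere on your network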

LXC is the original container runtime that got me interested in containers (see my blog from a couple years ago). I will continue to use it alongside docker for different things because I think LXC has some functionality that docker doesn't do as well. For example, for running OS containers, LXC is much better. Docker still holds the belt for application containers in my opinion. Be sure to check out Rackspace's Carina project mentioned above. Awesome project and read. I will be following not only what they are doing but OpenStack as well.

CONTAINERIZE ALL THE THINGS



Friday, April 8, 2016

Setting up Consul Service Discovery for Mesos in 10 Minutes

This will be a short series on using Consul in your Microservices environment. Consul provides Service Discovery and many other nice features for Microservices, which you can read more about here. After you read it, you will understand why it is such a popular choice for many people using any form of Microservices, and anything else that requires Service Discovery for that matter. I have chosen to use Consul for my PaaS offering backed by Apache Mesos, with integration for a tool called consul-template and also for DNS for containers. I'll kick off a small series about different ways to utilize Consul for your Microservices architecture and how I have been utilizing it for Service Discovery and multiple other things with Docker. I won't talk much about how it works, because it is best to read as much as possible on your own, so for more information please see the Consul documentation:

More info on Consul: https://www.consul.io/

Documentation: https://www.consul.io/docs/index.html
Free Online Demo!! : http://demo.consul.io/ui/
MUST UNDERSTAND: https://www.consul.io/docs/guides/outage.html

We will start off by installing a cluster of 3 server nodes and 1 client with the UI and then end with creating systemd units for the entire cluster.


1) Pull down the Hashicorp Consul zip file to ALL nodes and unzip. The same package is used for server and client.

    cd /usr/local/bin/ && wget https://releases.hashicorp.com/consul/0.6.4/consul_0.6.4_linux_amd64.zip
    unzip consul*



2) Pull down the UI package to the node that will serve the Web UI for the cluster. It can be any node, but I chose the client. Unzip it in the desired directory.

    wget -O /opt/consul/web-ui.zip https://releases.hashicorp.com/consul/0.6.4/consul_0.6.4_web_ui.zip && cd /opt/consul/ && unzip web-ui.zip



3) Focusing on the server config first, create the initial files/directories on all servers. One of them will act as the bootstrap server initially until we get the cluster in quorum. 

    /etc/consul.d/bootstrap/config.json  ### This only gets created on 1 of the servers
    {
        "bootstrap": true,
        "server": true,
        "datacenter": "your-dc",
        "data_dir": "/var/lib/consul",
        "log_level": "INFO",
        "advertise_addr": "$BSTRAP_LOCAL_IP",
        "enable_syslog": true
    }

    
    /etc/consul.d/server/config.json
    {
        "bootstrap": false,
        "advertise_addr": "$LOCAL_IP",
        "server": true,
        "datacenter": "your-dc",
        "data_dir": "/var/lib/consul",
        "log_level": "INFO",
        "enable_syslog": true,
        "start_join": ["server1", "server2","server3"]
    }

    mkdir -pv /var/lib/consul   ### Used as our data directory


Also, we can go ahead and create our systemd unit files on each server and enable them on boot.

    /etc/systemd/system/consul-server.service
    [Unit]
    Description=Consul Server
    After=network.target
    
    [Service]
    User=root
    Group=root
    Environment="GOMAXPROCS=2"
    ExecStart=/usr/local/bin/consul agent -config-dir /etc/consul.d/server
    ExecReload=/bin/kill -9 $MAINPID
    KillSignal=SIGINT
    Restart=on-failure
    
    
    [Install]
    WantedBy=multi-user.target

      

    # systemctl enable consul-server



4) Run the following commands in order on each of the servers to get quorum. You will need a bootstrap server to start with (server1). You will need lots of terminals here.

On Server1:
    # consul agent -config-dir /etc/consul.d/bootstrap -advertise $BSTRAP_LOCAL_IP

On Server2 (-bootstrap-expect defines the number of servers expected to join before the cluster bootstraps):
    # consul agent -config-dir /etc/consul.d/server -advertise $LOCAL_IP -bootstrap-expect 3

On Server3:
    # consul agent -config-dir /etc/consul.d/server -advertise $LOCAL_IP -bootstrap-expect 3

Back on Server1, do a CTRL+C to kill the consul process and then start as server.
    CTRL+C 
    # consul agent -config-dir /etc/consul.d/server -advertise $LOCAL_IP -bootstrap-expect 3

The servers should elect a leader and sync to quorum. Each time you lose quorum, this is how you will have to restart it; a few other methods will have to be used along with it, so see the outage documentation above for more reference.
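
A quick way to confirm quorum from any of the servers (both commands come with the same consul binary pulled down in step 1):

    # consul members               ## all three servers should show as alive
    # consul info | grep leader    ## "leader = true" on exactly one server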




5) Let's go ahead and get our client with the Web UI up and running before we do step 6, so we can watch from the UI what Consul looks like during service failures.

    /etc/consul.d/client/config.json
    {
        "server": false,
        "datacenter": "your-dc",
        "advertise_addr": "$LOCAL_IP",
        "client_addr": "$LOCAL_IP",
        "data_dir": "/var/lib/consul",
        "ui_dir": "/opt/consul/",
        "log_level": "INFO",
        "enable_syslog": true,
        "start_join": ["server1", "server2", "server3"]
    }


Create the systemd unit file.
    /etc/systemd/system/consul-client.service
    [Unit]
    Description=Consul Client
    After=network.target

    [Service]
    User=root
    Group=root
    Environment="GOMAXPROCS=2"
    ExecStart=/usr/local/bin/consul agent -config-dir /etc/consul.d/client
    ExecReload=/bin/kill -9 $MAINPID
    KillSignal=SIGINT
    Restart=on-failure


    [Install]
    WantedBy=multi-user.target

Start the service:
    # systemctl start consul-client && systemctl status consul-client -l

You should see "agent: Synced node info" in the output of the status. Go to the UI:
    http://client:8500/ui/
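
If you would rather check from the command line, the same information is available from the HTTP API on the client:

    # curl http://client:8500/v1/catalog/nodes
    # curl "http://client:8500/v1/health/service/consul?passing"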




You should see all 3 servers passing in the UI if it was successful. Watch the UI during the next step to see how it reacts to health checks.


6) In order to get consul to run as a background process instead of in the current window you are in, we will need to kill the current process and reboot each of the servers one at a time, letting them rejoin one at a time so as not to lose quorum. DO NOT CTRL+C the current process, but KILL the process! See the OUTAGE doc above about graceful leaves. Yes, you will need yet another terminal for this. Run the following one server at a time:

    # ps -ef  |grep consul | grep -v grep  ## to get pid of current consul process
    # kill -9 $consul_pid

Go to your Consul UI and take a look at the nodes and the consul service. You will see the consul service has 1 failure. Pretty cool?! No worries, it will come back after you restart it.

    # reboot 
    OR
    # systemctl start consul-server && systemctl status consul-server -l

You should see that your consul server has rejoined and you didn't lose quorum because the other 2 stayed online. 

Rinse and repeat Step 6 for all servers, and you have a working Consul cluster. Next we will discuss how to register services there, and I'll show some of the things I have been doing with Apache Mesos integration.