Sunday, December 21, 2014

The Container World | Part 6 Introduction to Docker


Docker. Docker. Docker. Docker is one of my favorite things to talk about. For those of you working in the Cloud space or with any form of cloud technologies, you probably encounter Docker talk and/or articles about Docker on a daily basis. Docker is an extremely interesting new cloud and container technology that I believe will change the way that people develop, deploy and scale. At the time of this article it is one of the most popular cloud and open source projects on the market, and it is still very early in its lifetime. In this post I'll talk about what the Docker technology is and how Docker containers are different from LXC containers. I'll also talk about the advantages of Docker over other container technologies. NOTE: All demonstrations will be done on a CentOS 7 server. If you are interested in learning Docker or looking to continue building your knowledge, I highly recommend reading "The Docker Book" by James Turnbull (extremely intelligent open source author). I would also recommend following his blog.


What is Docker?


Docker is an open source Linux container technology (originally based on the LXC project) that is used to build, ship and deploy distributed applications. Docker was built on the basis of providing developers with a simple way to build lightweight applications, deploy them quickly from anywhere, and have them run exactly the same in any environment in the development life cycle. As with LXC, the only thing you need in order to run these containers is a Linux kernel, which makes Docker extremely portable. I would also argue that another important factor of Docker is that it is built to allow developers to quickly scale their environments when needed.

Docker consists of the following 4 main components in order to operate, each explained briefly below:

  • Docker Daemon - The Docker daemon runs on a host server and does all the work of running, shipping and building containers. The Docker daemon runs as a service on the Linux host.
  • Docker Images - The underlying source code for your containers; an image tells Docker how a container should be built.
  • Registry - There are 2 types of registries in the Docker world, public and private. A registry is basically a storage repo for the Docker images that you build; you pull images down from here.
  • Docker Container - The final product of all the components above: the image, the operations and the environment.
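To make the image component concrete, here is a hypothetical minimal Dockerfile (the base image contents and maintainer address are placeholders) — the "source code" a Docker image is built from:

```dockerfile
# Start from the official CentOS base image
FROM centos
# 2014-era instruction recording the image author
MAINTAINER you@example.com
# Default command that containers built from this image will run
CMD ["/bin/echo", "Hello World!"]
```

Running `docker build` against a directory containing this file produces an image that any Docker host can then pull from a registry and run as a container.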

Docker is built with the idea of making sysadmins' and developers' lives easier!


How is it different from LXC?


The LXC project is not a new technology and has been around for several years, whereas Docker has only been in the wild for about 1.5 years at the time of this post. I describe each technology like so: LXC is a Linux container technology that essentially gives you a lightweight container in the form of a full-blown Linux operating system, whereas Docker is a Linux container technology that containerizes single application processes. In short, I think of LXC as a containerized Linux OS and Docker as a containerized application. Both are awesome and lightweight, and one should not be thought of as better than the other. The technologies seem similar, but there are situations where you would choose one over the other.


Advantages/Features of Docker


As part of the introduction I would like to talk about what I believe to be the key features and advantages of using Docker. All of these advantages/features play nicely together, which is what makes Docker such a monster.
  • Scalability - Because containers are lightweight and minimal, Linux containers can be deployed in a matter of seconds. These rapid deployment capabilities let you quickly scale your app during high load or heavy traffic occurrences.
  • Portability - Since the container and its dependencies are not reliant on the host, the container can be "shipped" and run across any Linux host that runs the Docker daemon.
  • Reproducibility - I think this is an important aspect of Docker and plays off of the portability factor as well. Docker allows users to deploy an app on their laptop or in dev, move it to production, and expect the same results. Since containers carry their own dependencies, they will run the same anywhere.
  • Isolation - The use of cgroups and namespacing from the Linux kernel allows hundreds to thousands of Docker containers to run on the same host, or across a cluster of hosts, without bumping into one another or affecting the performance of other containers.
  • Sharing - Whether you decide to use public or private registries for your images, you can share your development with virtually anyone on the planet and collaborate on projects.
  • Lightweight - One of the purposes of Docker is to be minimal with no overhead, which in turn allows for extremely fast deployment and scaling.
  • Version control - Docker is extremely "Git"-like. Docker registries keep track of versions and differences, and allow for simple rollbacks.
  • Open Source and Community - Threw this one in last. Honestly, what is better than open source? Docker has a majorly backed open source community that is absolutely taking this technology to a revolutionary state that will, in my opinion, change the way we develop and run applications. It's incredible how much attention this technology has gotten in its early stages, and I am extremely excited for the future.


Common Commands


Here is a cheatsheet of common Docker commands that will frequently be used when first starting out.

Display system-wide information about your Docker environment.
    docker info

Pull an image from your repo to the host.
    docker pull IMAGE_NAME

List the images installed on your system.
    docker images

Remove an image.
    docker rmi IMAGE_ID

List all the containers.
    docker ps -a

Remove a container.
    docker rm CONTAINER_ID

Start/stop a container. There are tons and tons of options that won't be mentioned here. You can also restart an already running container with "restart".
    docker start|stop|restart CONTAINER_ID

Run a container. Note that the following command will create a new container each time. If you just want to run a stopped container then use "docker start". Add the "-d" option to run the container detached in the background; without it you will be attached automatically.
    docker run -i -t BASE_IMAGE COMMAND
    docker run -d BASE_IMAGE COMMAND

See additional information about a container or image. Tons more info can be presented, such as IP addresses etc.
    docker inspect CONTAINER_ID|IMAGE_ID

List and see information on running containers.  
    docker ps



Installing Docker on CentOS 7 and "Hello World"


For CentOS 7, Docker comes in the default repositories. If for some reason it does not, you can get it from the EPEL repository. Once you have the repo, run the following:

    # yum install -y docker docker-registry

Since Docker relies on the Docker daemon to run, pull, ship etc., let's go ahead and start the daemon and enable it to start at boot so it's running by default.

    # systemctl start docker
    # systemctl enable docker


Let's go ahead and pull down the centos image from the Docker Hub and verify it's there.

    # docker pull centos
    # docker images

Let's create our very first container from the centos image. We will create that very first, everyone's favorite, "Hello World!" app.

    # docker run -i -t centos /bin/echo 'Hello World!' 

    OUTPUT:
    Hello World!


Ending Notes


This was an extremely short intro to Docker, and probably a little boring if you have worked with Docker before. The upcoming blog posts will go much further in depth on things such as building your own images, private repositories, orchestration and so on. Cheers.


Blog Series on Linux Containers:
Previous Post: LXC Advanced Configuration
Next Post: Building Docker Images

Tuesday, November 11, 2014

The Container World | Part 5 Advanced Configuration


For this post I will be focusing on a few advanced configurations and cool things you can do with LXC. I'll show you how to add IPs to your containers so that you can reach them from outside of the host, show you a couple of different ways to deploy new containers, and demonstrate the safest way to incorporate LXC into production use: unprivileged containers.



Adding IPs

Ensure that you have properly configured your host by bridging the interfaces (see the Networking post) if you have not already done so. Remember that each of your containers operates from its own configuration file, /var/lib/lxc/container-name/config. This configuration file is where you will place all of the IP information. Open the file with your desired editor and add/fill in these parameters:


    lxc.network.type = veth
    lxc.network.flags = up
    lxc.network.link = br0
    lxc.network.hwaddr = Y0:UR:MA:CA:DR:ES
    lxc.network.name = eth0
    lxc.network.mtu = 1500
    lxc.network.ipv4 = 192.168.0.150/23
    lxc.network.ipv4.gateway = 192.168.0.1
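A quick way to catch typos in these parameters is to grep the config for the keys you expect. A minimal sketch — the sample config written to a temp file here is an illustrative stand-in for a real /var/lib/lxc/container-name/config:

```shell
# Illustrative stand-in for a real container config
# (normally /var/lib/lxc/container-name/config).
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
lxc.network.type = veth
lxc.network.link = br0
lxc.network.ipv4 = 192.168.0.150/23
lxc.network.ipv4.gateway = 192.168.0.1
EOF

# Check that each networking key we rely on is present.
status=OK
for key in lxc.network.type lxc.network.link lxc.network.ipv4; do
    grep -q "^$key " "$cfg" || { echo "missing: $key"; status=MISSING; }
done
echo "network config: $status"
rm -f "$cfg"
```

Pointing the same loop at the real config path gives you a cheap sanity check before starting the container.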


Correct networking parameters will allow you to have multiple containers that can communicate with one another, and also allow you to ssh to them from anywhere on the same network. I would also advise setting these same configs on the container's interfaces using ifconfig or by modifying the interface config files themselves. Please note that setting an IP in the config file doesn't set the IP on the container's interface. So, for example, if the container's interface is using DHCP but the config file contains a static IP, you will have 2 IP addresses that will both respond to ping and ssh. Set the static IP in one place or the other (or both, matching) to keep from having more than 1 IP.

It is also possible to extend the networking capabilities of the container to enable it to reach outside of the network that it sits on (outside of the bridge). One example of this would be enabling your container to reach the internet. You can use iptables on the host to route requests outside of the bridge to other adapters that are available on the host, based on specific IP addresses or on whole subnets. Below, eth1 is assumed to be the adapter that is capable of talking to other networks such as the internet. Also be sure to add IPv4 forwarding to /etc/sysctl.conf.


    # iptables -t nat -A POSTROUTING -s 192.168.0.150 -o eth1 -j MASQUERADE


Or based on a subnet.


    # iptables -t nat -A POSTROUTING -s 192.168.0.0/24 -o eth1 -j MASQUERADE


Add the following line to /etc/sysctl.conf to enable IPv4 forwarding, then issue "sysctl -p" for it to take effect without a reboot.

    net.ipv4.ip_forward = 1

    # sysctl -p
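If you want the NAT rule to survive a reboot, one option (assuming the iptables-services package is in use rather than firewalld) is to persist it in /etc/sysconfig/iptables; a sketch of the relevant fragment:

```
*nat
-A POSTROUTING -s 192.168.0.0/24 -o eth1 -j MASQUERADE
COMMIT
```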


NOTE: If using Red Hat 7/CentOS 7 you will need to check firewalld settings. I completely disabled firewalld and continued using iptables instead since I am more familiar with iptables.
    

  

Cloning Containers

Another cool feature of LXC is the ability to clone individual containers as you would a virtual machine. Cloning provides faster deployment of custom configurations, and I would also think it provides a way for developers to auto-scale if needed. There are 2 types of clones: copies and snapshots.

From the lxc-clone manpage: "A copy clone copies the root filesystem from the original container to the new. A snapshot clone uses the backing store's snapshot functionality to create a very small copy-on-write snapshot of the original container. Snapshot clones require the new container backing store to support snapshotting. Currently this includes only aufs, btrfs, lvm, overlayfs and zfs. LVM devices do not support snapshots of snapshots."


For the purposes of this post we will keep it simple and create a simple copy-type clone. The "-o" flag defines the original container to be cloned and the "-n" flag defines the new container's name. From my experience the rootfs, hostname and MAC address will not be duplicated, but I haven't found a way to keep the IP from duplicating. So just remember that if you are using a static IP, you will need to update the IP info on the new container to keep it from conflicting. There is probably a way to script it, but by default there is no way around it that I have found yet.


    # lxc-clone -o CONTAINER_TOBE_CLONED -n NEW_CONTAINER


You should get output like "Created container NEW_CONTAINER as copy of CONTAINER_TOBE_CLONED" when the clone is completed.
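One way to script around the duplicated IP is a quick sed over the clone's config. A sketch against a throwaway stand-in file — the real target would be /var/lib/lxc/NEW_CONTAINER/config, and the new address is illustrative:

```shell
# Stand-in for the cloned container's config
# (normally /var/lib/lxc/NEW_CONTAINER/config).
cfg=$(mktemp)
echo 'lxc.network.ipv4 = 192.168.0.150/23' > "$cfg"

# Swap the duplicated address for a fresh one.
new_ip="192.168.0.151/23"
sed -i "s|^lxc.network.ipv4 = .*|lxc.network.ipv4 = $new_ip|" "$cfg"

updated=$(grep '^lxc.network.ipv4' "$cfg")
echo "$updated"
rm -f "$cfg"
```

Run right after lxc-clone, this keeps the new container from coming up with the original's static IP.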



Autostart

This is one of the coolest features of LXC in my opinion. Since each container runs as a process, each process (container) has the ability to be started at boot. LXC can be started/run as a service (service lxc start | systemctl start lxc), and when this service comes to life it can tell specific containers to start up with it. This is done by passing the autostart parameter(s) to the container's config file. The autostart options support marking which containers should be auto-started and in what order, based on either a number or a group. See the man page for lxc.container.conf for extended details.

For the most basic autostart option, add the following parameter to the container's config file:

    lxc.start.auto = 1   # 0 value means off. 1 value means on.


Other parameters worth mentioning for autostart are the start order and the start delay. The start order along with the start delay can help bring up containers in a certain order and set a wait time before starting the next one. This can be helpful for multi-container environments, such as an app that depends on a database.

    lxc.start.order = N  # where N is a number
    lxc.start.delay = N  # where N is a number in seconds
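Putting the three parameters together, a container that others depend on (say, a database) might carry something like this in its config (the values are purely illustrative):

```
lxc.start.auto = 1      # start this container with the LXC service
lxc.start.order = 10    # relative ordering among autostarted containers
lxc.start.delay = 15    # seconds to wait before the next container starts
```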



Unprivileged Containers

Unprivileged containers are perhaps the safest way to deploy LXC, especially in a production environment. LXC gets a bad rap for being insecure at times and has actually allowed users to gain access to the root account on the host. This is possible because, although containers all run in a separate namespace, uid 0 in your container is still equal to uid 0 outside of the container. Unprivileged containers run as a non-root process on the host even though they can have root inside of the container itself. So at a high level we need to remap these namespaces and ensure that these processes are not running under root's uid. A little confusing, but hopefully after you see a demo it will make sense. Stephane Graber, who is one of the lead developers behind LXC, sums this up really well in his blog on unprivileged containers.

"So how do those user namespaces work? Well, simply put, each user that’s allowed to use them on the system gets assigned a range of unused uids and gids, ideally a whole 65536 of them. You can then use those uids and gids with two standard tools called newuidmap and newgidmap which will let you map any of those uids and gids to virtual uids and gids in a user namespace." - Stephane Graber "LXC 1.0: Unprivileged Containers"

With the development of unprivileged containers we are able to allow users other than root to start a container, although with unprivileged containers there are still limits on some things a user can do in the namespace. I will demonstrate this on Ubuntu 14.04 since my CentOS 7 box doesn't meet the prerequisite kernel features needed for this. The tools that we will need to configure this (newuidmap and newgidmap) require kernel version 3.12 or higher. Let's begin!

Ensure that you have uidmap package installed:

    # sudo apt-get install uidmap

Then assign your user subuids and subgids and give execute permission on the user's home directory.

    # sudo usermod --add-subuids 100000-165536 USER
    # sudo usermod --add-subgids 100000-165536 USER
    # sudo chmod +x /home/USER

Add the mappings as part of the container parameters in ~/.config/lxc/default.conf.

    lxc.id_map = u 0 100000 65536
    lxc.id_map = g 0 100000 65536

Create and start the unprivileged container. Note this will take some time to complete.

    # lxc-create -t download -n ubuntu-unprived -- -d ubuntu -r trusty -a amd64
    # lxc-start -n ubuntu-unprived -d

So now let's compare what the processes look like from the host and from the container for this namespace.

From the container:
# lxc-attach -n ubuntu-unprived

root@ubuntu-unprived:/# ps -ef
UID        PID  PPID  C STIME TTY          TIME CMD
root         1     0  0 04:48 ?        00:00:00 /sbin/init
root       157     1  0 04:48 ?        00:00:00 upstart-udev-bridge --daemon
root       189     1  0 04:48 ?        00:00:00 /lib/systemd/systemd-udevd --daemon
root       244     1  0 04:48 ?        00:00:00 dhclient -1 -v -pf /run/dhclient.eth0.pid
syslog     290     1  0 04:48 ?        00:00:00 rsyslogd
root       343     1  0 04:48 tty4     00:00:00 /sbin/getty -8 38400 tty4
root       345     1  0 04:48 tty2     00:00:00 /sbin/getty -8 38400 tty2
root       346     1  0 04:48 tty3     00:00:00 /sbin/getty -8 38400 tty3
root       359     1  0 04:48 ?        00:00:00 cron
root       386     1  0 04:48 console  00:00:00 /sbin/getty -8 38400 console
root       389     1  0 04:48 tty1     00:00:00 /sbin/getty -8 38400 tty1
root       408     1  0 04:48 ?        00:00:00 upstart-socket-bridge --daemon
root       409     1  0 04:48 ?        00:00:00 upstart-file-bridge --daemon
root       431     0  0 05:06 ?        00:00:00 /bin/bash

root       434   431  0 05:06 ?        00:00:00 ps -ef


From the host:

# lxc-info -Ssip --name ubuntu-unprived
State:          RUNNING
PID:            3104
IP:             10.1.0.107
CPU use:        2.27 seconds
BlkIO use:      680.00 KiB
Memory use:     7.24 MiB
Link:           vethJ1Y7TG
 TX bytes:      7.30 KiB
 RX bytes:      46.21 KiB
 Total bytes:   53.51 KiB

# ps -ef | grep 3104
100000    3104  3067  0 Nov11 ?        00:00:00 /sbin/init
100000    3330  3104  0 Nov11 ?        00:00:00 upstart-udev-bridge --daemon
100000    3362  3104  0 Nov11 ?        00:00:00 /lib/systemd/systemd-udevd --daemon
100000    3417  3104  0 Nov11 ?        00:00:00 dhclient -1 -v -pf /run/dhclient.eth0.pid -lf /var/lib/dhcp/dhclient.eth0.leases eth0
100102    3463  3104  0 Nov11 ?        00:00:00 rsyslogd
100000    3516  3104  0 Nov11 pts/8    00:00:00 /sbin/getty -8 38400 tty4
100000    3518  3104  0 Nov11 pts/6    00:00:00 /sbin/getty -8 38400 tty2
100000    3519  3104  0 Nov11 pts/7    00:00:00 /sbin/getty -8 38400 tty3
100000    3532  3104  0 Nov11 ?        00:00:00 cron
100000    3559  3104  0 Nov11 pts/9    00:00:00 /sbin/getty -8 38400 console
100000    3562  3104  0 Nov11 pts/5    00:00:00 /sbin/getty -8 38400 tty1
100000    3581  3104  0 Nov11 ?        00:00:00 upstart-socket-bridge --daemon
100000    3582  3104  0 Nov11 ?        00:00:00 upstart-file-bridge --daemon
lxc       3780  1518  0 00:10 pts/4    00:00:00 grep --color=auto 3104

As you can see, processes running as root inside the container do not appear as root from the host, but as uid 100000.
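The mapping itself is just an offset: with lxc.id_map = u 0 100000 65536, container uid N appears on the host as 100000 + N (for N inside the mapped range). A quick shell sketch of the arithmetic — the uid 102 example matches the rsyslogd line in the ps output above:

```shell
# With lxc.id_map = u 0 100000 65536, container uid N shows up
# on the host as 100000 + N (if N is inside the mapped range).
base=100000
range=65536

map_uid() {
    if [ "$1" -ge 0 ] && [ "$1" -lt "$range" ]; then
        echo $((base + $1))
    else
        echo "unmapped"
    fi
}

root_on_host=$(map_uid 0)       # container root
syslog_on_host=$(map_uid 102)   # the rsyslogd user from the ps output
echo "uid 0 -> $root_on_host, uid 102 -> $syslog_on_host"
```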


Hope you enjoyed this blog post. The next post in the series will begin talking about the ever-popular Docker and ease you into some really cool projects going on around it.

Blog Series on Linux Containers:
Previous Post: First Container
Next Post: Introduction to Docker

Monday, August 18, 2014

Vagrant Cheatsheet

Here is a good Vagrant command cheatsheet that some might find helpful if using Vagrant for various tasks. I recently started using Vagrant to help manage my VirtualBox instances and it has quickly become one of my favorite tools in my lab. Many companies like CoreOS are beginning to put their releases into Vagrant files to easily allow users to begin testing or playing with pre-configured environments in a matter of minutes. You can even create your own environment and put it out there for others to use. The cheatsheet is each command plus a short description. Also check out the Command-Line documentation on Vagrant's site for additional options.



Vagrant Commands: vagrant command options


 vagrant up  - This command is used to create and configure your guest environment/machines based on your Vagrantfile. Also multiple other options can be used.

 vagrant status  - This command is used to check the status of the Vagrant managed machines. 

 vagrant reload  - This command is used to do a complete reload on the Vagrantfile. Use this command anytime you make a change to the Vagrantfile. This command will do the same thing as running a halt command and then running an up command directly after.

 vagrant halt  - Executing this is self-explanatory, bring down the environment Vagrant is managing.

 vagrant suspend  - This command suspends the environment instead of shutting it down. Enables a quicker startup of the environment when brought back up later.

 vagrant resume  - This command resumes an environment that was put into a suspended state.

 vagrant destroy  - Beware. This command will bring down the environment if running and then destroy all of the resources that were created along with it.

 vagrant package  - This command is used to package a running virtualbox environment in a re-usable box. 

 vagrant ssh  - SSH into your running Vagrant machines.



There are several other commands of course to explore. 
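To tie the commands together: `vagrant up` reads a Vagrantfile in the current directory. A hypothetical minimal configuration (the box name and memory value are placeholders, not a recommendation):

```ruby
Vagrant.configure("2") do |config|
  # Base box the machine is created from
  config.vm.box = "ubuntu/trusty64"
  # VirtualBox-specific settings
  config.vm.provider "virtualbox" do |vb|
    vb.memory = 1024
  end
end
```

With this file in place, `vagrant up`, `vagrant ssh` and `vagrant destroy` all operate on the machine it describes.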

Thursday, July 31, 2014

The Container World | Part 4 First Container

Before we create our first container I would like to go over the architecture of LXC and also list some key commands used to manage your containers. I think it's important to understand what the inside of a container looks like before diving right in. I will also provide a command-line cheat sheet. Let's have a look.


Important Components of a Container

image: linuxadvocates.com

To save time let's just list out some important points:

  • All of your containers and their configuration files are created under the /var/lib/lxc/container-name directory by default. You are able to modify this directory if you would like, but I'll be sticking with the default since things can get messy; templates and other configuration files use it as the default. One thing I will do, though, is mount that directory on its own BTRFS filesystem.
  • Each container is assigned its own root filesystem (rootfs), which is maintained in an fstab file. This is one of the coolest things about LXC in my opinion. When you log into a container it basically feels like you are inside a full Linux operating system with the normal filesystem structure. This makes the user feel right at home!
  • A container's run-time configuration is maintained in its config file. This can be modified as needed and covers networking, cgroups, hostname, filesystems, etc.
  • The first Linux container can take some time to create, but each template a container is created from is cached in /var/cache/lxc/. The next time you create from the same template, the cache allows it to be created quickly.
  • Containers are created from templates located at /usr/share/lxc/templates/lxc-[name]. If you want to modify how a container is built, you can modify the templates; they are just shell scripts. Templates contain things like the root password, cache base, default path for container files, default container configs, etc. When you install LXC you are given default templates, so check the directory to see what you can build.
  • Each container has its own log file under /var/log/lxc/container-name.log, which may come in handy for troubleshooting.



Common Commands



Here is a cheat sheet of common commands that we will be using to manage containers. Be sure to also check out each command's man page for extended options, as these are very generic examples.

List containers on the host. 
    lxc-ls
    lxc-ls --fancy    # shows state and IP address

Create a new container. 

    lxc-create -t TEMPLATE -n CONTAINER_NAME

Start a container. The "-d" option starts container without attaching.

    lxc-start -n CONTAINER_NAME
    lxc-start -n CONTAINER_NAME -d

Start a process inside a container. This is like sending a remote command to the container. If no command is given, the user's default shell inside the container will be looked up and executed, which makes it appear that you are inside the container when in fact you are not.
    lxc-attach -n CONTAINER_NAME command

Launch a console for the container. To exit the container use the keystrokes ctrl+a and then hit q at any time. 

    lxc-console -n CONTAINER_NAME 

See specific processes running inside a container.

    lxc-ps -n CONTAINER_NAME

Stop a container.

    lxc-stop -n CONTAINER_NAME

Delete a container.

    lxc-destroy -n CONTAINER_NAME

Clone a container.

    lxc-clone CONTAINER_NAME NEW_CONTAINER_NAME



Let's Create our First Container!


1. The first thing I would suggest is to check that the kernel is ready for LXC with the lxc-checkconfig command. As long as everything comes back enabled, we are ready to rock.

   # lxc-checkconfig 

   Kernel configuration not found at /proc/config.gz; searching...
   Kernel configuration found at /boot/config-3.15.6-200.fc20.x86_64
   --- Namespaces ---
   Namespaces: enabled
   Utsname namespace: enabled
   Ipc namespace: enabled
   Pid namespace: enabled
   User namespace: enabled
   Network namespace: enabled
   Multiple /dev/pts instances: enabled

   --- Control groups ---
   Cgroup: enabled
   Cgroup clone_children flag: enabled
   Cgroup device: enabled
   Cgroup sched: enabled
   Cgroup cpu account: enabled
   Cgroup memory controller: enabled
   Cgroup cpuset: enabled

   --- Misc ---
   Veth pair device: enabled
   Macvlan: enabled
   Vlan: enabled
   File capabilities: enabled

   Note : Before booting a new kernel, you can check its configuration
   usage : CONFIG=/path/to/config /bin/lxc-checkconfig



2. Create the container. We will be creating from a default container template. I can show ways to create custom containers in an advanced LXC demo. Remember also to check out the lxc-create man page.

   # lxc-create -t TEMPLATE -n CONTAINER_NAME


Replace TEMPLATE with one of the container templates supplied in /usr/share/lxc/templates/lxc-[name] and replace CONTAINER_NAME with your desired name. Example: replace TEMPLATE with fedora and CONTAINER_NAME with fedora-container to create a Fedora container named "fedora-container".


NOTE: This will most likely take some time to complete.



3. Once this completes you can verify it's complete and then start up the container. Remember the "-d" flag with the lxc-start command to avoid attaching to the container while starting it.


   # lxc-ls --fancy

   # lxc-start -n CONTAINER_NAME -d


If you check the status of your containers once again you should be able to see that your container is now running.


4. Start playing around with your container and getting familiar with it. Try sending some commands to it with lxc-attach and get a console session going with lxc-console.

   Examples:

   # lxc-attach -n CONTAINER_NAME top
   # lxc-console -n CONTAINER_NAME

When you console into your container try running some normal Linux commands and read/create some files like you would on a normal Linux machine. This will help you get familiar and help you see some of the differences between a container and a full blown OS.


That covers creating your first container! After I created my first couple of containers and started playing with the different commands, I became familiar and comfortable very quickly. Once I started playing with LXC, the whole Linux container technology started to "click" and make sense, and that's why I started this blog series with plain ole LXC. Please check back soon for some advanced container configurations in the next blog post and some Docker tutorials in the near future. I will also be following up with a video soon that covers what I have gone over in the past couple of posts, and I will add it to this page.




Blog Series on Linux Containers:
Previous Post: Control Groups
Next Post: Advanced Configuration

The Container World | Part 3 Control Groups

In part 3 of my Linux container series, I want to briefly talk about an important aspect of the LXC technology, cgroups. In this post I will explain the cgroup technology as it pertains to LXC and systemd. I will not actually implement cgroups in the tutorial but will show an example of how we will set it up when we do advanced container configuration. Let's get started.


Control Groups (cgroups)


image: access.redhat.com
As mentioned in my first post, control groups (cgroups) play an important role in the container game. Although implementing cgroups in container configs is not mandatory, I would highly recommend it, especially if you are planning to deploy several containers. This will help keep your system stable when you start flooding yourself with containers. Cgroups are a feature of the Linux kernel that allow administrators to allocate and/or restrict resources such as CPU, memory and network bandwidth for containers or processes. The main purpose of cgroups is to give you more complete control over managing and monitoring the host's system resources, enabling admins to divide up resources among applications and users and thus allowing the system to operate more efficiently. Remember that containers are lightweight, but we still want to get as much out of our system as we can.

Before systemd, custom cgroup hierarchies were built using the libcgroup package with the cgconfig command. As systemd becomes the adopted standard init system on Linux, libcgroup is no longer applicable (most of the time, although there are certain instances where it can still be used). With systemd, cgroups are now managed and created using systemctl. Systemctl gives us the ability to set or modify parameters for a unit or application at runtime from the command line, as well as allowing us to modify the unit files in /usr/lib/systemd/system/ and set cgroup parameters there, which we won't get into in this post but is good to know.

Systemd by default creates hierarchical controllers in the /sys/fs/cgroup directory from the automatically created hierarchy of slices, scopes and services. Here is a list of available controllers of interest for containers:

   blkio - Limits I/O access to block devices.
   cpu - Uses a scheduler for tasks.
   cpuacct - Reports on cpu resources used by tasks.
   cpuset - Assigns individual cpus for multicore systems.
   devices - Allows or denies access to devices.
   freezer - Freezes or resumes tasks.
   memory - Limits memory use and generates reports.


For LXC, we will implement these controllers and restrictions within each container's configuration file. LXC integrates directly with systemd cgroups, and the settings are read from the container config file located at /var/lib/lxc/container/config. In order to specify a control group value, you add a line with the following syntax: lxc.cgroup.[subsystem name] = value
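For instance, hypothetical memory and CPU restrictions for a container would be added as lines like these (the values are purely illustrative):

```
lxc.cgroup.memory.limit_in_bytes = 512M
lxc.cgroup.cpu.shares = 512
lxc.cgroup.cpuset.cpus = 0,1
```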

Let's go ahead and take a quick look at an example container config file with cgroup controllers implemented just to get an idea. This is from a default fedora container I created several months ago. 

   [root@centos7-lxchost1]# grep -i cgroup /var/lib/lxc/fedoraContainer1/config 
   #cgroups
   lxc.cgroup.devices.deny = a
   lxc.cgroup.devices.allow = c 1:3 rwm
   lxc.cgroup.devices.allow = c 1:5 rwm
   lxc.cgroup.devices.allow = c 5:1 rwm
   lxc.cgroup.devices.allow = c 5:0 rwm
   lxc.cgroup.devices.allow = c 4:0 rwm
   lxc.cgroup.devices.allow = c 4:1 rwm
   lxc.cgroup.devices.allow = c 1:9 rwm
   lxc.cgroup.devices.allow = c 1:8 rwm
   lxc.cgroup.devices.allow = c 136:* rwm
   lxc.cgroup.devices.allow = c 5:2 rwm
   lxc.cgroup.devices.allow = c 254:0 rm



The example above should give a good overview of how to implement cgroup restrictions in Linux containers. Check out the man page for lxc.conf for more examples, and check out the next post to start creating our first containers.


Blog Series on Linux Containers:
Previous Post: Host Network
Next Post: First Container

The Container World | Part 2 Networking

This is part 2 of a blog post series that I have started on Linux containers and container-based technology. In part 1, I gave an overview of LXC technology and finished up with a short tutorial on installing the necessary packages. In this post I will give a short discussion of host networking and how it works, and then sum up with a quick tutorial (I know. Just get to creating the containers already!). I will once again be demoing on a CentOS 7 machine. Hope you enjoy!


Networking 


It is important to understand how networking works for LXC and what your options are. Without correct network configuration on the host, you will not be able to do things such as SSH into your containers. Containers support several different virtual networking types, and the majority of these types require a configured bridge device on the host for any network communication. So, for the common case and for the sake of this tutorial, we will be setting up a bridge on our host.

When it comes to networking, containers are just like regular operating systems or any other device on a network: they are assigned their own IP addresses for communication. By setting up a bridge interface on the host, the host's interface will act similarly to a switch and allow traffic to flow between the containers and other devices on the network. Here is a good illustration of a network bridge interface from Oracle if you are like me and need visuals.

Image: docs.oracle.com

This particular bridging method shown above is called a veth bridge (which we will be using when we create our containers in later tutorials). The networking aspect of LXC is not difficult to grasp, but it is important to understand what is right for your environment. You should know what options you have for things like high availability and being able to access your containers across the network. With that said, let's begin our short demo of setting up a bridged adapter.

NOTE: We will be setting up a single host with a single bridge on the 10.1.0.0/24 subnet. If using VirtualBox, make sure to create a host-only adapter (File > Preferences > Network > Host-Only Networks) if you plan to access the containers from outside the host. Here is my VirtualBox network configuration for the host-only adapter as an example:






1. If you have not already done so, please make sure that you have the network service enabled and started.

    # service network start
    # chkconfig network on

   OR for systemd 

    # systemctl start network.service
    # systemctl enable network.service


2. We will bridge eth0 to br0, so let's configure the eth0 interface. Don't use the HWADDR value from below; keep the original one for your device.

    # vim /etc/sysconfig/network-scripts/ifcfg-eth0
    
    DEVICE=eth0
    TYPE=Ethernet
    HWADDR=YOUR_MAC_ADDRESS
    BOOTPROTO=none
    ONBOOT=yes
    NM_CONTROLLED=no
    BRIDGE=br0

3. Create the bridge device br0 and configure it with a static IP.

    # vim /etc/sysconfig/network-scripts/ifcfg-br0

    DEVICE=br0
    TYPE=Bridge
    IPADDR=10.1.0.103
    NETMASK=255.255.255.0
    ONBOOT=yes
    BOOTPROTO=static
    NM_CONTROLLED=no
    DELAY=0

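If you want to experiment with these two files before touching /etc, they can be generated into a scratch directory first. This is just a sanity-check sketch using the same values as the steps above (HWADDR is deliberately left out, since it is host-specific):

```shell
# Sketch: write the ifcfg-eth0 and ifcfg-br0 files from the tutorial into a
# temporary directory, so the contents can be reviewed before copying them
# into /etc/sysconfig/network-scripts/ on a real host.
SCRIPTS=$(mktemp -d)

cat > "$SCRIPTS/ifcfg-eth0" <<'EOF'
DEVICE=eth0
TYPE=Ethernet
BOOTPROTO=none
ONBOOT=yes
NM_CONTROLLED=no
BRIDGE=br0
EOF

cat > "$SCRIPTS/ifcfg-br0" <<'EOF'
DEVICE=br0
TYPE=Bridge
IPADDR=10.1.0.103
NETMASK=255.255.255.0
ONBOOT=yes
BOOTPROTO=static
NM_CONTROLLED=no
DELAY=0
EOF

# Quick sanity check: eth0 must point at the bridge, and the bridge
# must carry the static address.
grep -q '^BRIDGE=br0$' "$SCRIPTS/ifcfg-eth0" && echo "eth0 ok"
grep -q '^IPADDR=10.1.0.103$' "$SCRIPTS/ifcfg-br0" && echo "br0 ok"
```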

4. Add the following if statement at the end of the ifup-post file, right above exit 0.

    # vim /etc/sysconfig/network-scripts/ifup-post

    if [ "$DEVNAME" = "br0" ]; then
        /usr/sbin/brctl setfd br0 0
    fi


The if statement above sets the br0 device's forwarding delay to 0 each time the interface is brought up. "Forwarding delay time is the time spent in each of the Listening and Learning states before the Forwarding state is entered. This delay is so that when a new bridge comes onto a busy network it looks at some traffic before participating (Linux Foundation)". Also note that any time you change a network configuration, you must restart the network service for it to take effect.
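After restarting the network service, the result can be checked on the host. These commands are illustrative only and assume the br0 bridge from the steps above exists and the bridge-utils and iproute packages are installed:

```shell
# Example only -- requires the br0 bridge configured in the steps above.
systemctl restart network.service

# br0 should list eth0 as an enslaved interface:
brctl show br0

# br0 should now hold the static address (10.1.0.103 in this tutorial):
ip addr show br0

# Forwarding delay should read 0 after the ifup-post change:
cat /sys/class/net/br0/bridge/forward_delay
```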

This concludes the host networking setup. Please check out the next post in the series, on control groups.

Blog Series on Linux Containers:
Previous Post: Overview
Next Post: Control Groups