Thursday, July 31, 2014

The Container World | Part 4 First Container

Before we create our first container, I would like to go over the architecture of LXC and list some key commands used to manage your containers. I think it's important to understand what the inside of a container looks like before diving right in. I will also provide a command line cheat sheet. Let's have a look.


Important Components of a Container

image: linuxadvocates.com

To save time, let's just list out some important points (a quick shell sketch for exploring these paths follows the list):

  • All of your containers and their configuration files are created under the /var/lib/lxc/container-name directory by default. You can change this directory if you would like, but I will be sticking with the default because templates and other configuration files rely on it and things can get messy otherwise. One thing I will do, though, is mount that directory on its own BTRFS filesystem. 
  • Each container is assigned its own root filesystem (rootfs), and its mounts are maintained in an fstab file. This is one of the coolest things about LXC in my opinion. When you log into a container it feels like you are inside a full Linux operating system with the normal filesystem structure. This makes the user feel right at home!
  • A container's run-time configuration is maintained in its config file, which can be modified as needed. This file covers networking, cgroups, hostname, filesystems, etc.
  • The first container built from a given template can take some time to create because the template's base image is downloaded and stored in /var/cache/lxc/. The next time you create a container from the same template, the cache allows it to be created quickly. 
  • Containers are created from templates located in /usr/share/lxc/templates/lxc-[name]. If you want to modify how a container is built, you can modify the templates, which are just shell scripts. Templates define things like the root password, cache base, default path for container files, default container configs, etc. When you install LXC you are given a set of default templates, so check the directory to see what you can build.
  • Each container has its own log file under /var/log/lxc/container-name.log, which may come in handy for troubleshooting. 
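
Here is a minimal shell sketch for poking around those locations on the host. The container name fedora-container is just a placeholder, and the exact files present will vary by LXC version and template.

    # ls /var/lib/lxc/fedora-container/        # config, fstab and rootfs/ for one container
    # cat /var/lib/lxc/fedora-container/config # run-time configuration (network, cgroups, etc.)
    # ls /usr/share/lxc/templates/             # lxc-fedora, lxc-ubuntu, ... available templates
    # ls /var/cache/lxc/                       # cached template downloads
    # tail /var/log/lxc/fedora-container.log   # per-container log for troubleshooting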



Common Commands



Here is a cheat sheet of common commands that we will be using to manage containers. Be sure to also check out each command's man page for extended options, as these are very generic forms.

List containers on the host. 
    lxc-ls
    lxc-ls --fancy    ***shows state and IP address.

Create a new container. 

    lxc-create -t TEMPLATE -n CONTAINER_NAME

Start a container. The "-d" option starts container without attaching.

    lxc-start -n CONTAINER_NAME
    lxc-start -n CONTAINER_NAME -d

Start a process inside a container. This is like sending a remote command to the container. If no command is given, the user's default shell inside the container is looked up and executed, which gives you a shell session inside the container's environment.
    lxc-attach -n CONTAINER_NAME command

Launch a console for the container. To exit the console, press Ctrl+a and then q at any time. 

    lxc-console -n CONTAINER_NAME 

See specific processes running inside a container.

    lxc-ps -n CONTAINER_NAME

Stop a container.

    lxc-stop -n CONTAINER_NAME

Delete a container.

    lxc-destroy -n CONTAINER_NAME

Clone a container.

    lxc-clone CONTAINER_NAME NEW_CONTAINER_NAME
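
As a quick illustration of how these commands fit together, here is a hypothetical lifecycle using placeholder names (web01 and web02); lxc-clone generally expects the source container to be stopped first.

    # lxc-create -t centos -n web01        # build web01 from the centos template
    # lxc-start -n web01 -d                # start it in the background
    # lxc-ls --fancy                       # confirm state and IP address
    # lxc-stop -n web01                    # stop it before cloning
    # lxc-clone web01 web02                # clone web01 to a new container web02
    # lxc-destroy -n web02                 # remove the clone when finished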



Let's Create our First Container!


1. The first thing I would suggest is to check that the kernel is ready for LXC with the lxc-checkconfig command. As long as everything comes back enabled, we are ready to rock.

   # lxc-checkconfig 

   Kernel configuration not found at /proc/config.gz; searching...
   Kernel configuration found at /boot/config-3.15.6-200.fc20.x86_64
   --- Namespaces ---
   Namespaces: enabled
   Utsname namespace: enabled
   Ipc namespace: enabled
   Pid namespace: enabled
   User namespace: enabled
   Network namespace: enabled
   Multiple /dev/pts instances: enabled

   --- Control groups ---
   Cgroup: enabled
   Cgroup clone_children flag: enabled
   Cgroup device: enabled
   Cgroup sched: enabled
   Cgroup cpu account: enabled
   Cgroup memory controller: enabled
   Cgroup cpuset: enabled

   --- Misc ---
   Veth pair device: enabled
   Macvlan: enabled
   Vlan: enabled
   File capabilities: enabled

   Note : Before booting a new kernel, you can check its configuration
   usage : CONFIG=/path/to/config /bin/lxc-checkconfig



2. Create the container. We will be creating from a default container template; I can show ways to create custom containers in an advanced LXC demo. Remember also to check out the lxc-create man page.

   # lxc-create -t TEMPLATE -n CONTAINER_NAME


Replace TEMPLATE with one of the container templates supplied in /usr/share/lxc/templates/lxc-[name] and replace CONTAINER_NAME with your desired name. Example: replace TEMPLATE with fedora and CONTAINER_NAME with fedora-container to create a Fedora container named "fedora-container". 
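
Using the example names above, the full command would look something like this (assuming the fedora template is present in /usr/share/lxc/templates/):

   # lxc-create -t fedora -n fedora-container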


NOTE: This will most likely take some time to complete.



3. Once this completes you can verify the container exists and then start it up. Remember the "-d" flag with the lxc-start command so you do not attach to the container while starting it.


   # lxc-ls --fancy

   # lxc-start -n CONTAINER_NAME -d


If you check the status of your containers once again you should be able to see that your container is now running.
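
For illustration only, the listing might look roughly like this (the IP address is made up and the exact columns vary slightly between LXC versions):

   NAME              STATE    IPV4        IPV6  AUTOSTART
   fedora-container  RUNNING  10.1.0.124  -     NO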


4. Start playing around with your container and getting familiar with it. Try sending some commands to it with lxc-attach and get a console session going with lxc-console. 

   Examples:

   # lxc-attach -n CONTAINER_NAME top
   # lxc-console -n CONTAINER_NAME

When you console into your container, try running some normal Linux commands and read/create some files like you would on a normal Linux machine. This will help you get familiar and help you see some of the differences between a container and a full-blown OS.
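
If you want a few ideas, here is a small sketch of commands to try from a console session inside the container; nothing here is specific to LXC, which is exactly the point.

   # hostname                     # should report the container's hostname
   # cat /etc/os-release          # the container's distro, not the host's
   # ps aux                       # a much shorter process list than on the host
   # ip addr show                 # the container's own network interface
   # echo "hello from a container" > /root/hello.txt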


That covers creating your first container! After I created my first couple of containers and started playing with the different commands, I became familiar and comfortable very quickly. Once I started playing with LXC, the whole Linux container technology started to "click" and make sense, and that's why I started this blog series with plain ole LXC. Please check back soon for some advanced container configurations in the next blog post and some Docker tutorials in the near future. I will also be following up with a video that covers what I have gone over in the past couple of posts and will add it to this page. 




Blog Series on Linux Containers:
Previous Post: Control Groups
Next Post: Advanced Configuration

The Container World | Part 3 Control Groups

In part 3 of my Linux container series, I want to briefly talk about an important aspect of the LXC technology: cgroups. In this post I will explain the cgroup technology as it pertains to LXC and systemd. I will not actually implement cgroups in this tutorial, but I will show an example of how we will set them up when we do advanced container configuration. Let's get started.


Control Groups (cgroups)


image: access.redhat.com
As mentioned in my first post, control groups (cgroups) play an important role in the container game. Although implementing cgroups in container configs is not mandatory, I would highly recommend it, especially if you are planning to deploy several containers. This will help keep your system stable when you start flooding yourself with containers. Cgroups are a feature of the Linux kernel that allows administrators to allocate and/or restrict resources such as CPU, memory and network bandwidth for containers or processes. The main purpose of cgroups is to give more complete control over managing and monitoring the host's system resources, enabling admins to divide up resources among applications and users so the system operates more efficiently. Remember that containers are lightweight, but we still want to get as much out of our system as we can.

Before systemd, custom cgroup hierarchies were built using the libcgroup package with the cgconfig command. As systemd becomes the adopted standard init system, libcgroup is largely no longer needed (although there are certain instances where it can still be used). With systemd, cgroups are now managed and created using systemctl. Systemctl gives us the ability to set or modify parameters for a unit or application at runtime from the command line, and it also lets us modify the unit files in /usr/lib/systemd/system/ and set cgroup parameters there, which we won't get into in this post but is good to know.
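
As a hedged example of the runtime approach, systemctl set-property can apply cgroup settings to a running unit; httpd.service and the values below are just placeholders.

   # systemctl set-property httpd.service CPUShares=512 MemoryLimit=512M
   # systemctl show httpd.service -p CPUShares -p MemoryLimit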

Systemd by default creates hierarchical controllers in the /sys/fs/cgroup directory from the automatically created hierarchy of slices, scopes and services. Here is a list of available controllers of interest for containers:

   blkio - Limits I/O access to block devices.
   cpu - Uses a scheduler for tasks.
   cpuacct - Reports on cpu resources used by tasks.
   cpuset - Assigns individual cpus for multicore systems.
   devices - Allows or denies access to devices.
   freezer - Freezes or resumes tasks.
   memory - Limits memory use and generates reports.


For LXC, we will implement these controllers and restrictions within each container's configuration file. LXC integrates directly with systemd cgroups, and the settings are read from the container config file located in /var/lib/lxc/container/config. In order to specify a control group value, you add a line with the following syntax: lxc.cgroup.[subsystem name] = value
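
For instance, here is a hedged sketch of what resource limits might look like in a container's config file; the values are arbitrary and the keys simply follow the lxc.cgroup.[subsystem] pattern described above.

   lxc.cgroup.memory.limit_in_bytes = 512M
   lxc.cgroup.cpu.shares = 512
   lxc.cgroup.cpuset.cpus = 0,1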

Let's go ahead and take a quick look at an example container config file with cgroup controllers implemented just to get an idea. This is from a default fedora container I created several months ago. 

   [root@centos7-lxchost1]# grep -i cgroup /var/lib/lxc/fedoraContainer1/config 
   #cgroups
   lxc.cgroup.devices.deny = a
   lxc.cgroup.devices.allow = c 1:3 rwm
   lxc.cgroup.devices.allow = c 1:5 rwm
   lxc.cgroup.devices.allow = c 5:1 rwm
   lxc.cgroup.devices.allow = c 5:0 rwm
   lxc.cgroup.devices.allow = c 4:0 rwm
   lxc.cgroup.devices.allow = c 4:1 rwm
   lxc.cgroup.devices.allow = c 1:9 rwm
   lxc.cgroup.devices.allow = c 1:8 rwm
   lxc.cgroup.devices.allow = c 136:* rwm
   lxc.cgroup.devices.allow = c 5:2 rwm
   lxc.cgroup.devices.allow = c 254:0 rm



The example above should give you a good overview of how to implement cgroup restrictions in Linux containers. Check out the man page for lxc.conf for more examples, and check out the next post to start creating our first containers.


Blog Series on Linux Containers:
Previous Post: Host Network
Next Post: First Container

The Container World | Part 2 Networking

This is part 2 of a blog post series that I have started on Linux containers and container-based technology. In part 1, I gave an overview of LXC technology and finished up with a short tutorial on installing the necessary packages. In this post I will give a short discussion of host networking and how it works, and then sum up with a quick tutorial (I know. Just get to creating the containers already!). I will once again be demoing on a CentOS 7 machine. Hope you enjoy!


Networking 


It is important to understand how networking works for LXC and to understand your options. This matters because without correct network configuration on the host, you will not be able to do things such as ssh into your containers. Containers support several different virtual networking types, and the majority of these types require a configured bridge device on the host for any network communication. So for the sake of the majority, and for the sake of this tutorial, we will be setting up a bridge on our host.

When it comes to networking, containers are just like regular operating systems or any other device on a network and are assigned their own IP addresses for communication. By setting up a bridge interface on the host, the host's interface will act similar to a switch and allow traffic to flow between the containers and other devices on the network. Here is a good illustration of a network bridge interface from Oracle if you are like me and need visuals.

Image: docs.oracle.com

This particular bridging method shown above is called a veth bridge (which we will be using when we create our containers in later tutorials). The networking aspect of LXC is not that difficult to grasp, but I believe it is important to understand what is right for your environment. You should know what options you have for things like high availability and being able to access your containers across the network. With that being said, let's begin our short demo on setting up a bridged adapter. 

NOTE: We will be setting up a single host with a single bridge on the 10.1.0.0/24 subnet. If using VirtualBox, make sure to create a host-only adapter (File > Preferences > Network > Host-Only Networks) if you plan to access the containers from outside the host. Here is my VirtualBox network configuration for the Host-Only adapter as an example:






1. If you have not already done so please make sure that you have the network service enabled and started.

    # service network start
    # chkconfig network on

   OR for systemd 

    # systemctl start network.service
    # systemctl enable network.service


2. We will bridge eth0 to br0, so let's configure the eth0 interface. Don't copy the HWADDR value below; keep your device's original MAC address. 

    # vim /etc/sysconfig/network-scripts/ifcfg-eth0
    
    DEVICE=eth0
    TYPE=Ethernet
    HWADDR=YOUR_MAC_ADDRESS
    BOOTPROTO=none
    ONBOOT=yes
    NM_CONTROLLED=no
    BRIDGE=br0

3. Create the bridge device br0 and set it up with a static IP.

    # vim /etc/sysconfig/network-scripts/ifcfg-br0

    DEVICE=br0
    TYPE=Bridge
    IPADDR=10.1.0.103
    NETMASK=255.255.255.0
    ONBOOT=yes
    BOOTPROTO=static
    NM_CONTROLLED=no
    DELAY=0


4. Add the following if statement at the end of the ifup-post file right above exit 0.

    # vim /etc/sysconfig/network-scripts/ifup-post

    if [ "$DEVNAME" = "br0" ]; then
        /usr/sbin/brctl setfd br0 0
    fi


The if statement above executes a command that sets the br0 device's forwarding delay to 0 each time the interface is brought up. "Forwarding delay time is the time spent in each of the Listening and Learning states before the Forwarding state is entered. This delay is so that when a new bridge comes onto a busy network it looks at some traffic before participating (Linux Foundation)." Also note that any time you change a network configuration, you must restart the network for it to take effect. 
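
A quick, hedged way to apply and verify the changes; brctl comes from the bridge-utils package we installed in part 1, and the exact interface names depend on your system:

    # systemctl restart network.service     # or: service network restart
    # brctl show                            # br0 should list eth0 as an attached interface
    # ip addr show br0                      # br0 should now hold the static IP (10.1.0.103 here)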

This concludes the host networking setup. Please check out the next post in the series on cgroups. 

Blog Series on Linux Containers:
Previous Post: Overview
Next Post: Control Groups

Monday, July 14, 2014

The Container World | Part 1 Overview

Due to the popularity of Linux containers and container-based technology, I'll be starting a series of blog posts on containers and the popular container-based technologies that have been gaining attention over the past year or so. I think it's important to have a basic understanding of containers in order for the other technologies to make sense, so I'll start off the series by giving an overview of LXC and explaining some of its features and advantages. Once there is a basic understanding of containers, we'll move into tutorials on how to build, deploy and manage them before reviewing other container-based technologies like CoreOS, Project Atomic, Docker, OpenShift and many more. Hope you enjoy.


Linux Containers (LXC) Explained


Before jumping into the world of containers and container-based technology, I believe that it is important to have at least a basic understanding of Linux Containers (LXC) since it is the "backbone" for the majority of the projects. Once you understand the basics of LXC, it will make a container-based technology like Docker much easier to grasp from the start. 

A Linux Container, in its most basic definition, is an operating-system-level virtualization method for running one or multiple isolated Linux systems on a single host. These isolated Linux systems are called "containers" and use control groups (cgroups) for resource management. Cgroups became part of the kernel with release 2.6.24, and together with namespace isolation they give each container its own view of the OS, including its own PID space, filesystem structure and network interfaces. Although each container is given its own space and can be constrained to a specified resource allocation, all containers share the host's kernel. You can imagine containers as processes in a box, in that containers run as Linux processes on top of the Linux kernel. See the image below for a visual representation of the layers. 


Figure 1. Containers all share the same kernel and host OS and may also share the host's binaries and libraries as well.
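
You can see the "processes in a box" idea from the host side once LXC is installed (covered below). This is only a sketch; fedora-container is a placeholder name and the exact lxc-info output fields vary between versions.

  # lxc-info -n fedora-container      # reports the container's state and its init PID on the host
  # ps -o pid,ppid,cmd --ppid <PID>   # replace <PID> with the PID reported above to see the
                                      # container's processes sitting in the host's process tree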



Advantages / Disadvantages


The advantages of containers depend largely on the needs of the environment. There are several benefits to utilizing this technology, but there are disadvantages as well. Let's list out some of the advantages of containers and then the disadvantages. Please note these pros and cons are a matter of opinion, so where I see an advantage some might not.

Advantages 
  • Lightweight - As mentioned above, Linux containers are extremely lightweight because they are not full-fledged operating systems and take advantage of running as processes on the Linux kernel. 
  • Open Source - Who doesn't like open source? Lots of enhancements and features being added all the time, and a solid community behind it. Companies like Red Hat also offer paid support in newer releases of their OS.  
  • API - LXC is written in C, python3, shell and lua and has several language bindings, including python, lua, ruby and Go. This gives you the ability to program and automate as far as your heart desires. 
  • BTRFS - If you haven't read or heard about BTRFS yet, I would suggest taking a look at its features. I won't get into discussing BTRFS here, but note that container technology is a great case for taking advantage of some of its features.
  • Isolation - Cgroups give admins the ability to run multiple systems and applications inside containers on the same host without interference between containers. This reduces overhead and in return can help you better utilize resources like CPU and memory, which in turn saves rack space. A great way to get maximum utilization out of your hardware and a return on investment.
  • Fast Deployment - One of the best advantages in my opinion, and what I believe to be one of the main inspirations for a lot of the container-based technologies like Docker. You can create container templates, set up a repository, clone new instances from templates and be up and running in a matter of minutes. 
  • Runs Linux - Linux is totally wicked awesome, but that's not what I'm getting at. You can run several different flavors of Linux on the same host as long as they all share the same kernel. So, for example, you can run CentOS containers alongside Ubuntu containers on a Fedora host.

Disadvantages

  • Only Linux - Although you are able to run many different Debian- and rpm-based containers on the same host regardless of the host OS, you are strictly limited to running Linux. You cannot run Windows, BSD, or OS X since containers utilize the Linux kernel. 
  • Configuration - I have found, especially in the beginning, that configuring containers can be a bit of a task and a little frustrating at times. But if you play around with them for a while it will start to click.
  • All command line - To me this is an advantage, but to some it might be a disadvantage. There is no GUI program that comes with LXC for configuration or management of containers or repos. 
  • Security - There are many people who do not believe that containers are secure. Security has come a long way through the integration of SELinux, so I would say that this point could, and probably should, be argued. 




Getting Started


With this being part 1 of the series, let's go ahead and move forward with getting LXC installed on a system. LXC works on multiple flavors of Linux but for this demo and the rest of my demos I will be using CentOS 7. In the next post, we will dig deeper into LXC and start configuration.


1. The first thing you need to do is install the EPEL repository for CentOS 7 if you have not already done so. This repository contains the necessary packages for LXC. You can use wget as shown below to download it. If you already have the repo on your machine, skip to step 2.

   # wget http://dl.fedoraproject.org/pub/epel/beta/7/x86_64/epel-release-7-0.2.noarch.rpm
   # rpm -ivh epel-release-7-0.2.noarch.rpm


2. Install the main LXC package along with the bridging utilities package for ethernet bridging.

   # yum install -y lxc bridge-utils
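
If you want to double-check what landed on the system, here is a quick, hedged verification (package names may differ slightly between EPEL revisions):

   # rpm -qa | grep -i lxc        # confirm the lxc packages are installed
   # lxc-checkconfig              # verify the kernel has the required features enabled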


That's it! Extremely simple. You can watch the video below for a visual on getting LXC installed. Please check back soon for Part 2 where I will walk us through setting up the host for networking.  






Blog Series on Linux Containers:
Next Post: Networking

Tuesday, July 8, 2014

Managing Systemd Targets (Runlevels)

If you are a Linux geek, then you are probably aware of the adoption of systemd in the majority of Linux distros. One of the differences that we will see is how the "new init" handles running in different modes (multi-user, single-user, graphical, etc.). As you probably know, older init versions such as SysV used runlevels 0-6 to define the operating system's mode of operation, where each mode defined which services would run. Systemd uses targets, represented by target units, which group together other system units through a chain of dependencies to define which services to run. I will be using a Fedora 20 machine to demo a few commands for managing systemd targets. Also note that the majority of the older init commands still work for now, but I would highly suggest learning the systemctl commands because, from what I have been reading, all the older commands will slowly go away. 

Systemd's target units end in a ".target" file extension. If you want to take a look at which targets are currently active, you can execute the following command:


[root@localhost ~]# systemctl list-units --type=target
UNIT                LOAD   ACTIVE SUB    DESCRIPTION
basic.target        loaded active active Basic System
cryptsetup.target   loaded active active Encrypted Volumes
getty.target        loaded active active Login Prompts
graphical.target    loaded active active Graphical Interface
local-fs-pre.target loaded active active Local File Systems (Pre)
local-fs.target     loaded active active Local File Systems
multi-user.target   loaded active active Multi-User System
network.target      loaded active active Network
paths.target        loaded active active Paths
remote-fs.target    loaded active active Remote File Systems
slices.target       loaded active active Slices
sockets.target      loaded active active Sockets
sound.target        loaded active active Sound Card
swap.target         loaded active active Swap
sysinit.target      loaded active active System Initialization
timers.target       loaded active active Timers

LOAD   = Reflects whether the unit definition was properly loaded.
ACTIVE = The high-level unit activation state, i.e. generalization of SUB.
SUB    = The low-level unit activation state, values depend on unit type.

16 loaded units listed. Pass --all to see loaded but inactive units, too.
To show all installed unit files use 'systemctl list-unit-files'.


This command will tell you the target's name, whether it has been loaded, its high/low-level activation state and a description. 

The default target units for modes of operation are:

poweroff.target - used to power off the system (runlevel 0).
rescue.target - used to set up rescue mode (runlevel 1).
multi-user.target - multi-user, non-graphical mode (runlevels 2, 3 and 4).
graphical.target - graphical multi-user mode (runlevel 5).
reboot.target - reboot (runlevel 6).

If you would like to see what the default target unit is for your system, execute:


    [root@localhost ~]# systemctl get-default 
    graphical.target 


You can also change the default target by using the set-default flag:

    [root@localhost ~]# systemctl set-default multi-user.target 
    rm '/etc/systemd/system/default.target'
    ln -s '/usr/lib/systemd/system/multi-user.target' '/etc/systemd/system/default.target'
    [root@localhost ~]# systemctl get-default
    multi-user.target


One thing that I am sure you will have to do at some point as a Sys Admin is switch from one target unit to another. An example would be when a filesystem goes read-only and the system needs to be taken into rescue mode to run an fsck:


    [root@localhost ~]# systemctl isolate rescue.target 


In this case "systemctl rescue" would work as well. 

There are tons and tons of material out there on managing targets. If you are looking for more information on systemd, I would suggest checking your Linux distro's systemd page. Please check back soon for more tutorials on systemd. 

Sunday, July 6, 2014

What is CoreOS?

I'm sure that many of you, especially those in the cloud realm, have been hearing a whole lot of buzz around the new operating system referred to as "CoreOS". I have been doing a lot of reading and have been following CoreOS over the past 5-6 months, and I have to admit that it has really grabbed my attention. In this post I'll give an overview of what CoreOS is and also provide some insight into its features and capabilities, and why I think CoreOS is a game changer. Also, please make sure you check the CoreOS site to get more information on the products offered, latest news, documentation and great tutorials.


Overview: CoreOS Explained

In its simplest definition, CoreOS is a minimized, lightweight, Linux-based operating system whose purpose is to provide the ability to deploy and run massive amounts of software containers on a single host or across a cluster of hosts. It is basically a Linux kernel running a few utilities and nothing else. To give an idea of how minimal the operating system is, I have read that the entire OS consists of 100 megabytes of code or less and can boot in less than two seconds. The boot time I would have to see for myself, but that is crazy small for a server-style OS. CoreOS takes advantage of a service called "Docker", which is used to build, deploy and manage containers (watch for posts on Docker in the near future). The image below gives a good visual representation of the fundamental layout of CoreOS and its utilities. CoreOS eliminates the need for a hypervisor to deploy full-fledged virtual machines running full-fledged operating systems and instead focuses on providing applications. It is completely open source under the Apache 2.0 license, with a new support option released as of June 30th.


Image: https://coreos.com/


Some Features

One of the most surprising or interesting things about CoreOS is the fact that the OS is not a traditional full-fledged Linux OS like Red Hat or Ubuntu but is instead based on Google's Chrome OS. This is why CoreOS is so lightweight; in fact it is so lightweight that it requires just a little over 100 MB of memory to boot, which is less than half of what it takes to boot traditional Linux flavors. CoreOS is also able to run on virtualized infrastructure such as KVM, Google Compute Engine and other hypervisors, or on plain ole bare metal machines. 

Another surprising or interesting feature of CoreOS is the way that you patch the OS and the applications. CoreOS does not come packaged with any software packaging tools such as yum, apt or Zypper. Instead, they provide a web GUI dashboard application, called CoreUpdate, that is used to manage all of your machines and applications. This application can give detailed information such as the number of machines, versions, the health of your clusters and more. The dashboard leverages FastPatch, an active-passive root partition scheme, which patches the entire OS as a single "unit" instead of package by package like traditional Linux. When the OS is patched, it creates a completely new root partition as the passive partition, and once the OS is rebooted it makes the newly created partition active and the older partition passive. There are several benefits to doing this, but most importantly, especially if you are a Sys Admin, you still have the ability to roll back your update if needed. Pretty freaking brilliant.

Let's talk about one of the biggest key factors of CoreOS: Docker. As mentioned above, Docker is what CoreOS uses to run, build and deploy what is sitting on the OS, which are essentially Linux containers. Containers are extremely lightweight virtual machines whose purpose is simply to serve applications. There is almost no overhead to running containers, because Docker containers all share the host's kernel and run as isolated processes in userspace. There are a lot of benefits to running containers (which will be discussed in other posts). You can deploy a new container within a matter of seconds, start and stop them even faster, and share them across an entire cluster, which brings me to the next fundamental feature to cover: CoreOS clustering.

CoreOS comes with built-in clustering of hosts, which can range from just a couple of machines to entire data centers. CoreOS uses etcd and systemd as the backbone of its clustering, while fleet manages the containers and decides which host each container should reside on based on the application. According to the CoreOS team, fleet creates seamless integration of clustered hosts into a shared pool of resources. I'm not sure I have a complete understanding of how their whole clustering architecture works, but fleet definitely looks legit. Fleet is capable of maintaining all of the individual containers and ensures that they maintain high availability in the event of system updates or system failure. I would compare fleet to VMware's HA solution. It also allows containers that share the same application structure to run on separate hosts if needed, or together.


Closing

CoreOS has definitely made its presence known over the past year or so. Whether or not it will become an adopted technology for most of the cloud world, only time will tell. One would think that any "big-time" data center tech company would be taking a hard look, as there are too many acclaimed benefits to ignore. Regardless, CoreOS is definitely worth talking about and playing with. Check out their site to get a copy and try it out for yourself. I will follow up with a video overview of my CoreOS lab as soon as I find a desktop recording application.