04 January 2014

Getting started with LXC on Oracle Linux is very easy today.

    * yum -y update
    * yum install lxc
    * reboot
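
Before going further, a quick sanity check does not hurt; something along these lines (the output will vary with your lxc and kernel versions):

    rpm -q lxc          # confirm the lxc package is installed
    uname -r            # confirm we booted into the new (UEK) kernel
    lxc-checkconfig     # verify the running kernel has the required cgroup/namespace support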

Now that we have the required software and are running the latest kernel, let's have a look at how LXC works.

Basics

According to Wikipedia, LXC is:

    LXC provides operating system-level virtualization not via a virtual machine, but rather provides a virtual environment that has its own process and network space. LXC relies on the Linux kernel cgroups functionality that was released in version 2.6.24. It also relies on other kinds of namespace-isolation functionality, which were developed and integrated into the mainline Linux kernel. It is used by Heroku to provide separation between their dynos.

From a sysadmin's point of view, LXC is a very nice way to create isolated, lightweight machines.

If you are new to LXC, I suggest reading the Oracle LXC documentation.

Examples

Oracle Linux 6 today provides a very convenient way to create containers.

Included with the lxc package comes the oracle template.
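
On my system the template scripts are installed under /usr/share/lxc/templates; the exact location may differ with your lxc build, but listing that directory shows what is available:

    # look for the lxc-oracle script among the available templates
    ls /usr/share/lxc/templates/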

    [root@vagrant-oracle65 ~]# lxc-create -n lxc1 -t oracle -- -h
    usage: lxc-create -n NAME [-f CONFIG_FILE] [-t TEMPLATE] [FS_OPTIONS] --
         [-P lxcpath] [TEMPLATE_OPTIONS]

    where FS_OPTIONS is one of:
      -B none
      -B dir [--dir rootfs_dir]
      -B lvm [--lvname LV_NAME] [--vgname VG_NAME] [--fstype FS_TYPE]
        [--fssize FS_SIZE]
      -B btrfs

    Create a new container on the system.

    Options:
      -n NAME            specify the name of the container
      -f CONFIG_FILE     use an existing configuration file
      -t TEMPLATE        use an accessible template script
      -B BACKING_STORE   alter the container backing store (default: none)
  --lxcpath path     specify an alternate container path (default: /container)
      --lvname LV_NAME   specify the LVM logical volume name
                  (default: container name)
      --dir ROOTFS_DIR   specify path for custom rootfs directory location
      --vgname VG_NAME   specify the LVM volume group name (default: lxc)
      --fstype FS_TYPE   specify the filesystem type (default: ext4)
      --fssize FS_SIZE   specify the filesystem size (default: 500M)


    Template-specific options (TEMPLATE_OPTIONS):
      -a|--arch=<arch>        architecture (ie. i386, x86_64)
      -R|--release=<release>  release to download for the new container
      -r|--rpms=<rpm name>    additional rpms to install into container
      -u|--url=<url>          replace yum repo url (ie. local yum mirror)
      -t|--templatefs=<path>  copy/clone rootfs at path instead of downloading
      -h|--help

    Release is of the format "major.minor", for example "5.8", "6.3", or "6.latest"

Let's say you want to create a container that is OL6 with the latest updates:

    lxc-create -n lxc1 -t oracle -- -R 6.latest

Same as before, but 32bit:

    lxc-create -n lxc1 -t oracle -- -R 6.latest -a i386

OL5 with updates:

    lxc-create -n lxc1 -t oracle -- -R 5.latest

Or a specific version of 4, say the oldest one possible, 4.6:

    lxc-create -n lxc1 -t oracle -- -R 4.6

Another useful template option is --url, which lets you point to your own repository of packages.

Let's build an OL 4.latest container:

    lxc-create -n lxc1 -t oracle -- -R 4.latest --url http://mini.home.kikitux.net/stage

    Complete!
    Fixing (downgrading) rpm database from version 9
    Rebuilding rpm database
    Configuring container for Oracle Linux 4.9
    chcon: can't apply partial context to unlabeled file `/container/lxc1/rootfs/dev'
    Added container user:oracle password:oracle
    Added container user:root password:root
    Container : /container/lxc1/rootfs
    Config    : /container/lxc1/config
    Network   : eth0 (veth) on virbr0
    'oracle' template installed
    'lxc1' created

Filesystem

In order to get the best experience possible, LXC integrates very closely with btrfs.

If you have /container on btrfs, lxc will automatically create a new subvolume for each container, and will be able to perform quick clones when required.
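
If you do not have /container on btrfs yet, the setup is a one-off; a minimal sketch, assuming a spare disk /dev/sdd that you are happy to format (the device name is just an example):

    mkfs.btrfs /dev/sdd                                              # WARNING: wipes the device
    mkdir -p /container
    mount /dev/sdd /container
    echo "/dev/sdd  /container  btrfs  defaults  0 0" >> /etc/fstab  # mount it at boot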

Let's see some numbers.

    [root@ml330 container]# time lxc-create -B btrfs -n lxcguest1 -t oracle -- --url http://ml330.home.kikitux.net/stage

    Complete!
    Rebuilding rpm database
    Configuring container for Oracle Linux 6.5
    chcon: can't apply partial context to unlabeled file `/container/lxcguest1/rootfs/dev'
    Added container user:oracle password:oracle
    Added container user:root password:root
    Container : /container/lxcguest1/rootfs
    Config    : /container/lxcguest1/config
    Network   : eth0 (veth) on virbr0
    'oracle' template installed
    'lxcguest1' created

    real    1m26.409s
    user    0m36.572s
    sys     0m5.728s

Around 1 minute 30 seconds, pretty impressive.

Check the new btrfs subvolume:

    [root@ml330 container]# btrfs su list /container/
    ID 319 gen 4558 top level 287 path lxcguest1/rootfs
    [root@ml330 container]#

Let's clone lxcguest1 into lxcguest2:

    [root@ml330 container]# time lxc-clone -o lxcguest1  -n lxcguest2 -s -t btrfs
    Tweaking configuration
    Copying rootfs...
    Create a snapshot of '/container/lxcguest1/rootfs' in '/container/lxcguest2/rootfs'
    Updating rootfs...
    'lxcguest2' created

    real    0m0.185s
    user    0m0.020s
    sys     0m0.020s
    [root@ml330 container]#

Less than a second!

The real value comes later: when you need to clone a machine on which you have installed applications that take significant time to set up, this fast clone becomes very handy.

Let's check the btrfs subvolumes now:

    [root@ml330 container]# btrfs su list /container/
    ID 319 gen 4619 top level 287 path lxcguest1/rootfs
    ID 321 gen 4620 top level 287 path lxcguest2/rootfs

Let's check some information of the lxcguest1/rootfs subvolume:

    [root@ml330 container]# btrfs su show /container/lxcguest1/rootfs
    /container/lxcguest1/rootfs
        Name:                   rootfs
        uuid:                   0a99c1d7-b8f4-804a-b0be-42ab3969d3a0
        Parent uuid:            -
        Creation time:          2014-01-04 04:23:55
        Object ID:              319
        Generation (Gen):       4619
        Gen at creation:        4553
        Parent:                 287
        Top Level:              259
        Flags:                  -
        Snapshot(s):
                            install/container/lxcguest2/rootfs
    [root@ml330 container]#

Perfect, we can see we have one snapshot, which is now at /container/lxcguest2/rootfs.
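
The console sessions in the next sections assume the container is running; the basic lifecycle commands (not shown in the output above) are:

    lxc-start -n lxcguest1 -d     # start the container in the background
    lxc-console -n lxcguest1      # attach to its console (Ctrl-a q detaches)
    lxc-info -n lxcguest1         # show state and PID
    lxc-stop -n lxcguest1         # stop it again when needed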

Mountpoints

In my particular case, the host server has this filesystem structure:

    [root@ml330 /]# df -h
    Filesystem      Size  Used Avail Use% Mounted on
    /dev/sda2        40G   15G   25G  37% /
    /dev/sda4        77G   23G   51G  31% /u01
    /dev/sdb        233G  3.5G  230G   2% /u02
    /dev/sdc        1.9T   29G  1.8T   2% /u03
    [root@ml330 /]#

The filesystems I am using are:

    [root@ml330 /]# mount
    /dev/sda2 on / type btrfs (rw)
    /dev/sda4 on /u01 type btrfs (rw)
    /dev/sdb on /u02 type ocfs2 (rw,heartbeat=none)
    /dev/sdc on /u03 type ocfs2 (rw,heartbeat=none)
    [root@ml330 /]#

I would like to have the same structure in my container. Let's see what we get in a vanilla installation:

    Oracle Linux Server release 6.5
    Kernel 3.12.6-3.12.y.20131224.ol6.x86_64 on an x86_64

    lxcguest1 login: root
    Password:
    [root@lxcguest1 ~]# df -h / 
    Filesystem      Size  Used Avail Use% Mounted on
    -                40G   15G   25G  37% /
    [root@lxcguest1 ~]#

This comes from the container's configuration file, /container/lxcguest1/config:

    lxc.rootfs = /container/lxcguest1/rootfs

Checking the LXC documentation, we have the option lxc.mount.entry, so putting something like this in the config would work:

    lxc.mount.entry=/u03/lxcguest1 /container/lxcguest1/rootfs/u03 none rw,bind 0 0
    lxc.mount.entry=/u02/lxcguest1 /container/lxcguest1/rootfs/u02 none rw,bind 0 0
    lxc.mount.entry=/u01/lxcguest1 /container/lxcguest1/rootfs/u01 none rw,bind 0 0

However, checking the Oracle documentation for LXC, we have:

    9.2.4 About the lxc-oracle Template Script

    Note
    If you amend a template script, you alter the configuration files of all containers that you subsequently create from that script. If you amend the config file for a container, you alter the configuration of that container and all containers that you subsequently clone from it.

    The lxc-oracle template script defines system settings and resources that are assigned to a running container, including:
    the default passwords for the oracle and root users, which are set to oracle and root respectively
    the host name (lxc.utsname), which is set to the name of the container
    the number of available terminals (lxc.tty), which is set to 4
    the location of the container's root file system on the host (lxc.rootfs)
    the location of the fstab mount configuration file (lxc.mount)
    all system capabilities that are not available to the container (lxc.cap.drop)
    the local network interface configuration (lxc.network)
    all whitelisted cgroup devices (lxc.cgroup.devices.allow)

So it is better to put the information in /container/lxcguest1/fstab.

From:

    proc    /container/lxcguest1/rootfs/proc     proc   nodev,noexec,nosuid 0 0
    devpts  /container/lxcguest1/rootfs/dev/pts  devpts defaults 0 0
    sysfs   /container/lxcguest1/rootfs/sys      sysfs  defaults  0 0

To:

    proc    /container/lxcguest1/rootfs/proc     proc   nodev,noexec,nosuid 0 0
    devpts  /container/lxcguest1/rootfs/dev/pts  devpts defaults 0 0
    sysfs   /container/lxcguest1/rootfs/sys      sysfs  defaults  0 0
    /u03/lxcguest1 /container/lxcguest1/rootfs/u03 none rw,bind 0 0
    /u02/lxcguest1 /container/lxcguest1/rootfs/u02 none rw,bind 0 0
    /u01/lxcguest1 /container/lxcguest1/rootfs/u01 none rw,bind 0 0

Let's create the directories:

    mkdir -p /u03/lxcguest1 /container/lxcguest1/rootfs/u03
    mkdir -p /u02/lxcguest1 /container/lxcguest1/rootfs/u02
    mkdir -p /u01/lxcguest1 /container/lxcguest1/rootfs/u01

Let's start the container and check:

    Oracle Linux Server release 6.5
    Kernel 3.12.6-3.12.y.20131224.ol6.x86_64 on an x86_64

    lxcguest1 login: root
    Password:
    Last login: Fri Jan  3 23:32:21 on lxc/tty1
    [root@lxcguest1 ~]# df -h / /u01 /u02 /u03
    Filesystem      Size  Used Avail Use% Mounted on
    -                40G   15G   25G  37% /
    -                77G   23G   51G  31% /u01
    -               233G  3.5G  230G   2% /u02
    -               1.9T   29G  1.8T   2% /u03
    [root@lxcguest1 ~]#

Perfect, now we have plenty of space, with our very own structure.

Just note that we are mapping /u01/lxcguest1 to /container/lxcguest1/rootfs/u01, so at the container level I can do:

    [root@lxcguest1 ~]# touch /u01/from_container_lxcguest1

And on my physical box, the file shows up as:

    [root@ml330 container]# ls -al /u01/lxcguest1/from_container_lxcguest1
    -rw-r--r-- 1 root root 0 Jan  4 04:41 /u01/lxcguest1/from_container_lxcguest1

IMPORTANT NOTE

    As we are modifying the configuration files, please be aware that if you clone these machines, this configuration will be carried over to the new container, so you need to review and adjust the files as needed.
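
For example, if you later clone this container into a hypothetical lxcguest3, the bind-mount entries above would still point at the lxcguest1 directories; depending on the lxc version the clone may or may not rewrite them for you, so something like this is worth checking and, if needed, applying:

    # illustrative only: fix the copied fstab and create matching host directories
    sed -i 's/lxcguest1/lxcguest3/g' /container/lxcguest3/fstab
    mkdir -p /u01/lxcguest3 /u02/lxcguest3 /u03/lxcguest3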

Networking

After restarting the machine, in order to get a clean boot with the new updates and LXC, we will notice that a new network interface is available:

    virbr0    Link encap:Ethernet  HWaddr 52:54:00:94:9D:63
          inet addr:192.168.122.1  Bcast:192.168.122.255  Mask:255.255.255.0
          UP BROADCAST MULTICAST  MTU:1500  Metric:1
          RX packets:13 errors:0 dropped:0 overruns:0 frame:0
          TX packets:8 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:1376 (1.3 KiB)  TX bytes:1242 (1.2 KiB)

Let's check the network inside our container:

    Oracle Linux Server release 6.5
    Kernel 3.12.6-3.12.y.20131224.ol6.x86_64 on an x86_64

    lxcguest1 login: root
    Password:
    Last login: Fri Jan  3 23:39:39 on lxc/console
    [root@lxcguest1 ~]# ifconfig eth0
    eth0      Link encap:Ethernet  HWaddr FE:CD:8B:0D:75:48
          inet addr:192.168.122.29  Bcast:192.168.122.255  Mask:255.255.255.0
          inet6 addr: fe80::fccd:8bff:fe0d:7548/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:17 errors:0 dropped:0 overruns:0 frame:0
          TX packets:13 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:1399 (1.3 KiB)  TX bytes:1458 (1.4 KiB)

    [root@lxcguest1 ~]# route -n
    Kernel IP routing table
    Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
    0.0.0.0         192.168.122.1   0.0.0.0         UG    0      0        0 eth0
    169.254.0.0     0.0.0.0         255.255.0.0     U     1011   0        0 eth0
    192.168.122.0   0.0.0.0         255.255.255.0   U     0      0        0 eth0
    [root@lxcguest1 ~]# cat /etc/resolv.conf
    ; generated by /sbin/dhclient-script
    nameserver 192.168.122.1
    [root@lxcguest1 ~]#

So by default our container is on this isolated network, which is good for safety reasons.

Let's check if we have connectivity outside of the host:

    [root@lxcguest1 ~]# ping google.com
    PING google.com (101.98.9.59) 56(84) bytes of data.
    ^C
    --- google.com ping statistics ---
    121 packets transmitted, 0 received, 100% packet loss, time 120423ms

    [root@lxcguest1 ~]#

If for some reason you don't have network connectivity in the guest, relax: it is just a one-line fix to enable it.

Even if you are not using iptables yourself, you will notice that after libvirtd is installed and the machine has been freshly booted, some iptables rules exist on the system:

    [root@ml330 container]# iptables -L
    Chain INPUT (policy ACCEPT)
    target     prot opt source               destination
    ACCEPT     udp  --  anywhere             anywhere            udp dpt:domain
    ACCEPT     tcp  --  anywhere             anywhere            tcp dpt:domain
    ACCEPT     udp  --  anywhere             anywhere            udp dpt:bootps
    ACCEPT     tcp  --  anywhere             anywhere            tcp dpt:bootps

    Chain FORWARD (policy ACCEPT)
    target     prot opt source               destination
    ACCEPT     all  --  anywhere             192.168.122.0/24    state RELATED,ESTABLISHED
    ACCEPT     all  --  192.168.122.0/24     anywhere
    ACCEPT     all  --  anywhere             anywhere
    REJECT     all  --  anywhere             anywhere            reject-with icmp-port-unreachable
    REJECT     all  --  anywhere             anywhere            reject-with icmp-port-unreachable

    Chain OUTPUT (policy ACCEPT)
    target     prot opt source               destination
    [root@ml330 container]#
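
These are the filter table rules; the NAT rules that actually give the containers outbound access live in the nat table, and you can inspect them as well (output will vary):

    iptables -t nat -L -n -v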

Let's check the IP forwarding setting of our system:

    [root@ml330 container]# cat /proc/sys/net/ipv4/ip_forward
    0

Let's allow our host to forward traffic:

    [root@ml330 container]# echo 1 > /proc/sys/net/ipv4/ip_forward
    [root@ml330 container]# cat /proc/sys/net/ipv4/ip_forward
    1
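
Note that writing to /proc does not survive a reboot; to make the change permanent, the usual approach is /etc/sysctl.conf:

    # in /etc/sysctl.conf, set:
    #   net.ipv4.ip_forward = 1
    # then reload the settings now:
    sysctl -p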

Let's test again. I will ping the computer I have in my living room:

    [root@lxcguest1 ~]# ping mini.home.kikitux.net
    PING mini.home.kikitux.net (192.168.1.2) 56(84) bytes of data.
    64 bytes from alvaros-Mini.home.kikitux.net (192.168.1.2): icmp_seq=1 ttl=63 time=3.66 ms
    64 bytes from alvaros-Mini.home.kikitux.net (192.168.1.2): icmp_seq=2 ttl=63 time=3.42 ms
    64 bytes from alvaros-Mini.home.kikitux.net (192.168.1.2): icmp_seq=3 ttl=63 time=2.59 ms

Cool.

Our LXC guest got this IP:

    [root@lxcguest1 ~]# ifconfig eth0
    eth0      Link encap:Ethernet  HWaddr FE:CD:8B:0D:75:48
          inet addr:192.168.122.29  Bcast:192.168.122.255  Mask:255.255.255.0

If you would like to have a fixed IP, this is very easy.

In the configuration file, change from:

    lxc.network.type = veth
    lxc.network.link = virbr0
    lxc.network.flags = up

to:

    lxc.network.type = veth
    lxc.network.link = virbr0
    lxc.network.flags = up
    lxc.network.ipv4 = 192.168.122.11/24
    lxc.network.ipv4.gateway = 192.168.122.1

Let's power off and start our container again:

    Oracle Linux Server release 6.5
    Kernel 3.12.6-3.12.y.20131224.ol6.x86_64 on an x86_64

    lxcguest1 login: root
    Password:
    Last login: Fri Jan  3 23:45:59 on lxc/console
    [root@lxcguest1 ~]# ifconfig eth0
    eth0      Link encap:Ethernet  HWaddr FE:CD:8B:0D:75:48
          inet addr:192.168.122.11  Bcast:192.168.122.255  Mask:255.255.255.0
          inet6 addr: fe80::fccd:8bff:fe0d:7548/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:19 errors:0 dropped:0 overruns:0 frame:0
          TX packets:14 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:1503 (1.4 KiB)  TX bytes:1800 (1.7 KiB)

    [root@lxcguest1 ~]# route -n
    Kernel IP routing table
    Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
    0.0.0.0         192.168.122.1   0.0.0.0         UG    0      0        0 eth0
    169.254.0.0     0.0.0.0         255.255.0.0     U     1013   0        0 eth0
    192.168.122.0   0.0.0.0         255.255.255.0   U     0      0        0 eth0
    [root@lxcguest1 ~]#

Perfect. Test the ping again:

    [root@lxcguest1 ~]# ping mini.home.kikitux.net
    PING mini.home.kikitux.net (192.168.1.2) 56(84) bytes of data.
    64 bytes from alvaros-Mini.home.kikitux.net (192.168.1.2): icmp_seq=1 ttl=63 time=5.48 ms
    64 bytes from alvaros-Mini.home.kikitux.net (192.168.1.2): icmp_seq=2 ttl=63 time=3.29 ms
    ^C
    --- mini.home.kikitux.net ping statistics ---
    2 packets transmitted, 2 received, 0% packet loss, time 1664ms
    rtt min/avg/max/mdev = 3.294/4.387/5.480/1.093 ms
    [root@lxcguest1 ~]#

IMPORTANT NOTE

    As we are modifying the configuration files, please be aware that if you clone these machines, this configuration will be carried over to the new container, so you need to review and adjust the files as needed.

OK, what if we need our container on the same network as the host?

You have two options; one is to use macvlan.

From the Oracle Linux LXC documentation we have:

    9.2.5 About Veth and Macvlan

    By default, the lxc-oracle template script sets up networking by setting up a veth bridge. In this mode, a container obtains its IP address from the dnsmasq server that libvirtd runs on the private virtual bridge network (virbr0) between the container and the host. The host allows a container to connect to the rest of the network by using NAT rules in iptables, but these rules do not allow incoming connections to the container. Both the host and other containers on the veth bridge have network access to the container via the bridge.

    If you want to allow network connections from outside the host to be able to connect to the container, the container needs to have an IP address on the same network as the host. One way to achieve this configuration is to use a macvlan bridge to create an independent logical network for the container. This network is effectively an extension of the local network that is connected to the host's network interface. External systems can access the container as though it were an independent system on the network, and the container has network access to other containers that are configured on the bridge and to external systems. The container can also obtain its IP address from an external DHCP server on your local network. However, unlike a veth bridge, the host system does not have network access to the container.

OK, I see macvlan as a very good feature if you want the LXC guest to talk to the network and be independent of the host.
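
For reference, a macvlan setup is only a small change in the container config; a sketch, assuming the host's physical interface is eth0:

    lxc.network.type = macvlan
    lxc.network.macvlan.mode = bridge
    lxc.network.link = eth0
    lxc.network.flags = up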

But in a typical PoC/dev/test scenario, I would like to have network access to the guests from the host.

Enter the Ethernet bridge. As we are currently using the libvirtd bridge, let's create our own bridge and connect our container to it.

Let's do it on lxcguest2:

Modify ifcfg-eth0 (or the interface you want to use) and create a bridge:

    [root@ml330 container]# cat /etc/sysconfig/network-scripts/ifcfg-eth0
    DEVICE=eth0
    TYPE=Ethernet
    ONBOOT=yes
    NM_CONTROLLED=no
    BOOTPROTO=none
    BRIDGE=br0

    [root@ml330 container]# cat /etc/sysconfig/network-scripts/ifcfg-br0
    DEVICE=br0
    TYPE=Bridge
    ONBOOT=yes
    NM_CONTROLLED=no
    BOOTPROTO=none
    IPADDR=192.168.1.11
    NETMASK=255.255.255.0
    GATEWAY=192.168.1.1

After a service network restart, you will get:

    [root@ml330 container]# ifconfig eth0
    eth0      Link encap:Ethernet  HWaddr 00:26:55:F4:26:28
          inet addr:192.168.1.11  Bcast:192.168.1.255  Mask:255.255.255.0
          inet6 addr: fe80::226:55ff:fef4:2628/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:17180 errors:0 dropped:0 overruns:0 frame:0
          TX packets:15268 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:2123201 (2.0 MiB)  TX bytes:3804512 (3.6 MiB)
          Interrupt:16

    [root@ml330 container]# ifconfig br0
    br0       Link encap:Ethernet  HWaddr 00:26:55:F4:26:28
          inet addr:192.168.1.11  Bcast:192.168.1.255  Mask:255.255.255.0
          inet6 addr: fe80::226:55ff:fef4:2628/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:16872 errors:0 dropped:0 overruns:0 frame:0
          TX packets:14736 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:1767572 (1.6 MiB)  TX bytes:3687511 (3.5 MiB)

    [root@ml330 container]# route -n
    Kernel IP routing table
    Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
    0.0.0.0         192.168.1.1     0.0.0.0         UG    0      0        0 eth0
    169.254.0.0     0.0.0.0         255.255.0.0     U     1004   0        0 br0
    192.168.1.0     0.0.0.0         255.255.255.0   U     0      0        0 br0
    192.168.1.0     0.0.0.0         255.255.255.0   U     1      0        0 eth0
    192.168.122.0   0.0.0.0         255.255.255.0   U     0      0        0 virbr0
    [root@ml330 container]#

Let's modify the lxcguest2 config file.

From:

    lxc.network.type = veth
    lxc.network.link = virbr0
    lxc.network.flags = up

To:

    lxc.network.type = veth
    lxc.network.link = br0
    lxc.network.flags = up

Start the container with lxc-start -n lxcguest2:

    Oracle Linux Server release 6.5
    Kernel 3.12.6-3.12.y.20131224.ol6.x86_64 on an x86_64

    lxcguest2 login: root
    Password:
    [root@lxcguest2 ~]# ifconfig eth0
    eth0      Link encap:Ethernet  HWaddr 00:16:3E:DA:76:46
          inet addr:192.168.1.207  Bcast:192.168.1.255  Mask:255.255.255.0
          inet6 addr: fe80::216:3eff:feda:7646/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:82 errors:0 dropped:0 overruns:0 frame:0
          TX packets:16 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:28637 (27.9 KiB)  TX bytes:2144 (2.0 KiB)

    [root@lxcguest2 ~]# route -n
    Kernel IP routing table
    Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
    0.0.0.0         192.168.1.1     0.0.0.0         UG    0      0        0 eth0
    169.254.0.0     0.0.0.0         255.255.0.0     U     1015   0        0 eth0
    192.168.1.0     0.0.0.0         255.255.255.0   U     0      0        0 eth0
    [root@lxcguest2 ~]#

Sweet.

Ping from the host:

    [root@ml330 container]# ping -c 2 192.168.1.207
    PING 192.168.1.207 (192.168.1.207) 56(84) bytes of data.
    64 bytes from 192.168.1.207: icmp_seq=1 ttl=64 time=0.069 ms
    64 bytes from 192.168.1.207: icmp_seq=2 ttl=64 time=0.033 ms

    --- 192.168.1.207 ping statistics ---
    2 packets transmitted, 2 received, 0% packet loss, time 999ms
    rtt min/avg/max/mdev = 0.033/0.051/0.069/0.018 ms
    [root@ml330 container]#

Ping from another machine in the network:

    alvaros-mini:~ alvarom$ ping -c 2 192.168.1.207
    PING 192.168.1.207 (192.168.1.207): 56 data bytes
    64 bytes from 192.168.1.207: icmp_seq=0 ttl=64 time=264.480 ms
    64 bytes from 192.168.1.207: icmp_seq=1 ttl=64 time=4.265 ms

    --- 192.168.1.207 ping statistics ---
    2 packets transmitted, 2 packets received, 0.0% packet loss
    round-trip min/avg/max/stddev = 4.265/134.373/264.480/130.107 ms
    alvaros-mini:~ alvarom$

Good, easy and very fast.
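
When you are done experimenting, cleaning up is just as quick; a sketch:

    lxc-stop -n lxcguest2
    lxc-destroy -n lxcguest2     # removes the config and rootfs (check with btrfs su list afterwards)
    lxc-stop -n lxcguest1
    lxc-destroy -n lxcguest1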
