Thursday, March 24, 2016

CGroup Case 2 – Device Whitelisting

The devices subsystem in cgroups provides fine-grained control over system devices. An admin can define cgroups that restrict access to particular devices and specify which users or groups can access them, providing security and data protection.

In this article we will see how we can whitelist a device using Cgroups.

Add the Configurations
The first thing we need to do is add the configuration to the /etc/cgconfig.conf file:

group blockDevice {
    devices {
        # Deny access to /dev/sda2
        devices.deny = "b 8:2 mrw";
    }
}
 
In the above snippet we have blocked access to the device /dev/sda2. The devices subsystem accepts a parameter "devices.deny", whose value consists of a device type, the major and minor numbers, and an access mode. Let's break down the value used above:

b – the type of the device. There are 3 types:
  • a — applies to all devices, both character devices and block devices
  • b — specifies a block device
  • c — specifies a character device

8:2 – the major and minor numbers. These can be found using:

[root@vx111a dev]# ls -l sda2
brw-rw---- 1 root disk 8, 2 Mar 15 14:24 sda2

[Note: Major for /dev/sda2 is 8 and minor is 2]
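The numbers can also be read with stat instead of ls; /dev/null is used below because it exists on every Linux system (its major:minor pair is 1:3), while the same command run against /dev/sda2 on this box would print 8:2:

```shell
# %t and %T print a device node's major and minor numbers (in hex,
# without a leading 0x). /dev/null is a portable stand-in here;
# running this against /dev/sda2 would print 8:2 instead.
stat -c '%t:%T' /dev/null
```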

mrw – the access mode, a sequence of one or more of the following letters:
  • r — allows tasks to read from the specified device
  • w — allows tasks to write to the specified device
  • m — allows tasks to create device files that do not yet exist

devices.deny - specifies devices that tasks in a cgroup cannot access. The syntax of entries is identical to that of devices.allow.
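The same rule can also be applied at runtime without cgconfig, by writing directly into the virtual files of the devices hierarchy. This is a sketch that assumes the default mount point /sys/fs/cgroup/devices and root privileges:

```shell
# Create the cgroup directory by hand (cgconfig normally does this).
mkdir -p /sys/fs/cgroup/devices/blockDevice

# Deny mknod, read and write on block device 8:2 (/dev/sda2).
echo "b 8:2 mrw" > /sys/fs/cgroup/devices/blockDevice/devices.deny

# devices.list shows the access rules currently in effect.
cat /sys/fs/cgroup/devices/blockDevice/devices.list
```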

Now, once the cgconfig file is configured, we move on to the cgred configuration, which assigns processes to the cgroups we defined.


[root@vx111a tmp]# cat /etc/cgrules.conf 
*:bash           devices      blockDevice/

Here we have added bash to the cgrules file. As a result, commands run from a bash prompt that try to access /dev/sda2 will be restricted.
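If cgred is not running, or for a shell that was already running before the rule was added, the process can be placed into the cgroup manually with cgclassify from libcgroup-tools. A sketch, assuming the blockDevice group already exists:

```shell
# Move the current shell ($$ expands to its PID) into the
# blockDevice cgroup of the devices subsystem.
cgclassify -g devices:blockDevice $$
```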

Start the Service
Start both the services using the commands:
service cgconfig restart
service cgred restart

Testing
In order to test the cgroup, we first need to make sure that the PID of the bash prompt is present in the cgroup we created.

Find the PID of the current bash prompt using:
[root@vx111a docker]# ps -p $$
  PID TTY          TIME CMD
 8966 pts/2    00:00:00 bash

Run lscgroup and make sure that the devices subsystem cgroup is active:
[root@vx111a docker]# lscgroup | grep block
devices:/blockDevice

Check the PID – once the subsystem is active, we need to check that the PID obtained above appears in the tasks file. This is a special file containing all the PIDs attached to the cgroup. To check, go to /sys/fs/cgroup/devices/blockDevice and view the tasks file:

[root@vx111a blockDevice]# cat tasks
8966
9096
9638

We can see that 8966 is present. Next, check where /dev/sda2 is mounted:

[root@vx111a tmp]# df -hT
Filesystem     Type      Size  Used Avail Use% Mounted on
/dev/sda2      xfs        49G  207M   49G   1% /test

So we have the device mounted on /test. Now, if we run a command from the bash prompt that accesses the device, the cgroup should deny it. We can check using:

[root@vx111a tmp]# dd if=/dev/sda2 of=/dev/null bs=512 count=1
dd: failed to open ‘/dev/sda2’: Operation not permitted

We can see that the current bash prompt does not have access to /dev/sda2.

More to come, happy learning!

CGroup Case 1 - I/O throttling

In the first use case we will see how we can manage I/O operations on disk devices. This is done with the blkio subsystem available in cgroups, which moderates I/O operations to the specified block devices. In this example we will see how we can restrict the read operations performed on a drive.

For this we use the "blkio.throttle.read_bps_device" parameter, which specifies an upper limit on the rate at which a device can be read, in bytes per second. Its value takes the form major:minor bytes_per_second.

major and minor – the device's major and minor node numbers as assigned by Linux
bytes_per_second – the upper limit rate at which read operations can be performed
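Since the parameter takes plain bytes per second, it helps to spell out the conversion; shell arithmetic makes the values explicit:

```shell
# 1 MB/s expressed in bytes per second, as the parameter expects.
echo $((1 * 1024 * 1024))    # 1048576

# For comparison, a 10 MB/s limit would be:
echo $((10 * 1024 * 1024))   # 10485760
```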

Now let's limit read operations on the device /dev/sda to 1 MB per second. For this we first need to find the major and minor numbers for the device. These can be found using:

[root@vx111a dev]# ls -l sd*
brw-rw---- 1 root disk 8, 0 Mar 15 14:24 sda
brw-rw---- 1 root disk 8, 1 Mar 15 14:24 sda1
brw-rw---- 1 root disk 8, 2 Mar 15 14:24 sda2
brw-rw---- 1 root disk 8, 3 Mar 15 14:24 sda3
brw-rw---- 1 root disk 8, 4 Mar 15 14:24 sda4
brw-rw---- 1 root disk 8, 5 Mar 15 14:24 sda5
brw-rw---- 1 root disk 8, 6 Mar 15 14:24 sda6
[Note: Major for /dev/sda1 is 8 and minor is 1]

OR

[root@vx111a ~]# cat /proc/partitions | grep sda
   8        0  488386584 sda
   8        1   81920000 sda1
   8        2   51200000 sda2
   8        3   51200000 sda3
   8        4          1 sda4
   8        5   10240000 sda5
   8        6   10240000 sda6

In this case, for /dev/sda, the major number is 8 and the minor number is 0. Let's first run the hdparm command without the cgroup and see the disk read rate for /dev/sda.

NOTE - hdparm is the tool to use when it comes to tuning your hard disk or DVD drive, but it can also measure read speed, deliver valuable information about the device, change important drive settings, and even erase SSDs securely.

[root@vx111a ~]# hdparm --direct -t /dev/sda
/dev/sda:
 Timing O_DIRECT disk reads: 368 MB in  3.00 seconds = 122.42 MB/sec

We can see that the disk read rate is about 122 MB per second. Now we want to restrict it to 1 MB per second.

Now create a group in the /etc/cgconfig.conf file:

group limitIO {
    blkio {
        # 1048576 bytes/s = 1 MB/s read limit for device 8:0 (/dev/sda)
        blkio.throttle.read_bps_device = "8:0 1048576";
    }
}

We have defined a limitIO group using the blkio subsystem. Now let's configure the cgrules.conf file by adding the line below to the end of the file:

*:hdparm      blkio    limitIO/

This says that operations performed by the hdparm command are to be placed in the blkio subsystem and limited by the limitIO group.
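Alternatively, instead of relying on a cgred rule, the command can be launched inside the cgroup explicitly with cgexec from libcgroup-tools:

```shell
# Run hdparm directly inside the limitIO cgroup of the blkio
# subsystem, bypassing cgrules.conf entirely.
cgexec -g blkio:limitIO hdparm --direct -t /dev/sda
```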

Now restart both the services and run the lssubsys command to check the configuration,

[root@vx111a /]#lssubsys
cpuset:/
perf_event:/
hugetlb:/
blkio:/
blkio:/limitIO
memory:/
memory:/testOOM
net_cls:/

We can see that the limitIO group is associated with the blkio subsystem. Once the services are restarted, run hdparm again and check the values.

Testing
Now test the cgroup using the hdparm command as,

[root@vx111a ~]# hdparm --direct -t /dev/sda
/dev/sda:
 Timing O_DIRECT disk reads:   4 MB in  4.00 seconds = 1023.38 kB/sec

We can see that the read rate is now limited to about 1 MB per second, matching the configured limit.

More to come, happy learning!

CGroups

Resource exhaustion is one of the common issues when running production machines. There are cases where servers crash because another process is using too much memory or running CPU-intensive code. It is always good to have a way to control resource usage. On larger systems, kernel resource controllers (also known as control groups, or cgroups) can help priority applications get the necessary resources by limiting the resources available to other applications.

In this article we will see how we can use Cgroups to manage resources. According to kernel documentation, Control Groups provide a mechanism for aggregating/partitioning sets of tasks, and all their future children, into hierarchical groups with specialized behavior.

Cgroups is a Linux kernel feature to limit, account for, and isolate the resource usage (CPU, memory, disk I/O) of process groups. Using it we gain control over allocating, prioritizing, managing, and monitoring system resources. Cgroups can also be thought of as a generic framework into which resource controllers can be plugged, which are then used to manage different resources of the system.

A resource controller can be a memory, CPU, network, or disk I/O controller. As the name suggests, each of these controllers performs its corresponding function, such as the memory controller managing the memory of processes.

The types of resources that can be managed by cgroups include the following:
  • blkio (storage) — limits total input and output access to storage devices (such as hard disks, USB drives, and so on)
  • cpu (processor scheduling) — controls how much access a cgroup has to CPU scheduling time
  • memory — limits memory usage by task; it also reports on the memory resources used
  • cpuacct (process accounting) — reports on CPU usage; this information can be leveraged to charge clients for the amount of processing power they use
  • cpuset (CPU assignment) — on systems with multiple CPU cores, assigns a task to a particular set of processors and associated memory
  • freezer (suspend/resume) — suspends and resumes cgroup tasks
  • net_cls (network bandwidth) — limits network access for selected cgroup tasks

There are other resources that can also be managed by cgroups; check the kernel documentation for more details.

Installation
The easiest way to work with cgroups is to install the libcgroup package, which contains the necessary tools and utilities. libcgroup is a library that abstracts the control group file system in Linux.

[root@vx111a work]# yum list installed | grep libcgroup
libcgroup.x86_64                       0.41-8.el7                      @anaconda
libcgroup-tools.x86_64               0.41-8.el7                      @anaconda

Install the libcgroup library and we can start from there.

Configuration
Cgroups are implemented using a file system-based model—just as you can traverse the /proc tree to view process-related information, you can examine the hierarchy at /cgroup to examine current control group hierarchies, parameter assignments, and associated tasks.

Once the libcgroup package is installed we get 2 services:
cgconfig
cgred

cgconfig – The cgconfig service installed with the libcgroup package provides a convenient way to create hierarchies, attach subsystems to the hierarchies, and manage cgroups within those hierarchies. The service is not started by default. It reads the file /etc/cgconfig.conf and, depending on the contents of the file, creates hierarchies, mounts the necessary file systems, creates cgroups, and sets the subsystem parameters.

The default /etc/cgconfig.conf file installed with the libcgroup package creates and mounts an individual hierarchy for each subsystem, and attaches the subsystems to these hierarchies. In other words this is used to define control groups, their parameters and also mount points.

Once the cgconfig service is started, a virtual file system is mounted. This can be either /cgroup or /sys/fs/cgroup.

[root@vx111a conf.d]# ll /sys/fs/cgroup/
total 0
drwxr-xr-x 2 root root  0 Mar 16 13:35 blkio
lrwxrwxrwx 1 root root 11 Mar 15 14:24 cpu -> cpu,cpuacct
lrwxrwxrwx 1 root root 11 Mar 15 14:24 cpuacct -> cpu,cpuacct
drwxr-xr-x 5 root root  0 Mar 16 13:36 cpu,cpuacct
drwxr-xr-x 2 root root  0 Mar 15 14:24 cpuset
drwxr-xr-x 4 root root  0 Mar 15 14:24 devices
drwxr-xr-x 2 root root  0 Mar 15 14:24 freezer
drwxr-xr-x 2 root root  0 Mar 15 14:24 hugetlb
drwxr-xr-x 2 root root  0 Mar 16 13:36 memory
drwxr-xr-x 2 root root  0 Mar 15 14:24 net_cls
drwxr-xr-x 2 root root  0 Mar 15 14:24 perf_event
drwxr-xr-x 4 root root  0 Mar 15 14:24 systemd
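Each directory above is a separately mounted hierarchy. If a hierarchy is missing, it can be mounted by hand; a sketch using the devices subsystem as an example (cgconfig normally does this for you):

```shell
# Mount the cgroup virtual filesystem for the devices subsystem.
mkdir -p /sys/fs/cgroup/devices
mount -t cgroup -o devices devices /sys/fs/cgroup/devices
```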

The configuration file contains group elements; the resources to be managed are defined inside them. A simple configuration looks like:

group group1 {
    cpu {
        cpu.shares = "800";
    }
    cpuacct {
        cpuacct.usage = "0";
    }
    memory {
        memory.limit_in_bytes = "4G";
        memory.memsw.limit_in_bytes = "6G";
    }
}

In the above snippet we defined a group named group1, in which we defined the subsystems we want to manage: the cpu, cpuacct, and memory subsystems.

The cpu.shares parameter specifies the relative share of CPU time available to the tasks in the cgroup.

The memory.limit_in_bytes parameter specifies the amount of memory this group may use; processes associated with this group will be limited to 4 GB of memory.
The memory.memsw.limit_in_bytes parameter specifies the total amount of memory plus swap space that processes may use.
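The suffixed values cgconfig accepts (4G, 6G) correspond to plain byte counts; the equivalents are easy to verify with shell arithmetic:

```shell
# 4G and 6G as raw byte values, the form the kernel reports back
# when you read memory.limit_in_bytes from the cgroup filesystem.
echo $((4 * 1024 * 1024 * 1024))   # 4294967296
echo $((6 * 1024 * 1024 * 1024))   # 6442450944
```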

cgred is a service that moves tasks into cgroups according to the rules set in the /etc/cgrules.conf file. This file contains a list of rules, each assigning a control group in a subsystem to a given user or group.

Each entry in the configuration file takes the form:
user    subsystems    control_group

A sample rule would look like:
*:java    memory  group1

In the above snippet we have defined a rule such that every java process that starts will be added to group1 in the memory subsystem. So all java processes will be subject to the 4G memory limit we defined in the /etc/cgconfig.conf file.
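Once cgred has placed a process, its membership can be verified through /proc; the PID below is a placeholder for a running java process:

```shell
# Each line shows one hierarchy and the cgroup the process is in,
# e.g. "4:memory:/group1". 1234 is a hypothetical PID.
cat /proc/1234/cgroup
```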

Start the Service
Start both the services using the commands:
service cgconfig restart
service cgred restart

Once the services are started, we can check our configuration using the lscgroup command:

[root@vx111a conf.d]# lscgroup -g cpu:/
cpu,cpuacct:/
cpu,cpuacct:/group1
cpu,cpuacct:/browsers
memory:/group1
devices:/blockDevice

lssubsys - The command lssubsys -am lists all subsystems present in the system, with mounted ones shown with their mount point; plain lssubsys lists just the subsystem names:

[root@vx111a docker]# lssubsys
cpuset
cpu,cpuacct
memory
devices
freezer
net_cls
blkio
perf_event
hugetlb

In the next article we will see some use cases for cgroups. More to come. Happy learning!