Wednesday, May 7, 2014

Salt Stack – an infrastructure management tool

Managing infrastructure becomes complex when dealing with a large number of systems, and high-speed communication between them is always a problem. So what is Salt Stack?

Consider a couple of machines that we manage. If we need to perform operations on them, such as patching or running some commands, we have to log in to each and every machine and carry out the action there. But what if we handle a large number of machines, perhaps more than 1000? Managing them by hand quickly becomes unworkable.

This is where Salt Stack comes in. Salt Stack is a configuration management tool that helps administrators perform these sorts of operations very easily. It also provides high-speed communication across the infrastructure.

We have other tools, like Puppet and Chef, which provide the same facilities. What makes Salt different is that it is written in Python and is lightweight in terms of resources and requirements. The implementation is also very simple. Salt uses ZeroMQ in its communication layer, which is really fast.

All of these tools allow us to execute commands on multiple machines at once, install and configure software, and so on.

In this article we will see how to configure and use Salt Stack to perform remote execution. For the purposes of this article I will use a single system as both master and slave. We can also configure multiple machines and use them as slaves.

One important thing to note is that Salt is a command-line tool.

Installation & Configuration

Installing Salt is very easy. The Salt documentation describes how to install it on various distributions. Check the installation docs ( http://docs.saltstack.com/en/latest/topics/installation/index.html ) for instructions on installing Salt on RHEL.

On RHEL, follow those docs to pull in the packages necessary for the installation.
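As a rough sketch (assuming the EPEL repository provides the Salt packages for your RHEL release, as the docs describe), the installation boils down to:

```shell
# Enable the EPEL repository first (the package name and release
# vary by RHEL version), then install the master and minion packages.
yum install epel-release
yum install salt-master salt-minion
```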

Once the packages are available and installed, we can see a configuration directory at /etc/salt. This directory contains two files, “master” and “minion”.

Now that the files are available, we first need to make some configuration changes to both of them. The terms master and minion refer to the controller and the controlled: the master is the central controller for all running minions, much like a master-slave configuration.

Once we confirm these files are available, execute the command “salt-master” and keep it running in the background. Now let's configure the minion.

The first thing we need to configure is a way for the minion (the slave) to communicate with the master. This is done in the minion configuration file. Uncomment these lines and provide the necessary data:

master: 172.16.101.68 <IP address of the master system >
id: testminion  <Name of the minion >

Once the changes are done, save them and restart the minion using the “salt-minion -d” command. The -d flag daemonizes the process and starts the minion in the background.

The next step is to accept the minion keys. With the above configuration the minion knows where the master is. Salt uses public-key encryption to secure the communication between master and minion, and we tell them they can trust each other by accepting the minion's key on the master.

[root@vx111a salt]# salt-key -L
Accepted Keys:
Unaccepted Keys:
testminion
Rejected Keys:

Use the “salt-key -L” command to list all accepted, unaccepted, and rejected minion keys. When I ran the command I saw an unaccepted key from testminion, the minion we configured above.

To accept the testminion key, execute “salt-key -a testminion”:

[root@vx111a salt]# salt-key -a testminion
The following keys are going to be accepted:
Unaccepted Keys:
testminion
Proceed? [n/Y] y
Key for minion testminion accepted.

Once we accept the key, we can test the communication using “salt '*' test.ping”:

[root@vx111a salt]# salt '*' test.ping
testminion:
    True

The “salt '*' test.ping” command tests all available minions. The wild-card “*” targets every minion, and since we have only one minion, “testminion”, we get the status of just that one. The response “True” indicates that the communication happened successfully.

A salt command consists of the salt command itself, the target, and the action to run. If we want to execute a command on the available minions, we can use:

salt '*' cmd.run "service httpd restart"
salt '*' cmd.run "uptime"

Any command we run must be available on the minions; in the above case, httpd must be installed for the restart command to work. In the next article, we will look at Salt Stack's configuration management options.

Happy Learning, More to come.

Monday, May 5, 2014

Perl : File Locking using flock

As an application server admin, I often have to write scripts for various automation operations in WebLogic. One such task is deploying applications. A WebLogic domain does not allow multiple operations to be performed at the same time: we have to take a lock first and then perform the action, and if the lock has already been taken by a different operation, a message is sent back saying “lock already taken”.

In my case, we have our own scripts that do deployments in WebLogic for us, using WLST for the actual deployment. My task was to make sure that only one deployment is performed on a domain at a time, with every subsequent one waiting until the first completes. This is essentially a locking problem.

Perl provides many modules and built-ins that make administrative tasks easy. One such facility is the built-in flock function, which allows a process to take a lock on a file and hold it. Let's write a sample script and see how it works.

Here is a sample script that I wrote for testing purposes.

#!/usr/bin/perl
#  --------
#  PROGRAM:  lock.pl  (Sample Script for Flock Test)
#  --------

$LOCK_SHARED      = 1;
$LOCK_EXCLUSIVE   = 2;
$LOCK_NONBLOCKING = 4;
$UNLOCK           = 8;

# open the file, lock the file, sleep, then write, then unlock the file, then close the file.

open (FILE, ">> test.dat") || die "problem opening test.dat\n"; 
flock FILE, $LOCK_EXCLUSIVE;
sleep 10;
print FILE "this line printed by lock.pl\n";
flock FILE, $UNLOCK;
close(FILE);

In the above case, we take an exclusive lock on the test.dat file, sleep for 10 seconds, write some content to it, and then unlock it.

There are 4 lock types available:

shared lock: a shared lock can be held by other processes too. If one process takes a shared lock to read a file, another process can also take a shared lock on the same file for reading. This lock is normally used when you only want to read the file.

exclusive lock: this is the lock I am planning to use in the code; it is used when you want to make changes to the file. Only one exclusive lock can be held on a file at a time, so only one process can make changes.

non-blocking: a non-blocking lock request means the process does not have to wait if an incompatible lock is held by another process; instead it can take some other action immediately.

unlock: releases the lock. A call to this is normally not needed, because the lock is released automatically once the process closes the file or exits.
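The same semantics can be observed from the shell with the util-linux flock(1) utility (a quick sketch, separate from the Perl scripts): one process holds an exclusive lock while a second process makes a non-blocking request, which fails immediately instead of waiting.

```shell
#!/bin/sh
# One process holds an exclusive lock; a second process makes a
# non-blocking request, which fails instead of waiting.
lockfile=$(mktemp)

# Holder: take an exclusive lock (like LOCK_EX) and keep it for 2 seconds.
( flock -x 9; sleep 2 ) 9>"$lockfile" &
holder=$!

sleep 1   # give the holder time to acquire the lock

# Non-blocking request (like LOCK_EX|LOCK_NB): returns immediately.
if flock -n "$lockfile" -c true; then
  status="acquired"
else
  status="lock already taken"
fi
echo "$status"

wait "$holder"
rm -f "$lockfile"
```

The second request fails with "lock already taken" because the holder still owns the exclusive lock; without -n it would simply block until the holder released it.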

The next sample also checks whether the file has already been locked. Here is the logic:

#!/usr/bin/perl

$LOCK_SHARED      = 1;
$LOCK_EXCLUSIVE   = 2;
$LOCK_NONBLOCKING = 4;
$UNLOCK           = 8;

# -----------------------
# What's about to happen:
# -----------------------
# open the file, check whether it is already locked, lock the file,
# then write, then unlock the file, then close the file.

open (FILE, ">> test.dat") || die "problem opening test.dat\n";

if ( is_file_locked("test.dat") ){ print "locked\n";} else { print "not locked\n";}

flock FILE, $LOCK_EXCLUSIVE;
print FILE "this line printed by try.pl\n";
flock FILE, $UNLOCK;
close(FILE);

# Try a non-blocking exclusive lock on the file; if the request
# fails, some other process already holds a lock on it.
sub is_file_locked
{
  my ($theFile) = @_;

  open(my $HANDLE, ">>", $theFile) || die "problem opening $theFile\n";
  my $theRC = flock($HANDLE, $LOCK_EXCLUSIVE | $LOCK_NONBLOCKING);
  close($HANDLE);
  return !$theRC;
}

The above script additionally reports whether the lock has already been taken.

Now, for my actual task, I had to use the same locking technique and make sure that a second process checks the lock continuously and performs its action only once the lock becomes available.

Here is the sample code for that:

#!/usr/bin/perl

#  --------
#  PROGRAM:  try.pl  (Sample Flock Script)
#  --------

$LOCK_SHARED      = 1;
$LOCK_EXCLUSIVE   = 2;
$LOCK_NONBLOCKING = 4;
$UNLOCK           = 8;


#Get the Domain Name for the Cluster Passed
my @values = split('-', $ARGV[0]);
my $cluster= $values[0];
my $lock_file="/logs/jas/jagadish/$cluster.lock";


for (;;)
{
   # See whether a lock file already exists for this domain.
   my $lock_status=check_lock_exists();


   if ($lock_status == 0)
   {
       print "-----Lock  exist,sleeping for 10------\n";
      # Lock file still exists.
      # Sleep for 10 seconds, then check the lock status again.
       sleep(10);
   }
   else
   {
      get_lock();
      exit 0;
   }
}


#Sub Routine to Check Whether the Lock File Exists
sub check_lock_exists() {
  
   my $status;

   if (-e $lock_file) {
        $status = 0;
    } else {
        $status  = 1;
    }
  
   return $status;
 }


sub get_lock()    {
     open (FILE, ">>$lock_file") || die "problem opening lock File\n";
     flock FILE, $LOCK_EXCLUSIVE;
     print "----Obtained Lock----\n";
     sleep 20;
     flock FILE, $UNLOCK;
     close(FILE);
     unlink($lock_file) or die "can't remove lockfile: $lock_file ($!)";
}


This script works as follows:

1. On the first deployment, the script checks the domain passed to it and creates a lock file in the specified location, named after the domain, e.g. MyDomain.lock. Every domain gets its own domainName.lock file, so we can detect when multiple deployments target the same domain.

2. A second deployment checks the domain, and if a lock file for it already exists in that location (a deployment is in progress and holds the lock), it waits continuously until the lock is released (the lock file is deleted at the end, and the second deployment then creates the same lock file for its own run). Here are the test results:


First Window
[djas999@vx181d jagadish]$ perl getlock.pl MyDomain
----Obtained Lock----

Second Window

[djas999@vx181d jagadish]$ perl getlock.pl MyDomain
-----Lock  exist,sleeping for 10------
-----Lock  exist,sleeping for 10------
----Obtained Lock----

Third Window
[djas999@vx181d jagadish]$ perl getlock.pl MyDomain
-----Lock  exist,sleeping for 10------
-----Lock  exist,sleeping for 10------
-----Lock  exist,sleeping for 10------
-----Lock  exist,sleeping for 10------
----Obtained Lock----


In the above case (all three were started at the same time), the domain I chose is MyDomain, and each script continuously checks for the lock until it becomes available.
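The wait-and-retry behaviour seen in the three windows can be sketched in plain shell (a compressed illustration: a 1-second poll instead of 10, and a temporary path standing in for the real lock-file location):

```shell
#!/bin/sh
# Sketch of the lock-file polling loop: another "deployment" holds
# the lock and releases it after 1 second; we poll until the file
# disappears, then take the lock ourselves.
lock_file=$(mktemp -u)   # stands in for the domainName.lock path

touch "$lock_file"                  # an existing deployment holds the lock
( sleep 1; rm -f "$lock_file" ) &   # ...and releases it after 1 second

attempts=0
while [ -e "$lock_file" ]; do
  attempts=$((attempts + 1))
  echo "-----Lock  exist,sleeping for 1------"
  sleep 1
done

touch "$lock_file"                  # now take the lock for our deployment
echo "----Obtained Lock----"
rm -f "$lock_file"                  # release it when the work is done
wait
```

Each waiting process simply loops on the existence of the lock file, exactly as the second and third windows do before printing "----Obtained Lock----".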