Thursday, July 25, 2013

Recover Accidentally Deleted Files in Linux

While working as an admin, I have sometimes removed important log or configuration files accidentally. In a GUI-based Linux environment we can recover the file from the Trash, but in command-line Linux mode it is a little more complex. This article shows how to recover a file that was accidentally deleted at the Linux command line.

Consider a JBoss process that is writing its log output to a file named jboss.log, and suppose this file is deleted.

One important thing to keep in mind is that the process using the deleted file must keep running, or else the file's data is gone for good. So in the above case, if jboss.log is deleted but the JBoss process is still running, we can recover the file.

lsof (LiSt Open Files) is a command available on Linux that shows which files are currently open in each process, along with various other details. This command is what helps us recover the file.

If you just run lsof:

root@hunter-tmp $ lsof | head
COMMAND  PID  USER  FD   TYPE     DEVICE  SIZE/OFF  NODE  NAME
init     1    root  cwd  unknown                          /proc/1/cwd
init     1    root  rtd  unknown                          /proc/1/root

In this output, the COMMAND, PID, and USER columns give the name of the process, its process identifier (PID), and its owner, respectively. The DEVICE, SIZE/OFF, NODE, and NAME columns describe the file itself: the disk it lives on, the size of the file, its inode (the file's identification on the disk), and its actual name.

The FD and TYPE columns are the most important ones and tell us how the file is being used. The FD column holds the file descriptor, which is how the application sees the file. The TYPE column describes what form the file takes.

The cwd value refers to the application's current working directory, the directory the application was started from. A number in the FD column is an actual file descriptor: an integer returned when the file was opened.

So when we execute lsof and grep for the deleted file:

root@hunter - $ lsof | grep jboss.log
java       1786   ds002   85w      REG    253,1    63186    1015840 /software/jboss/6.0/logs/ABC-A2/jboss.log (deleted)
java       4566   ds002   64w      REG    253,1        0       1015847 /software/jboss/6.0/logs/DEF-A2/jboss.log

We can see two files with the same name, but the first is marked (deleted). This is the file we need to recover.

As mentioned earlier, an integer in the FD column is the application's file descriptor, returned when the file was opened. Each application starts with three file descriptors, 0 through 2, for the standard input, output, and error streams, respectively.

A u suffix means the file is open in read/write mode, rather than read-only (r) or write-only (w). Since 0 through 2 are taken, files opened by the application itself usually start at FD 3.
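As a quick illustration (a sketch; /proc is Linux-specific), a process can inspect its own descriptor table under /proc/self/fd, where the three standard descriptors are present from the start:

```shell
# List which of the three standard descriptors the current shell holds open.
# Each entry under /proc/self/fd is a symlink to the open file.
std_fds=$(for fd in 0 1 2; do
  [ -e "/proc/self/fd/$fd" ] && echo "$fd"
done)
echo "$std_fds"
```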

In the above case the FD is 85w: descriptor 85, open for writing. The data is therefore still reachable under /proc/<PID>/fd/


When we go to this location we see a number of integers, each of which is a file descriptor. Among them is 85, a symlink to the file that was deleted. Copying the contents of the 85 entry (which are the contents of the deleted file) to another file recovers the data.
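The whole recovery can be sketched end to end with a throwaway file (a hypothetical demo: the file name, the tail process standing in for JBoss, and all paths are illustrative, not the jboss.log case itself):

```shell
#!/bin/sh
# Sketch: recover a deleted file while a process still holds it open.
set -e
tmpdir=$(mktemp -d)
echo "important log line" > "$tmpdir/app.log"

# Keep the file open with a background reader, then delete it.
tail -f "$tmpdir/app.log" > /dev/null &
pid=$!
sleep 1
rm "$tmpdir/app.log"

# Walk /proc/<PID>/fd/ looking for the symlink marked (deleted).
recovered="$tmpdir/app.log.recovered"
for fd in /proc/$pid/fd/*; do
  case "$(readlink "$fd")" in
    *"app.log (deleted)") cp "$fd" "$recovered" ;;
  esac
done

kill $pid
cat "$recovered"
```

The copy works because the symlink under /proc still points at the open inode, even though the directory entry is gone.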

Happy Learning


Friday, July 19, 2013

Netcat

Netcat is a versatile tool that can read and write data across TCP and UDP network connections. It's often used for testing and debugging network connections. In its most basic usage, netcat lets you feed a stream of data to a specific port on a specific host.

What netcat does is open a connection between two machines and hand back two streams. More advanced uses include building a server, transferring files, chatting with friends, streaming media, or using it as a standalone client for other protocols.

A few basic examples:

Find Out the Open Ports On A Remote Machine

(root)-(jobs:0)-(~) -> /usr/bin/nc -z -v -n xxx.xxx.xxx.xx 10000-10020
nc: connect to xxx.xxx.xxx.xx port 10000 (tcp) failed: Connection refused
Connection to xxx.xxx.xxx.xx 10001 port [tcp/*] succeeded!
Connection to xxx.xxx.xxx.xx 10002 port [tcp/*] succeeded!
nc: connect to xxx.xxx.xxx.xx port 10003 (tcp) failed: Connection refused
nc: connect to xxx.xxx.xxx.xx port 10004 (tcp) failed: Connection refused
nc: connect to xxx.xxx.xxx.xx port 10005 (tcp) failed: Connection refused
nc: connect to xxx.xxx.xxx.xx port 10006 (tcp) failed: Connection refused
Connection to xxx.xxx.xxx.xx 10007 port [tcp/*] succeeded!
nc: connect to xxx.xxx.xxx.xx port 10008 (tcp) failed: Connection refused
Connection to xxx.xxx.xxx.xx 10009 port [tcp/*] succeeded!
nc: connect to xxx.xxx.xxx.xx port 10010 (tcp) failed: Connection refused
Connection to xxx.xxx.xxx.xx 10011 port [tcp/*] succeeded!
Connection to xxx.xxx.xxx.xx 10012 port [tcp/*] succeeded!
nc: connect to xxx.xxx.xxx.xx port 10013 (tcp) failed: Connection refused
nc: connect to xxx.xxx.xxx.xx port 10014 (tcp) failed: Connection refused
nc: connect to xxx.xxx.xxx.xx port 10015 (tcp) failed: Connection refused
nc: connect to xxx.xxx.xxx.xx port 10016 (tcp) failed: Connection refused
nc: connect to xxx.xxx.xxx.xx port 10017 (tcp) failed: Connection refused
nc: connect to xxx.xxx.xxx.xx port 10018 (tcp) failed: Connection refused
Connection to xxx.xxx.xxx.xx 10019 port [tcp/*] succeeded!
nc: connect to xxx.xxx.xxx.xx port 10020 (tcp) failed: Connection refused

The -z option tells netcat to use zero-I/O mode: the connection is closed as soon as it opens and no actual data exchange takes place.
The -v option turns on verbose output.
The -n option tells netcat not to do a DNS lookup for the address.

This command prints all the open ports between 10000 and 10020.
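When scanning a larger range, it helps to keep only the lines for open ports. A small filter sketch, run here against captured output so it is self-contained (a live run would pipe `nc -z -v -n <host> <range> 2>&1` directly; the 10.0.0.5 address is made up):

```shell
# Captured scan output (abridged, hypothetical host 10.0.0.5).
scan_output='nc: connect to 10.0.0.5 port 10000 (tcp) failed: Connection refused
Connection to 10.0.0.5 10001 port [tcp/*] succeeded!
nc: connect to 10.0.0.5 port 10003 (tcp) failed: Connection refused
Connection to 10.0.0.5 10007 port [tcp/*] succeeded!'

# Keep only the successes and print the port number (4th field).
open_ports=$(printf '%s\n' "$scan_output" | awk '/succeeded/ {print $4}')
echo "$open_ports"
```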

Find Whether a Port is Open or Not
(root)-(jobs:0)-(~)-> nc -v xxx.xxx.xxx.xx 10001
Connection to xxx.xxx.xxx.xx 10001 port [tcp/scp-config] succeeded!

A Basic Chat Example

Those two streams can be used to make two processes talk to each other, turning netcat into a network chat and making it do I/O over the network. Here is how we do that:

Dev:vx1000:root-~ $ nc -l 1567    (on Machine A)
this is jagadish
smooth is good

Dev:vx1001:root-~ $ nc xxx.xxx.xxx.xx 1567    (on Machine B, connecting to Machine A's IP address on the port we started listening on)
this is jagadish
smooth is good

You can see a chat sort of application here.

This also tests connectivity between the server and client: for example, whether an iptables rule is blocking the connection to a socket, or whether there are other network problems. Note that everything typed on the client side is mirrored to the server in plain text, which makes it insecure.

Sending Files
Similarly, the netcat command can be used to send files over the wire:

nc -l 1567 < cleaner-logs.log    (on Machine A)

And on Machine B we connect to Machine A's IP address on the same port:

nc xxx.xxx.xxx.xx 1567 > jas-clean-logs.log

To perform the transfer in the reverse direction, with B as the server:

$nc -l 1567 > file.txt    (on Machine B)

$nc xxx.xx.xx.xx 1567 < file.txt    (on Machine A)

We can also transfer directories:

$tar -cvf - dir_name | nc -l 1567    (on Machine A)

$nc -n xxx.xx.xx.xx 1567 | tar -xvf - ( On Machine B)

Here on server A we create the tar archive and direct its output to the console through the - argument, then pipe it to netcat, which sends it over the network.

On the client we download the archive from the server with netcat and pipe its output to tar to extract the files.

Specify Source Address

Suppose your machine has more than one address and you want to say explicitly which address to use for outgoing data. You can specify that source IP address with netcat's -s option (the example below also uses -u, so the transfer runs over UDP).
Server

$nc -u -l 1567 < file.txt
Client
$nc -u 172.31.100.7 1567 -s 172.31.100.5 > file.txt

Telnet-like Usage
Netcat can be used to talk to servers the way telnet does.

(! 1023)-> nc dict.org 2628
220 pan.alephnull.com dictd 1.12.0/rf on Linux 3.0.0-14-server <auth.mime> <19096070.16921.1373876364@pan.alephnull.com>
DEFINE wn server
150 1 definitions retrieved
151 "server" wn "WordNet (r) 3.0 (2006)"
server
n 1: a person whose occupation is to serve at table (as in a
restaurant) [syn: {waiter}, {server}]
2: (court games) the player who serves to start a point
3: (computer science) a computer that provides client stations
with access to files and printers as shared resources to a
computer network [syn: {server}, {host}]
4: utensil used in serving food or drink
.
250 ok [d/m/c = 1/0/17; 0.000r 0.000u 0.000s]

Netcat can also be used to set up a telnet-like server in a matter of seconds. You can specify the shell (or, for that matter, any executable) you want netcat to run on a successful connection with the -e parameter:

nc -lp 1337 -e /bin/bash

Wednesday, July 10, 2013

Syslog-ng: Log Consolidation - Tailing Files

Consider a case where we need to use syslog-ng for files that don't go through any log4j configuration. Say we want garbage-collection logs sent to the syslog server; these logs are written directly by the Java JVM, so there is no way to route them with any sort of log4j configuration. In this case we can use syslog-ng as follows.

1. Configure /syslog-ng/etc/syslog-ng/syslog-ng.conf with

#----------------------------------------------------------------------
# Sources
#----------------------------------------------------------------------
source s_gc { file ( "/logs/jboss/ews/domains/abc/hello" follow_freq(1) flags(no-parse) ); };

#In the above case we are going to tail the file named "hello" for any content change and send the updated content to the syslog server

#----------------------------------------------------------------------
# Destinations
#----------------------------------------------------------------------
destination d_gc { udp ( "198.12.34.22" port(59503) ); };

#configure the Destination like

#----------------------------------------------------------------------
# Logging
#----------------------------------------------------------------------
log { source ( s_gc ); destination ( d_gc ); };

#configure the Log Source

The above configuration is done on the sender side, since we don't have any log4j-style configuration here.

Once the configuration is done, start the process using
/syslog-ng/sbin/syslog-ng -f /syslog-ng/etc/syslog-ng/syslog-ng.conf

We start the process manually because there is no log4j-style configuration to push content to the syslog server automatically. The syslog-ng process itself tails the logs continuously for content changes and pushes any modified content to the syslog server.

Make sure the process is Up and running.

2. On the receiver side, configure syslog-ng like

source s_jboss_abc { udp(ip(0.0.0.0) port(59503)); };

destination d_abc-GCserver { file("/logs/chipper/WEBINF/dev/jas/ews/jas/hello.log.$DAY"); };

filter f_abcCGCserver { ( match("hello") ); };
# we match on "hello" since the content will be coming from the file hello on the sender

log { source(s_jboss_abc); filter(f_abcCGCserver); destination(d_abc-GCserver); flags(final); };

Once the above configuration is done on the receiver side, run the syslog-ng process.

3. We can test this using
echo "some thing" >> hello

We append some content to the hello file; the syslog-ng process running on the sender side picks up the change and sends it to the receiver.

The receiver collects the log content and saves it in the log file we specified.

Syslog-ng: Log Consolidation

Application teams sometimes need to keep their logs for a longer period so that they can analyze them later. In most cases the servers running the application hold the log files, but in a production environment disk space is always an issue.

How can we send log files to a different machine that serves as a log server?

Syslog-ng is a package available on Linux that can be used to send logs to a log server for storage.

From Wikipedia:

Syslog-ng is an open-source implementation of the syslog protocol for Unix and Unix-like systems. It extends the original syslogd model with content-based filtering and rich filtering capabilities.

So how can we use syslog-ng?

1.Download the package from http://www.balabit.com/

2. For a Web Application (on the sender side)
For a web application, we can use log4j to do the syslog configuration that sends content to the syslog server. The configuration looks like this:

<!-- ====================================== -->
<!-- Append messages to the a remote syslog -->
<!-- ====================================== -->

<appender name="ABC-A1_SYS" class="org.productivity.java.syslog4j.impl.log4j.Syslog4jAppender">
<param name="Facility" value="user"/>
<param name="Protocol" value="tcp"/>
<param name="Host" value="198.12.34.22"/>
<param name="port" value="59503" />
<param name="threshold" value="ALL"/>
<param name="ident" value="abc" />
<param name="maxMessageLength" value="1000000"/>
<layout class="org.apache.log4j.PatternLayout">
<param name="ConversionPattern" value="[%d{ISO8601}] [das] [$] [%p] [%c{3}] %m%n"/>
</layout>

<filter class="org.apache.log4j.varia.LevelRangeFilter">
<param name="LevelMin" value="DEBUG" />
<param name="LevelMax" value="FATAL" />
</filter>
</appender>

So in this log4j configuration we set up a syslog appender that sends the log content to IP address 198.12.34.22 on port 59503. In the above configuration the ident value is important, as we use it on the receiver to filter the content.

Now configure your application logger using

<logger name="com.sample.app..common" additivity="false">
<level value="info" />
<appender-ref ref="file" />
<appender-ref ref="ABC-A1_SYS" />
</logger>
<!-- Scheduled Jobs Logs -->


We also need to add the appender to the root logger in the log4j configuration.
<root>
<level value="ERROR" />
<appender-ref ref="file" />
<appender-ref ref="ABC-A1_SYS" />
</root>

Once this is done, deploy your application with the above log4j configuration.

3. Now we need to configure the receiver side, where the logs are to be saved. Configure syslog-ng like

#----------------------------------------------------------------------
# Options which set the owner, group, permission and other global settings
#----------------------------------------------------------------------
options
{
owner(root);
group(root);
log_fifo_size(8192);
perm(0664);
sync(0);
use_dns(no);
};

#--------------------------------------------------------------------------------
# Sources from where the Content can come or the receiver should read
#--------------------------------------------------------------------------------
source s_crpchipper { udp(ip(0.0.0.0) port(59503)); };

In the above line we use the same port as the one used in our web application's log4j configuration.
Next, we have

#Filters for event handlers
filter f_abc {match('\[abc\]');}; # filter the content arriving on the port using the ident value
We use the ident value ("abc") here.

#destinations
destination d_abc { file("/logs/syslog/conf/dev/abc/abc-$MONTH-$DAY.log");}; # the location of the file where the content is to be pushed

#Logging
log { source(s_crpchipper); filter(f_abc); destination(d_abc); flags(final); };
Configure these in the file
/syslog-ng/etc/syslog-ng/syslog-ng.conf

Once the configuration is done, just start the process using
/syslog-ng/sbin/syslog-ng -f /syslog-ng/etc/syslog-ng/syslog-ng.conf

Now the process is up and running, so whenever the web application generates log content, the log will also be saved on the 198.12.34.22 server at /logs/syslog/conf/dev/abc/abc-$MONTH-$DAY.log.


There may be a small latency before the logs are updated.

Thursday, July 4, 2013

JBoss Operations Network CLI (JON CLI)

I recently faced a task where I needed to change all the usernames and passwords of the JBoss EWP instances. Even though I knew how to do this, there was another step to be done afterwards: the JON (JBoss Operations Network) inventory also had to be updated with the new credentials. But since there are more than 50 EWP instances, I couldn't log in to JON and change each one by hand.

JBoss Operations Network provides a solution by using JON CLI.

The JON CLI is a standalone Java application that uses the Java Scripting API to interact with JON programmatically. Since it uses the Java Scripting API, it requires Java 6 or later. The CLI lets developers and administrators connect to JON servers and perform various actions such as retrieving metrics, changing credentials, and enabling or disabling metrics. Java 6 ships with the Rhino JavaScript engine, so JavaScript is the supported scripting language in the CLI.

Here is how it goes.

1. Download the JON CLI package (log in to the JON server -> Administration -> Downloads -> select the rhq-remoting zip to download)

2. Change RHQ_CLI_JAVA_HOME in the rhq-cli-env.sh file to point to your JAVA_HOME location, like
RHQ_CLI_JAVA_HOME="/software/java32/jdk1.6.0_29"

3. Connect to the server using
 ./rhq-cli.sh -u rhqadmin -p <Password>  --host <Host> --port <Port>

4. Now run the below command:

rhq-cli.sh -u rhqadmin -p <Password>  --host <Host> --port <Port> -c "pretty.print(ResourceTypeManager.findResourceTypesByCriteria(new ResourceTypeCriteria()))" > resource.txt

This connects the CLI to your RHQ server running on the specified host, logs in, and executes the command in quotes. Finally, it redirects the output to the file resource.txt.

The command says

ResourceTypeManager.findResourceTypesByCriteria(new ResourceTypeCriteria()). This invokes the findResourceTypesByCriteria operation on ResourceTypeManager. A new ResourceTypeCriteria object is passed as the argument. Because nothing has been specified on the criteria object, all resource types will be returned. 

Now the pretty.print(...) portion: pretty is an implicit object made available to commands and scripts by the CLI. It is very useful for printing objects in a readable, tabular format, with enhanced handling of domain objects. In short, this single command gives us a nicely formatted, text-based report of the resource types in our inventory.

Now, back to my task. To get the list of all available EWP instances, run the commands below one by one:

var criteria = new ResourceCriteria();
criteria.addFilterResourceCategories([ResourceCategory.valueOf("SERVER")]);
criteria.addFilterPluginName("JBossAS5");
criteria.addFilterResourceTypeName("JBossAS Server");
var resources = ResourceManager.findResourcesByCriteria(criteria);
pretty.print(resources);

We can see the response as

id    name   version       currentAvailabil resourceType
-------------------------------------------------------------------------------
47305 ABC-1  EWP 5.0.0.GA  DOWN             JBossAS Server
26923 DEC-2  EWP 5.0.0.GA  UP               JBossAS Server
75270 ZXC-1  EWP 5.0.0.GA  UNKNOWN          JBossAS Server
3 rows

Now we can see the available plugin configuration options for a resource, like

ConfigurationManager.getPluginConfiguration(31683);

Configuration [178486] - null
  homeDir = /software/jboss/ewp32/5.0
  shutdownMBeanName = jboss.system:type=Server
  scriptPrefix = null
  shutdownScript = xxxxxxxxxx
  serverHomeDir = xxxxxxxxxx
  serverName = ABC-1
  shutdownMethod = SCRIPT
  startScript = xxxxxxxxx
  javaHome = /software/java32/jdk1.6.0_16
  namingURL = xxxxxxxxxxx
  principal = admin
  childJmxServerName = JVM
  shutdownMBeanOperation = shutdown
  availabilityCheckPeriod = null
  bindAddress = xxxxxxxx
  credentials = _._._[MaSKeD]_._._
  logEventSources [0] {
  }

Now, to change the credentials, we can use

//For each resource, get its plugin configuration, change the principal and credentials properties, and update the plugin configuration

for (var i=0; i < resources.size(); i++) {
    var myConfig = ConfigurationManager.getPluginConfiguration(resources.get(i).id);
    var property = new PropertySimple("principal","myAdmin");
    myConfig.put(property);
    property = new PropertySimple("credentials","myPassword");
    myConfig.put(property); 
    ConfigurationManager.updatePluginConfiguration(resources.get(i).id, myConfig);
}

The whole thing can be copied to a file, and the entire file executed using
rhq-cli.sh -u rhqadmin -p <Password>  --host <Host> --port <Port>  -f /full/path/changePassword.js

Happy learning :-) , More To Come
