UnixServerAdmin

Server Administration & Management

How to run a file system check on your next boot

Creating the empty file /forcefsck causes the file system check fsck to be run on the next boot, after which the file is removed automatically.

# touch /forcefsck

March 31, 2012 | Tips & Tricks, Unix/Linux

How to clean up cache memory of unnecessary things

Run sync first to flush any useful dirty data out to disk before dropping caches!

To free pagecache:

# echo 1 > /proc/sys/vm/drop_caches

To free dentries and inodes:

# echo 2 > /proc/sys/vm/drop_caches

To free pagecache, dentries and inodes:

# echo 3 > /proc/sys/vm/drop_caches
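To see the effect, you can read the current page-cache size from /proc/meminfo before and after dropping it. A quick sketch (reading needs no privileges; only the write to drop_caches needs root):

```shell
# Report the current page-cache size from /proc/meminfo (no root needed)
cached_kb=$(awk '/^Cached:/ {print $2}' /proc/meminfo)
echo "Page cache: ${cached_kb} kB"
# The drop itself must be done as root, e.g.:
#   sync && echo 3 > /proc/sys/vm/drop_caches
```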

March 29, 2012 | Tips & Tricks, Unix/Linux

network_scan.sh

#!/bin/bash
# Discover the local 192.168.x.0/24 subnet and ping-scan it with nmap.
ip=$( ifconfig | grep "192.168" | cut -d: -f2 | awk '{print $1}' )
echo
echo "Current IP address of the box: $ip"
sub=$( echo "$ip" | awk -F. '{print $3}' )
echo
echo "Subnet used is 192.168.$sub.0/24"
echo
echo "Checking for computers online on the local network"
nmap -sP 192.168.$sub.0-255 | grep -v 'MAC' | awk '{print $2,$3}'
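On newer distributions ifconfig may be absent; the same third-octet extraction can be done on `ip -4 addr` output. A sketch, shown on a canned sample line so the parsing is easy to follow:

```shell
# Extract the /24 subnet number from an "ip -4 addr"-style line
line="    inet 192.168.3.201/24 brd 192.168.3.255 scope global eth0"   # canned sample
ip_addr=$(echo "$line" | awk '{print $2}' | cut -d/ -f1)
sub=$(echo "$ip_addr" | awk -F. '{print $3}')
echo "Scanning 192.168.$sub.0/24"   # feed this range to nmap -sP as above
```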

March 27, 2012 | Shell Script

How to remove spaces from file names with Linux

Windows users often put spaces in file names; I prefer dashes (-) or underscores (_) instead, since they are easier to handle in the console.

# for file in *; do mv "$file" "$(echo "$file" | sed -e 's/  */_/g' -e 's/_-_/-/g')"; done
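A slightly safer variant wraps the rename in a function, skips names that need no change, and uses mv -n so an existing target is never overwritten. A sketch — the sed expressions are the same as above:

```shell
# Rename files in the current directory: runs of spaces become "_",
# and resulting "_-_" sequences collapse to "-". Never overwrites targets.
remove_spaces() {
    local file new
    for file in *\ *; do
        [ -e "$file" ] || return 0      # no names contain spaces: glob stayed literal
        new=$(printf '%s' "$file" | sed -e 's/  */_/g' -e 's/_-_/-/g')
        if [ "$new" != "$file" ]; then
            mv -n -- "$file" "$new"
        fi
    done
}
```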

March 25, 2012 | Tips & Tricks, Unix/Linux

index.jsp for tomcat cluster with HA

<%@ page language="java" %>
<HTML>
<HEAD>
<TITLE>Login using jsp</TITLE>
</HEAD>
<BODY>
<h1><font color="red">Index Page by Tomcat-2 Node-2</font></h1>
<h2><font color="blue">This is test page of Tomcat-2 of NODE-2</font></h2>
<table align="center" border="1">
<tr>
<td>Session ID --> </td>
<td><%= session.getId() %></td>
</tr>
<tr>
<td>Created on --> </td>
<td><%= session.getCreationTime() %></td>
</tr>
</table>
</BODY>
</HTML>

March 23, 2012 | Apache, Cluster, Tomcat

How to keep a unified history file across terminals

Introduction

If you open a lot of terminal windows at once while working on Linux, you may have noticed that the commands you type in one terminal are not available in the others. Worse, once you close all the terminals and open a new one, you will find that only the commands typed in one of them (the last one closed) survive in the history.

That is a problem when you need a command that was typed in a terminal whose history did not get saved. So how do you solve it?

A unified Linux history file

If you would rather have a single history file containing the commands typed in every terminal you have ever opened, just add this line to your $HOME/.bashrc file:

shopt -s histappend


That is it.
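Note that histappend only stops terminals from overwriting each other's history when they exit; commands still reach the file only on shell exit. A hedged ~/.bashrc fragment that also flushes each command as soon as it runs:

```shell
# Append to ~/.bash_history on shell exit instead of overwriting it
shopt -s histappend
# Optionally write each command to the history file immediately,
# so other already-open terminals can pick it up with "history -n"
PROMPT_COMMAND='history -a'
```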

March 21, 2012 | Tips & Tricks, Unix/Linux

worker.properties_cluster

##############################################################
# workers to contact; that's what you have in your httpd.conf
worker.list=loadbalancer

# setup tomcat1
worker.tomcat1.port=8109
worker.tomcat1.host=localhost
worker.tomcat1.type=ajp13
worker.tomcat1.lbfactor=1

# setup tomcat2
worker.tomcat2.port=8209
worker.tomcat2.host=localhost
worker.tomcat2.type=ajp13
worker.tomcat2.lbfactor=1

# setup tomcat3
worker.tomcat3.port=8309
worker.tomcat3.host=localhost
worker.tomcat3.type=ajp13
worker.tomcat3.lbfactor=1

# setup the load-balancer
worker.loadbalancer.type=lb
worker.loadbalancer.method=Request
# method is one of Request, Session, Traffic or Busyness (R/S/T/B, explained below)
worker.loadbalancer.balance_workers=tomcat1,tomcat2,tomcat3
worker.loadbalancer.sticky_session=True
#worker.loadbalancer.sticky_session_force=True

# Status worker for managing load balancer
worker.status.type=status
##################################################################

worker.list --> describes the workers that are available to Apache, as a list

ajp13 --> this type of worker represents a running Tomcat instance

lb --> used for load balancing

status --> displays useful information about how the load is distributed among the various Tomcat workers

Sticky sessions are an important feature if you rely on JSESSIONIDs and are not running any session-replication layer. If sticky_session is True, a request is always routed back to the node that issued its JSESSIONID.
If that host is disconnected, crashes, or otherwise becomes unreachable, the request is forwarded to another host in the cluster.
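The stickiness works because Tomcat appends its jvmRoute to the session ID (e.g. JSESSIONID=&lt;id&gt;.tomcat1), and mod_jk routes on the part after the final dot. A sketch of that extraction, using a made-up session ID:

```shell
# The worker name is whatever follows the last "." in the session ID;
# Tomcat puts it there from the Engine's jvmRoute attribute.
session_id="0AAB6C8DE415E2FAF2B4A3C7.tomcat2"   # hypothetical JSESSIONID value
route="${session_id##*.}"
echo "Route request to worker: $route"
```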

R :- Request --> If method is set to Request, the balancer uses the number of requests to find the best worker. Accesses are distributed according to the lbfactor in a sliding time window. This is the default value and should work well for most applications.

S :- Session --> If method is set to Session, the balancer uses the number of sessions to find the best worker. Accesses are distributed according to the lbfactor in a sliding time window. Because the balancer does not keep any state, it does not actually know the number of sessions; instead, it counts each request without a session cookie or URL encoding as a new session. This method will neither know when a session is invalidated, nor correct its load numbers for session timeouts or worker failover. It should be used if sessions are your limiting resource, e.g. when you only have limited memory and your sessions need a lot of it.

T :- Traffic --> If set to Traffic, the balancer uses the network traffic between JK and Tomcat to find the best worker. Accesses are distributed according to the lbfactor in a sliding time window. This method should be used if network traffic to and from the backends is your limiting resource.

B :- Busyness --> If set to Busyness, the balancer picks the worker with the lowest current load, based on how many requests it is currently serving. This number is divided by the worker's lbfactor, and the worker with the lowest value (least busy) is picked. This method is especially useful if your requests take a long time to process, as in a download application.
##############################################################

March 19, 2012 | Apache, Cluster, Tomcat

tomcat_ssl.conf_cluster

# vim /etc/httpd/conf.d/ssl.conf

Line No. 83 to 93

##################################################################
# LoadModule jk_module modules/mod_jk.so
# JkWorkersFile /etc/httpd/conf/worker.properties
JkLogFile /var/log/httpd/mod_jk.log
JkLogLevel info
JkLogStampFormat "[%a %b %d %H:%M:%S %Y] "
JkOptions +ForwardKeySize +ForwardURICompat -ForwardDirectories
JkRequestLogFormat "%w %V %T"
JkEnvVar SSL_CLIENT_V_START
JkMount /* loadbalancer
# JkMount /examples/*.jsp worker1
##################################################################

March 17, 2012 | Apache, Cluster, Tomcat

tomcat_httpd.conf_cluster

# vim /etc/httpd/conf/httpd.conf

Line No. 201 to 212

##################################################################
LoadModule jk_module modules/mod_jk.so
JkWorkersFile /etc/httpd/conf/worker.properties
JkLogFile /var/log/httpd/mod_jk.log
JkLogLevel info
JkLogStampFormat "[%a %b %d %H:%M:%S %Y] "
JkOptions +ForwardKeySize +ForwardURICompat -ForwardDirectories
JkRequestLogFormat "%w %V %T"
JkEnvVar SSL_CLIENT_V_START
JkMount /* loadbalancer
# JkMount /examples/*.jsp worker1
##################################################################

March 15, 2012 | Apache, Cluster, Tomcat

Tomcat Configuration Cluster

###################################################################
Horizontal Tomcat Clustering (multiple physical machines, one Tomcat each)

Browser <—> Hardware Server-1(tomcat1), Hardware Server-2(tomcat2), Hardware Server-3(tomcat3)

Grouping multiple physical servers into a cluster.

A horizontal cluster consists of a group of servers that is exposed to browser clients as a single virtual server.
Horizontal clusters help to increase application scalability, performance, and robustness.

###################################################################
Vertical Tomcat Clustering (one physical machine, multiple Tomcats)

Browser <—> Hardware Server (tomcat1, tomcat2, tomcat3)

A vertical cluster is like a horizontal cluster, except that rather than use several server machines linked together, vertical clusters use a single machine with multiple CPUs. Vertical clusters help to increase scalability on multiprocessor computers since they distribute work to several processes. Each process runs on a different CPU.

Clustering Tomcat Servlet Engines is interesting for two reasons: load balancing and failover.

###################################################################
Redirecting requests to the mod_jk load balancer

###################################################################
Tomcat Configuration

Each Tomcat instance must listen on the port specified in the corresponding worker's section of worker.properties.

The Engine's jvmRoute attribute must match the worker name in the worker.properties file, or
the load balancer will not be able to keep sessions sticky.

<Engine name="Catalina" defaultHost="localhost" jvmRoute="tomcat1">

###################################################################

----------------------------------------------------------------------
Topic                     Tomcat1      Tomcat2      Tomcat3
----------------------------------------------------------------------
Server Shutdown Port      8105         8205         8305
Server Connector Port     8081         8082         8083
AJP Connector Port        8109         8209         8309
jvmRoute                  tomcat1      tomcat2      tomcat3
----------------------------------------------------------------------

—————————————————————————————————————–
# vim /usr/local/apache-tomcat/conf/server.xml

Line-22 :- <Server port="8205" shutdown="SHUTDOWN"> # Server Shutdown Port, by default 8005
Line-70 :- <Connector port="8082" protocol="HTTP/1.1" # Server Connector Port, by default 8080
Line-91 :- <Connector port="8209" protocol="AJP/1.3" redirectPort="8443" /> # AJP Connector Port, by default 8009
Line-103 :- <Engine name="Catalina" defaultHost="localhost" jvmRoute="tomcat2"> # jvmRoute, must match the worker name
—————————————————————————————————————–
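The per-instance port edits above can be scripted. A sketch that derives tomcat2's ports from the stock defaults with sed, shown on a canned server.xml fragment (the port mapping is taken from the table above):

```shell
# Turn the default ports (8005/8080/8009) into tomcat2's (8205/8082/8209)
fragment='<Server port="8005" shutdown="SHUTDOWN">
<Connector port="8080" protocol="HTTP/1.1"
<Connector port="8009" protocol="AJP/1.3" redirectPort="8443" />'
tomcat2=$(printf '%s\n' "$fragment" | sed -e 's/8005/8205/' -e 's/8080/8082/' -e 's/8009/8209/')
printf '%s\n' "$tomcat2"
```

In practice you would run the same sed -i expressions against a copy of conf/server.xml in each instance's directory.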

—————————————————————————————————————–
# vim /usr/local/apache-tomcat/conf/web.xml

Line-23 :- <distributable/>

To indicate to a Servlet Container that the application can be clustered, the Servlet 2.4 standard <distributable/> element is placed in the application's deployment descriptor (web.xml). If this element is not added, sessions maintained by this application will not be shared across the three Tomcat instances. Alternatively, you can set distributable="true" on the Context element.
—————————————————————————————————————–

March 13, 2012 | Cluster, Tomcat

How to install and configure LVS to allow Load Balancing between Clusters/Nodes

The Linux Virtual Server project (LVS) allows load balancing of networked services, such as web and mail servers, using layer 4 switching. It is extremely fast and allows such services to be scaled to tens or hundreds of thousands of simultaneous connections. Now configure ipvsadm on both nodes.

node-1 :- 192.168.3.201 :- node-1.unixserveradmin.com
node-2 :- 192.168.3.202 :- node-2.unixserveradmin.com

Virtual IP (VIP) :- 192.168.3.135

on node-1

# yum install ipvsadm

# ipvsadm-save

# ipvsadm-restore

# ipvsadm -C (Flush)

# ipvsadm-save > ipvsadm_rules.txt

# ipvsadm-restore < ipvsadm_rules.txt

# ipvsadm -A -t 192.168.3.135:80 -s rr (if -s is omitted, the default scheduler wlc is used)

# ipvsadm -a -t 192.168.3.135:80 -r 192.168.3.201:80 -m

# ipvsadm -a -t 192.168.3.135:80 -r 192.168.3.202:80 -m

# /etc/init.d/ipvsadm start

# service ipvsadm save (saves the current rules to /etc/sysconfig/ipvsadm)

# chkconfig ipvsadm on

# tcpdump -n -i any port 80 (for testing)

# ipvsadm -L -n (To show the number of active connections)

-A, --add-service
-L, -l, --list
-t, --tcp-service service-address
-s, --scheduler scheduling-method
-w, --weight weight

--stats
Output of statistics information. The list command with this
option will display the statistics information of services and
their servers.

--rate
Output of rate information. The list command with this option
will display the rate information (such as connections/second,
bytes/second and packets/second) of services and their servers.

Scheduling Methods :-

1. RR :- Round-Robin, distributes jobs equally (rr)
2. WRR :- Weighted Round-Robin
3. LC :- Least-Connection
4. WLC :- Weighted Least-Connection
5. LBLC :- Locality-Based Least-Connection
6. LBLCR :- Locality-Based Least-Connection with Replication
7. DH :- Destination Hashing, uses a statically assigned hash table
8. SH :- Source Hashing, uses a statically assigned hash table
9. SED :- Shortest Expected Delay
10. NQ :- Never Queue
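As an illustration of the weighted methods, the NAT setup above could send roughly twice as many connections to node-1 as to node-2 with wrr. A sketch reusing the addresses from this post (needs root and the ip_vs module loaded):

```shell
# Weighted round-robin: node-1 (weight 2) gets ~2x the connections of node-2
ipvsadm -A -t 192.168.3.135:80 -s wrr
ipvsadm -a -t 192.168.3.135:80 -r 192.168.3.201:80 -m -w 2
ipvsadm -a -t 192.168.3.135:80 -r 192.168.3.202:80 -m -w 1
```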

March 11, 2012 | Apache, Cluster, LVS

Heartbeat Configuration File Options

-------------------------------------------------------------------------------------------------------------------------------
logfacility local0 --> facility to use for syslog()/logger
-------------------------------------------------------------------------------------------------------------------------------
keepalive 2 --> how long between heartbeats

A note on specifying "how long" times below: the default unit is seconds, so 10 means ten seconds. You can also give times in milliseconds; 1500ms means 1.5 seconds.
-------------------------------------------------------------------------------------------------------------------------------
deadtime 30 --> how long until a host is declared dead

If you set this too low you risk the problematic split-brain (cluster partition) scenario.
-------------------------------------------------------------------------------------------------------------------------------
warntime 10 --> how long before issuing a "late heartbeat" warning
-------------------------------------------------------------------------------------------------------------------------------
initdead 120 --> very first dead time (initdead)

On some machines/OSes the network takes a while to come up and start working right after a reboot, so there is a separate dead time for when things first come up. It should be at least twice the normal deadtime.
-------------------------------------------------------------------------------------------------------------------------------
udpport 694 --> which UDP port to use for bcast/ucast communication
-------------------------------------------------------------------------------------------------------------------------------
bcast eth0 --> which interfaces to broadcast heartbeats over
-------------------------------------------------------------------------------------------------------------------------------
auto_failback on
on     --> enable automatic failbacks
off    --> disable automatic failbacks
legacy --> enable automatic failbacks on systems where not all nodes support the auto_failback option yet
-------------------------------------------------------------------------------------------------------------------------------
node --> lists which machines are in the cluster
-------------------------------------------------------------------------------------------------------------------------------

March 9, 2012 | Apache, Cluster

How to install and configure Failover “OR” High Availability (HA) Cluster with heartbeat in Apache

Heartbeat is High Availability cluster software for the Linux platform. The following steps install and configure Heartbeat on RHEL/CentOS and set up a web server using Apache.

Heartbeat Version is : heartbeat-3.0

===========================================
Requirements :-

Two Linux nodes, RHEL 5.x/CentOS 5.x
LAN & Internet connection
A yum repository server

Node-1: 192.168.3.201
Node-2: 192.168.3.202

Virtual IP Address (VIP) :- 192.168.3.135
===========================================

1. Set the fully qualified hostnames and give corresponding entries in /etc/hosts and /etc/sysconfig/network

node-1 :- 192.168.3.201 :- node-1.unixserveradmin.com
node-2 :- 192.168.3.202 :- node-2.unixserveradmin.com

2. Configuring Apache on both node

# yum install httpd mod_ssl

On node1

# vim /var/www/html/index.html
This is test page of node 1 of Heartbeat HA cluster

On node2

# vim /var/www/html/index.html
This is test page of node 2 of Heartbeat HA cluster

On both nodes:

# vim /etc/httpd/conf/httpd.conf
Listen 192.168.3.135:80

3. Now start the service on both nodes.

# /etc/init.d/httpd restart  

# chkconfig httpd on

Note:- It won't work until heartbeat is started, so don't worry.

4. Confirm them from the browser.

5. Install the following packages on both nodes
(they are not strictly necessary, but you may want to install them):

# yum install glibc*

# yum install gcc*

# yum install lib*

# yum install flex*

# yum install net-snmp*

# yum install OpenIPMI*

# yum install python-devel

# yum install perl*

# yum install openhpi*

6. Save the repo file for the clusterlabs online repository on both nodes. It's available at http://www.clusterlabs.org/rpm/epel-5/clusterlabs.repo

# cd /etc/yum.repos.d/

# wget http://www.clusterlabs.org/rpm/epel-5/clusterlabs.repo

it is as follows:
————————————————————–
[clusterlabs]
name=High Availability/Clustering server technologies (epel-5)
baseurl=http://www.clusterlabs.org/rpm/epel-5
type=rpm-md
gpgcheck=0
enabled=1
————————————————————–

7. After that install heartbeat packages on both nodes:

# yum install cluster-glue* heartbeat* resource-agents*

8. Setting Configuration files:

We can do all the configuration on one system and then copy /etc/ha.d to both nodes.

# cd /etc/ha.d

# cat README.config

9. The details of the configuration files are explained in that file. We have to copy three
configuration files to this directory from the samples in the documentation.

# cp /usr/share/doc/heartbeat-3.0.3/authkeys /etc/ha.d/
# cp /usr/share/doc/heartbeat-3.0.3/ha.cf /etc/ha.d/
# cp /usr/share/doc/heartbeat-3.0.3/haresources /etc/ha.d/

10. We have to edit the authkeys file on both nodes:

We are using sha1 algorithm:

# vim /etc/ha.d/authkeys
—————
auth 2
#1 crc
2 sha1 test-ha
#3 md5 Hello!
—————

11. Change the permission of authkeys to 600 to both nodes:

# chmod 600 authkeys

12. We have to edit the ha.cf file on both nodes:

# vim /etc/ha.d/ha.cf

uncomment following lines and make edits
—————————————–
logfile /var/log/ha-log
logfacility local0
keepalive 1
deadtime 15
warntime 10
initdead 120
udpport 694
bcast eth0
auto_failback on
node node-1.unixserveradmin.com # on both nodes, the command "uname -n" must print exactly these hostnames
node node-2.unixserveradmin.com
—————————————–

13. We have to edit the haresources file on both nodes separately:

on node-1 :-

# vim /etc/ha.d/haresources
node-1.unixserveradmin.com    192.168.3.135 httpd

on node-2 :-

# vim /etc/ha.d/haresources
node-2.unixserveradmin.com    192.168.3.135 httpd

Note:- You don't have to create an interface for this IP or make an IP alias; Heartbeat will take care of it automatically.

14. Now exchange and save authorized keys between node1 and node2

node-1# ssh-keygen -t rsa

node-1# ssh-copy-id -i ~/.ssh/id_rsa.pub 192.168.3.202

node-2# ssh-keygen -t rsa

node-2# ssh-copy-id -i ~/.ssh/id_rsa.pub 192.168.3.201

15. Start Heartbeat service on both nodes:

# /etc/init.d/heartbeat start

# chkconfig heartbeat on

March 7, 2012 | Apache, Cluster

How to detect domain being Attacked or Attacking Out in cPanel

What can we do to find out which domain is being attacked, or is attacking out, from/to the server? No matter how it happened, we need to stop it and make the server stable again. It is best to do this in real time, within the time frame of the attack, so we can gather enough information, proof, and logs. It is also recommended to document your troubleshooting process for future reference. Believe me, you will need it.

As for me, I will do basic checking as below:

1. Check overall server load summary using top command:

# top -c

2. Using the same command, we can see which processes are taking the most resources by sorting on memory (Shift+M) or CPU usage (Shift+P)

3. Check the network and analyse which connections are flooding your server. The following commands might be useful:

3.1 Check and sort number of network statistics connected to the server:

# netstat -anp | egrep 'tcp|udp' | awk '{print $5}' | cut -d: -f1 | sort | uniq -c | sort -n
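The pipeline counts connections per remote IP. Here is the counting stage demonstrated on canned netstat-style lines, so the awk/cut/sort/uniq chain is easy to verify by hand:

```shell
# Count connections per remote IP (field 5 is the foreign address)
sample='tcp 0 0 10.0.0.1:80 192.168.1.5:5000 ESTABLISHED
tcp 0 0 10.0.0.1:80 192.168.1.5:5001 ESTABLISHED
tcp 0 0 10.0.0.1:80 192.168.1.9:6000 ESTABLISHED'
counts=$(printf '%s\n' "$sample" | awk '{print $5}' | cut -d: -f1 | sort | uniq -c | sort -n)
printf '%s\n' "$counts"
```

An IP with an unusually high count at the bottom of the real output is a flooding candidate.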

3.2 If you have APF installed and are using a kernel older than 2.6.20, you can check the connection tracking table:

# cat /proc/net/ip_conntrack | cut -d ' ' -f 10 | cut -d '=' -f 2 | sort | uniq -c | sort -nr | head -n 10

3.3 Use tcpdump to analyse packets transmitted from/to your server. The following command might help to analyse connections to the eth0 interface on port 53 (DNS):

# tcpdump -vvxXlnni eth0 port 53 | grep "A?" | awk -F'?' '{print $2}'

4. Analyse the Apache status page at WHM --> Server Status --> Apache Status. To do this via the command line, run:

# service httpd fullstatus

5. Analyse the daily process logs at WHM --> Server Status --> Daily Process Logs. Look for the top 5 users that consume the most CPU percentage, memory, and SQL processes

After that, we should see some suspect account/process/user occupying too many resources, whether CPU, memory, or network connections.
Up to this point, we should be able to shortlist the suspected accounts.

Then, for each suspected account, follow the steps below:

6. Scan the public_html directory of the suspected user with an antivirus. We can use clamav, but make sure the virus definitions are updated before we do this:

6.1 Update the clamav virus definitions:

# freshclam

6.2 Scan the public_html directory of the suspected user recursively with scan result logged to scanlog.txt:

# cd /home/user/public_html

# clamscan -i -r -l scanlog.txt &

6.3 Analyse any suspect files found by clamav and quarantine them. Make sure each file cannot be executed by chmod-ing it to 600

7. Find any PHP files with suspicious characteristics, such as base64-encoded payloads, and store the list in a text file called scan_base64.txt.
The following command might help:

# cd /home/user/public_html

# grep -lir "eval(base64" *.php > scan_base64.txt
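The pattern is a plain substring match, so it is easy to sanity-check on a throwaway directory with one clean and one suspicious file (hypothetical file names):

```shell
# Demonstrate the eval(base64 scan on two throwaway PHP files
tmp=$(mktemp -d)
printf '<?php eval(base64_decode("aGk=")); ?>\n' > "$tmp/suspicious.php"
printf '<?php echo "hello"; ?>\n' > "$tmp/clean.php"
matches=$(grep -lir "eval(base64" "$tmp"/*.php)
printf '%s\n' "$matches"
```

Only suspicious.php should be listed; anything the real scan reports deserves a manual look, since legitimate code occasionally uses base64 too.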

8. Scan the Apache access logs (raw logs) for suspicious activity. The following command might help to find scripting
activity across all domains served by Apache:

# find /usr/local/apache/domlogs -type f -exec egrep -iH '(wget|curl|lynx|gcc|perl|sh|cd|mkdir|touch)%20' {} \;

9. Analysing AWstats and bandwidth usage also gives more clues. Go to cPanel --> suspected domain --> Logs --> Awstats.
In the AWstats page, check the Hosts, Pages-URL, and related sections.

There are various ways to execute this task. For me, the steps above should be enough to detect any domain/account
that is attacking out or being attacked. Different administrators might use different approaches to produce the same result.

March 5, 2012 | cPanel, Security

How to Backup and Restore large MySQL Database with Compression Method

If you have a very large MySQL database, it is hard to back up and restore it with the conventional phpMyAdmin or similar programs.

To Backup MySQL Database

# mysqldump -u [username] -p[password] [dbname] > [backup.sql]

(Note: there is no space between -p and the password; with a space, the client prompts for a password and treats the next word as a database name.)

If your MySQL database is very big, you might want to compress the output of mysqldump.

Just use the backup command below and pipe the output to gzip; you will get the output as a gzip file.

# mysqldump -u [username] -p[password] [dbname] | gzip -9 > [backup.sql.gz]

To restore a MySQL database, first create the database on the target machine, then use this command:

# mysql -u [username] -p[password] [dbname] < [backup.sql]

Restore Compressed MySQL Database

# gunzip < [backup.sql.gz] | mysql -u [username] -p[password] [dbname]
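The compressed pipeline is safe because gzip is lossless. A quick sketch that verifies the gzip -9 | gunzip round trip byte-for-byte, without needing a database:

```shell
# Confirm that what goes into gzip -9 comes back out of gunzip unchanged
input='CREATE TABLE t (id INT);'
output=$(printf '%s' "$input" | gzip -9 | gunzip)
printf '%s\n' "$output"
```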

March 3, 2012 | MySQL

How to send a message to all users on linux system

To send a message to all logged-in users on a Linux system, you can use the wall command, which sends a message to everybody logged in whose message permission is set to yes (see mesg). The message can be given as an argument to wall, or piped to wall's standard input. When typing it on a terminal, terminate the message with the EOF key (usually Control-D).

Examples
To send the message "Alert, Please disconnect from Server !", type the following command:

# wall
Alert, Please disconnect from Server !

When the message content is complete, press Control-D (CTRL+D) to send message to all users.

To display the message "Alert, Please disconnect from Server !" only to members of the admin group, use wall with the "-g" option as follows:

# wall -g admin
Alert, Please disconnect from Server !

When the message content is complete, press Control-D (CTRL+D) to send the message to the group members.

March 1, 2012 | Tips & Tricks, Unix/Linux