UnixServerAdmin

Server Administration & Management

index.jsp for Tomcat cluster with HA

<%@ page language="java" %>
<HTML>
<HEAD>
<TITLE>Login using jsp</TITLE>
</HEAD>
<BODY>
<h1><font color="red">Index Page by Tomcat-2 Node-2</font></h1>
<h2><font color="blue">This is test page of Tomcat-2 of NODE-2</font></h2>
<table align="center" border="1">
<tr>
<td>Session ID --> </td>
<td><%= session.getId() %></td>
</tr>
<tr>
<td>Created on --> </td>
<td><%= session.getCreationTime() %></td>
</tr>
</table>
</BODY>
</HTML>


March 23, 2012 | Apache, Cluster, Tomcat

worker.properties_cluster

##############################################################
# workers to contact, that's what you have in your httpd.conf
worker.list=loadbalancer

# setup tomcat1
worker.tomcat1.port=8109
worker.tomcat1.host=localhost
worker.tomcat1.type=ajp13
worker.tomcat1.lbfactor=1

# setup tomcat2
worker.tomcat2.port=8209
worker.tomcat2.host=localhost
worker.tomcat2.type=ajp13
worker.tomcat2.lbfactor=1

# setup tomcat3
worker.tomcat3.port=8309
worker.tomcat3.host=localhost
worker.tomcat3.type=ajp13
worker.tomcat3.lbfactor=1

# setup the load-balancer
worker.loadbalancer.type=lb
# method can be Request (R), Session (S), Traffic (T) or Busyness (B)
worker.loadbalancer.method=Request
worker.loadbalancer.balance_workers=tomcat1,tomcat2,tomcat3
worker.loadbalancer.sticky_session=True
#worker.loadbalancer.sticky_session_force=True

# Status worker for managing load balancer
worker.status.type=status
##################################################################

worker.list --> lists the workers that are available to Apache

ajp13 --> this worker type represents a running Tomcat instance

lb --> used for load balancing

status --> displays useful information about how the load is distributed among the various Tomcat workers

Sticky sessions are an important feature if you rely on JSESSIONIDs and are not using any session-replication layer. If sticky_session is True, a request is always routed back to the node that assigned its JSESSIONID.
If that host gets disconnected, crashes, or otherwise becomes unreachable, the request is forwarded to another host in the cluster.
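As a sketch of that routing rule (this is not mod_jk's actual code; the worker names come from worker.properties above, and the set of alive workers is made up for illustration), stickiness amounts to reading the jvmRoute suffix that Tomcat appends to the session id:

```python
WORKERS = {"tomcat1", "tomcat2", "tomcat3"}  # names from worker.properties
ALIVE = {"tomcat1", "tomcat3"}               # hypothetical current state: tomcat2 is down

def route(session_id, fallback="tomcat1"):
    """Pick the worker for a request.

    A session id like 'A1B2C3.tomcat2' carries the jvmRoute of the node
    that created it; route back to that node if it is still alive,
    otherwise fail over to another worker.
    """
    if session_id and "." in session_id:
        node = session_id.rsplit(".", 1)[1]
        if node in WORKERS and node in ALIVE:
            return node
    return fallback

print(route("A1B2C3.tomcat1"))  # sticky: back to tomcat1
print(route("A1B2C3.tomcat2"))  # tomcat2 is down: failover
print(route(None))              # no session yet: balancer's choice
```

The fallback here stands in for the balancer's normal method-based pick; the real failover choice depends on the configured method and lbfactors.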

R :- Request --> If method is set to Request, the balancer uses the number of requests to find the best worker. Accesses are distributed according to the lbfactor in a sliding time window. This is the default value and works well for most applications.

S :- Session --> If method is set to Session, the balancer uses the number of sessions to find the best worker. Accesses are distributed according to the lbfactor in a sliding time window. Because the balancer does not keep any state, it does not actually know the number of sessions; instead it counts each request without a session cookie or URL encoding as a new session. This method will neither know when a session is invalidated, nor correct its load numbers for session timeouts or worker failover. Use it if sessions are your limiting resource, e.g. when you have limited memory and your sessions need a lot of it.

T :- Traffic --> If method is set to Traffic, the balancer uses the network traffic between JK and Tomcat to find the best worker. Accesses are distributed according to the lbfactor in a sliding time window. Use it if network bandwidth to and from the backends is your limiting resource.

B :- Busyness --> If method is set to Busyness, the balancer picks the worker with the lowest current load, based on how many requests it is currently serving. This number is divided by the worker's lbfactor, and the lowest value (least busy) wins. This method is especially interesting if your requests take a long time to process, as in a download application.
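The Busyness pick can be sketched as dividing each worker's current busy count by its lbfactor and taking the minimum (the busy counts below are made up for illustration):

```python
# hypothetical snapshot of in-flight requests per worker
workers = {
    "tomcat1": {"busy": 4, "lbfactor": 1},
    "tomcat2": {"busy": 2, "lbfactor": 1},
    "tomcat3": {"busy": 6, "lbfactor": 2},
}

def least_busy(workers):
    # lowest busy/lbfactor ratio wins; a higher lbfactor lets a worker
    # carry proportionally more in-flight requests before losing the pick
    return min(workers, key=lambda w: workers[w]["busy"] / workers[w]["lbfactor"])

print(least_busy(workers))  # tomcat2: 2/1 beats tomcat3 (6/2 = 3) and tomcat1 (4/1)
```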
##############################################################

March 19, 2012 | Apache, Cluster, Tomcat

tomcat_ssl.conf_cluster

# vim /etc/httpd/conf.d/ssl.conf

Line No. 83 to 93

##################################################################
# LoadModule jk_module modules/mod_jk.so
# JkWorkersFile /etc/httpd/conf/worker.properties
JkLogFile /var/log/httpd/mod_jk.log
JkLogLevel info
JkLogStampFormat "[%a %b %d %H:%M:%S %Y] "
JkOptions +ForwardKeySize +ForwardURICompat -ForwardDirectories
JkRequestLogFormat "%w %V %T"
JkEnvVar SSL_CLIENT_V_START
JkMount /* loadbalancer
# JkMount /examples/*.jsp worker1
##################################################################

March 17, 2012 | Apache, Cluster, Tomcat

tomcat_httpd.conf_cluster

# vim /etc/httpd/conf/httpd.conf

Line No. 201 to 212

##################################################################
LoadModule jk_module modules/mod_jk.so
JkWorkersFile /etc/httpd/conf/worker.properties
JkLogFile /var/log/httpd/mod_jk.log
JkLogLevel info
JkLogStampFormat "[%a %b %d %H:%M:%S %Y] "
JkOptions +ForwardKeySize +ForwardURICompat -ForwardDirectories
JkRequestLogFormat "%w %V %T"
JkEnvVar SSL_CLIENT_V_START
JkMount /* loadbalancer
# JkMount /examples/*.jsp worker1
##################################################################
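As a sketch of how such mounts dispatch URLs (this is not mod_jk's actual matcher; both mounts are used here for illustration, including the commented-out examples one, with the most specific matching pattern winning):

```python
from fnmatch import fnmatch

# mirror of the JkMount lines above: (pattern, worker)
MOUNTS = [("/*", "loadbalancer"), ("/examples/*.jsp", "worker1")]

def pick_worker(uri):
    """Return the worker that should serve the URI, or None for Apache itself."""
    matches = [(pat, w) for pat, w in MOUNTS if fnmatch(uri, pat)]
    if not matches:
        return None
    # prefer the longest (most specific) pattern, like an override mount
    return max(matches, key=lambda m: len(m[0]))[1]

print(pick_worker("/examples/test.jsp"))  # worker1: the specific mount wins
print(pick_worker("/index.jsp"))          # loadbalancer: only /* matches
```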

March 15, 2012 | Apache, Cluster, Tomcat

Tomcat Configuration Cluster

###################################################################
Horizontal Tomcat Clustering ( on Multiple Physical Machine, single Tomcat)

Browser <--> Hardware Server-1 (tomcat1), Hardware Server-2 (tomcat2), Hardware Server-3 (tomcat3)

Grouping multiple physical servers into a cluster.

A horizontal cluster consists of a cluster of servers that are exposed to browser clients as a single virtual server.
Horizontal clusters help to increase application scalability, performance, and robustness.

###################################################################
Vertical Tomcat Clustering ( on Single Physical Machine, Multiple Tomcat)

Browser <--> Hardware Server (tomcat1, tomcat2, tomcat3)

A vertical cluster is like a horizontal cluster, except that rather than use several server machines linked together, vertical clusters use a single machine with multiple CPUs. Vertical clusters help to increase scalability on multiprocessor computers since they distribute work to several processes. Each process runs on a different CPU.

Clustering Tomcat Servlet Engines is interesting for two reasons: load balancing and failover.

###################################################################
Redirecting requests to the mod_jk load balancer

###################################################################
Tomcat Configuration

The Tomcat instance must listen on the same port as is specified in the corresponding worker’s section in worker.properties.

The Engine jvmRoute attribute must correspond to the worker name in the worker.properties file, or
the load balancer will not be able to handle stickiness.

<Engine name="Catalina" defaultHost="localhost" jvmRoute="tomcat1">

###################################################################

------------------------------------------------------------------
Topic                     Tomcat1     Tomcat2     Tomcat3
------------------------------------------------------------------
Server Shutdown Port      8105        8205        8305
Server Connector Port     8081        8082        8083
AJP Connector Port        8109        8209        8309
jvmRoute                  tomcat1     tomcat2     tomcat3
------------------------------------------------------------------
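The offsets in the table follow a simple pattern; as an illustrative sketch (the helper below is mine, not part of Tomcat), instance n shifts the stock shutdown (8005) and AJP (8009) ports by n*100 and the HTTP connector (8080) by n:

```python
def ports(n):
    """Port assignments for Tomcat instance n under this numbering scheme."""
    return {
        "shutdown": 8005 + n * 100,  # default 8005
        "http": 8080 + n,            # default 8080
        "ajp": 8009 + n * 100,       # default 8009
    }

for n in (1, 2, 3):
    print(f"tomcat{n}:", ports(n))
```

Any non-colliding scheme works; the point is that all three instances on one machine must differ in every listening port.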

—————————————————————————————————————–
# vim /usr/local/apache-tomcat/conf/server.xml

Line-22 :- <Server port="8205" shutdown="SHUTDOWN"> # Server Shutdown Port, by default 8005
Line-70 :- <Connector port="8082" protocol="HTTP/1.1" # Server Connector Port, by default 8080
Line-91 :- <Connector port="8209" protocol="AJP/1.3" redirectPort="8443" /> # AJP Connector Port, by default 8009
Line-103 :- <Engine name="Catalina" defaultHost="localhost" jvmRoute="tomcat2"> # must match the worker name in worker.properties
—————————————————————————————————————–

—————————————————————————————————————–
# vim /usr/local/apache-tomcat/conf/web.xml

Line-23 :- <distributable/>

To indicate to a Servlet container that the application can be clustered, the Servlet 2.4 standard <distributable/> element is placed into the application's deployment descriptor (web.xml). If this element is not added, the session maintained by this application will not be shared across the three Tomcat instances. Alternatively, you can set distributable="true" on the Context element.
—————————————————————————————————————–

March 13, 2012 | Cluster, Tomcat

How to install and configure LVS to allow Load Balancing between Clusters/Nodes

The Linux Virtual Server Project (LVS) allows load balancing of networked services, such as web and mail servers, using Layer 4 switching. It is extremely fast and allows such services to scale to tens or hundreds of thousands of simultaneous connections. Now configure ipvsadm on both nodes.

node-1 :- 192.168.3.201 :- node-1.unixserveradmin.com
node-2 :- 192.168.3.202 :- node-2.unixserveradmin.com

Virtual IP (VIP) :- 192.168.3.135

on node-1

# yum install ipvsadm

# ipvsadm-save > ipvsadm_rules.txt (dump the current rules to a file)

# ipvsadm-restore < ipvsadm_rules.txt (load rules back from a file)

# ipvsadm -C (flush all rules)

# ipvsadm -A -t 192.168.3.135:80 -s rr (add the virtual service on the VIP with round-robin scheduling)

# ipvsadm -a -t 192.168.3.135:80 -r 192.168.3.201:80 -m (add node-1 as a real server, NAT/masquerading mode)

# ipvsadm -a -t 192.168.3.135:80 -r 192.168.3.202:80 -m (add node-2 as a real server, NAT/masquerading mode)

# /etc/init.d/ipvsadm start

# service ipvsadm save (persist the current rules to /etc/sysconfig/ipvsadm)

# chkconfig ipvsadm on

# tcpdump -n -i any port 80 (for testing)

# ipvsadm -L -n (list the virtual server table and active connections)

-A, --add-service
-L, -l, --list
-t, --tcp-service service-address
-s, --scheduler scheduling-method
-w, --weight weight

--stats
Output of statistics information. The list command with this option will display the statistics information of services and their servers.

--rate
Output of rate information. The list command with this option will display the rate information (such as connections/second, bytes/second and packets/second) of services and their servers.

Scheduling Methods :-

1. RR :- Round-Robin, distributes jobs equally (rr)
2. WRR :- Weighted Round-Robin
3. LC :- Least-Connection
4. WLC :- Weighted Least-Connection
5. LBLC :- Locality-Based Least-Connection
6. LBLCR :- Locality-Based Least-Connection with Replication
7. DH :- Destination Hashing, uses a statically assigned hash table
8. SH :- Source Hashing, uses a statically assigned hash table
9. SED :- Shortest Expected Delay
10. NQ :- Never Queue
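The difference between the first two methods can be sketched roughly as follows (a simplified expansion of weights, not the kernel's exact interleaving; the addresses reuse the node IPs above, and the weights are made up):

```python
from itertools import cycle, islice

servers = ["192.168.3.201", "192.168.3.202"]
weights = {"192.168.3.201": 2, "192.168.3.202": 1}  # hypothetical -w values

def rr(servers):
    # plain round-robin: hand each new connection to the next server in turn
    return cycle(servers)

def wrr(servers, weights):
    # weighted round-robin, simplified: repeat each server weight times
    expanded = [s for s in servers for _ in range(weights[s])]
    return cycle(expanded)

print(list(islice(rr(servers), 4)))            # alternates evenly
print(list(islice(wrr(servers, weights), 6)))  # .201 gets twice the share
```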

March 11, 2012 | Apache, Cluster, LVS

Heartbeat Configuration File Options

——————————————————————————————————————————-
logfacility local0 --> facility to use for syslog()/logger
-------------------------------------------------------------------------------
keepalive 2 --> how long between heartbeats

A note on specifying "how long" times below: the default time unit is seconds (10 means ten seconds), but you can also specify them in milliseconds (1500ms means 1.5 seconds).
-------------------------------------------------------------------------------
deadtime 30 --> how long until a host is declared dead

If you set this too low you will get the problematic split-brain (or cluster partition) problem.
-------------------------------------------------------------------------------
warntime 10 --> how long before issuing a "late heartbeat" warning
-------------------------------------------------------------------------------
initdead 120 --> very first dead time (initdead)

On some machines/OSes the network takes a while to come up and start working right after a reboot, so there is a separate dead time for when things first come up. It should be at least twice the normal dead time.
-------------------------------------------------------------------------------
udpport 694 --> what UDP port to use for bcast/ucast communication
-------------------------------------------------------------------------------
bcast eth0 --> what interfaces to broadcast heartbeats over
-------------------------------------------------------------------------------
auto_failback on
on     --> enable automatic failbacks
off    --> disable automatic failbacks
legacy --> enable automatic failbacks in systems where not all nodes yet support the auto_failback option
-------------------------------------------------------------------------------
node --> tells which machines are in the cluster
-------------------------------------------------------------------------------
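The timing options above are related: warntime should fire before deadtime, deadtime must comfortably exceed the keepalive interval, and initdead should be at least twice deadtime. A small sanity-check sketch (my own helper, not part of Heartbeat):

```python
def check_timings(keepalive, warntime, deadtime, initdead):
    """Return a list of warnings about ha.cf timing values (all in seconds)."""
    problems = []
    if deadtime <= 2 * keepalive:
        problems.append("deadtime too close to keepalive: risk of false failover")
    if warntime >= deadtime:
        problems.append("warntime should be lower than deadtime")
    if initdead < 2 * deadtime:
        problems.append("initdead should be at least twice deadtime")
    return problems

# the example values above pass cleanly
print(check_timings(keepalive=2, warntime=10, deadtime=30, initdead=120))  # []
# a bad combination triggers two warnings
print(check_timings(keepalive=2, warntime=40, deadtime=30, initdead=30))
```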

March 9, 2012 | Apache, Cluster

How to install and configure Failover “OR” High Availability (HA) Cluster with heartbeat in Apache

Heartbeat is a High Availably cluster software in linux platform. Here is following steps to  install and configure Heartbeat in RHEL/CentOS configure web server using Apache.

Heartbeat Version is : heartbeat-3.0

===========================================
Requirements :-

2 linux nodes, RHEL 5.x/CentOS 5.x
LAN & Internet connection.
A yum server.

Node-1: 192.168.3.201
Node-2: 192.168.3.202

Virtual IP Address (VIP) :- 192.168.3.135
===========================================

1. Set the fully qualified hostnames and give corresponding entries in /etc/hosts and /etc/sysconfig/network

node-1 :- 192.168.3.201 :- node-1.unixserveradmin.com
node-2 :- 192.168.3.202 :- node-2.unixserveradmin.com

2. Configuring Apache on both node

# yum install httpd mod_ssl

On node1

# vim /var/www/html/index.html
This is test page of node 1 of Heartbeat HA cluster

On node2

# vim /var/www/html/index.html
This is test page of node 2 of Heartbeat HA cluster

On both nodes:

# vim /etc/httpd/conf/httpd.conf
Listen 192.168.3.135:80

3. Now start the service in both nodes.

# /etc/init.d/httpd restart  

# chkconfig httpd on

Note:- It won't work until Heartbeat is started, so don't worry.

4. Confirm them from a browser.

5. Install the following packages on both nodes:
(These packages are not strictly necessary, but you can install them.)

# yum install glibc*

# yum install gcc*

# yum install lib*

# yum install flex*

# yum install net-snmp*

# yum install OpenIPMI*

# yum install python-devel

# yum install perl*

# yum install openhpi*

6. Save the repo file for the clusterlabs online repository on both nodes. It is available at http://www.clusterlabs.org/rpm/epel-5/clusterlabs.repo

# cd /etc/yum.repos.d/

# wget http://www.clusterlabs.org/rpm/epel-5/clusterlabs.repo

it is as follows:
————————————————————–
[clusterlabs]
name=High Availability/Clustering server technologies (epel-5)
baseurl=http://www.clusterlabs.org/rpm/epel-5
type=rpm-md
gpgcheck=0
enabled=1
————————————————————–

7. After that install heartbeat packages on both nodes:

# yum install cluster-glue* heartbeat* resource-agents*

8. Setting Configuration files:

We can do all configuration on one system and then copy /etc/ha.d to both nodes.

# cd /etc/ha.d

# cat README.config

9. The details about the configuration files are explained in that file. We have to copy three configuration files to this directory from the samples in the documentation.

# cp /usr/share/doc/heartbeat-3.0.3/authkeys /etc/ha.d/
# cp /usr/share/doc/heartbeat-3.0.3/ha.cf /etc/ha.d/
# cp /usr/share/doc/heartbeat-3.0.3/haresources /etc/ha.d/

10. We have to edit the authkeys file on both nodes:

We are using sha1 algorithm:

# vim /etc/ha.d/authkeys
—————
auth 2
#1 crc
2 sha1 test-ha
#3 md5 Hello!
—————

11. Change the permission of authkeys to 600 to both nodes:

# chmod 600 authkeys

12. We have to edit the ha.cf file on both nodes:

# vim /etc/ha.d/ha.cf

uncomment following lines and make edits
—————————————–
logfile /var/log/ha-log
logfacility local0
keepalive 1
deadtime 15
warntime 10
initdead 120
udpport 694
bcast eth0
auto_failback on
node node-1.unixserveradmin.com # on both nodes the command "uname -n" should give these hostnames
node node-2.unixserveradmin.com
—————————————–

13. We have to edit the haresources file; per the Heartbeat documentation this file must be identical on both nodes. The first field names the node that should normally own the resources:

# vim /etc/ha.d/haresources
node-1.unixserveradmin.com    192.168.3.135 httpd

Note:- You don't have to create an interface and set this IP or make an IP alias; Heartbeat will take care of it automatically.
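As a sketch of how a haresources line breaks down (the parser below is mine, for illustration): the first field is the preferred node, and the remaining fields are the resources, which Heartbeat starts left to right on takeover and stops right to left on release.

```python
def parse_haresources(line):
    # first whitespace-separated field is the preferred node; the rest are
    # resources (here an IP address resource and an init script name)
    fields = line.split()
    return {"node": fields[0], "resources": fields[1:]}

entry = parse_haresources("node-1.unixserveradmin.com 192.168.3.135 httpd")
print(entry["node"], entry["resources"])
```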

14. Now exchange and save authorized keys between node1 and node2

node-1# ssh-keygen -t rsa

node-1# ssh-copy-id -i ~/.ssh/id_rsa.pub 192.168.3.202

node-2# ssh-keygen -t rsa

node-2# ssh-copy-id -i ~/.ssh/id_rsa.pub 192.168.3.201

15. Start Heartbeat service on both nodes:

# /etc/init.d/heartbeat start

# chkconfig heartbeat on

March 7, 2012 | Apache, Cluster