How to install Tomcat6 + Mod_JK

Start by installing Java and jsvc
apt-get update
apt-get install sun-java6-jdk jsvc

Download Tomcat6
cd /usr/src

Extract the archive and move it to /usr/share
tar zxvf apache-tomcat-6.0.29.tar.gz
mv apache-tomcat-6.0.29 /usr/share/tomcat6

Add a tomcat6 user and set ownership
useradd tomcat6
chown -R tomcat6: /usr/share/tomcat6

vi /etc/init.d/tomcat6
#!/bin/sh
export JAVA_HOME=/usr/lib/jvm/java-6-sun

case $1 in
start)
        sh /usr/share/tomcat6/bin/startup.sh
        ;;
stop)
        sh /usr/share/tomcat6/bin/shutdown.sh
        ;;
restart)
        sh /usr/share/tomcat6/bin/shutdown.sh
        sh /usr/share/tomcat6/bin/startup.sh
        ;;
esac
exit 0

chmod +x /etc/init.d/tomcat6

update-rc.d tomcat6 defaults

/etc/init.d/tomcat6 start

wget http://localhost:8080

Configure server.xml
<Server port="8005" shutdown="SHUTDOWN">
  <Listener className="org.apache.catalina.mbeans.GlobalResourcesLifecycleListener" />
  <Listener className="org.apache.catalina.storeconfig.StoreConfigLifecycleListener" />

  <!-- Global JNDI resources -->
  <GlobalNamingResources>
    <!-- Test entry for demonstration purposes -->
    <Environment name="simpleValue" type="java.lang.Integer" value="30"/>
    <!-- Editable user database that can also be used by
         UserDatabaseRealm to authenticate users -->
    <Resource name="UserDatabase" auth="Container"
              type="org.apache.catalina.UserDatabase"
              description="User database that can be updated and saved"
              factory="org.apache.catalina.users.MemoryUserDatabaseFactory"
              pathname="conf/tomcat-users.xml" />
  </GlobalNamingResources>

  <!-- Define the Tomcat Stand-Alone Service -->
  <Service name="Catalina">

    <!-- A "Connector" represents an endpoint by which requests are received
         and responses are returned.  Each Connector passes requests on to the
         associated "Container" (normally an Engine) for processing. -->

    <!-- Define a non-SSL HTTP/1.1 Connector on port 2117 (default 8080) -->
    <!--
    <Connector port="8080" maxHttpHeaderSize="8192"
               maxThreads="150" minSpareThreads="5" maxSpareThreads="75"
               enableLookups="false" redirectPort="8443" acceptCount="100"
               connectionTimeout="20000" disableUploadTimeout="true" />
    -->

    <!-- Define an AJP 1.3 Connector on port 8009 -->
    <Connector port="8009"
               enableLookups="false" redirectPort="8443" protocol="AJP/1.3" />

    <!-- Define a Proxied HTTP/1.1 Connector on port 8082 -->
    <!-- See proxy documentation for more information about using this. -->
    <Connector port="8082"
               maxThreads="150" minSpareThreads="25" maxSpareThreads="75"
               enableLookups="false" acceptCount="100" connectionTimeout="20000"
               proxyPort="80" disableUploadTimeout="true" />

    <!-- An Engine represents the entry point (within Catalina) that processes
         every request.  The Engine implementation for Tomcat stand alone
         analyzes the HTTP headers included with the request, and passes them
         on to the appropriate Host (virtual host). -->

    <!-- Define the top level container in our container hierarchy -->
    <Engine name="Catalina" defaultHost="localhost">

      <Realm className="org.apache.catalina.realm.UserDatabaseRealm"
             resourceName="UserDatabase"/>

      <!-- Define the default virtual host -->
      <Host name="" appBase="/var/www/"
            unpackWARs="true" autoDeploy="true">
        <Context path="" docBase="appname" debug="0" reloadable="true"/>
        <Valve className="org.apache.catalina.valves.AccessLogValve"
               directory="logs" prefix="example.com_access_log." suffix=".txt"
               pattern="common" resolveHosts="false"/>
      </Host>
    </Engine>
  </Service>
</Server>

Install mod_jk
apt-get install libapache2-mod-jk
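Installing the module alone is not enough; Apache still needs a workers definition and a JkMount. A minimal sketch (the file paths and the worker name `ajp13_worker` are assumptions; the port matches the AJP connector on 8009 configured in server.xml above):

```apache
# /etc/libapache2-mod-jk/
worker.ajp13_worker.port=8009

# In the Apache virtual host configuration:
JkMount /* ajp13_worker
```

Restart Apache afterwards so the mounts take effect.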


.htaccess rule to prevent iframe attack


RewriteCond %{QUERY_STRING}
RewriteRule .* - [F]
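Another way to keep a page from being loaded inside an iframe is the X-Frame-Options response header; a sketch, assuming mod_headers is enabled (not a tested drop-in):

```apache
# .htaccess: refuse to be framed by other sites (needs mod_headers)
Header always append X-Frame-Options SAMEORIGIN
```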


Setting Up A Highly Available NFS Server

Credit: falko (

This tutorial shows how to set up a highly available NFS server that can be used as a storage solution for other high-availability services, for example a cluster of web servers that are being load-balanced. If you have a web server cluster with two or more nodes that serve the same web site(s), then these nodes must access the same pool of data so that every node serves the same data, no matter whether the load balancer directs the user to node 1 or node n. This can be achieved with an NFS share on an NFS server that all web server nodes (the NFS clients) can access.

As we do not want the NFS server to become another "Single Point of Failure", we have to make it highly available. In fact, in this tutorial I will create two NFS servers that mirror their data to each other in realtime using DRBD and that monitor each other using heartbeat, and if one NFS server fails, the other takes over silently. To the outside (e.g. the web server nodes) these two NFS servers will appear as a single NFS server.

In this setup I will use Ubuntu 7.10 (Gutsy Gibbon) for the two NFS servers as well as for the NFS client (which represents a node of the web server cluster).

I want to say first that this is not the only way of setting up such a system. There are many ways of achieving this goal but this is the way I take. I do not issue any guarantee that this will work for you.


1 My Setup

In this document I use the following systems:

  • NFS server 1:, IP address:; I will refer to this one as server1.
  • NFS server 2:, IP address:; I will refer to this one as server2.
  • Virtual IP address: I use as the virtual IP address that represents the NFS cluster to the outside.
  • NFS client (e.g. a node from the web server cluster):, IP address:; I will refer to the NFS client as client.
  • The /data directory will be mirrored by DRBD between server1 and server2. It will contain the NFS share /data/export.


2 Basic Installation Of server1 and server2

First we set up two basic Ubuntu systems for server1 and server2.

Regarding the partitioning, I use the following partition scheme:

/dev/sda1 — 100 MB /boot (primary, ext3, Bootable flag: on)
/dev/sda5 — 5000 MB / (logical, ext3)
/dev/sda6 — 1000 MB swap (logical)

/dev/sda7 — 150 MB unmounted (logical, ext3)
(will contain DRBD’s meta data)
/dev/sda8 — 26 GB unmounted (logical, ext3)
(will contain the /data directory)

You can vary the sizes of the partitions depending on your hard disk size, and the names of your partitions might also vary, depending on your hardware (e.g. you might have /dev/hda1 instead of /dev/sda1 and so on). However, it is important that /dev/sda7 is a little larger than 128 MB because we will use this partition for DRBD’s meta data, which uses 128 MB. Also, make sure /dev/sda7 as well as /dev/sda8 are identical in size on server1 and server2, and please do not mount them (when the installer asks you:

No mount point is assigned for the ext3 file system in partition #7 of SCSI1 (0,0,0) (sda).
Do you want to return to the partitioning menu?

please answer No)! /dev/sda8 is going to be our data partition (i.e., our NFS share).

After the basic installation make sure that you give server1 and server2 static IP addresses.

Afterwards, you should check /etc/fstab on both systems. Mine looks like this on both systems:

# /etc/fstab: static file system information.
# <file system> <mount point>   <type>  <options>       <dump>  <pass>
proc            /proc           proc    defaults        0       0
# /dev/hdc5
UUID=d2c6bd54-9952-4236-b597-72bf0d4849ec /               ext3    defaults,errors=remount-ro 0       1
# /dev/hdc1
UUID=e152973d-60da-4b18-a885-7b94109f4d31 /boot           ext3    defaults        0       2
# /dev/hdc6
UUID=d2262fad-6141-45b3-842e-eae876770ec6 none            swap    sw              0       0
/dev/hdb        /media/cdrom0   udf,iso9660 user,noauto,exec 0       0

Also make sure that /dev/sda7 (or /dev/hda7) and /dev/sda8 (or /dev/hda8…) are not listed in /etc/fstab!

2.1 Install Some Software

Now we install a few packages that are needed later on. Run

apt-get install binutils cpp fetchmail flex gcc libarchive-zip-perl libc6-dev libcompress-zlib-perl libdb4.3-dev libpcre3 libpopt-dev lynx m4 make ncftp nmap openssl perl perl-modules unzip zip zlib1g-dev autoconf automake1.9 libtool bison autotools-dev g++ build-essential 

3 Synchronize System Time

It’s important that both server1 and server2 have the same system time. Therefore we install an NTP client on both:


apt-get install ntp ntpdate

Afterwards you can check that both have the same time by running


date

on both servers.

4 Install NFS Server

Next we install the NFS server on both server1 and server2:


apt-get install nfs-kernel-server

Then we remove the system bootup links for NFS because NFS will be started and controlled by heartbeat in our setup:


update-rc.d -f nfs-kernel-server remove
update-rc.d -f nfs-common remove

We want to export the directory /data/export (i.e., this will be our NFS share that our web server cluster nodes will use to serve web content), so we edit /etc/exports on server1 and server2. It should contain only the following line:


vi /etc/exports
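A line of this form could be used here (the export options shown are an assumption; pick what fits your setup):

```
/data/export 192.168.0.0/24(rw,sync,no_root_squash)
```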


This means that /data/export will be accessible by all systems from the 192.168.0.x subnet. You can limit access to a single system by specifying that system's IP address instead of the whole subnet, for example. See

man 5 exports

to learn more about this.
Later in this tutorial we will create /data/export on our empty (and still unmounted!) partition /dev/sda8.


5 Install DRBD

Next we install DRBD on both server1 and server2:


apt-get install linux-headers-2.6.22-14-server drbd8-module-source drbd8-utils
cd /usr/src/
tar xvfz drbd8.tar.gz
cd modules/drbd/drbd
make
make install

Then edit /etc/drbd.conf on server1 and server2. It must be identical on both systems and looks like this:


vi /etc/drbd.conf

global {
    usage-count yes;
}

common {
  syncer { rate 10M; }
}

resource r0 {
  protocol C;
  handlers {
    pri-on-incon-degr "echo o > /proc/sysrq-trigger ; halt -f";
    pri-lost-after-sb "echo o > /proc/sysrq-trigger ; halt -f";
    local-io-error "echo o > /proc/sysrq-trigger ; halt -f";
    outdate-peer "/usr/sbin/drbd-peer-outdater";
  }
  startup { wfc-timeout 0; degr-wfc-timeout 120; }
  disk { on-io-error detach; }
  net {
    after-sb-0pri disconnect;
    after-sb-1pri disconnect;
    after-sb-2pri disconnect;
    rr-conflict disconnect;
  }
  syncer {
  }
  on server1 {
    device      /dev/drbd0;
    disk        /dev/hdc8;
    meta-disk   /dev/hdc7[0];
  }
  on server2 {
    device      /dev/drbd0;
    disk        /dev/sda8;
    meta-disk   /dev/sda7[0];
  }
}

As resource name you can use whatever you like. Here it’s r0. Please make sure you put the correct hostnames of server1 and server2 into /etc/drbd.conf. DRBD expects the hostnames as they are shown by the command

uname -n

If you have set server1 and server2 respectively as hostnames during the basic Ubuntu installation, then the output of uname -n should be server1 and server2.

Also make sure you replace the IP addresses and the disks appropriately. If you use /dev/hda instead of /dev/sda, please put /dev/hda8 instead of /dev/sda8 into /etc/drbd.conf (the same goes for the meta-disk where DRBD stores its meta data). /dev/sda8 (or /dev/hda8…) will be used as our NFS share later on.
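As a quick sanity check that the local hostname really appears in an "on" section of /etc/drbd.conf, here is a small hypothetical helper (not part of DRBD; the grep pattern assumes the layout shown above):

```shell
# Hypothetical helper: succeed if <hostname> has an "on <hostname> {" section
# in the given drbd.conf-style file.
drbd_has_host() {
  grep -q "on[[:space:]][[:space:]]*$1[[:space:]]*{" "$2"
}
```

Run it on each server as, for example, `drbd_has_host "$(uname -n)" /etc/drbd.conf`; a non-zero exit status means the hostname is missing.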


6 Configure DRBD

Now we load the DRBD kernel module on both server1 and server2. We need to do this only now because afterwards it will be loaded by the DRBD init script.


modprobe drbd

Let’s configure DRBD:


drbdadm create-md all


drbdadm up all
cat /proc/drbd

The last command should show something like this (on both server1 and server2):

version: 0.7.10 (api:77/proto:74)
SVN Revision: 1743 build by phil@mescal, 2005-01-31 12:22:07
0: cs:Connected st:Secondary/Secondary ld:Inconsistent
ns:0 nr:0 dw:0 dr:0 al:0 bm:1548 lo:0 pe:0 ua:0 ap:0
1: cs:Unconfigured

You see that both NFS servers say that they are secondary and that the data is inconsistent. This is because no initial sync has been made yet.

I want to make server1 the primary NFS server and server2 the "hot standby". If server1 fails, server2 takes over, and if server1 comes back then all data that has changed in the meantime is mirrored back from server2 to server1 so that data is always consistent.

This next step has to be done only on server1!

server1 (Only):

drbdadm -- --overwrite-data-of-peer primary all

Now we start the initial sync between server1 and server2 so that the data on both servers becomes consistent.

The initial sync is going to take a few hours (depending on the size of /dev/sda8 (/dev/hda8…)) so please be patient.

You can see the progress of the initial sync like this on server1 or server2:


cat /proc/drbd

The output should look like this:

version: 0.7.10 (api:77/proto:74)
SVN Revision: 1743 build by phil@mescal, 2005-01-31 12:22:07
0: cs:SyncSource st:Primary/Secondary ld:Consistent
ns:13441632 nr:0 dw:0 dr:13467108 al:0 bm:2369 lo:0 pe:23 ua:226 ap:0
[==========>.........] sync'ed: 53.1% (11606/24733)M
finish: 1:14:16 speed: 2,644 (2,204) K/sec
1: cs:Unconfigured

When the initial sync is finished, the output should look like this:

SVN Revision: 1743 build by phil@mescal, 2005-01-31 12:22:07
0: cs:Connected st:Primary/Secondary ld:Consistent
ns:37139 nr:0 dw:0 dr:49035 al:0 bm:6 lo:0 pe:0 ua:0 ap:0
1: cs:Unconfigured

7 Some Further NFS Configuration

NFS stores some important information (e.g. information about file locks, etc.) in /var/lib/nfs. Now what happens if server1 goes down? server2 takes over, but its information in /var/lib/nfs will be different from the information in server1‘s /var/lib/nfs directory. Therefore we do some tweaking so that these details will be stored on our /data partition (/dev/sda8 or /dev/hda8…) which is mirrored by DRBD between server1 and server2. So if server1 goes down server2 can use the NFS details of server1.


mkdir /data


server1:

tar cvfz /var/lib/nfs.tar.gz /var/lib/nfs
mount -t ext3 /dev/drbd0 /data
mv /var/lib/nfs/ /data/
ln -s /data/nfs/ /var/lib/nfs
mkdir /data/export
umount /data


server2:

tar cvfz /var/lib/nfs.tar.gz /var/lib/nfs
rm -fr /var/lib/nfs/
ln -s /data/nfs/ /var/lib/nfs


8 Install And Configure heartbeat

heartbeat is the control instance of this whole setup. It is going to be installed on server1 and server2, and it monitors the other server. For example, if server1 goes down, heartbeat on server2 detects this and makes server2 take over. heartbeat also starts and stops the NFS server on both server1 and server2. It also provides NFS as a virtual service via the IP address so that the web server cluster nodes see only one NFS server.

First we install heartbeat:


apt-get install heartbeat

Now we have to create three configuration files for heartbeat. They must be identical on server1 and server2!


vi /etc/heartbeat/

logfacility daemon
keepalive 2
deadtime 10
udpport 694
bcast eth0
node server1
node server2

As nodenames we must use the output of uname -n on server1 and server2.


vi /etc/heartbeat/haresources

server1 IPaddr:: drbddisk::r0 Filesystem::/dev/drbd0::/data::ext3 nfs-kernel-server

The first word is the output of uname -n on server1; use server1 on both servers. After IPaddr we put our virtual IP address, and after drbddisk the name of our DRBD resource, which is r0 here (remember, that is the resource name we used in /etc/drbd.conf; if you used another one, you must use it here, too).


vi /etc/heartbeat/authkeys

auth 3
3 md5 somerandomstring

somerandomstring is a password which the two heartbeat daemons on server1 and server2 use to authenticate against each other. Use your own string here. You have the choice between three authentication mechanisms. I use md5 as it is the most secure one.
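One way to produce such a random string is to read it from /dev/urandom; a sketch (any other source of randomness works just as well, and the output format matches the authkeys example above):

```shell
# Generate a random shared secret for /etc/heartbeat/authkeys
# and print the file in the format used above.
secret=$(head -c 16 /dev/urandom | od -An -tx1 | tr -d ' \n')
echo "auth 3"
echo "3 md5 ${secret}"
```

Redirect the output into /etc/heartbeat/authkeys on both servers so they share the same secret.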

/etc/heartbeat/authkeys should be readable by root only, therefore we do this:


chmod 600 /etc/heartbeat/authkeys  

The heartbeat Filesystem resource script does not run correctly on Ubuntu Gutsy; heartbeat complains:

Filesystem ::/dev/drdb0::/data:: ext3 ::default probably not LSB-compliant

To fix it, edit this file:


vi /usr/lib/ocf/resource.d/heartbeat/Filesystem

@@ -475,7 +475,7 @@ Filesystem_notify() {
# already on the active list, confusing the
# script later on:
for UNAME in "$n_active"; do
- n_start="${n_start//$UNAME/}"
+ n_start=`echo ${n_start} | sed s/$UNAME//`
# Merge pruned lists again:
n_active="$n_active $n_start"
@@ -488,7 +488,7 @@ Filesystem_notify() {
# remove unames from notify_stop_uname; these have been
# stopped and can no longer be considered active.
for UNAME in "$n_stop"; do
- n_active="${n_active//$UNAME/}"
+ n_active=`echo ${n_active} | sed s/$UNAME//`

Finally we start DRBD and heartbeat on server1 and server2:


/etc/init.d/drbd start
/etc/init.d/heartbeat start

9 First Tests

Now we can do our first tests. On server1, run


ifconfig

In the output, the virtual IP address should show up.

Also, run


df -h

on server1. You should see /data listed there now.

If you do the same


df -h

on server2, you shouldn’t see the virtual IP address or /data.

Now we create a test file in /data/export on server1 and then simulate a server failure of server1 (by stopping heartbeat):


touch /data/export/test1
/etc/init.d/heartbeat stop

If you run ifconfig and df -h on server2 now, you should see the IP address and the /data partition, and


ls -l /data/export

should list the file test1 which you created on server1 before. So it has been mirrored to server2!

Now we create another test file on server2 and see if it gets mirrored to server1 when it comes up again:


touch /data/export/test2


/etc/init.d/heartbeat start

(Wait a few seconds.)

df -h
ls -l /data/export

You should see the virtual IP address and /data again on server1, which means it has taken over again (because we defined it as primary), and you should also see the file /data/export/test2!


10 Configure The NFS Client

Now we install NFS on our client:

apt-get install nfs-common

Next we create the /data directory and mount our NFS share into it:

mkdir /data
mount /data

The mount source is the virtual IP address we configured before. You must make sure that the forward and the reverse DNS record for the client match each other, otherwise you get a "Permission denied" error on the client, and on the server you’ll find this in /var/log/syslog:

#Mar  2 04:19:09 localhost rpc.mountd: Fake hostname localhost for – forward lookup doesn’t match reverse

If you do not have proper DNS records (or do not have a DNS server for your local network) you must change this now, otherwise you cannot mount the NFS share!
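A quick way to check whether forward and reverse DNS for a hostname agree is this hypothetical helper (not part of NFS; it uses getent, so /etc/hosts entries count as well):

```shell
# Hypothetical check: do forward and reverse DNS for a hostname agree?
dns_roundtrip_ok() {
  ip=$(getent hosts "$1" | awk '{print $1; exit}')
  [ -n "$ip" ] || return 1
  name=$(getent hosts "$ip" | awk '{print $2; exit}')
  [ "$name" = "$1" ]
}
```

Run it on the NFS servers with the client's hostname, e.g. `dns_roundtrip_ok client || echo "DNS mismatch"`.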

If it works you can now create further test files in /data on the client and then simulate failures of server1 and server2 (but not both at a time!) and check if the test files are replicated. On the client you shouldn’t notice at all if server1 or server2 fails – the data in the /data directory should always be available (unless server1 and server2 fail at the same time…).

To unmount the /data directory, run

umount /data

If you want to automatically mount the NFS share at boot time, put the following line into /etc/fstab:

 /data    nfs          rw            0    0


Speed up a slow Proftpd connection

When you make a connection to your server, does it take forever? If so then you might be experiencing ProFTPd’s attempt at doing a reverse DNS lookup.

To remedy this problem, we are going to add a few lines to the proftpd configuration file for the Ensim webppliance.

Use your favourite text editor (I use pico here, as most beginner-level people do) to modify the following file:


then simply add the following lines:

UseReverseDNS off
DefaultRoot ~
ServerIdent on "FTP Server ready."
IdentLookups off

Note: the UseReverseDNS off does NOT work within the <Global></Global> tag!

Now restart ProFTPd:

/etc/init.d/proftpd restart

Then try to log into your server again. A connection should be made almost instantly.

Reference :

Thai national ID number validation

function checkThaiID() {
 //// **** START
 var tssn = document.checkregister.tssn.value;
 if (tssn.length != 13) {
  return false;
 }
 var x = new Array(13);
 var sumX = 0;
 for (var q = 0; q < 13; ++q) {
  var codee = tssn.charCodeAt(q);
  if (codee < 48 || codee > 57) { // not a digit 0-9
   return false;
  }
  x[q] = codee - 48;
  if (q != 12) {
   sumX += x[q] * (13 - q); // weighted sum of the first 12 digits
  }
 } // end for
 // check digit: c == 0 -> 1, c == 1 -> 0, otherwise 11 - c
 var c = sumX % 11;
 var g;
 if (c == 0) g = 1;
 else if (c == 1) g = 0;
 else g = 11 - c;
 if (x[12] != g) {
  document.checkregister.send.disabled = true;
  return false;
 }
 return true;
 //// ***** End
}
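The same check-digit rule can be exercised outside the browser; a standalone sketch with no DOM access (the sample ID used in testing is synthetic, not a real person's number):

```javascript
// Standalone Thai national-ID check: the first 12 digits are weighted
// 13..2 and summed; the check digit equals (11 - sum % 11) mod 10,
// which is the same c == 0 / c == 1 / 11 - c rule used above.
function validThaiID(tssn) {
  if (!/^[0-9]{13}$/.test(tssn)) return false;
  var sum = 0;
  for (var q = 0; q < 12; ++q) {
    sum += (tssn.charCodeAt(q) - 48) * (13 - q);
  }
  var c = sum % 11;
  var g = (c === 0) ? 1 : (c === 1) ? 0 : 11 - c;
  return (tssn.charCodeAt(12) - 48) === g;
}
```

This version takes the ID as an argument instead of reading a form field, so it can be unit-tested in Node or reused server-side.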

Protection for your SSH

To provide some level of protection for SSH against dictionary attacks: I started having serious problems with dictionary attacks on a Mandrake server I was running. I highly recommend using public key authentication. It is much more secure. Just type "openssh key authentication" into Google and you’ll get plenty of information. It is pretty easy to do if you basically know your way around Linux.

Also, you should take advantage of AllowUsers in /etc/ssh/sshd_config. Only allow the users you want, that way nobody can get in through some strange user account you didn’t even know about that has a weak password or no password at all. Type "man sshd_config" for more information.
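For example, the two suggestions combined might look like this in /etc/ssh/sshd_config (the user names are placeholders; a sketch, not a complete configuration):

```
PasswordAuthentication no
PubkeyAuthentication yes
AllowUsers alice bob
```

Reload sshd after editing, and keep an existing session open while you test so a typo cannot lock you out.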

After I started using these two things, attacks on my ssh server basically stopped or were completely ineffective.

Reference :