finally: migrated!

I finally had the time and chance to migrate the site to a new server, whew. There were a few quirks along the way (it happens, you know); one of them was permalinks acting up. I realized I had missed transferring the .htaccess file, which contains the following lines:

# BEGIN WordPress
<IfModule mod_rewrite.c>
RewriteEngine On
RewriteBase /
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule . /index.php [L]
</IfModule>
# END WordPress

After the changes were done I restarted apache, but the same thing kept happening: 404 not found on the friendly URL links. Something had to be misconfigured, either in wordpress or in apache itself. My suspicion was that .htaccess wasn't being read at all. I read up on how to make apache honor .htaccess files and found out that I needed to change this snippet:

<Directory />
Options FollowSymLinks
AllowOverride None
</Directory>

To this:

<Directory />
Options FollowSymLinks
AllowOverride All
</Directory>
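Before blaming wordpress again, it's worth a quick check that mod_rewrite is loaded and that the config still parses; a rough sketch, assuming the stock CentOS apache layout:

# mod_rewrite needs its LoadModule line for .htaccess rewrite rules to work
grep -i "^LoadModule rewrite" /etc/httpd/conf/httpd.conf
# make sure the config still parses before reloading
apachectl configtest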

Reloaded apache and the permalinks were all working again. 1 point for the geek who knows how to google.

The 2nd issue I encountered was the sql dump file I got from the old site. Restoring it didn't require much work, but when I was viewing my site I kept seeing npc's (non-printable chars), the char Â to be specific. I tried using dos2unix to convert the dump file hoping those characters would get removed, to no avail. Tried unix2dos too, duh, same thing. Finally I just vi'd it and removed the chars with :%s/Â//g. w00t. Another point for that geek who spends so much time in front of his pc.
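If you'd rather not open a big dump in vi, the same cleanup can be done non-interactively; a sketch, assuming the stray character really is that mis-encoded Â and the dump is called dump.sql (a placeholder name):

# strip the stray character in place
sed -i 's/Â//g' dump.sql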

What's next on the todo list? I've got some stuff I should really be posting; it's just that I've never been excellent at documenting what I do. I still try to overcome that, and sometimes I prevail lol.

Good times.


http proxy for icecast

Let's say you have virtual hosting configured on your server and you want your icecast server to be reachable at a friendly hostname instead of

http://starvingpenguin.com:4200

that is, at:

http://stream.starvingpenguin.com

Add this to your httpd configuration:

<VirtualHost *:80>
ServerAdmin admin@starvingpenguin.com
ServerName stream.starvingpenguin.com

SetEnv downgrade-1.0 1
SetEnv force-response-1.0 1

ProxyRequests Off

ProxyPass / http://stream.starvingpenguin.com:4200/
ProxyPassReverse / http://stream.starvingpenguin.com:4200/

<Location />
Order allow,deny
Allow from all
</Location>
</VirtualHost>

Neat huh? Detailed info can be found here: http://tinyurl.com/5h3ktm
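One gotcha: the ProxyPass/ProxyPassReverse directives only work if apache's proxy modules are loaded, otherwise apache refuses to start with an unknown directive error. A quick check, assuming the stock CentOS apache layout:

# mod_proxy and mod_proxy_http both need a LoadModule line
grep -i "^LoadModule proxy" /etc/httpd/conf/httpd.conf
apachectl configtest && apachectl graceful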


icecast startup script

#!/bin/bash
#
# Startup script for icecast
#
# chkconfig: 2345 86 25
# description: Icecast Streaming Server
#
# processname: icecast
# config: /etc/icecast.xml
# pidfile: /usr/share/icecast/icecast.pid

. /etc/rc.d/init.d/functions

RETVAL=0
PIDFILE=/usr/share/icecast/icecast.pid
CONF=/etc/icecast.xml
ICECAST=/usr/bin/icecast
OPTS="-b -c $CONF"

start()
{
echo -n $"Starting icecast"
daemon icecast $OPTS
RETVAL=$?
[ "$RETVAL" = 0 ] && touch /var/lock/subsys/icecast
echo
pidof icecast >$PIDFILE
return $RETVAL
}

stop()
{
echo -n $"Stopping icecast"
killproc icecast -TERM
RETVAL=$?
[ "$RETVAL" = 0 ] && rm -f /var/lock/subsys/icecast
echo
rm -f $PIDFILE
return $RETVAL
}

reload()
{
echo -n $"Reloading icecast"
killproc icecast -HUP
RETVAL=$?
echo
return $RETVAL
}

condrestart()
{
[ -e /var/lock/subsys/icecast ] && restart
return 0
}

case "$1" in
start)
start
;;
stop)
stop
;;
restart)
stop
# wait for listening sockets to clear
echo "Waiting 5 seconds before restarting..."
sleep 5
start
;;
reload)
reload
;;
condrestart)
condrestart
;;
status)
status $ICECAST
RETVAL=$?
;;
*)
echo $"Usage: $0 {start|stop|restart|reload|condrestart|status}"
RETVAL=1
esac
exit $RETVAL
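To put the script to use, save it as /etc/init.d/icecast (my assumption; the chkconfig header above is written for the usual init.d layout), make it executable and register it:

chmod 755 /etc/init.d/icecast
chkconfig --add icecast
chkconfig icecast on
service icecast start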


playing with playlists. no pun intended.

suppose you have a playlist that you always use to stream audio content, but at times it gets boring to always listen to the same sequence of songs. what if you wanted your stream to play a commercial after every 2 or so songs? what if you want to shuffle your songs at random every time the last song on the playlist is heard? what if you want the same for your commercials? the wait is over, "playlist manipulator v432.65" is finally here, for free, for everyone, yadayada

I've been playing around with playlists lately. Sometimes I want to play albums/songs in a non-sequential manner, and doing that would be easy: I'd just randomize the .m3u file every time it's done being played. But I wanted something more. I thought of fictitious commercials like the ones played in the GTA games (in game, when you enter a vehicle you can listen to the radio, which plays songs and commercials in between, not to mention hilarious commentaries from the DJs). So yeah, why not play them commercials in between songs, like after every 2 or 3 songs. Do them in random too, I say.

There's a lot of ways to accomplish such a task; here's how I did it:

-=BEGIN SCRIPT=-

#!/bin/bash

staging=/usr/local/music/staging
ezstream=/usr/local/ezstream/bin/ezstream
cd /usr/local/music/playlists

while :
do

#remove old playlists
rm -f /usr/local/music/staging/*

#create playlist for individual album, just so you have one for future reference/use
#do this manually, copy created playlist(with .m3u extension) in /usr/local/music/playlists folder
#to create a fresh ordered playlist for each album:
#for i in `ls /usr/local/music/albums`;do find /usr/local/music/albums/$i -name *.mp3 \
#-print >/usr/local/music/playlists/$i.m3u;done;

#exclude commercials.m3u
cat *.m3u |grep -v commercials >$staging/songs.m3u

#randomize songs
/usr/local/bin/sort -R $staging/songs.m3u > $staging/random-songs.m3u

#randomize commercials
/usr/local/bin/sort -R commercials.m3u >$staging/random-commercials.m3u

#in some cases, depending on which distro you're using, your "sort" command won't have the
#-R (randomize) option. Just like in my case (centos 5.4). You have 2 options here:
#1) use ezstream's -s option to randomize the playlist
# ezstream -s songs.m3u >random-songs.m3u
#2) download the source for coreutils, compile then install. be sure not to overwrite your rpm
# coreutils package's binaries unless you know what you're doing. To do so, just use an install prefix when
# configuring: ./configure --prefix=/usr/local

#merge random-commercials.m3u and random-songs.m3u
#such that 1 commercial is played after every 2 songs
awk 'FNR==NR{
song[FNR]=$0;
next
}
{
print song[FNR+line];line++;
print song[FNR+line]
print $0
}' $staging/random-songs.m3u $staging/random-commercials.m3u \
>/usr/local/music/playlist.m3u

#feed the playlist to ezstream, its config file is now set to stream the playlist only once.
$ezstream -c /usr/local/music/ezstream_mp3_conf

done

-=END SCRIPT=-

The script loops infinitely; every time it loops it creates a new randomized list of songs and commercials.
Sample data would be something like this:

The /usr/local/music/playlists folder contains individual playlists for the various albums; the playlist for the commercials is also in that folder:
albumA.m3u – is a playlist of albumA
sample lines would be:
/usr/local/music/albums/albumA/albumA-song1.mp3
/usr/local/music/albums/albumA/albumA-song2.mp3
albumB.m3u – is a playlist of albumB
sample lines:
/usr/local/music/albums/albumB/song1.mp3
/usr/local/music/albums/albumB/song2.mp3
commercials.m3u – playlist of commercials
/usr/local/music/albums/commercials/commercial1.mp3
/usr/local/music/albums/commercials/commercial2.mp3

Once randomized and merged, the playlist to be streamed would look similar to this:
/usr/local/music/albums/albumB/song2.mp3
/usr/local/music/albums/albumA/albumA-song1.mp3
/usr/local/music/albums/commercials/commercial2.mp3
/usr/local/music/albums/albumA/albumA-song2.mp3
/usr/local/music/albums/albumB/song1.mp3
/usr/local/music/albums/commercials/commercial1.mp3

The sequence changes every time the script loops, and it loops whenever ezstream is done playing the current list.
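To keep the loop alive after logging out, one option is nohup; a sketch, assuming the script above was saved as /usr/local/bin/radio-loop.sh (a made-up name):

nohup /usr/local/bin/radio-loop.sh >/var/log/radio-loop.log 2>&1 &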


online radio station

goal: create my own shoutcast-like audio streaming server

apps used: icecast and ezstream

Just got hold of a vps I can finally play with, w00t. And it's just now that I've had the time to do something with it. I'm thinking of migrating this blog there one of these days. The first thing I thought of doing is setting up my own online radio station. The first app that came to mind was shoutcast (shoutcast.com), but it's not opensource =/. Googling around, I discovered icecast (icecast.org). There are 2 major components in an online radio station: a streaming server (which is icecast here) and a source client (there are various apps that can be used for this). The source client may reside on a different computer from the streaming server, but in my setup they're gonna be on the same machine.

The 1st one is where your listeners connect to hear the music you're broadcasting; the latter is what you use to broadcast your content. Think of it this way: the streaming server is the radio station 95.5. Whenever you want to listen to that station you turn on your radio and jog the dial over to it. The source client is your dj; he says stuff and plays songs on air, using the radio station as his medium to let the listeners hear what he has playing. The dj doesn't even have to be physically present in the radio station's building to broadcast through it. Basically that's how it works, although it can do a lot more than what I intend to use it for. At this time I only want to broadcast a specific playlist.

First step was to install icecast. I downloaded the source rpm from:

http://icecast.org/download.php

Ran rpmbuild --rebuild against the .src.rpm file and installed the needed dependencies. Once it was done building, I installed it, edited the config file and ended up with this config

(/etc/icecast.xml):

<icecast>
<limits>
<clients>100</clients>
<sources>2</sources>
<threadpool>5</threadpool>
<queue-size>524288</queue-size>
<client-timeout>30</client-timeout>
<header-timeout>15</header-timeout>
<source-timeout>10</source-timeout>
<burst-on-connect>1</burst-on-connect>
<burst-size>65535</burst-size>
</limits>
<authentication>
<source-password>xtremeleetpassw0rd</source-password> <!-- for the source client -->
<relay-password>anotherleetpassw0rd</relay-password>
<admin-user>admin</admin-user>
<admin-password>notsoleetpassw0rd</admin-password>
</authentication>
<hostname>starvingpenguin.com</hostname>
<listen-socket>
<port>8000</port>
</listen-socket>
<fileserve>1</fileserve>
<paths>
<basedir>/usr/share/icecast</basedir>
<logdir>/var/log/icecast</logdir>
<webroot>/usr/share/icecast/web</webroot>
<adminroot>/usr/share/icecast/admin</adminroot>
<alias source="/" dest="/status.xsl"/>
</paths>
<logging>
<accesslog>access.log</accesslog>
<errorlog>error.log</errorlog>
<loglevel>3</loglevel>
<logsize>10000</logsize>
</logging>
<security>
<chroot>0</chroot>
<changeowner>
<user>nobody</user>
<group>nobody</group>
</changeowner>
</security>
</icecast>

After config was done, ran icecast with:

icecast -bc /etc/icecast.xml

Setup of the 1st component is done; now comes the 2nd part, which is installing a source client app. I chose ezstream because I, uh, like the name, that's why. The source code can be downloaded from the icecast site.

./configure, make, then make install. I then had to create a config file for ezstream; sample configs can be found in /usr/local/share/examples/ezstream. I copied the sample ezstream_mp3.xml and made some changes to satisfy what I needed to do.

vi
<ezstream>
<url>http://localhost:8000/stream</url>
<!-- your streaming server's http://ip-or-host:port/stream -->
<sourcepassword>xtremeleetpassw0rd</sourcepassword>
<!-- refer to your icecast.xml's <source-password> -->
<format>MP3</format>
<filename>playlist.m3u</filename> <!-- this still needs to be created -->
<stream_once>0</stream_once> <!-- when the playlist in <filename> is done being streamed, start over -->
<svrinfoname>pilip's Stream</svrinfoname>
<svrinfourl>http://www.starvingpenguin.com</svrinfourl>
<svrinfogenre>anything</svrinfogenre>
<svrinfodescription>music that i mostly listen to</svrinfodescription>
<svrinfobitrate>128</svrinfobitrate>
<svrinfochannels>2</svrinfochannels>
<svrinfosamplerate>44100</svrinfosamplerate>
<svrinfopublic>0</svrinfopublic>
</ezstream>
:wq /usr/local/music/my_first_stream.xml

Next is to create the playlist "playlist.m3u". My audio files are stored in /usr/local/music, so one way to do that would be:

find /usr/local/music -name '*.mp3' >/usr/local/music/playlist.m3u

The file should now contain the filenames of my audio files along with their absolute paths.

To start streaming and have it run in the background:

ezstream -c /usr/local/music/my_first_stream.xml &

To access this stream, you can use any of the media players listed here: http://www.icecast.org/3rdparty.php. MS Windows' own media player works too, btw. The url of the stream would then be: http://starvingpenguin.com:8000/stream
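For a quick sanity check from a shell before firing up a player (assuming curl is available), grab just the response headers; you should get a 200 with an audio/mpeg content type:

curl -s -D - -o /dev/null --max-time 2 http://starvingpenguin.com:8000/stream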

goal completed.

TODOs:
-create start/stop script for icecast


the $? variable

my encounter with this variable hasn't been very pleasant. i've been trying to debug a snippet of shell script similar to this:

#!/bin/bash
Test()
{
var=3
return $var
}
Test2()
{
Test
echo this is the return value $?
echo this is the return value again $?
}
#it all starts here
Test2

The output would then be:
this is the return value 3
this is the return value again 0

Instead of 3 and 3, which is what I was expecting. It took a while to figure this out. The actual snippet was this:

select_server_root_folder()
{
choice='server'

#display a menu
echo "select root folder to restore"
echo ""
echo "4. main menu"

read ans grbg
case $ans in
1)      folder='usr';;
2)      folder='etc';;
3)      folder='var';;
4)      main_menu;;
esac

#return value of function will be either 0 or 1
select_incremental_or_full $choice $folder
#the echo below was used during testing to see if the value being returned by the function above was 1 or 0.
#the return value was just as expected/chosen.
echo return value is $?
#the condition statements below were "working", at least i thought they were, because i'd only been testing the
#1st if statement. as it turns out $? holds the exit status of the most recent command, so the echo above
#(which succeeds) had already overwritten it with 0 by the time these tests ran.
if [ $? -eq 0 ]
then
: # do stuff here
fi
if [ $? -eq 1 ]
then
: # do other stuff here
fi
#this echo is where i figured out that you can only use $? once; it holds a function's return value only until
#the next command runs. it was always displaying 0 and 0 only.
echo $?

}

to work around this, i saved the value of $? into another variable right after the function call (var=$?) and used that variable in the checks instead.
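a minimal sketch of that workaround, reusing the function from the snippet above:

select_incremental_or_full $choice $folder
rc=$?   # grab the return value immediately, before any other command can overwrite $?
if [ $rc -eq 0 ]
then
: # do stuff here
fi
if [ $rc -eq 1 ]
then
: # do other stuff here
fi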

note to self: good times ;]

note#2: found


sync data between clustered servers

HOW TO SYNC DATA BETWEEN CLUSTERED WEB SERVERS

Suppose you have a cluster of web servers being load balanced; the data on these servers has to be identical. One way of doing that is to upload new content to every web server whenever something changes. Wouldn't it be better if you only needed to update a single web server and the new content was then propagated to all the others? This can be done with rsync, along with ssh to keep it secure.

Let's choose one of the web servers in the cluster as the main server, the only one you'll be updating content on. I choose you, www1.myhomelab.net, one of the servers being loadbalanced for www.myhomelab.net. The other server I'll be propagating updates to is www2.myhomelab.net.

www1.myhomelab.net – the main server where updates/changes are to be made
www2.myhomelab.net – will get synchronized with www1.

The folder where all my web content lives is /var/www/html. Normally you can run this command from the www1 server to sync www2 with www1:

[root@www1]# rsync -ave "ssh -l root" --delete /var/www/ www2:/var/www/

It will then ask for root's password (because of ssh -l root); enter it, press enter, and you'll get output similar to this:
root@www2's password:
building file list … done
html/
html/favicon.ico
html/testdb.php
html/asdf/
html/asdf/.form.php.swp
html/asdf/del.php
html/asdf/form.php
html/asdf/view.php
html/asdf/db_stuff/
html/asdf/db_stuff/db_close.php
html/asdf/db_stuff/db_config.php
html/asdf/db_stuff/db_open.php

sent 30873 bytes  received 368 bytes  6942.44 bytes/sec
total size is 1121503  speedup is 35.90

The contents of /var/www/html on both servers are now identical. Sounds good, right? Just save the command to a script, schedule it to run at a specific interval and you're set. But wait, there's more: you're doing the upload as root, and that doesn't sound good. You should use a non-admin user for the uploads instead. I'm gonna assign the user "pilip", which exists on www2, to do the uploading, but first make sure that user has write access to the folder that'll be synchronized.
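One way to give pilip that write access on www2 is a blunt ownership change (adjust this if apache needs different ownership on your setup):

[root@www2]# chown -R pilip /var/www

Once permissions have been taken care of, the same sync can now run as pilip: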

[root@www1]# rsync -ave "ssh -l pilip" --delete /var/www/ www2:/var/www/

The next thing to do is to cron this. But wait, there's more: once it's in cron it has to be fully automated, and with the command above you'll still be asked for pilip's password every time it runs. To get around that, set up a passwordless ssh connection. On www1, as any user (the same user that will later run the rsync command on a schedule), run this to create your ssh key pair:

[anyuser@www1]$ ssh-keygen -t rsa

When asked to enter or change the default settings, just press enter. Same when asked for a passphrase: leave it empty, that's what makes the ssh connection passwordless. Then, to copy your ssh public key over to www2, issue this command:

[anyuser@www1]$ ssh-copy-id -i ~/.ssh/id_rsa.pub pilip@www2

Enter pilip's password when asked; from then on, whenever you ssh to www2 as pilip you'll log on automatically without being asked for a password. neat? great.
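To confirm the key took, this should print www2's hostname without any password prompt:

[anyuser@www1]$ ssh pilip@www2 hostname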

Then schedule it to run at a specific interval; I set mine to run rsync every couple of minutes while testing. Using cron, run:

[anyuser@www1]$ crontab -e

Then enter this:

*/2 * * * * rsync -ave "ssh -l pilip" --delete /var/www/ www2:/var/www/

Save and exit; the crontab entry above will run the rsync command every 2 minutes. That interval is only for testing purposes and will probably have to be longer, depending on how often the site gets updated.

good times.


high availability and load balancing

OBJECTIVE: PROVIDE HIGH AVAILABILITY AND LOAD BALANCED WEB SERVER
==================
it's like this: i'm the guru. i have many monkeys who do computing, but people don't know about them. i get computing requests from people, and instead of solving them myself i pass the problem over to my monkeys. when they're done solving it, they give me the solution and i pass it along to whoever needed it, and that person never knows one of my monkeys did the actual problem solving.

to be more specific: i'm a guru and i have groups of monkeys. each group solves problems specific to their expertise (problems related to chemistry, biology and what not). somebody asks me about "the meaning of life"; without that somebody's knowledge or awareness, i ask the group of monkeys whose expertise lies in answering such queries and they give me an answer. i hand the answer to the somebody who asked. i tell him "it's 42". he replies "thank you, you're so good". i answer back "i know".

now for another scenario: i'm a guru, but like all mortals i have to leave my post now and then to sleep, eat and drink. since i'm a kind guru, i try to be 100% available to those who need help, so the plan is to have someone cover for me while i'm away. that's where my assistant comes in. he's always waiting to take my place, put on a disguise so he looks exactly like me, and do my job (what an opportunist, huh?), and the people never even know it's him. when i go afk all of a sudden i don't have to tell him what to do: i leave him my staff (the thing that symbolizes who the guru is), he puts on the disguise, attends to the questions from the people, passes them to whichever group of monkeys specializes in that category, and when he gets an answer from the monkeys he passes it back to the person who asked.

when the guru gets back from wherever he went, the assistant ("oh noes, the master is back") takes off the guru disguise and returns the staff to his master.

that is loadbalancing and high availability explained to mere mortals.
=============

for this project, we’re gonna need at least 2 (in real world application, more than 2 is better) linux directors which will handle the requests and manage the forwarding of these requests to the real servers.

the 2 linux directors should at least have 2 nics.

OS: centos 5.4 (any distro you’re comfy with will do, even ubuntu *cough* noob)

for the directors, eth0 will face the external network and eth1 the internal network facing the real servers.

master linux director:
eth0: 10.10.10.19
hostname(eth0): loadb1.myhomelab.net
eth1: 192.168.0.19

slave linux director:
eth0: 10.10.10.20
hostname(eth0): loadb2.myhomelab.net
eth1: 192.168.0.20

the virtual ip addresses will be the following:
external: 10.10.10.21
hostname(ext): www.myhomelab.net

internal: 192.168.0.21
this internal ip will be used as the gateway of the real servers.

make sure the following packages are installed:
heartbeat
heartbeat-ldirectord

install them if they’re not:
[root@]# yum -y install heartbeat heartbeat-ldirectord

you’ll need the following files present in “/etc/ha.d”(for both directors):
authkeys – authentication file, must be identical on both slave and masta
ha.cf – configures the nodes that will act as directors.
haresources – contains the resources that get moved between the master and slave director (e.g. the virtual ip addresses)
ldirectord.cf – contains the servers/services to be load balanced

heartbeat:
sample configs with explanation can be found in “heartbeat” docs(rpm -qd heartbeat)
/usr/share/doc/heartbeat-[version]/ha.cf
/usr/share/doc/heartbeat-[version]/haresources
/usr/share/doc/heartbeat-[version]/authkeys

heartbeat-ldirectord:
sample configs with explanation can be found in ldirectord’s docs(rpm -qd heartbeat-ldirectord)
/usr/share/doc/heartbeat-[version]/ldirectord.cf

my sample configs:
authkeys (chmod this to 600, else heartbeat won’t start)
auth 2
2 sha1 monkey

ha.cf
logfacility    local0
mcast eth0 225.0.0.1 694 1 0
mcast eth1 225.0.0.1 694 1 0
auto_failback on
node loadb1.myhomelab.net
node loadb2.myhomelab.net

for the node names in ha.cf, run "uname -n" on both directors and replace the corresponding values above. if the output is "localhost.localdomain", change your machine's hostname by doing any of the following:
1. execute "hostname loadb1.myhomelab.net" on the 1st director, "hostname loadb2.myhomelab.net" on the 2nd. duh.
2. edit /etc/sysconfig/network and key in this, w/o the quotes obviously (duh): "HOSTNAME=loadb1.myhomelab.net"
do the obvious for the 2nd director too.

haresources
loadb1.myhomelab.net \
LVSSyncDaemonSwap::master \
ldirectord::ldirectord.cf \
IPaddr2::10.10.10.21/24/eth0:0 \
IPaddr2::192.168.0.21/24/eth1:0

ldirectord.cf
checktimeout=3
checkinterval=1
autoreload=yes
quiescent=yes
virtual=10.10.10.21:80
        real=192.168.0.11:80 masq
        real=192.168.0.15:80 masq
        fallback=127.0.0.1:80
        service=http
        request="test.html"
        receive="testing"
        scheduler=rr
        protocol=tcp
        checktype=negotiate

the director will try to connect to each real server (192.168.0.11 and .15), request the file "test.html" and look for the string "testing". if the string isn't found, that server gets kicked out of the pool of real servers, else it stays. it gets added back once the check succeeds again.
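for that check to ever succeed, each real server has to actually serve that file; a quick way to create it, assuming the default apache docroot on the real servers:

[root@realserver]# echo "testing" > /var/www/html/test.html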

[do these operations on both directors]

ldirectord should not run standalone; it should be run by heartbeat itself. to make sure of that, run:
[root@loadb1]# service ldirectord stop
[root@loadb1]# chkconfig --del ldirectord

start up heartbeat and config it to run at boot:

[root@loadb1]# chkconfig heartbeat on
[root@loadb1]# service heartbeat start

Now, to check that both directors are communicating with each other, stop heartbeat on the master director:
[root@loadb1]# service heartbeat stop

check the logs on the slave server(loadb2):
[root@loadb2]# tail -f /var/log/messages

you should see information that the slave is taking control of the virtual ip resources 192.168.0.21 and 10.10.10.21. after a few secs, check if they’re already up on the slave:

[root@loadb2]# ifconfig eth0:0
eth0:0    Link encap:Ethernet  HWaddr 08:00:27:54:75:A2
inet addr:10.10.10.21  Bcast:10.10.10.255  Mask:255.255.255.0
UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
Interrupt:10 Base address:0xd020
[root@loadb2]# ifconfig eth1:0
eth1:0    Link encap:Ethernet  HWaddr 08:00:27:BF:49:CC
inet addr:192.168.0.21  Bcast:192.168.0.255  Mask:255.255.255.0
UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
Interrupt:9 Base address:0xd240

you should see something similar above. now, bring heartbeat back up on loadb1.

to check if heartbeat’s in control of ldirectord, run (on current active director):

[root@]# /etc/ha.d/resource.d/ldirectord /etc/ha.d/ldirectord.cf status
ldirectord for /etc/ha.d/ldirectord.cf is running with pid: 11198

to check the status of the real servers and their services:
[root@]# ipvsadm -L
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  10.10.10.21:http rr
-> 192.168.0.11:http            Masq    1      0          0
-> 192.168.0.15:http            Masq    1      0          0

to check if the synchronization daemon for LVS is running, run this command on both directors:

[root@]# /etc/ha.d/resource.d/LVSSyncDaemonSwap master status

The output on the current active director should be:
“master running”
And this on the slave or standby:
“master stopped”

NOW TO CONFIGURE THE CONNECTION OF THE DIRECTORS WITH THE REAL SERVERS; THE REAL SERVERS' TRAFFIC WILL BE MASQUERADED VIA 192.168.0.21 (this is the internal virtual ip on the directors):

[perform the following operations on both directors]

ENABLE PACKET FORWARDING ON DIRECTORS:
edit /etc/sysctl.conf and make this change:
net.ipv4.ip_forward = 1

save and exit then run this for the changes to take effect:

[root@]#sysctl -p

configure NAT using iptables
[root@]# iptables -t nat -A POSTROUTING -j MASQUERADE -s 192.168.0.0/24
[root@]# /etc/init.d/iptables save

to verify that the rules are in play:

[root@]# iptables -t nat -L
Chain PREROUTING (policy ACCEPT)
target     prot opt source               destination

Chain POSTROUTING (policy ACCEPT)
target     prot opt source               destination
MASQUERADE  all  --  192.168.0.0/24       anywhere

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination

=ON THE REAL SERVERS=
configure them to use the internal virtual ip as their default gateway. to do this, edit:
/etc/sysconfig/network-scripts/ifcfg-eth0 (change this accordingly if this isn’t your interface that’s connected to your directors’ internal network)

and make this change:
GATEWAY=192.168.0.21

reload the network service so that it re-reads the config file:

[root@realserver]# service network reload

#TESTING
there are various ways to check that load balancing is working; here are the 2 simplest:
1. using a browser, open http://www.myhomelab.net and refresh it a couple of times. while doing that, on the master director run:

[root@]# ipvsadm -L

IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  10.10.10.21:http rr
-> 192.168.0.11:http            Masq    1      0          6
-> 192.168.0.15:http            Masq    1      0          5

You should see output similar to the one above; notice the values in the "InActConn" column increase as you refresh. That means load balancing is working.

2. another way is to create a file on each real server with the same filename but different content, in the same location on both. let's name it monkey.html and put it in the web servers' root folder; on one server put in a string like "CONGO", and "KINGKONG" on the other. save the files and view the page with a web browser (http://www.myhomelab.net/monkey.html). refresh a couple of times and you should see the text change.
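the same test can be run from a shell; a sketch, assuming curl is installed on the box you're testing from:

# with round robin scheduling the output should alternate between CONGO and KINGKONG
for i in 1 2 3 4 5 6; do curl -s http://www.myhomelab.net/monkey.html; echo; done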

references: http://www.linuxvirtualserver.org

reminder to self: don’t want to change the green/black color theming on this site but i need to think of a better way to write stuff here. getting tired of having to color code commands, output and texts from configs every time.

[what’s next?]

sync the data between the real web servers and also configure a clustered mysql server.


musings with LAMP part2

while i was thinking about how the php script should behave, i decided to add another column to the existing characters table so it'll be easier to delete data when i have to, or just to make removing content directly from the form easier. i had to run this command at the mysql prompt:

mysql> alter table characters add id int primary key auto_increment not null first;

it tells mysql to create a new column named "id" which is a primary key; it gets populated, and incremented, automatically every time a new row is inserted into the table.
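to double check that the column landed in the first position with auto_increment set, describe the table:

mysql> describe characters;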

now, on to the form itself. the form calls some other small php scripts namely:

db_config.php – contains connection info

<?php
$dbhost = 'darnassus.myhomelab.net';
$dbuser = 'uniqueusername';
$dbpass = 'ultrahardtoguesspassw0rd';
$dbname = 'ordinary_db';
?>

db_open.php – contains instruction to connect to the mysql db

<?php
$kon = mysql_connect($dbhost, $dbuser, $dbpass) or die ("is ur db messed up in the head?");
mysql_select_db($dbname);
?>

db_close.php – contains instruction to terminate connection from mysql

<?php
mysql_close($kon);
?>

del.php – contains instructions to delete data from the db

<?php
include("db_stuff/db_config.php");
include("db_stuff/db_open.php");
$id = mysql_real_escape_string($_GET['id']); /* escape the id so a crafted url can't mangle the query */
mysql_query("DELETE FROM characters where id='$id' ");
include("db_stuff/db_close.php");
print "ENTRY DELETED";
include("form.php");
?>

now comes the main php form:

<form method="post">
<table width="500" border="0" cellpadding="2" cellspacing="1">
<tr>
<td width="50">Name</td>
<td><input name="name" type="text"></td>
</tr>
<tr>
<td width="50">Class</td>
<td><textarea name="class" cols="50" rows="1"></textarea></td>
</tr>
<tr>
<td width="10">Age</td>
<td><textarea name="age" cols="10" rows="1"></textarea></td>
</tr>
<tr>
<td width="1">Sex(F/M)</td>
<td><textarea name="sex" cols="1" rows="1"></textarea></td>
</tr>
<tr>
<td width="1">Profession</td>
<td><textarea name="profession" cols="50" rows="1"></textarea></td>
</tr>
<tr>
<td align="center"><input name="save" type="submit" value="Submit"></td>
</tr>
</table>
</form>
<?php
/* enter data into db */
if(isset($_POST['save']))
{
include 'db_stuff/db_config.php';
include 'db_stuff/db_open.php';
/* escape the submitted values before they go into the query */
$name       = mysql_real_escape_string($_POST['name']);
$class      = mysql_real_escape_string($_POST['class']);
$age        = mysql_real_escape_string($_POST['age']);
$sex        = mysql_real_escape_string($_POST['sex']);
$profession = mysql_real_escape_string($_POST['profession']);
$query = " INSERT INTO characters (name,class,age,sex,profession) ".
" VALUES ('$name', '$class','$age','$sex','$profession')";
mysql_query($query) or die('Error, query failed');
echo " entry added";
}
/* end data db entry */
/* view current data in db */
include 'db_stuff/db_config.php';
include 'db_stuff/db_open.php';
$query=("SELECT * FROM characters");
$db_content = mysql_query($query);
print "<table border=1>";
print "<tr><td>ID</td><td>NAME</td><td>CLASS</td><td>AGE</td><td>SEX</td><td>PROFESSION</td></tr>";
while ($row = mysql_fetch_array($db_content))
{
print "<tr>";
print "<td>$row[id]</td><td>$row[name]</td><td> $row[class] </td><td> $row[age] </td><td> $row[sex] </td><td> $row[profession]</td>";
print "<td><a href=del.php?id=$row[id]>delete</a></td>"; #add a delete column, just to make data management a bit easier
print "</tr>";
}
echo "</table>";
/* end data view */
include 'db_stuff/db_close.php';
?>

that’s it for now, next one’s clustering LAMP.


musings with LAMP part1

i was setting up LAMP on virtualbox a couple of nights ago; I'm doing it as part of another project I have on the way. The planned setup has the db on a separate server. Setting up the servers was a breeze in vbox, but I bumped into a problem along the way (more on that later). I installed apache, php and mysql using centos 5.3's vanilla rpms; I didn't find it necessary to get the latest rpms as I was just gonna use this for poc (proof of concept) purposes.

After changing root's mysql password I created the database "ordinary_db" with a table named "characters". These commands were run to do so (logged in to mysql as root):

mysql> create database ordinary_db; (DUH?!)
mysql> use ordinary_db;
mysql> create table characters (name VARCHAR(100), class varchar(50), age int, sex char(1), profession VARCHAR(50));

Of course, create at least one sample entry:

mysql> insert into characters (name, class, age, sex, profession) values ("zonkie","rogue",10,"M","leatherworker");

Then create a user that would have full access to the database “ordinary_db”:

mysql> grant all on ordinary_db.* to uniqueusername@stormwind.myhomelab.net identified by 'ultrahardtoguesspassw0rd';

The user 'uniqueusername' will be connecting from the host 'stormwind.myhomelab.net', which is where the web server resides, using 'ultrahardtoguesspassw0rd' as the password (duh). Yeah I know, I should at least show that I have some concern for security.

Web server: stormwind.myhomelab.net

Database server: darnassus.myhomelab.net

Now comes the problem I encountered while testing apache/mysql connectivity. Using the mysql command line tool I was able to connect from stormwind to darnassus' db, but checking connectivity from apache/php, no cigar (a small php test script on the web server is a handy way to check that). Checked the logs, nothing interesting. Tried connecting using both the ip address and the hostname, nope. Went out to smoke, tried to think of a reason why this was happening, still no idea. Then it occurred to me: what if it had something to do with selinux? I'd never really played around with selinux; I usually disabled it if it was giving me issues, but this time, for the sake of security, I decided to leave it on. I googled around (yeah google, hire me! I know how to use your search engine) and found this solution (w00t!):

#setsebool -P httpd_can_network_connect=1

What this does is tell selinux to "Allow HTTPD scripts and modules to connect to the network". Damn you selinux, I wish hating you was that easy. More information 'bout selinux booleans can be found in the setsebool and getsebool man pages.
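To verify the boolean stuck (the -P makes it persistent across reboots), just read it back:

# getsebool httpd_can_network_connect
httpd_can_network_connect --> on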

1st task’s done, next one will be to create a php script to insert data to the db and view it too.
