If you need an I/O performance boost, it might be worth turning on the noatime flag on important partitions. Look below to see how it pushed down the read level on a server with Apache serving static content.
Usually I add the noatime flag during system installation, but this time I forgot about it and had to remount the file system. Thanks to that mistake I got this beautiful image ;)
You can go even further and turn on nodiratime; it should decrease reads even more.
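For reference, a minimal sketch of how the flags can be applied, assuming the content lives on a hypothetical /var/www partition on /dev/sdb1; the fstab entry makes the flags permanent, the remount applies them without a reboot:

```
# /etc/fstab entry -- mount the content partition with noatime (and nodiratime)
/dev/sdb1  /var/www  ext3  defaults,noatime,nodiratime  0  2

# or apply to an already mounted file system without rebooting
mount -o remount,noatime,nodiratime /var/www
```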
A kind of extended memory of mine, with thoughts mostly on Linux and related technologies. You might also find some other stuff: a bit of SF and astronomy, as well as old (quantum) chemistry posts.
Friday, November 13, 2009
GMAIL and msmtp (Mutt)
This is an example of how to configure msmtp (e.g. for Mutt) to use the Gmail SMTP server. Remember that you need the certificate. I got mine from an old Ubuntu installation (saving the /etc directory before reinstalling a box is a good idea).
```
account your.user
logfile ~/.msmtp.log
tls on
tls_starttls on
tls_trust_file /etc/ssl/certs/ca-certificates.crt
auth on
host smtp.gmail.com
port 587
from your.user@gmail.com
user your.user@gmail.com
password YOUR_password
```

BTW, in Ubuntu you can grab the certificates with sudo apt-get install ca-certificates.
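On the Mutt side the only thing left is to point it at msmtp; a minimal sketch of the relevant .muttrc lines, assuming msmtp is installed as /usr/bin/msmtp (the path and the From data are placeholders):

```
# ~/.muttrc -- send mail through msmtp instead of a local MTA
set sendmail="/usr/bin/msmtp"
set from="your.user@gmail.com"
set use_from=yes
set envelope_from=yes
```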
Sunday, October 25, 2009
Nagios plugin
Not so long ago I heard the question whether (or rather how) it is possible to write an NRPE plugin checking the resource utilization of an application. I'm using Nagios on a daily basis, but I haven't needed to write any plugin yet. When I went through the existing plugins, most/all of them checked resources at the server level. It makes sense: you are not so interested in what exactly your server is doing as long as your website/database is available and responds fast. It's especially true if you have 100, 500, 1000, ... servers. Anyway, I found the question interesting, even if it was a rather theoretical one.
After some research I found Jason Faulkner's plugin, which looked like a good base, modified it a bit and created this script:
```bash
#!/bin/bash
#
# Nagios plugin to monitor a process. Can easily be modified to do
# pretty much whatever you want.
#
# Licensed under LGPL version 2
# Copyright 2006 Broadwick Corporation
# By: Jason Faulkner jasonf@broadwick.com
#
# Modified to measure CPU usage of a chosen process.
#
# USAGE: cpu.sh process_name warning_level critical_level
#
# Licensed under LGPL version 2
# Copyright 2009 Wawrzyniec Niewodniczański
# Modification by: Wawrzyniec Niewodniczański wawrzek@gmail.com

process_name=$1
WARLVL=$2
CRITLVL=$3

OKMSG="STATUS OK: ${process_name} running"
CRITMSG="STATUS CRITICAL: ${process_name} using more than ${CRITLVL}% of CPU"
WARNMSG="STATUS WARNING: ${process_name} using more than ${WARLVL}% of CPU"
UNKMSG="STATUS UNKNOWN: ${process_name}, check if process is running"

# all processes matching the name, excluding this script and grep itself
PROCESS=`ps axu | grep -v ${0} | grep -v grep | grep ${process_name}`
# sum up the CPU column (3rd field of ps output) over all matching processes
CPU=`echo "${PROCESS}" | awk '{cpu+=$3} END {printf "%d", cpu}'`

if [[ $PROCESS != "" ]]
then
    if (( $CPU < $WARLVL ))
    then
        echo "$OKMSG"
        exit 0
    elif (( $CPU < $CRITLVL ))
    then
        echo "$WARNMSG"
        exit 1
    else
        echo "$CRITMSG"
        exit 2
    fi
else
    echo "$UNKMSG"
    exit 3
fi
```

I would say that it's nothing exciting. There are two important lines. The first one searches for the process name in the output of the ps command, excluding the lines with the script name and grep itself from the list. The other one uses awk to add up the CPU usage values from the list created in the first line. BTW, if you would prefer to check memory usage rather than processor, change {cpu+=$3} to {cpu+=$4} (or even to {mem+=$4}) in the awk command. I also wrote the Nagios command definition which I believe should work. "Believe", not "know", as I haven't tried it yet ;)
```
# 'check_cpu' command definition
define command{
        command_name    check_cpu
        command_line    /usr/lib/nagios/plugins/check_cpu $ARG1$ $ARG2$ $ARG3$
        }
```
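Since the original question was about NRPE, the plugin would most likely be called through it rather than locally; a hedged sketch, assuming the script is installed as check_cpu on the monitored host, httpd is the watched process, 30/50 are placeholder thresholds and the usual check_nrpe command is already defined on the Nagios server:

```
# on the monitored host, in nrpe.cfg
command[check_httpd_cpu]=/usr/lib/nagios/plugins/check_cpu httpd 30 50

# on the Nagios server
define service{
        use                     generic-service
        host_name               webserver01
        service_description     HTTPD CPU
        check_command           check_nrpe!check_httpd_cpu
        }
```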
Monday, September 21, 2009
Escape, Escape
I couldn't understand why the following command was working on the local machine, but not over ssh (in the following form):
```
ssh server \
    "ls -l /var/log/httpd/*-20* \
    | awk 'BEGIN {tsum=0} /sizetime/ {tsum += $5;} END {print tsum}'"
```
I asked my workmate and he had also had problems with it for some time, but finally he suggested that we needed to "escape" something. After some tries we found that ssh doesn't like the $ character, so the following command works.
```
ssh server \
    "ls -l /var/log/httpd/*-20* \
    | awk 'BEGIN {tsum=0} /sizetime/ {tsum += \$5;} END {print tsum}'"
```
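What actually happens is that the whole remote command sits inside double quotes, so the local shell expands $5 (almost certainly empty in an interactive shell) before ssh ever sees it; escaping it as \$5 postpones the expansion until awk runs on the remote side. A quick local demonstration:

```
$ echo "awk '{tsum += $5}'"       # $5 eaten by the local shell
awk '{tsum += }'
$ echo "awk '{tsum += \$5}'"      # escaped, survives intact
awk '{tsum += $5}'
```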
Thursday, August 27, 2009
Stone Redskin: comparison of Apache2 performance on HDD and SSD
Introduction
Recently, I had a chance to test the performance of a static content web server. The initial analysis showed that the most important issue was the speed of the disks, which had started to have problems with handling I/O operations. The number of files was huge, which means that the hard drives were engaged in many random access operations.
Recent tests have shown that the new Solid State Disk (SSD) mass storage beats the classic Hard Disk Drive (HDD) in such circumstances (in most others too). So it was quite natural to prepare a set of tests helping to measure the effect of switching from HDD to SSD storage on the Apache performance.
Methodology
It should be kept in mind that I wasn't interested in a general comparison of SSD vs HDD, but concentrated my tests on the Apache performance. The Grinder 3.2 software was used to simulate load on the web server. The list of requested URLs was based on real Apache logs taken from one of the boxes serving the static content. To eliminate the influence of caching, before each test the memory cache was cleaned using the following command:
```
echo 3 > /proc/sys/vm/drop_caches
```
(as suggested on Linux-MM).
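As a side note: drop_caches only discards clean pages, so it is common to flush dirty pages first; a minimal sketch, assuming a root shell:

```
# flush dirty pages, then drop the page cache, dentries and inodes
sync
echo 3 > /proc/sys/vm/drop_caches
```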
Hardware
The test machine was a Sun X4150 server with 8 GB of memory and two 4-core Xeon E5345 @ 2.33 GHz processors, working under control of the 32-bit version of CentOS 5.2 and the standard version of Apache2 (2.2.3). All data were served from ext3 partitions mounted with the noatime flag.
Disks
The following disks were used for the tests.
- A RAID 1 array consisting of 2 classical rotating HDDs, holding the root file system and the partition storing files for Apache (on an LVM2 volume).
```
Vendor: Sun       Model: root              Rev: V1.0
Type:   Direct-Access                      ANSI SCSI revision: 02
SCSI device sda: 286494720 512-byte hdwr sectors (146685 MB)
```
- A standard (Mainstream) Intel SSD holding the partition with the Apache data (SSDn in the tables below).
```
Vendor: ATA       Model: INTEL SSDSA2MH16  Rev: 045C
Type:   Direct-Access                      ANSI SCSI revision: 05
SCSI device sdc: 312581808 512-byte hdwr sectors (160042 MB)
```
- 2 Intel SSD Extreme disks joined into one LVM2 volume (SSDe in the tables below). This was necessary to create a partition big enough to keep all the data for Apache.
```
Vendor: ATA       Model: SSDSA2SH064G1GC   Rev: 045C
Type:   Direct-Access                      ANSI SCSI revision: 05
SCSI device sdd: 125045424 512-byte hdwr sectors (64023 MB)
```
In both tables the following acronyms have been used to describe the measured parameters. (More info about them on the Grinder web site.)
- Test - Test name
- MTT (ms) - Mean Test Time
- TTSD (ms) - Test Time Standard Deviation
- TPS - Transactions Per Second
- RBPS - Response Bytes Per Second
- MTTFB (ms) - Mean Time to First Byte
In the first phase of the tests I compared Apache's performance serving 300 000 requests using data stored on the classic HDD as well as on the SSDs. Kernels from the 2.6 tree allow one to choose an I/O scheduler. In theory the best scheduler for SSD devices is Noop, therefore in the table below I compared results for the mentioned and the default (CFQ) schedulers.
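For reference, the scheduler can be inspected and switched per block device at runtime; a minimal sketch, reusing the sdc device name from the disk listing above and assuming a root shell:

```
# the active scheduler is shown in square brackets
cat /sys/block/sdc/queue/scheduler
# switch the SSD to the noop scheduler
echo noop > /sys/block/sdc/queue/scheduler
```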
Test | MTT (ms) | TTSD (ms) | TPS | RBPS | MTTFB (s) |
---|---|---|---|---|---|
HDD CFQ | 5.53 | 8.17 | 179.51 | 1231607.13 | 5.3 |
HDD Noop | 5.53 | 8.09 | 179.30 | 1230119.51 | 5.29 |
SSDn CFQ | 0.77 | 3.06 | 1226.55 | 8415044.64 | 0.56 |
SSDn Noop | 0.74 | 2.77 | 1280.17 | 8782969.21 | 0.56 |
SSDe CFQ | 0.73 | 2.55 | 1280.23 | 8783381.50 | 0.52 |
SSDe Noop | 0.71 | 3.05 | 1326.62 | 9101643.04 | 0.53 |
As we expected, the SSD disks (or rather Apache with content on them) proved to be much faster. The web server performance grew about 10 times when the HDD was substituted by an SSD. Another observation worth noting is that the results obtained using both sets of SSD disks were very similar. The Extreme Edition storage was a few percent faster, but the difference is probably too small to be the only reason to justify the higher cost. Additionally, it was clear that the Noop scheduler didn't dramatically change the Apache performance.
One hour data
It's obvious that 300k requests may not be enough to show the full and true picture, therefore I repeated the test with a bigger set of data, based on a log covering 1 hour of traffic. During that hour the original server had responded to 1 341 489 queries, but during creation of the file with input data for Grinder I saved the list of URLs twice, therefore Grinder was sending 2 682 978 queries during the test.
The results are presented in the next table. To the data collected from Grinder I added one more number, TT — the total time of the test, that is how long it took Grinder to send all the requests.
Test | MTT (ms) | TTSD (ms) | TPS | RBPS | MTTFB (s) | TT (h:m) |
---|---|---|---|---|---|---|
HDD CFQ | 2.65 | 5.29 | 371.71 | 2145301.3 | 2.45 | 02:00 |
SSDn CFQ | 0.63 | 3.19 | 1495.3 | 8630105.68 | 0.43 | 00:29 |
SSDn Noop | 0.64 | 2.52 | 1478.77 | 8534692.28 | 0.43 | 00:30 |
SSDe CFQ | 0.59 | 2.93 | 1594.06 | 9200064.95 | 0.42 | 00:28 |
SSDe Noop | 0.61 | 2.62 | 1530.84 | 8835205.22 | 0.42 | 00:29 |
The increase in the number of queries diminished the difference between the SSD and HDD disk performance, but also in the second test the former storage was the firm winner. For example, the Total Time of the test was 4 times shorter for any version of the SSD compared to the traditional disks. Another interesting observation is that the difference in performance between the Mainstream and Extreme disks decreased. Finally, the Noop scheduler didn't improve the results of this test either.
Summary
The results shown in the current study, as well as others not presented above, confirmed the hypothesis that SSD disks might be a good remedy for the observed I/O problems. In a few weeks' time you might expect some kind of appendix, in which I will describe whether the baptism of fire on the battlefield of the web came off as well as the preliminary tests.
Tuesday, August 25, 2009
Linux Works in Cambridge
Some time ago I created the "Linux Jobs in Cambridge" map on Google Maps, but something was wrong. Recently, I decided that the title was not very appropriate. It's not a map of Linux related job opportunities, but a map showing how important Linux and open source in general are for Cambridge. So I changed the name to "Linux Work in Cambridge" and it seems to be the right idea. There are some new very interesting entries (even one pub). Check it out yourself, and maybe add or correct something.
View Linux Works in Cambridge in a larger map.
Friday, August 21, 2009
Expect and operation on many computers
Recently, I had to delete a directory on around 200 computers. The directory belonged to root, so using my account with public key authentication wasn't possible. I googled a bit, found expect and wrote the following script.
```
#!/usr/bin/expect -f
set machine [lindex $argv 0]
set command [lindex $argv 1]
set timeout -1
spawn ssh -l root $machine $command
match_max 100000
expect "?*assword: $"
send "password\n"
expect eof
```

The script sets the name of a remote machine (set machine [lindex $argv 0]) and a command to execute (set command [lindex $argv 1]) from the arguments it is started with. Next it tries to connect to the remote machine (spawn ssh -l root $machine $command) and when it is asked for the password (expect "?*assword: $") it sends it (send "password\n"). Of course you have to change password to the real root password. Finally, it waits for the EOF from ssh (expect eof). I have to confess that I don't remember what exactly set timeout -1 and match_max 100000 mean ;)
The script can be called with a loop similar to the one below.
```
for bc in 1{0..3}{0..9} ; \
do for app in {1..4} ; \
   do echo bc${bc}app-0${app} ; \
      ./command.script bc${bc}app-0${app} "ls /var/log/httpd" ; \
   done ; \
done
```

One more thing. The script assumes that you have connected at least once to all the machines, or rather that the machines have been added to your
.ssh/known_hosts file. If you plan to use the script to initialize the first connection, you should add the following lines

```
expect "Are you sure you want to continue connecting (yes/no)?"
send "yes\n"
```

before the expect "?*assword: $" line, but in that case the machines don't have to be present in the .ssh/known_hosts file already.
Tuesday, August 18, 2009
How to find the not commented line using Vim
A significant part of a BOFH's life consists of editing config files. It's not so uncommon that you need to find non-commented lines (e.g. to check whether something is set). With vim it's very easy:
```
/^[^#]
```

The above line commands the editor to: find a line which doesn't start with #, or rather: find a string which is at the beginning of a line and whose first character is anything other than #.
This advice will work not only for vim, i.e. you can use it in grep as well:
```
[user@server]$ grep "^[^#]" modprobe.conf
alias eth0 tg3
alias eth1 tg3
alias scsi_hostadapter mptbase
alias scsi_hostadapter1 mptspi
```

I discussed a similar case some time ago in this note: How to find line not starting with X in Vim.
Tuesday, August 04, 2009
Reading from rather big files in Python
Recently I needed to open a big file (an Apache log, 14 GB or so) and cut some information from it. Of course using the file.read() and/or file.readlines() methods wasn't possible. On the other hand, calling file.readline() a few (rather more than 20) million times doesn't sound right. Therefore, I looked for another solution and found that you can limit the size of readlines().
```python
f = open('filename', 'r')
opensize = 2**27
longlist = []
while 1:
    shortlist = [[l.split()[n] for n in [0, 4, -2, -1]] for l in f.readlines(opensize)]
    if not shortlist:
        break
    else:
        longlist.extend(shortlist)
```

The script opens the 'filename' file and then in the loop:
- reads from that file a batch of lines with a total size close to 128 MB (2**27 bytes),
- cuts the first, fifth, next-to-last and last column from each line,
- adds the created (temporary) list to the output list.
If shortlist comes back empty, the script leaves the loop (lines 6 and 7). It's not obligatory, but I like to work with powers of 2, therefore opensize=2**27.
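As a side note, a file object can also be iterated over lazily, which keeps memory usage low without managing the batch size by hand; a minimal sketch of the same column extraction, assuming the same placeholder 'filename':

```python
longlist = []
with open('filename', 'r') as f:
    for line in f:                 # reads one line at a time, never the whole file
        fields = line.split()
        longlist.append([fields[n] for n in [0, 4, -2, -1]])
```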
Monday, June 22, 2009
one for AWK and one for SVN
Another two useful one-liners.
First, awk. Sometimes you need to grab the last "element" of lines in a file which have different numbers of spaces (or another separator). In such a case use the variable $NF (or $(NF-1), $(NF-2), ...). A good example of such a situation might be an Apache log file, where the user agent description is a string with a variable number of spaces, so it's hard to get at the columns after it. But you can use something similar to:
```
tail bo-access_log.2009-06-22 | \
    awk '{print "size:\t"$(NF-1) "\t time:\t" $NF}'
```

In the example log file the time is the last field and the size of the file the next-to-last one. Of course you can type it in one line, but then you have to remove the '\' character from the end of the first line. The second piece of advice is related to SVN. I found reverting the last submitted changes quite unclear there. Revert works only with uncommitted changes, so I used a command similar to the one below.
```
svn merge -r HEAD:{2009-06-21} .
```

The example reverts everything that has been submitted between 21st June 2009 and 'now'. However, today I found the PREV 'variable', so the following command should do what I had wanted to achieve. Interesting how I could have missed it.
```
svn merge -r HEAD:PREV .
```

And one more update. In petke's comments to this entry on Aral Balkan's blog I found another one-liner, which looks even easier:
```
svn update -r 2689
```
Labels:
awk,
links,
linux,
shell,
subversion
Wednesday, May 06, 2009
Vim substitution
Let's say you want to add the string 'bprdp' at the end of each line beginning with the string 'bc' and ending with a comma. You should use the following command:
```
:% g/^bc/s/\,$/, bprdp/
```

- % means the whole file
- g/^bc/ applies the rest to each line matching the pattern after '/'; in the above case the pattern is ^bc, a line beginning with bc
- s/\,$/, bprdp/ substitutes a comma (\,) followed by the end-of-line character ($) with ', bprdp'.
I wrote this message based on Vim regular expression and Vim Command Cheat Sheet.
Wednesday, March 11, 2009
Control the Vim from the edited file
One of the very nice Vim features I've learnt recently is the possibility of controlling Vim from the edited file itself. Chosen Vim commands may be put in one of the first (specially formatted) lines of the file. The line format is described under the 'modeline' help keyword (:help modeline). It's worth remembering that the text before and after the main part has to be a commenting-out directive. Therefore, for example, the line in HTML might look similar to:

```
<!-- vim: set tabstop=4 noexpandtab: -->
```

and for python:

```
# vim: tabstop=4 noexpandtab:
```

If you would like to learn more, please check the modeline keyword in Vim help.
Update: I forgot to add that you need to enable the modeline option in your .vimrc file.
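A minimal sketch of the relevant .vimrc lines (the number of checked lines is configurable; 5 is Vim's usual default):

```
" ~/.vimrc -- allow modelines and check the first/last 5 lines of each file
set modeline
set modelines=5
```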
Tuesday, March 10, 2009
My first Perl script
It's nothing big, but it's the first one and, as Perl is a write-only language, I'd better add a short description.
The script takes a list of files passed as arguments to the command, reads all lines (http addresses) from them and creates a list of unique domain names.
```perl
#!/usr/bin/env perl
%seen = ();
foreach (@ARGV) {
    open (LFILE, "$_");
    foreach $line (<LFILE>) {
        @sline = split(/\//, $line);
        print ("@sline[2]\n") unless $seen{@sline[2]}++;
    }
    close LFILE;
}
```

(The Perl tutorial from tizag.com was helpful.)
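A hypothetical invocation, assuming the script is saved as domains.pl and each input file contains one URL per line:

```
perl domains.pl urls-march.txt urls-february.txt
```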
Monday, March 09, 2009
DarwinPorts via proxy
Recently, I needed a Perl module not present on my MacOSX computer, which was behind a proxy. A friend suggested using DarwinPorts rather than the Perl from Apple. I downloaded and installed it, only to find that I couldn't install any port. The problem was due to the combination of using a proxy and sudo rather than the root user. I guess such a combination is rather common among MacOSX Perl users. So below I present the command which allows using DarwinPorts from a normal MacOSX account. The general note is that you have to export both RSYNC_PROXY and http_proxy in the sudo environment.
sudo sh -c "export RSYNC_PROXY=proxy.server:port; \ export http_proxy=http://proxy.server:port; \ port install perl5.10 "
Friday, March 06, 2009
How to find the line not starting with "X" in Vim
I don't know why, but most of Vim's search examples say nothing about how to find a line not starting with the string "X". Finally, I found how to do this. E.g. the following pattern finds anything not starting with del:
```
^[^d][^e][^l].*
```

For people not advanced in regex, the consecutive signs mean:

- ^ - the match must start at the beginning of a line;
- [^d] - a character other than d;
- [^e] - a character other than e;
- [^l] - a character other than l;
- .* - any string (any character repeated any number of times).
Tuesday, February 24, 2009
Total size of quite new files
The following command counts the size of all files (-type f) newer than 10 days (-mtime -10). The size is printed in megabytes: the "%k" argument of printf returns the size in kilobytes, and "a/1024" in awk changes it to megabytes.
```
find -type f -mtime -10 -printf "%k\n" | \
    awk 'BEGIN {a=0} {a=a+$1} END {print a/1024}'
```
Wednesday, January 28, 2009
Remote diff
When you are working as a Linux SysAdmin, quite often you have to compare files from two different machines. I found (here) a script which made my life easier, but after some time decided to customize and extend it a bit.
Usually I compare files in the same location, therefore the first argument of my script is the file path. I also gave the user a chance to pass an option for the diff command (4th argument, the default is '-b').
```bash
#!/bin/bash
#
# this acts as a remote diff program: it takes one file path and two
# servers, fetches the file from both and displays a diff.
# File paths must be in a format `scp` understands: [[user@]host:]file

# the first three arguments are required
[ -n "$1" ] && [ -n "$2" ] && [ -n "$3" ] || \
    { echo "Usage: `basename $0` file server1 server2 [diff-options]" && exit 1; }

# optional 4th argument: options passed to diff (default: -b)
if test -z "$4"
then
    opt="-b"
else
    opt=$4
fi

scp "$2:$1" rdiff.1 >& /dev/null
scp "$3:$1" rdiff.2 >& /dev/null
diff $opt rdiff.1 rdiff.2
rm -f rdiff.1 rdiff.2
```
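A hypothetical invocation, assuming the script is saved as rdiff and is on the PATH: compare /etc/ntp.conf between two servers and pass -u through to diff:

```
rdiff /etc/ntp.conf server1 server2 -u
```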