Recently I had a chance to test the performance of static-content web servers. The initial analysis showed that the most important issue was the speed of the disks, which were starting to struggle with the I/O load. The number of files was huge, which meant the hard drives were busy with many random-access operations.
Recent tests have shown that the new Solid State Drive (SSD) mass storage beats the classic Hard Disk Drive (HDD) in such circumstances (and in most others too). So it was quite natural to prepare a set of tests measuring the effect of switching from HDD to SSD storage on Apache's performance.
Methodology
It should be kept in mind that I wasn't interested in a general comparison of SSD vs HDD, but concentrated my tests on Apache's performance. Grinder 3.2 was used to simulate load on the web server. The list of requested URLs was based on real Apache logs taken from one of the boxes serving the static content. To eliminate the influence of caching, the memory cache was cleared before each test using the following command
echo 3 > /proc/sys/vm/drop_caches
(suggested on Linux-MM).
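Returning to the URL list mentioned above: the post doesn't show how it was built, but a minimal sketch, assuming a combined-format access log and hypothetical file names, could look like this:

```bash
# Pull the request paths out of a combined-format Apache access log;
# in that format the path is field 7. The log path and output name are
# placeholders, not the ones used in the original test.
awk '{ print $7 }' /var/log/httpd/access_log > urls.txt

# Quick sanity check: how many requests Grinder will replay.
wc -l < urls.txt
```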
Hardware

The test machine was a Sun X4150 server with 8 GB of memory and two 4-core Xeon E5345 @ 2.33 GHz processors, running the 32-bit version of CentOS 5.2 and the stock Apache 2 (2.2.3). All data were served from ext3 partitions mounted with the noatime flag.
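The noatime part is just a mount option; a minimal sketch of applying it to an already mounted data partition is shown below (the mount point is an assumption, the original post doesn't name it):

```bash
# Remount the Apache data partition with noatime so that serving a file
# does not trigger an access-time write for every read.
mount -o remount,noatime /var/www/html
```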
Disks
The following disks were used for the tests:
- A RAID 1 array consisting of 2 classic rotating HDDs, holding the root file system and the partition storing the files for Apache (on an LVM2 volume).
Vendor: Sun Model: root Rev: V1.0 Type: Direct-Access ANSI SCSI revision: 02
SCSI device sda: 286494720 512-byte hdwr sectors (146685 MB)
- A standard Intel SSD with the partition holding the Apache data.
Vendor: ATA Model: INTEL SSDSA2MH16 Rev: 045C Type: Direct-Access ANSI SCSI revision: 05
SCSI device sdc: 312581808 512-byte hdwr sectors (160042 MB)
- 2 Intel SSD Extreme disks joined into one LVM2 volume; this was necessary to create a partition big enough to hold all the data for Apache (a sketch of the setup follows the listing below).
Vendor: ATA Model: SSDSA2SH064G1GC Rev: 045C Type: Direct-Access ANSI SCSI revision: 05
SCSI device sdd: 125045424 512-byte hdwr sectors (64023 MB)
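The exact commands are not given in the post; a minimal sketch of joining two disks into one LVM2 volume, assuming /dev/sdd and /dev/sde are the two Extreme SSDs (only sdd appears in the listing above) and using made-up volume and mount names, might look like this:

```bash
# Register both SSDs as LVM physical volumes and group them.
pvcreate /dev/sdd /dev/sde
vgcreate vg_ssde /dev/sdd /dev/sde

# One logical volume spanning all free space in the group, formatted as ext3.
lvcreate -l 100%FREE -n lv_apache vg_ssde
mkfs.ext3 /dev/vg_ssde/lv_apache

# Mounted with noatime like the other test partitions; mount point is assumed.
mount -o noatime /dev/vg_ssde/lv_apache /var/www/html
```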
In both tables the following acronyms have been used to describe the measured parameters (more information about them can be found on the Grinder web site):
- Test - Test name
- MTT (ms) - Mean Test Time
- TTSD (ms) - Test Time Standard Deviation
- TPS - Transactions Per Second
- RBPS - Response Bytes Per Second
- MTTFB (ms) - Mean Time to First Byte
In the first phase of the tests I compared Apache's performance serving 300,000 requests using data stored on the classic HDDs as well as on the SSDs. Kernels from the 2.6 tree allow choosing an I/O scheduler, and in theory the best scheduler for SSD devices is Noop, therefore in the table below I compare results for that scheduler and for the default one (CFQ); a sketch of how the scheduler can be switched follows. In the tables, SSDn denotes the standard Intel SSD and SSDe the LVM2 volume built from the two Intel Extreme SSDs.
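The scheduler is selected per block device through sysfs; the original post doesn't show the step, but a minimal sketch for the SSD visible as sdc in the listing above (run as root) would be:

```bash
# List the available schedulers; the active one is shown in square brackets.
cat /sys/block/sdc/queue/scheduler

# Switch the device to the noop scheduler for the SSD runs.
echo noop > /sys/block/sdc/queue/scheduler
```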
Test | MTT (ms) | TTSD (ms) | TPS | RBPS | MTTFB (ms) |
---|---|---|---|---|---|
HDD CFQ | 5.53 | 8.17 | 179.51 | 1231607.13 | 5.3 |
HDD Noop | 5.53 | 8.09 | 179.30 | 1230119.51 | 5.29 |
SSDn CFQ | 0.77 | 3.06 | 1226.55 | 8415044.64 | 0.56 |
SSDn Noop | 0.74 | 2.77 | 1280.17 | 8782969.21 | 0.56 |
SSDe CFQ | 0.73 | 2.55 | 1280.23 | 8783381.50 | 0.52 |
SSDe Noop | 0.71 | 3.05 | 1326.62 | 9101643.04 | 0.53 |
It's obvious that 300k requests may not be enough to show the full and true picture, therefore I repeated the test with a bigger data set based on one hour's worth of logs. During that hour the original server had responded to 1,341,489 queries, but while creating the input file for Grinder I saved the URL list twice, so Grinder sent 2,682,978 queries during the test.
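Doubling the list is trivial; with hypothetical file names, and a quick check that the count matches 2 x 1,341,489 = 2,682,978:

```bash
# urls_1h.txt holds the 1,341,489 URLs extracted from the one-hour log
# (file names are placeholders, not from the original setup).
cat urls_1h.txt urls_1h.txt > urls_2x.txt

# Should print 2682978.
wc -l < urls_2x.txt
```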
The results are presented in the next table. To the data collected from Grinder I added one more number, TT, the total time of the test, that is, how long it took Grinder to send all the requests.
Test | MTT (ms) | TTSD (ms) | TPS | RBPS | MTTFB (ms) | TT (h:m) |
---|---|---|---|---|---|---|
HDD CFQ | 2.65 | 5.29 | 371.71 | 2145301.3 | 2.45 | 02:00 |
SSDn CFQ | 0.63 | 3.19 | 1495.3 | 8630105.68 | 0.43 | 00:29 |
SSDn Noop | 0.64 | 2.52 | 1478.77 | 8534692.28 | 0.43 | 00:30 |
SSDe CFQ | 0.59 | 2.93 | 1594.06 | 9200064.95 | 0.42 | 00:28 |
SSDe Noop | 0.61 | 2.62 | 1530.84 | 8835205.22 | 0.42 | 00:29 |
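As a quick consistency check, the TT column agrees with the measured TPS: 2,682,978 requests divided by 371.71 TPS gives about 7,218 s, roughly the 2 hours reported for the HDD run, while 2,682,978 / 1594.06 gives about 1,683 s, matching the 28 minutes reported for SSDe with CFQ.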
Summary
The results shown in the current study, as well as others not presented above, confirm the hypothesis that SSDs might be a good remedy for the observed I/O problems. In a few weeks you can expect some kind of appendix, in which I will describe whether the baptism of fire on the battlefield of the web went as well as the preliminary tests.