Machine configuration:
    Dual-processor (Daystar arch) PowerPC 604e 200MHz Apple 8500
    /dev/sda: 1.2GB disk with MacOS
    /dev/sdb: 4GB disk with Linux (Debian Potato)
    /dev/sdc: 1.2GB test disk on external port (5MB/s max xfer speed)
    memory:   160MB
    L2 cache: 256KB

Linux 2.4.5-pre3, rsync'd from the benh tree. ReiserFS endian patches
applied to the kernel and to reiserfsprogs.

Initial test: tar-copy a directory of approx. 450MB of files, mostly
kernel source directories and tarballs. The command was:

    tar cf - . | (cd ; tar xvf -)

The first test used a very old, full-height 5.25" SCSI-1 disk that was
quite slow. This is exactly the kind of use for which something like
reiserfs should be considered: making an old, slow computer and disk
seem faster. All tests were performed on /dev/sdc; all other
filesystems on the machine were ext2.

Reiserfs v. 3.6.25 results
--------------------------

# bonnie -s 158
            -------Sequential Output-------- ---Sequential Input-- --Random--
            -Per Char- --Block--- -Rewrite-- -Per Char- --Block--- --Seeks---
Machine  MB K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU  /sec %CPU
        156  1753 71.0  2463 34.9  1075  7.7  1639 48.7  2573 10.8  57.4  3.8
        150  2115 81.6  3159 38.0  1188  8.3  1569 46.0  3122 13.0  72.5  4.2
        150  1954 75.4  2902 32.6  1198  8.2  1643 48.5  3132 12.4  76.4  4.4

# tar
real    16m5.820s
user    0m18.230s
sys     3m46.450s
# rm
real    0m51.634s
user    0m1.170s
sys     0m23.690s

# tar
real    16m41.504s
user    0m18.260s
sys     3m50.580s
# rm
real    0m44.245s
user    0m1.130s
sys     0m23.850s

# tar
real    16m39.403s
user    0m17.750s
sys     3m54.850s
# rm
real    0m43.509s
user    0m0.940s
sys     0m23.930s
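For anyone wanting to reproduce the tar copy test, a minimal sketch of it
looks like the following. The source and destination paths here are
illustrative stand-ins; the report does not record the actual destination
directory.

```shell
# Sketch of the timed tar duplicate test; paths are illustrative.
set -e
src=/tmp/tartest-src
dst=/tmp/tartest-dst
mkdir -p "$src" "$dst"
echo "sample data" > "$src/file1"

# Duplicate the tree through a tar pipe, as in the report,
# timing the whole pipeline:
time { ( cd "$src" && tar cf - . ) | ( cd "$dst" && tar xf - ); }

# Verify the copy arrived intact.
cmp "$src/file1" "$dst/file1" && echo "copy ok"
```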
=========================================================================================

EXT2 results
------------

# bonnie -s 158
            -------Sequential Output-------- ---Sequential Input-- --Random--
            -Per Char- --Block--- -Rewrite-- -Per Char- --Block--- --Seeks---
Machine  MB K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU  /sec %CPU
        158  2616 78.7  2852 13.9  1029  5.1  1572 43.4  2733  7.0  69.5  2.5
        158  2680 79.6  2839 11.8  1018  5.0  1579 43.6  2733  7.4  73.5  2.7
        158  3031 89.2  3396 15.7   988  4.8  1450 40.1  2733  7.1  77.7  2.8

# tar
real    17m41.845s
user    0m15.320s
sys     1m51.300s
# rm
real    0m48.349s
user    0m0.570s
sys     0m3.880s

# tar
real    16m59.133s
user    0m16.380s
sys     1m49.120s
# rm
real    0m44.024s
user    0m0.530s
sys     0m4.090s

# tar
real    17m6.732s
user    0m15.460s
sys     1m48.980s
# rm
real    0m43.599s
user    0m0.540s
sys     0m3.960s

As you can see, the reiserfs results are ahead of the ext2 results by
about a toenail clipping, compared to the size of an adult human body.
So not much advantage there on outright speed. There are some
additional tests to run, but let's try another disk first.

A crash test was performed on both filesystems; representative results
are the same as those in the crash tests below. However, while this
test was being performed on the ext2 filesystem for the second time,
the disk blew its low-level formatting, hence the switch to the 2GB
disk. Don't try this at home!
=========================================================================================
=========================================================================================
=========================================================================================

/dev/sdc: 2GB disk

Reiserfs results
----------------

# bonnie -s 158
            -------Sequential Output-------- ---Sequential Input-- --Random--
            -Per Char- --Block--- -Rewrite-- -Per Char- --Block--- --Seeks---
Machine  MB K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU  /sec %CPU
        158  1847 73.6  2918 40.3  1298  8.9  1621 48.0  3067 11.4  74.3  4.9
        158  2004 80.2  3226 40.4  1281  8.9  1535 45.5  3050 13.1  93.6  5.3

+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

This is the same tar test as before, but the dataset is approx. 900MB,
twice the size of the previous one.

# tar
real    30m39.937s
user    0m38.480s
sys     8m36.700s
# rm
real    1m42.512s
user    0m1.800s
sys     0m46.260s

# tar
real    32m9.008s
user    0m38.490s
sys     8m40.190s
# rm
real    1m30.362s
user    0m1.830s
sys     0m48.640s

# du -s .
937911

+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Dirsize test: create four hundred thousand zero-length files in a
directory.

# time ../../dirsize 400000
success creating 400000 files in directory
real    3m36.333s
user    0m14.710s
sys     2m49.590s

+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Second dirsize test: create thirty thousand files of 128 bytes each in
a directory, checking both the time taken and the amount of space
consumed (small-file packing).

# time ../../filesize 30000
success creating 30000 files in directory
real    0m52.918s
user    0m1.860s
sys     0m44.290s
# du -s .
120703  .
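The dirsize and filesize programs themselves are not included in this
report. Assuming they simply create N files of a fixed size in one
directory, a shell stand-in for the workload (with a much smaller N, for
illustration) would look like this:

```shell
# Illustrative stand-in for the dirsize/filesize workload: create
# $count files of $size bytes each in one directory. The real runs
# used count=400000/size=0 (dirsize) and count=30000/size=128 (filesize).
dir=/tmp/dirsize-demo
count=100      # far smaller than the real runs
size=128       # 0 gives zero-length files, as in the dirsize test
mkdir -p "$dir"
i=0
while [ "$i" -lt "$count" ]; do
    if [ "$size" -eq 0 ]; then
        : > "$dir/f$i"                       # zero-length file
    else
        head -c "$size" /dev/zero > "$dir/f$i"
    fi
    i=$((i + 1))
done
echo "success creating $count files in directory"
```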
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Big-file creation test: this program creates arg2 files of arg1*arg3
bytes each. If there is no arg3, 16k is the default.

# time ../bigfiles 32768 2
success creating 2 files in directory
real    6m1.286s
user    0m1.140s
sys     1m49.350s

# time ../bigfiles 65536 1
success creating 1 files in directory
real    6m4.572s
user    0m1.150s
sys     1m49.520s

# time ../bigfiles 119714 1
success creating 1 files in directory
real    11m22.349s
user    0m1.890s
sys     3m40.260s

# time /usr/bigfiles 1 40000 1024
success creating 40000 files in directory
real    1m26.381s
user    0m2.130s
sys     1m7.740s

# time /usr/bigfiles 1 20000 1024
success creating 20000 files in directory
real    0m38.336s
user    0m1.050s
sys     0m31.870s

# time /usr/bigfiles 1 10000 4096
success creating 10000 files in directory
real    0m10.446s
user    0m0.610s
sys     0m9.290s

Notes: there were several long periods of no activity on the computer
during this test: no disk activity, no CPU activity, nothing. This is
almost certainly a bug in reiserfs. In addition, deleting the two
large files took almost 3 seconds, but less than one second on ext2.

+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Extreme prejudice test: the power cord is yanked near the end of the
tar directory copy test. Performed 5 times, with the same result each
time. After the disastrous disk crash....

May 24 01:30:21 macfly kernel: reiserfs: checking transaction log (device 08:22) ...
May 24 01:30:21 macfly kernel: reiserfs: replayed 21 transactions in 12 seconds
                                                                   ^^^^^^^^^^

=========================================================================================

Ext2 results
------------

Tar directory copy test. See the description above.
# tar
real    34m2.136s
user    0m32.750s
sys     3m51.350s
# rm
real    1m44.359s
user    0m1.050s
sys     0m8.330s

# tar
real    33m19.065s
user    0m32.470s
sys     3m52.970s
# rm
real    2m2.644s
user    0m1.020s
sys     0m9.190s

# du -s .
956964

+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Dirsize test: create four hundred thousand zero-length files in a
directory.

# ../dirsize 400000

3-4 hours later, with both processors running at maximum and the
system completely bogged down, I killed the program. I then ran:

# ls -l | wc -l

This ran for more than thirty minutes, then gave this output:

131292

+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Second dirsize test: create thirty thousand files of 128 bytes each in
a directory, checking both the time taken and the amount of space
consumed (small-file packing).

# time ../../filesize 30000
success creating 30000 files in directory
real    5m6.189s
user    0m1.820s
sys     5m4.000s
# du -s .
120436  .

+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Big-file creation test: this program creates arg2 files of arg1*arg3
bytes each (arg3 defaults to 16k).

# time ../bigfiles 32768 2
success creating 2 files in directory
real    5m51.544s
user    0m0.650s
sys     0m43.180s

# time ../bigfiles 65536 1
success creating 1 files in directory
real    5m57.516s
user    0m0.640s
sys     0m40.890s

# time ../bigfiles 119714 1
success creating 1 files in directory
real    11m0.741s
user    0m0.810s
sys     1m22.140s

# time /usr/bigfiles 1 20000 1024
success creating 20000 files in directory
real    3m12.585s
user    0m1.140s
sys     3m4.550s

# time /usr/bigfiles 1 10000 4096
success creating 10000 files in directory
real    0m51.316s
user    0m0.420s
sys     0m47.490s

Notes: the amount of system time in this test is quite a bit less than
under reiserfs. This indicates that reiserfs could use some fixin',
since its elapsed times were longer as well.
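The bigfiles program used in both runs above was likewise not published.
Assuming it does nothing more than write arg2 files of arg1*arg3 bytes
each, a dd-based stand-in (with sizes scaled far down for illustration)
would be:

```shell
# dd-based stand-in for the bigfiles workload: write $count files of
# $blocks * $blksize bytes each. Sizes here are illustrative; the real
# run "bigfiles 32768 2" wrote two files of 32768 * 16k = 512MB each.
blocks=16        # arg1: blocks per file
count=2          # arg2: number of files
blksize=16384    # arg3: block size in bytes (16k default in the report)
dir=/tmp/bigfiles-demo
mkdir -p "$dir"
i=0
while [ "$i" -lt "$count" ]; do
    dd if=/dev/zero of="$dir/big$i" bs="$blksize" count="$blocks" 2>/dev/null
    i=$((i + 1))
done
echo "success creating $count files in directory"
```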
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Extreme prejudice test: the power cord is yanked near the end of the
tar directory copy test. Performed 5 times, with identical results
each time. After the disastrous disk crash....

The automatic fsck failed after 3 minutes; I had to run it by hand,
which took another 4 minutes to repair extensive damage.