Wednesday, December 9, 2009

Linux Filesystem Benchmarks

I wanted to see how various Linux filesystems stacked up against each other, so I decided to benchmark them.

The filesystems I am benchmarking are: ext3, ext4, xfs, jfs, reiserfs, btrfs, and nilfs2.

The system I am using to do the benchmarking:

Gigabyte MA790FXT-UD5P Motherboard
AMD Phenom II 955 Quad Core 3.2GHz
4 x Mushkin Silverline PC3-10666 9-9-9-24 1333 2GB RAM
Gigabyte GV-N210OC-512I GeForce 210 512MB 64-bit PCIe Video Card
LG 22x DVD SATA Burner
2 x WD Caviar Blue 320GB 7200RPM SATA Drives (OS/Other)
4 x WD Caviar Blue 80GB 7200RPM SATA Drives (Data)
4 x Patriot Memory 32GB SATA SSD (Database)

Gentoo Linux 10.1

The disk space used is a software RAID 0 array made up of 4 partition slices of 4864 cylinders (37.3GB) each, one from each of the 80GB hard drives.
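For reference, creating that array looks roughly like this (a sketch, not my exact command; the partition device names are assumptions, and the mdadm default chunk size is left in place):

```python
# Sketch: assemble the 4-slice software RAID 0 array with mdadm.
# The /dev/sd[c-f]1 device names are assumptions; run as root.
import subprocess

subprocess.run(
    ["mdadm", "--create", "/dev/md0", "--level=0", "--raid-devices=4",
     "/dev/sdc1", "/dev/sdd1", "/dev/sde1", "/dev/sdf1"],
    check=True,
)
```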

I used the fileio benchmarks in Sysbench 0.4.10 to do these tests.

I created a script that formats the filesystem, mounts it, runs the sysbench prepare step, clears the file cache, and runs the benchmark. This is done 5 times for each filesystem and I/O operation mode tested, and the results are averaged. Each filesystem is created with its default options.
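A minimal sketch of that harness is below. This is not the exact script I ran; the device, mount point, file-set size, and mkfs flags are assumptions, and it uses sysbench 0.4's fileio syntax:

```python
#!/usr/bin/env python3
# Minimal sketch of the benchmark harness described above; not the
# exact script used. /dev/md0, /mnt/bench, and the 8G file set are
# assumptions. Needs root (mkfs, mount, drop_caches).
import os
import subprocess

DEVICE = "/dev/md0"
MOUNT = "/mnt/bench"
RUNS = 5
MODES = ["seqwr", "seqrewr", "seqrd", "rndrd", "rndwr", "rndrw"]
MKFS = {
    "ext3":     ["mkfs.ext3", "-q"],
    "ext4":     ["mkfs.ext4", "-q"],
    "xfs":      ["mkfs.xfs", "-f"],
    "jfs":      ["mkfs.jfs", "-q"],
    "reiserfs": ["mkfs.reiserfs", "-f", "-f"],  # -f twice skips the prompt
    "btrfs":    ["mkfs.btrfs"],
    "nilfs2":   ["mkfs.nilfs2"],
}

def sh(*cmd):
    subprocess.run(cmd, check=True)

def drop_caches():
    # Flush dirty pages, then drop the page, dentry, and inode caches.
    sh("sync")
    with open("/proc/sys/vm/drop_caches", "w") as f:
        f.write("3\n")

def sysbench(mode, action):
    # sysbench 0.4 fileio creates its test files in the cwd.
    sh("sysbench", "--test=fileio", "--file-total-size=8G",
       "--file-test-mode=" + mode, action)

for fs, mkfs in MKFS.items():
    for mode in MODES:
        for _ in range(RUNS):          # 5 runs each, averaged afterwards
            sh(*mkfs, DEVICE)          # format with default options
            sh("mount", DEVICE, MOUNT)
            os.chdir(MOUNT)
            sysbench(mode, "prepare")
            drop_caches()
            sysbench(mode, "run")      # throughput parsed from the output
            os.chdir("/")              # step off the mount before umount
            sh("umount", MOUNT)
```

Parsing the throughput numbers out of sysbench's output and averaging the five runs is left out of the sketch.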

SEQWR

SEQWR is the sequential write benchmark.
[Chart: SEQWR results]

XFS is the clear winner here, with EXT4 following closely. NILFS2 was really bad, but I have to attribute this to its newness and the fact that it's not production ready. It performed poorly in every test except one notably weird exception, which I will discuss later (SEQRD). Ignoring NILFS2, JFS was the worst at 2.4x the best.

SEQREWR

SEQREWR is the sequential rewrite benchmark.
[Chart: SEQREWR results]
JFS just beat out XFS on this one, with EXT3 having a particularly bad showing here at 7x the best.

SEQRD

SEQRD is the sequential read benchmark.
[Chart: SEQRD results]
I cannot explain why NILFS2 chose this test to shine on; I suspect it is either something I am missing or that NILFS2 is just really good at this. Being log-structured, it writes everything out sequentially, so the files laid down by the prepare step may simply end up contiguous on disk. All the rest of the filesystems were virtually equal here. If your workload is sequential read, it seems any filesystem would do.

RNDRD

RNDRD is the random read benchmark.
[Chart: RNDRD results]
The winner here is REISERFS, with JFS and EXT3 close behind. XFS was the worst at 2.8x the best.

RNDWR

RNDWR is the random write benchmark.
[Chart: RNDWR results]
I can't explain the fantastic showing by REISERFS here, unless it buffers the write and returns without having synced it to disk. Setting that aside, EXT3 showed well, followed by BTRFS and JFS. EXT4 was the worst at 3.8x the best.
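One way to test that theory: sysbench can be told to fsync after every write request, which takes the page cache out of the equation. A sketch (the file-set size is again an assumption):

```python
# Sketch: re-run the random write test with an fsync after every
# request (--file-fsync-all), to rule out buffered writes inflating
# the score. Run from the mounted filesystem, after prepare.
import subprocess

subprocess.run(
    ["sysbench", "--test=fileio", "--file-total-size=8G",
     "--file-test-mode=rndwr", "--file-fsync-all=on", "run"],
    check=True,
)
```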

RNDRW

RNDRW is the combined random read/write benchmark.
[Chart: RNDRW results]
REISERFS won here, followed closely by JFS. The loser here is XFS at 4.9x the best.

Conclusion: REISERFS and JFS are pretty close contenders for first place, followed by BTRFS and EXT4. Good old EXT3 would be my pick for fifth, leaving XFS and the still-immature but interesting log-based filesystem NILFS2 in last place.

As always, your mileage may vary. I could have done some tuning, as some of the filesystems have parameters for stride and stripe width on RAID devices, but once I started tuning, I wouldn't know where to stop, so I thought it was fairer to compare them all with their default values.
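For the curious, that tuning would have looked something like this for ext4 (a sketch: assuming mdadm's default 64KB chunk and 4KB blocks, stride = 64/4 = 16 and stripe-width = 16 x 4 data disks = 64; XFS has analogous sunit/swidth options):

```python
# Sketch: ext4 with RAID geometry hints, the kind of tuning skipped
# above. stride = chunk / block = 64KB / 4KB = 16;
# stripe-width = stride * 4 data disks = 64.
import subprocess

subprocess.run(
    ["mkfs.ext4", "-E", "stride=16,stripe-width=64", "/dev/md0"],
    check=True,
)
```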

I plan on testing these same filesystems and I/O patterns on the SSDs next. I am also going to test BTRFS compression, but I am not sure yet whether that is interesting enough to post about.