The filesystems I am benchmarking are: ext3, ext4, xfs, jfs, reiserfs, btrfs, and nilfs2.
The system I am using to do the benchmarking:
Gigabyte MA790FXT-UD5P Motherboard
AMD Phenom II 955 Quad Core 3.2GHz
4 x Mushkin Silverline PC3-10666 9-9-9-24 1333 2GB RAM
Gigabyte GV-N210OC-512I GeForce 210 512MB 64-bit PCIe Video Card
LG 22x DVD SATA Burner
2 x WD Caviar Blue 320GB 7200RPM SATA Drives (OS/Other)
4 x WD Caviar Blue 80GB 7200RPM SATA Drives (Data)
4 x Patriot Memory 32GB SATA SSD (Database)
Gentoo Linux 10.1
The disk space used is a software RAID 0 array, composed of 4 partition slices of 4864 cylinders (37.3GB), one from each of the 80GB hard drives.
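For reference, building such an array goes roughly like this (the device names here are assumptions, not necessarily what my system used):

    # Stripe the four 37.3GB slices into a single RAID 0 device
    mdadm --create /dev/md0 --level=0 --raid-devices=4 \
        /dev/sdc1 /dev/sdd1 /dev/sde1 /dev/sdf1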
I used the fileio benchmarks in Sysbench 0.4.10 to do these tests.
I created a script that formats the filesystem, mounts it, runs the sysbench prepare statement, clears the file cache, and runs the benchmark. This is done 5 times for each filesystem and I/O operation mode combination, and the results are averaged. Each filesystem is created with its default values. A stripped-down sketch of the script follows.
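(The device, mount point, and option values below are illustrative assumptions, not the exact values from my runs.)

    #!/bin/bash
    # Sketch of the benchmark loop. DEV and MNT are assumed names.
    DEV=/dev/md0
    MNT=/mnt/bench
    for mode in seqwr seqrewr seqrd rndrd rndwr rndrw; do
      for run in 1 2 3 4 5; do
        mkfs.xfs -f $DEV            # swap in mkfs.ext3, mkfs.reiserfs -q, etc.
        mount $DEV $MNT
        cd $MNT
        sysbench --test=fileio --file-test-mode=$mode prepare
        sync
        echo 3 > /proc/sys/vm/drop_caches   # clear the file cache
        sysbench --test=fileio --file-test-mode=$mode run
        sysbench --test=fileio --file-test-mode=$mode cleanup
        cd /
        umount $MNT
      done
    done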
SEQWR
SEQWR is the sequential write benchmark.
XFS is the clear winner here, with ext4 following closely. NILFS2 was really bad, but I attribute that to its newness and the fact that it's not production ready. It performed poorly in every test except one notably weird exception, which I will discuss later (SEQRD). So ignoring NILFS2, JFS was the worst at 2.4x the best.
SEQREWR
SEQREWR is the sequential rewrite benchmark.
JFS just beat out XFS on this one, with EXT3 having a particularly bad showing at 7x the best.
SEQRD
SEQRD is the sequential read benchmark.
I cannot explain why NILFS2 chose this test to shine on, but I suspect either it is something I am missing, or that NILFS2 is just really good at this. All the rest of the filesystems were virtually equal here. If your workload is sequential read, it seems any filesystem would do.
RNDRD
RNDRD is the random read benchmark.
The winner here is REISERFS, with JFS and EXT3 close behind. XFS was the worst at 2.8x the best.
RNDWR
RNDWR is the random write benchmark.
I can't explain the fantastic showing by REISERFS here, unless it buffers the write and returns without having synced it to disk. Setting that aside, EXT3 showed well, followed by BTRFS and JFS. EXT4 was the worst at 3.8x the best.
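One way to test the buffering theory, which I haven't run yet, would be forcing sysbench to fsync after every request instead of its 0.4.x default of every 100:

    # Force an fsync per write so buffered-but-unsynced writes can't
    # inflate the result (--file-fsync-freq defaults to 100)
    sysbench --test=fileio --file-test-mode=rndwr --file-fsync-freq=1 run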
RNDRW
RNDRW is the combined random read/write benchmark.
REISERFS won here, followed closely by JFS. The loser here is XFS at 4.9x the best.
Conclusion: REISERFS and JFS are pretty close contenders for first place, followed by BTRFS and EXT4. Good old EXT3 would be my pick for fifth, leaving XFS and the still immature, but interesting, log-based filesystem NILFS2 in last place.
As always, your mileage may vary. I could have done some tuning, as some of the filesystems have parameters for stride and stripe width on RAID devices, but once I started tuning I wouldn't know where to stop, so I thought it was fairer to compare them all with their default values. For the curious, tuned runs would have looked something like the sketch below.
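(This assumes the md default 64KB chunk size and a 4KB filesystem block size; I haven't verified that these values are optimal.)

    # ext4: stride = chunk/block = 64K/4K = 16; stripe-width = stride * 4 disks
    mkfs.ext4 -E stride=16,stripe-width=64 /dev/md0
    # xfs: stripe unit and stripe width are given directly
    mkfs.xfs -f -d su=64k,sw=4 /dev/md0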
I plan on testing these same filesystems and I/O patterns on SSD disks next. Also, I am going to test BTRFS compression, but I'm not sure yet if that is interesting enough to post about.
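If I do, the compression side of the test is just a mount option away (zlib compression, in the btrfs of this vintage):

    # Enable btrfs transparent compression at mount time
    mount -o compress /dev/md0 /mnt/bench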
Comments:
Are the filesystems formatted with no options at all? That means all default? That's not exactly an interesting benchmark... XFS has hundreds of options to be tuned (one of the reasons I would have left it out), and ext3/4 with sparse_super and dir_index works a lot better... that would have been interesting :)
Plus there are some drawbacks in terms of data recovery and caching.
Did you try creating btrfs using its inbuilt RAID support? I'd like to see how that compares to running on top of a RAID layer.
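(For reference, that would be roughly the command below, reusing the assumed partition names from the setup section; -d sets the data profile and -m the metadata profile.)

    # btrfs's built-in striping across the same four partitions
    mkfs.btrfs -d raid0 -m raid0 /dev/sdc1 /dev/sdd1 /dev/sde1 /dev/sdf1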
What were the variances like on all those averages?
You should specify whether you had barriers enabled and what the journaling mode was. What did you use for RAID? Stripe size?
Ext4 has barriers on by default, so if your raid passed barriers down to disks it would be a lot slower than filesystems that don't have barriers on. But maybe the raid device drops the barrier commands. Same with journaling options - the defaults are different.
Does journaling even make sense in raid0? Should you put the journal on the same device or on a different disk?
Tinkering with inode size and stuff is overkill for most people, but regular users of raid would set the stripe size on the filesystem.
TL;DR - your numbers aren't as useful as they could be.
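(Concretely, the knobs this comment refers to are mount options; a rough sketch follows, noting that the defaults varied by kernel version.)

    # ext3/ext4: barriers and the journal mode are set at mount time
    mount -o barrier=1,data=ordered /dev/md0 /mnt
    # xfs: barriers default to on; nobarrier disables them
    mount -o nobarrier /dev/md0 /mnt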
It is my understanding that raid0 has a mind of its own when it comes to writing data; one would have to use an actual storage device without any abstractions (especially software raid, who knows what happens there). NILFS is meant to be extremely fast when it comes to sequential r/w, simply based on its design.
Linux generally has different types of file systems than other systems like OS/2, Windows, and FreeBSD, but I think it's much more secure than the others.