New Home media / File server build

Started by Lazybones, June 14, 2013, 10:29:52 AM


Lazybones

Having issues getting the RAID up to full performance, but now my SSD flies.

/dev/sda:
Timing buffered disk reads: 1544 MB in  3.00 seconds = 514.03 MB/sec (up from 387.93 MB/sec, now getting full performance)


/dev/md0:
Timing buffered disk reads: 742 MB in  3.00 seconds = 247.19 MB/sec (up from 143.88 MB/sec, still have a problem here)
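
For reference, these are buffered read numbers from hdparm; roughly this, run as root:

# Buffered (uncached) sequential read test; run it a few times, the numbers bounce around a bit
hdparm -t /dev/sda    # the SSD
hdparm -t /dev/md0    # the 3-disk RAID5 array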

Tom

247 is pretty decent for a 3-drive RAID5, to be honest. I'm not sure how much more you can expect; I assume you're using the motherboard's built-in ports. My system has an LSI-based 8-port PCIe 2.0 x8 SAS card.
<Zapata Prime> I smell Stanley... And he smells good!!!

Lazybones

#62
Quote from: Tom on June 23, 2013, 02:18:52 PM
247 is pretty decent for a 3-drive RAID5, to be honest. I'm not sure how much more you can expect; I assume you're using the motherboard's built-in ports. My system has an LSI-based 8-port PCIe 2.0 x8 SAS card.

My motherboard is brand new and is supposed to support 6.0 Gb/s SATA across all 6 ports. I will post my final results when done.
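
One way to double-check that the ports actually negotiated 6.0 Gb/s is the kernel log; something along these lines should show a line per port:

# Look for "SATA link up 6.0 Gbps" (3.0 Gbps would mean it only negotiated SATA II)
dmesg | grep -i "SATA link up"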

Lazybones

#63
With some additional testing I was able to get speeds between 243 and 270 MB/sec out of the software RAID... it is a bit inconsistent since I have services running on it. Since there are only 3 disks and this is significantly faster than the speed of one disk, that isn't bad.

Almost missed the fact that I needed to change the start-up scripts to make the values persist.

Found this script handy for testing the values:
http://ubuntuforums.org/showthread.php?t=1916607
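
In case that link dies, the basic idea is just a loop that sets a value and re-runs the read test; a rough sketch along the same lines (not the linked script itself):

# Try a few stripe_cache_size values and benchmark each one
for size in 1024 4096 8192 16384 32768; do
    echo $size > /sys/block/md0/md/stripe_cache_size
    echo "stripe_cache_size=$size"
    hdparm -t /dev/md0
done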

Tom

Quote from: Lazybones on June 23, 2013, 02:27:58 PM
Quote from: Tom on June 23, 2013, 02:18:52 PM
247 is pretty decent for a 3-drive RAID5, to be honest. I'm not sure how much more you can expect; I assume you're using the motherboard's built-in ports. My system has an LSI-based 8-port PCIe 2.0 x8 SAS card.

My motherboard is brand new and is supposed to support 6.0 Gb/s SATA across all 6 ports. I will post my final results when done.
Onboard controllers aren't quite as speedy as a dedicated controller. My LSI card is technically a hardware RAID card, but I'm using it in JBOD mode; it still should be speedier than onboard SATA. It is also SATA III/6 Gb/s.
<Zapata Prime> I smell Stanley... And he smells good!!!

Lazybones

Quote from: Tom on June 23, 2013, 04:26:26 PM
Onboard controllers aren't quite as speedy as a dedicated controller. My LSI card is technically a hardware RAID card, but I'm using it in JBOD mode; it still should be speedier than onboard SATA. It is also SATA III/6 Gb/s.

Generally that is true, but only if the controller has a dedicated processor and memory.

You will note I am getting basically full-spec speed out of my SSD, and it is using the same set of onboard ports.

Tom

Quote from: Lazybones on June 23, 2013, 07:16:18 PM
Quote from: Tom on June 23, 2013, 04:26:26 PM
Onboard controllers aren't quite as speedy as a dedicated controller. My LSI card is technically a hardware RAID card, but I'm using it in JBOD mode; it still should be speedier than onboard SATA. It is also SATA III/6 Gb/s.

Generally that is true, but only if the controller has a dedicated processor and memory.

You will note I am getting basically full-spec speed out of my SSD, and it is using the same set of onboard ports.
My card can even do RAID5 with a special dongle. The only other thing to worry about is total bandwidth across ports. But yeah, I think your new speeds are just fine. With some tweaks (if you haven't done them all) you could get better streaming reads, though that usually impacts your small random-access reads quite a lot.
<Zapata Prime> I smell Stanley... And he smells good!!!

Lazybones

Full list of my tweaks.
Add the following lines to your /etc/rc.local:

## SSD drive optimization
# Large readahead on the SSD (8192 x 512-byte sectors = 4 MB)
blockdev --setra 8192 /dev/sda

## MD RAID optimizations
# Bigger stripe cache for the RAID5 array (memory used is this many pages per member disk)
echo 16384 > /sys/block/md0/md/stripe_cache_size

# Small readahead on the member disks, large readahead on the md device itself
blockdev --setra 128 /dev/sd[dcb]
blockdev --setra 8192 /dev/md0

# Cap the maximum size of a single request on each member, in KB
echo 256 > /sys/block/sdd/queue/max_sectors_kb
echo 256 > /sys/block/sdc/queue/max_sectors_kb
echo 256 > /sys/block/sdb/queue/max_sectors_kb

# Queue depth of 1 effectively disables NCQ on the members
echo 1 > /sys/block/sdd/device/queue_depth
echo 1 > /sys/block/sdc/device/queue_depth
echo 1 > /sys/block/sdb/device/queue_depth

# Longer command timeout on the members, in seconds
echo 120 > /sys/block/sdd/device/timeout
echo 120 > /sys/block/sdc/device/timeout
echo 120 > /sys/block/sdb/device/timeout

# Deeper block-layer request queue on the members
echo 256 > /sys/block/sdd/queue/nr_requests
echo 256 > /sys/block/sdc/queue/nr_requests
echo 256 > /sys/block/sdb/queue/nr_requests
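
The per-disk lines could also be collapsed into a loop, which makes it easier if a member drive letter changes or a disk gets added later; same settings, just more compact (assuming the members stay sdb/sdc/sdd):

# Apply the member-disk settings above in one loop
for d in sdb sdc sdd; do
    blockdev --setra 128 /dev/$d
    echo 256 > /sys/block/$d/queue/max_sectors_kb
    echo 1   > /sys/block/$d/device/queue_depth
    echo 120 > /sys/block/$d/device/timeout
    echo 256 > /sys/block/$d/queue/nr_requests
done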

Tom

This won't really help performance, but it's an option:

# Disable HDD power management
hdparm -q -B 255 /dev/$i


This may help performance:

echo deadline > /sys/block/$i/queue/scheduler


You don't really want to be running the CFQ I/O scheduler on the RAID members. For that matter, I don't think you want it running on your SSD either. You may optionally choose 'noop' over deadline.
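
To see what a disk is currently using, the active scheduler shows up in brackets:

# The scheduler in [brackets] is the one in use, e.g. "noop [deadline] cfq"
cat /sys/block/sdb/queue/scheduler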

Hm, it doesn't seem like I'm playing with the readahead (ra) values at all... I can't remember why not; possibly I just forgot after I set it all up. :D
<Zapata Prime> I smell Stanley... And he smells good!!!

Lazybones

Quote from: Tom on June 23, 2013, 08:15:02 PM
This won't really help performance, but it's an option:

# Disable HDD power management
hdparm -q -B 255 /dev/$i


This may help performance:

echo deadline > /sys/block/$i/queue/scheduler


You don't really want to be running the CFQ I/O scheduler on the RAID members. For that matter, I don't think you want it running on your SSD either. You may optionally choose 'noop' over deadline.

Hm, it doesn't seem like I'm playing with the readahead (ra) values at all... I can't remember why not; possibly I just forgot after I set it all up. :D

I want the drives to spin down and save power, so I left power management alone.

I tested deadline but performance went down, so I reverted to defaults.

Tom

Quote from: Lazybones on June 23, 2013, 08:18:01 PM
Quote from: Tom on June 23, 2013, 08:15:02 PM
This won't really help performance, but it's an option:

# Disable HDD power management
hdparm -q -B 255 /dev/$i


This may help performance:

echo deadline > /sys/block/$i/queue/scheduler


You don't really want to be running the CFQ I/O scheduler on the RAID members. For that matter, I don't think you want it running on your SSD either. You may optionally choose 'noop' over deadline.

Hm, it doesn't seem like I'm playing with the readahead (ra) values at all... I can't remember why not; possibly I just forgot after I set it all up. :D

I want the drives to spin down and save power, so I left power management alone.
Sadly that won't ever happen in Linux, not for very long anyway. They'll constantly spin down, then spin back up, and wear the drives out. Linux has some kind of periodic flush that it does on any mounted fs, which causes the drive to do something every few minutes.
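
If you want to check whether they are actually staying asleep, hdparm can report the power state without waking the drive:

# Prints "active/idle" or "standby" for the member disk
hdparm -C /dev/sdb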

Quote from: Lazybones on June 23, 2013, 08:18:01 PM
I tested deadline but performance went down, so I reverted to defaults.
Huh. I seem to recall deadline helped. Weird.
<Zapata Prime> I smell Stanley... And he smells good!!!

Lazybones

#71
So I am digging this old beast up because I have made changes.


1. Dumped the full Ubuntu / KVM host configuration.
2. Installed VMware ESXi 5.5 Update 2 (slipstreamed in Realtek and consumer Intel drivers).
3. Remounted the existing Linux software RAID via OpenMediaVault running as a guest VM, using raw device maps (it even sees the SMART values for the drives).
4. Just finished converting my PVR and Plex server VMs from KVM qcow2 to VMware VMDK disks and loaded the correct VMware Tools (rough sketch of steps 3 and 4 below).
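
Rough sketch of steps 3 and 4 for anyone curious; the disk identifier and file names here are placeholders, not the actual ones from this build:

# Create a physical-mode raw device mapping for one RAID member (run in the ESXi shell)
vmkfstools -z /vmfs/devices/disks/t10.ATA_____EXAMPLE_DISK_ID /vmfs/volumes/datastore1/omv/raid-disk1-rdm.vmdk

# Convert a KVM qcow2 image to VMDK (run on a Linux box with qemu-utils installed)
# The result may still need importing into an ESXi-native format, e.g. with vmkfstools -i
qemu-img convert -f qcow2 -O vmdk pvr.qcow2 pvr.vmdk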

Time to do some benchmarks soon...

One item of note initially is that my network transfer speeds have improved from 80-105 to 90-115 MBytes/s doing some quick file transfers to OpenMediaVault. I am going to do some more detailed disk benchmarks later.

Lazybones

Old results with Ubuntu on direct hardware:
/dev/md0:
Timing buffered disk reads: 742 MB in  3.00 seconds = 247.19 MB/sec

OpenMediaVault (Debian 7.0) VM on VMware with raw-mapped disks... SAME array of 3 disks:
/dev/md0:
Timing buffered disk reads: 804 MB in  3.00 seconds = 267.93 MB/sec

Tom

Newer kernels have had some work done on I/O and MD performance. :)
<Zapata Prime> I smell Stanley... And he smells good!!!

Lazybones

Recently had a drive fail, so while waiting for the RMA I purchased a replacement.

Then when the RMA arrived I expanded the RAID to 4 drives, which proves more spindles = more speed. FYI, on these 3 TB drives the rebuild/expansion time was about 24 hours, which isn't bad.
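
For reference, growing onto the fourth disk is a two-step mdadm operation; a sketch, assuming the new drive came up as /dev/sde (not confirmed above):

# Add the new disk as a spare, then reshape the array onto it
mdadm --add /dev/md0 /dev/sde
mdadm --grow /dev/md0 --raid-devices=4

# Watch the reshape progress (this is the part that takes ~24 hours)
cat /proc/mdstat
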
New low:
/dev/md0:
Timing buffered disk reads: 1178 MB in  3.00 seconds = 392.41 MB/sec

New high:
/dev/md0:
Timing buffered disk reads: 1210 MB in  3.00 seconds = 403.09 MB/sec