New Home media / File server build

Started by Lazybones, June 14, 2013, 10:29:52 AM


Lazybones

I haven't done any inter-disk copies on the device from the SSD to the 3TB drives or back... not really a concern in my build. However, as Tom points out, with most RAID types your read speed goes up with the number of drives/spindles you have. This is why in corporate SAN systems you will often see smaller drives purchased, but lots of them, to fill an array where performance is needed. However, that is changing toward using SSDs either for direct I/O or to act as a high-speed cache for larger, slower disks.
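To put rough numbers on it (assuming ~150 MB/s sequential per 3TB 7200 RPM drive, which is only a ballpark): a 3-drive RAID 5 has two data spindles per stripe, so streaming reads top out around 300 MB/s, while a 6-drive array built from smaller disks would be closer to 750 MB/s for the same capacity. More spindles, more throughput, which is exactly the corporate SAN pattern.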

Tom

It is nice to have the extra headroom for when local processes are using the array at the same time you're streaming from it.
<Zapata Prime> I smell Stanley... And he smells good!!!

Lazybones

Re-installed with Ubuntu 13.04..

- Onboard intel network adapter detected and was selectable as primary of the bad.
- Set up MD RAID with XFS... Waiting for it to complete formatting... well, it should be done by now, but it was running overnight. (Makes me suddenly appreciate how ZFS allocates on the fly.)

Thorin

"of the bad"

I assume this was auto-corrected from "off the bat".

Glad you're getting it running, though.  So I forgot, is this going to be your streaming media server as well?  Or just a nice NAS?
Prayin' for a 20!

gcc thorin.c -pedantic -o Thorin
compile successful

Lazybones

Quote from: Thorin on June 18, 2013, 01:00:31 PM
"of the bad"

I assume this was auto-corrected from "off the bat".

Glad you're getting it running, though.  So I forgot, is this going to be your streaming media server as well?  Or just a nice NAS?

This is intended to be my Everything server, including media transcoding.

I would have just purchased a much cheaper QNAP or Synology NAS if it were not for the lackluster transcoding abilities on those. I really like being able to stream or watch media on a variety of devices, and they all tend to have different capabilities.

Tom

Quote from: Lazybones on June 18, 2013, 11:15:08 AM
Re-installed with Ubuntu 13.04..

- Onboard intel network adapter detected and was selectable as primary of the bad.
- Set up MD RAID with XFS... Waiting for it to complete formatting... well, it should be done by now, but it was running overnight. (Makes me suddenly appreciate how ZFS allocates on the fly.)
Dude. XFS takes seconds to format TBs of data. It too allocates on the fly. If it's still going, something is wrong.
<Zapata Prime> I smell Stanley... And he smells good!!!

Lazybones

Quote from: Tom on June 18, 2013, 04:02:20 PM
Quote from: Lazybones on June 18, 2013, 11:15:08 AM
Re-installed with Ubuntu 13.04..

- Onboard intel network adapter detected and was selectable as primary of the bad.
- Set up MD RAID with XFS... Waiting for it to complete formatting... well, it should be done by now, but it was running overnight. (Makes me suddenly appreciate how ZFS allocates on the fly.)
Dude. XFS takes seconds to format TBs of data. It too allocates on the fly. If it's still going, something is wrong.

It's MD RAID building the RAID set... I set the array options during the install, and after the initial reboot, checking the array status shows all 3 drives in initial rebuild mode, which from past experience with traditional RAID is standard. XFS will sit on top of MD; formatting was probably the wrong word.

Whereas ZFS just takes the raw devices, and all the parity is handled at the FS level dynamically.
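For reference, the rebuild progress is easy to watch from the shell; something like this, assuming the array came up as /dev/md0:

cat /proc/mdstat                  # shows resync progress and an ETA
sudo mdadm --detail /dev/md0      # array state and per-drive status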

Lazybones

Also, I should note that looking up Linux mdadm vs. LVM and what the optimal chunk and cluster sizes should be was a real crap shoot. I found forums full of contradictory info on what performs well, what to do for large files, etc.

Also interesting was that the default chunk size for mdadm recently changed to 512K, I believe, whereas I have found forum posts stating 64K or 1024K were optimal... I think some people are confusing chunk and cluster sizes.
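For what it's worth, here is roughly what the create/format step looks like if you pin the chunk size and align XFS to it. Just a sketch, assuming a 3-drive RAID 5 on /dev/sdb, /dev/sdc, /dev/sdd and a 512K chunk (su is the chunk size, sw is the number of data disks, which is 2 for a 3-drive RAID 5):

sudo mdadm --create /dev/md0 --level=5 --raid-devices=3 --chunk=512 /dev/sdb /dev/sdc /dev/sdd
sudo mkfs.xfs -d su=512k,sw=2 /dev/md0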

Tom

Ah, a rebuild. Yeah. That'll take a little while. I think my array took 15 hours or more. You CAN use it while it's rebuilding. It's just in degraded mode.

Quote from: Lazybones on June 18, 2013, 04:29:34 PM
Also interesting was that the defaults for mdadm recently changed to 512k for chunks I believe where as I have found forum posts stating 64k or 1024k where optimal.. I think some people are confusing the chunks and cluster sizes..
The optimal chunk/stride size is completely application dependent. Do you want faster small random I/O or faster streaming access? Tweaking for one will hurt the other.

Here's one thing that helps speed /a lot/:

echo 32768 > /sys/block/md0/md/stripe_cache_size

in /etc/rc.local

It uses up a lot of RAM, but that's why you want to put a lot of RAM in a NAS. It basically slurps up whole stripes of data at a time, so you end up reading out of RAM a lot of the time.
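If I remember right, the memory cost works out to stripe_cache_size x 4 KiB page x number of member disks, so 32768 x 4 KiB x 3 drives is about 384 MiB just for the stripe cache. Scale it down if the box is RAM-starved.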

I also added the following settings on startup:

#! /bin/bash

### BEGIN INIT INFO
# Provides:          drivetweaks
# Required-Start:    $remote_fs
# Required-Stop:     $remote_fs
# Default-Start:     2 3 4 5
# Default-Stop:
# Short-Description: drivetweaks
# Description:       tweaks settings of drives
### END INIT INFO

. /lib/lsb/init-functions

[ -f /etc/default/rcS ] && . /etc/default/rcS
PATH=/bin:/usr/bin:/sbin:/usr/sbin
DESC="tweak drive settings"
NAME="drivetweaks"

NR_REQUESTS=8192

DRIVES=()
RAID_DRIVES=()

do_drive_tweaks()
{
    for i in `ls /sys/block`; do
        # filter out all loop devices
        [[ "$i" == loop* ]] && continue

        DATA=$(file -b -s /dev/$i | cut -d' ' -f 2)
        # echo $i: $DATA
        if [ "xx$DATA" != "xxempty" ]; then
            # if file returns something other than "empty", we have a device with data attached
            eval $(blkid -o export /dev/$i)

            if [ "xx$TYPE" = "xxlinux_raid_member" ]; then
                DRIVES+=($i)
                RAID_DRIVES+=($i)
                # echo "got raid member: $i"
            elif [ "xx$TYPE" == "xx" ]; then
                # regular devices
                DRIVES+=($i)
                # echo "got regular block: $i"
            fi
        fi

        unset TYPE
    done

    # return;

    for i in ${RAID_DRIVES[*]} ; do
        # echo raid drive: $i

        # set command timeout on raid members, default is 30 seconds
        echo 120 > /sys/block/$i/device/timeout

        # disable NCQ
        echo 1 > /sys/block/$i/device/queue_depth

        # tweak queue settings, default is 128
        echo $NR_REQUESTS > /sys/block/$i/queue/nr_requests

        # disable disk power management
        hdparm -q -B 255 /dev/$i
    done

    for i in ${DRIVES[*]} ; do
        # echo drive: $i
        # set drive io scheduler
        echo deadline > /sys/block/$i/queue/scheduler
    done
}

undo_drive_tweaks()
{
    # nothing to do here for now
    echo "" > /dev/null
}

case "$1" in
    start)
        log_begin_msg "Tweaking drive settings"
        do_drive_tweaks
        log_end_msg 0
        ;;
    stop)
        log_begin_msg "Untweaking drive settings"
        undo_drive_tweaks
        log_end_msg 0
        ;;
    restart)
        $0 stop
        sleep 3
        $0 start
        ;;
    force-reload)
        $0 restart
        ;;
    status)
        log_success_msg "Ok!"
        ;;
    *)
        log_success_msg "Usage: /etc/init.d/drivetweaks {start|stop|restart|force-reload|status}"
        exit 1
        ;;
esac

exit 0


It all tends to help a little. Setting deadline helps with RAID and SSDs, and setting the device timeout on consumer HDDs is rather important.
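In case it's not obvious, the script also has to be made executable and registered with the runlevels; on Ubuntu that's something like this, assuming it was saved as /etc/init.d/drivetweaks:

sudo chmod +x /etc/init.d/drivetweaks
sudo update-rc.d drivetweaks defaults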
<Zapata Prime> I smell Stanley... And he smells good!!!

Lazybones

Since I plan on benchmarking the RAID before moving my data, I needed the rebuild to be complete.

Tom

Quote from: Lazybones on June 18, 2013, 05:35:22 PM
Since I plan on benchmarking the RAID before moving my data, I needed the rebuild to be complete.
True enough.
<Zapata Prime> I smell Stanley... And he smells good!!!

Tom

Oh, have you turned up the min and max rebuild speed? You should!
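The knobs live under /proc/sys/dev/raid and take values in KB/s; something along these lines if you want to let the rebuild run flat out (the numbers here are just examples):

echo 100000 > /proc/sys/dev/raid/speed_limit_min
echo 500000 > /proc/sys/dev/raid/speed_limit_max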
<Zapata Prime> I smell Stanley... And he smells good!!!

Lazybones

I looked into it; it does not impact the initial build. My initial build was already running at more than double the rebuild limit.

However, we are talking about 3x 3TB disks that need to have the array structure written from one end to the other when using traditional RAID.

It should already be done; I kicked it off last night.

Lazybones

Slow-ass transfer under way; not that I have many options, because the old unit is so slow.

Tom

You know what takes a while? Transferring 5TB over the LAN, even on GbE.
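To put a number on it: GbE realistically tops out around 110 MB/s, so 5 TB is roughly 5,000,000 MB / 110 MB/s, which is about 45,000 seconds, call it 12 to 13 hours, and that's assuming the disks on both ends can keep the pipe full.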
<Zapata Prime> I smell Stanley... And he smells good!!!