-Fractal Design Node 304 mITX Compact Case Black 6X3.5INT No PSU Front Audio 2XUSB3.0
-Seagate Barracuda 3TB 7200RPM SATA3 64MB Cache 3.5in Internal Hard Drive
-ASUS H87I-PLUS mITX LGA1150 H87 DDR3 Motherboard
-Kingston KHX1600C10D3B1K2/16G 16GB Kit 2X8GB 1600MHz DDR3 240PIN DIMM CL10 1.5V
-Intel Core i5 4430 Quad Core 3.0GHZ Processor LGA1150 Haswell 6MB Cache Retail *IR-$5*
-Thermaltake TR2 500W Power Supply Cable Management ATX12V V2.3 24PIN With 120mm Fan
-ADATA SX900 2.5" 128GB SSD SATA 3 SandForce 2281 550MB/SEC Write and 530MB/SEC Read
-TP-Link TG-3468 10/100/1000MBPS Gigabit PCIe Network Adapter
So after limping along on my ultra-budget 2TB NAS, with my server VMs running on my workstation, I have run out of space and wanted to BUILD!
Goals:
- The storage should be high-performance SATA 3 (6Gb/s)
- The data volume should be RAID-5 (hardware / software)
- The server should be a virtual server platform.
- The OS for the server should not be on the storage volume.
- Quad core for virtualization
- There should be at least two NICs so that the server and VMs have separate interfaces.
So the hardware is built, and now for the wonderful issues:
- The motherboard's onboard NIC is the brand-new Intel I217, so it does not work with ESXi and there are only Windows 8 drivers. (I had to hack the driver to make it work with Server 2012.)
- I am still debating completing this build with Hyper-V Server 2012, but I have found setup a nightmare. If I went ESXi, the onboard NIC would not work.
- The RAID offered by the motherboard is not true hardware RAID, so I am mostly ignoring it. It was not a key feature, as I expected to do software RAID anyway... although that presents an issue if I decide to go ESXi.
On the plus side:
- Even though it is a small case, I found it super easy to mount the drives and the full-size ATX power supply. This thing easily holds 6 drives, no compromise.
- With all the reboots I have done while fiddling, the SSD drive for the OS sure is quick.
- Overall it is very quiet; I think the stock Intel CPU fan is louder than the case fans.
How is it for heat? The big problem I found running a media box was it was hot all the time and so the fans would be geared up to high RPMs in use.
Quote from: Mr. Analog on June 14, 2013, 10:32:10 AM
How is it for heat? The big problem I found running a media box was it was hot all the time and so the fans would be geared up to high RPMs in use.
The case has a built-in fan controller with 3 settings, all of them quiet. I still haven't transferred my data to it, so I haven't had it run at full blast yet. The design seems to be really well thought out for airflow.
You should also be good with an SSD, fewer moving parts is usually less heat and vibration, the enemies of silent computing :)
Quote from: Mr. Analog on June 14, 2013, 10:40:06 AM
You should also be good with an SSD, fewer moving parts is usually less heat and vibration, the enemies of silent computing :)
It is 1x 128GB SSD for the OS/swap and 3x 3TB drives for the data; those are regular spindle drives. (Total of 4 drives so far.)
I should point out that this is a VERY large case for mITX, but a very SMALL case for 6x 3.5-inch drives.
Quote from: Mr. Analog on June 14, 2013, 10:32:10 AM
How is it for heat? The big problem I found running a media box was it was hot all the time and so the fans would be geared up to high RPMs in use.
I haven't noticed a heat problem with my NAS. Seems like a similar setup to Lazy's, except it's a Core i3 Ivy Bridge on an Intel mITX server board.
As long as you get the air flowing, temps should be fine. Mind you, all my NAS does is file sharing; I have a separate HTPC.
This is the case I went with: http://lian-li.com/v2/tw/product/upload/image/pc-q25/q25-12.jpg
Mainly because it had a few extra 3.5" HDD bays. The only problem with it so far is that the I/O backplate didn't quite fit for some reason, so I had to modify it a little, and the PSU I got is probably a little too big for it. I opted for a higher-quality PSU, which generally means it's a bit larger... It's a very tight squeeze getting it in there and routing the power cables between it and the hot-swap disk bays.
IMO Lazy should have looked at ECC support, but it does add a couple hundred to the cost of the build :( If you care about file integrity at all, it's kind of a necessary thing, especially with software RAID. RAID parity won't save you if the data was corrupted before it hit the parity generation :( I'm pretty sure it's happened to me more than once on my older arrays.
Oh yeah, from the picture it didn't look that big, but then read the dimensions:
Case dimensions (W x H x D): 250 x 210 x 374 mm
Quote from: Tom on June 14, 2013, 10:43:26 AM
IMO Lazy should have looked at ECC support, but it does add a couple hundred to the cost of the build :( If you care about file integrity at all, it's kind of a necessary thing, especially with software RAID. RAID parity won't save you if the data was corrupted before it hit the parity generation :( I'm pretty sure it's happened to me more than once on my older arrays.
Already over what I planned on spending by going Intel vs AMD... there was no budget for SERVER-grade parts. My server resides in a cool basement and has a UPS providing clean power.
I am thinking of using the old 2TB NAS as a backup target for the OS and critical files... I am not that concerned about bit flipping on my media files.
I just got incredibly sick of my files corrupting. I have a backup for my RAID array (sure, I haven't actually put it online yet, but one step at a time ;D). I got tired of re-acquiring all the unimportant stuff after a while too.
I just got sick of messing with cheap consumer stuff for my backups and data storage. I don't want to deal with the headaches that being cheap has caused me in the past.
For my most important stuff, it gets stored on the original machines, on a RAID 1 volume locally, and again remotely. And I may add another remote backup location. Paranoid? Me? Maaaayyyybbbbeeee... I may also stick it on my data backup array (well, it'll be a linear array if I don't end up picking up a couple more 3TB disks) as well.
If you were REALLY paranoid Iron Mountain would visit your house once a week so you could offsite your backups out of rotation LOOOL
Quote from: Mr. Analog on June 14, 2013, 11:02:25 AM
If you were REALLY paranoid Iron Mountain would visit your house once a week so you could offsite your backups out of rotation LOOOL
I haven't quite gotten rid of my cheap side.
RAID IS NOT BACKUP.. hehehe.. Ya that is why I plan on using the old slow NAS as a backup target...
Budget is key. I was planning on going with a QNAP or Synology, but I really like having Plex transcode, and I plan on adding DVR-style recording in the future.
Quote from: Tom on June 14, 2013, 11:04:02 AM
Quote from: Mr. Analog on June 14, 2013, 11:02:25 AM
If you were REALLY paranoid Iron Mountain would visit your house once a week so you could offsite your backups out of rotation LOOOL
I haven't quite gotten rid of my cheap side.
You have a sealed bag full of hard drives buried in some field don't you
Quote from: Lazybones on June 14, 2013, 11:04:51 AM
RAID IS NOT BACKUP.. hehehe.. Ya that is why I plan on using the old slow NAS as a backup target...
Hehe. Yeah, I've had my arrays blow up; I am /fully/ aware of how RAID != backup... especially when the admin is fond of fat-fingering commands.
That's why I have a backup array in the works, one that will be easy to expand (linear concat!) and maintain. I have 3x 3TB drives ready for it. More than enough space to back up all the data I have right now, but not large enough to make a complete copy of the NAS if it were full. If I do end up finding some 3TB disks for cheap, I may build the backup array as RAID 5, but I dunno. A plain old linear concat would be more than enough, and it scales better.
Quote from: Mr. Analog on June 14, 2013, 11:05:20 AM
Quote from: Tom on June 14, 2013, 11:04:02 AM
Quote from: Mr. Analog on June 14, 2013, 11:02:25 AM
If you were REALLY paranoid Iron Mountain would visit your house once a week so you could offsite your backups out of rotation LOOOL
I haven't quite gotten rid of my cheap side.
You have a sealed bag full of hard drives buried in some field don't you
I've been thinking about getting an underground fireproof safe, one of the ones that gets mostly encased in cement, and then placing disks and some archive-grade storage media in there.
Quote from: Lazybones on June 14, 2013, 11:04:51 AM
RAID IS NOT BACKUP.. hehehe.. Ya that is why I plan on using the old slow NAS as a backup target...
Budget is key. I was planning on going with a QNAP or Synology, but I really like having Plex transcode, and I plan on adding DVR-style recording in the future.
(http://zeldor.biz/wp-content/uploads/2012/03/RAID.jpg)
So this build rages on, mostly because my vision, budget, and software have not jibed.
- Screw Hyper-V Server 2012: this legally free version can basically only be set up in a full domain. I spent hours trying to work out the workgroup security, and even with the aid of a utility to debug the setup I could not get Server Manager on my Win 8 workstation to work with it properly. I also had to fudge the Win 8 driver to get the onboard NIC to work.
ESXi does not support the onboard Intel I217 chip or the cheap Realtek secondary NIC I installed, so it is also dead.
On to Ubuntu... I've been running 12.04 LTS on my VMs, so I have been giving that a shot. Tried the onboard RAID, which it does detect but with the wrong volume size. Also tried ZFS, because I wanted dedup, but have given up on that due to the insane RAM requirements for it to work.
Shutdown and reboot also hang; I tried a bunch of GRUB options to fix it but nothing worked. I think I will try Ubuntu 13.04 next, as I think it comes with specific Haswell chipset support.
I think I am going to end up with traditional Linux software RAID and KVM as my virtual machine platform. Or I will just consolidate all my Linux VMs onto one instance, although there are benefits to segregating them.
Just be aware that Windows guests on KVM perform poorly. If you're interested in Windows guests, you would be better off with VirtualBox.
I am currently not running any Windows guests, but that is good to know.
It's better if you can find the virtio drivers for Windows, but last I checked they didn't support all versions of Windows; you'll have to check for yourself. It's mostly disk I/O and video that's stupidly slow.
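For what it's worth, here is a rough sketch of what requesting virtio disk and network for a new Windows guest looks like with virt-install, with the virtio-win driver ISO attached so the installer can see the disk. The names, paths, and --os-variant value are illustrative assumptions, not anything from this thread:
# untested sketch: Windows guest with virtio disk/net plus the driver ISO
virt-install --name win2012 --ram 4096 --vcpus 2 \
  --disk path=/var/lib/libvirt/images/win2012.qcow2,size=40,bus=virtio \
  --network network=default,model=virtio \
  --cdrom /isos/win2012.iso \
  --disk path=/isos/virtio-win.iso,device=cdrom \
  --os-variant win2k12
Without the virtio drivers loaded during install, the guest falls back to emulated IDE and NIC devices, which is where most of the slowness comes from.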
Quote from: Tom on June 17, 2013, 08:30:31 AM
It's better if you can find the virtio drivers for Windows, but last I checked they didn't support all versions of Windows; you'll have to check for yourself. It's mostly disk I/O and video that's stupidly slow.
Good to know... Going to scrub the server OS again, start fresh with Ubuntu 13.04, set up the software RAID, do some performance tests, and hopefully move on with the build.
I've been stuck trying to perfect this before using it, as once it is full of data there will be little I can do.
Ah, the call of perfection. I reinstalled Windows 7 on my laptop five times in a row because I was setting things up not quite right. Now it hums, even as others with the same laptop complain of problems. I totally hear you on wanting to have the perfect installation before copying data onto it.
Too bad that you're having so many problems getting drivers for hardware and setting up your software RAID and all that.
Some of his problems are purely due to the brand spanking new hardware he's using.
Quote from: Thorin on June 17, 2013, 10:36:37 AM
Ah, the call of perfection. I reinstalled Windows 7 on my laptop five times in a row because I was setting things up not quite right. Now it hums, even as others with the same laptop complain of problems. I totally hear you on wanting to have the perfect installation before copying data onto it.
Too bad that you're having so many problems getting drivers for hardware and setting up your software RAID and all that.
Ya, this is all the bad stuff; the hardware itself seems to be great.
1. The case is small considering it has room for 6x 3.5" drives.
2. The case is reasonably quiet with 1 SSD and 3x 3.5" drives in it right now. I believe the fan I hear is the stock CPU fan, which I could swap out if I wanted.
3. Boot time is VERY VERY fast, and that is a good thing considering how much rebooting I have been doing.
4. The 6Gb/s SATA ports seem to be performing well for both the SSD and the 3TB drives... I can easily get 100MB/s (800Mbit/s) file transfers out of this box vs the 30MB/s (240Mbit/s, max not average) transfers on the old single-drive NAS.
5. No DOA hardware.
6. No clear HW incompatibilities.
All my issues are with software and my design goals/budget. If I had more budget some problems would have been easy to solve; others are just plain software issues.
Interesting things I have learned due to design goals:
Block-level dedupe:
- Saw claims that with Server 2012 some people were getting significant reductions even on media files, due to common media file headers and metadata like album art.
- Tested ZFS under Linux; could not get any dedup savings with media files alone, short of literal duplicate files, even going down to a 4K block size.
- ZFS sounds amazing... but it is too expensive to implement; it requires gigabytes of RAM for every TB of data, or throughput drops by a factor of 20x or more (a rough way to estimate the cost is sketched below).
- OpenDedupe is not very fast, and written in Java... not mature, not ready for production.
- LessFS is even less mature than OpenDedupe, if I am following the posts correctly.
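For anyone weighing the same trade-off, zdb can simulate dedup on an existing pool before you commit to it. A sketch only; the pool name "tank" is an example, and the ~320 bytes per unique block is the commonly quoted ballpark, not a guarantee:
# untested sketch: simulate dedup on pool "tank" and print a DDT histogram plus a ratio estimate
zdb -S tank
# rough RAM estimate for the dedup table:
#   allocated blocks x ~320 bytes per unique block
Work that out for a few TB at typical block sizes and you land at the "gigabytes of RAM per TB" figure pretty quickly.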
Intel Haswell chipset support has issues in Linux:
- Mostly has to do with 3D video support, but overall there are issues; under Ubuntu you need to be running 13.04 to get any of the native functions.
Linux remote access for a Windows user:
- Although you can use VNC, and reconfigure it to run outside of someone being logged in, it is kind of clunky in most distros, including Ubuntu.
- http://www.xrdp.org/ <- Use the Windows RDP client with Linux... just install the package and it works in Ubuntu (see the sketch below).
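In case anyone wants to try it, the Ubuntu side really is about one line (package name as of 12.04/13.04):
# install xrdp, then connect with the regular Windows Remote Desktop client (mstsc)
sudo apt-get install xrdp
# point mstsc at the server's address; xrdp listens on the standard RDP port, 3389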
Heh. My lean & mean NAS has 16GB of RAM and no GUI to speak of, running Debian sid.
It's currently using 2GB of RAM, and most of that is Plex and Deluge. Otherwise the cache is near full at 13GB.
I have it running off of an old 30GB Vertex SSD. It's fast enough, but nowhere near as fast as a modern SSD; I think it tops out at 200-250MB/s. It is a SATA II drive as well.
The stupidly awesome thing about my setup is the PCIe 2.0 x8 SAS/SATA card; with RAID 5 and 7x 2TB HDDs, I get upwards of 800MB/s locally. So there's plenty of bandwidth left over for local work while remote access happens via NFS or Plex.
On topic: I'd go with XFS for bulk storage. Look up some of the tweaking options. The only time I have issues with it at all is when it gets near full; it starts slowing down quite significantly (or it did in the past, I haven't tested it in quite a while).
Quote from: Tom on June 17, 2013, 12:16:50 PM
Heh. My lean & mean NAS has 16GB of RAM and no GUI to speak of, running Debian sid.
It's currently using 2GB of RAM, and most of that is Plex and Deluge. Otherwise the cache is near full at 13GB.
I have it running off of an old 30GB Vertex SSD. It's fast enough, but nowhere near as fast as a modern SSD; I think it tops out at 200-250MB/s. It is a SATA II drive as well.
The stupidly awesome thing about my setup is the PCIe 2.0 x8 SAS/SATA card; with RAID 5 and 7x 2TB HDDs, I get upwards of 800MB/s locally. So there's plenty of bandwidth left over for local work while remote access happens via NFS or Plex.
On topic: I'd go with XFS for bulk storage. Look up some of the tweaking options. The only time I have issues with it at all is when it gets near full; it starts slowing down quite significantly (or it did in the past, I haven't tested it in quite a while).
I was looking at https://help.ubuntu.com/community/LinuxFilesystemsExplained and was wondering if ext4 would not be fine given my storage requirements, vs XFS, which is maintained but not the default. I am dialing back my advanced design to a stable design... heck, I would stay on 12.04 LTS if I was sure I was getting my CPU/chipset performance and could sort out the hang on shutdown/reboot (went through all the GRUB fixes and it still doesn't work).
FYI, if you are into file systems and still interested in ZFS, don't bother with zfs-fuse; install the native zfs-linux kernel build... I still ran into the dedup memory wall, but it was many times faster than FUSE.
Also, performance-wise, ext4 seems like a good choice:
http://www.phoronix.com/scan.php?page=article&item=linux_310fs_fourway&num=1
ext4 works well for some TB-sized LUNs we have at work. Good performance.
ext4 is good, but XFS is better long term for large filesystems with a lot of large files.
If it were between ext3 and XFS, it's XFS hands down, but since they added extents and such to ext4, it's a closer call.
I still think it's easier to tweak XFS into much better performance. But hey, you may not need better performance than ext4 gets you, so it's kind of moot.
Quote from: Lazybones on June 17, 2013, 01:35:13 PM
I was looking at https://help.ubuntu.com/community/LinuxFilesystemsExplained and was wondering if ext4 would not be fine given my storage requirements, vs XFS, which is maintained but not the default.
XFS is installed by default on every distro I know of; it's in the kernel as a supported driver. So I'm not sure what you mean, other than that distros pick ext for their main filesystems. XFS isn't /for/ your root drive, though you can use it for that quite successfully. Its main claim to fame is large volumes with lots of large files, and it excels there.
Historically, ext is designed for filesystems with LOTS of small files. ext4 does change that up a bit with extent support, so it can handle larger files much better, but it's still probably tweaked for standard Linux root filesystems: lots and lots of tiny files.
Damn, now I want to spend a bunch of money on new computers to fill my house. I have a large sum of money ($20k) just about to come my way. Except I owe more than that on credit cards, so it's all spoken for. Still, thanks for sharing your trials and tribulations, hopefully someday I can take advantage of what I'm learning here :)
100MB/s vs 30MB/s is a huge difference, especially when you're trying to copy files to your phone to take on the road with you.
Tom, 800MB/s? not MBit?
And yes, SSDs boot up way, way, faster than HDDs.
Quote from: Thorin on June 17, 2013, 03:17:21 PM
Damn, now I want to spend a bunch of money on new computers to fill my house. I have a large sum of money ($20k) just about to come my way. Except I owe more than that on credit cards, so it's all spoken for. Still, thanks for sharing your trials and tribulations, hopefully someday I can take advantage of what I'm learning here :)
100MB/s vs 30MB/s is a huge difference, especially when you're trying to copy files to your phone to take on the road with you.
Tom, 800MB/s? not MBit?
Each of the seven 2TB drives is capable of at least 120MB/s on its own (if not 140MB/s+). Running them in RAID 5 gets you striping, so you're reading from all of them simultaneously, and with a good enough pre-load cache you can read off them at a pretty good clip.
Quote from: Thorin on June 17, 2013, 03:17:21 PM
And yes, SSDs boot up way, way, faster than HDDs.
Yup. I have a speed demon of an SSD in my laptop, capable of 550MB/s read/write with excellent latency and random access times. It boots in a few seconds. It actually starts faster from a cold boot than it does from suspend.
I haven't done any inter-disk copies from the SSD to the 3TB drives or back... not really a concern in my build. However, as Tom points out, with most RAID types your read speed goes up with the number of drives/spindles you have. This is why in corporate SAN systems you will often see smaller drives purchased, but lots of them, to fill an array where performance is needed. That is changing, though, to using SSDs either for direct I/O or as a high-speed cache for larger, slower disks.
It is nice to have the extra headroom for when local processes are using the array at the same time you're streaming from it.
Re-installed with Ubuntu 13.04..
- Onboard intel network adapter detected and was selectable as primary of the bad.
- Setup Md raid with XFS... Waiting for it to complete formatting... well it should be done now but it was running over night. ( makes me suddenly appreciate how ZFS allocates on the fly)
"of the bad"
I assume this was auto-corrected from "off the bat".
Glad you're getting it running, though. So I forgot, is this going to be your streaming media server as well? Or just a nice NAS?
Quote from: Thorin on June 18, 2013, 01:00:31 PM
"of the bad"
I assume this was auto-corrected from "off the bat".
Glad you're getting it running, though. So I forgot, is this going to be your streaming media server as well? Or just a nice NAS?
This is intended to be my everything server, including media transcoding.
I would have just purchased a much cheaper QNAP or Synology NAS if it were not for the lackluster transcoding abilities on those. I really like being able to stream or watch media on a variety of devices, and they all tend to have different capabilities.
Quote from: Lazybones on June 18, 2013, 11:15:08 AM
Re-installed with Ubuntu 13.04..
- Onboard intel network adapter detected and was selectable as primary of the bad.
- Setup Md raid with XFS... Waiting for it to complete formatting... well it should be done now but it was running over night. ( makes me suddenly appreciate how ZFS allocates on the fly)
Dude. XFS takes seconds to format TBs of data. It too allocates on the fly. If it's still going, something is wrong.
Quote from: Tom on June 18, 2013, 04:02:20 PM
Quote from: Lazybones on June 18, 2013, 11:15:08 AM
Re-installed with Ubuntu 13.04..
- Onboard intel network adapter detected and was selectable as primary of the bad.
- Setup Md raid with XFS... Waiting for it to complete formatting... well it should be done now but it was running over night. ( makes me suddenly appreciate how ZFS allocates on the fly)
Dude. XFS takes seconds to format TBs of data. It too allocates on the fly. If it's still going, something is wrong.
MD RAID is building the RAID set... I set the array options during install, and after the initial reboot, checking the array status showed all 3 drives in initial rebuild mode, which from past experience with traditional RAID is standard. XFS will sit on top of MD; formatting was probably the wrong word.
Whereas ZFS just takes the raw devices, and all the parity is handled at the FS level dynamically.
Also, I should note that looking up Linux mdadm vs LVM and what the optimal chunk and cluster sizes should be was a real crapshoot. I found forums full of contradictory info on what performs well, what to do for large files, etc.
Also interesting: the default mdadm chunk size recently changed to 512K, I believe, whereas I have found forum posts stating 64K or 1024K were optimal... I think some people are confusing chunk and cluster sizes.
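For reference, the create-and-watch steps look roughly like this. A sketch only; /dev/sd[bcd] and the explicit 512K chunk are assumptions, not necessarily what was actually run here:
# untested sketch: 3-disk RAID 5 with an explicit 512K chunk, then XFS on top
mdadm --create /dev/md0 --level=5 --raid-devices=3 --chunk=512 /dev/sdb /dev/sdc /dev/sdd
cat /proc/mdstat        # shows the initial build/resync progress
mkfs.xfs /dev/md0       # near-instant; it does not write out the whole volume
The mkfs step really is quick, which is why the overnight wait was the MD resync rather than XFS.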
Ah, a rebuild. Yeah, that'll take a little while; I think my array took 15 hours or more. You CAN use it while it's rebuilding, it's just in degraded mode.
Quote from: Lazybones on June 18, 2013, 04:29:34 PM
Also interesting: the default mdadm chunk size recently changed to 512K, I believe, whereas I have found forum posts stating 64K or 1024K were optimal... I think some people are confusing chunk and cluster sizes.
The optimal chunk/stride size is completely application dependent. Do you want faster small-I/O random access or faster streaming access? Tweaking for one will hurt the other.
Here's one thing that helps speed /a lot/:
echo 32768 > /sys/block/md0/md/stripe_cache_size
in /etc/rc.local
It uses up a lot of RAM, but that's why you want to put a lot of RAM in a NAS. It basically slurps up whole stripes at a time, and you end up reading out of RAM a lot of the time.
I also added the following settings on startup:
#! /bin/bash
### BEGIN INIT INFO
# Provides:          drivetweaks
# Required-Start:    $remote_fs
# Required-Stop:     $remote_fs
# Default-Start:     2 3 4 5
# Default-Stop:
# Short-Description: drivetweaks
# Description:       tweaks settings of drives
### END INIT INFO

. /lib/lsb/init-functions
[ -f /etc/default/rcS ] && . /etc/default/rcS

PATH=/bin:/usr/bin:/sbin:/usr/sbin
DESC="tweak drive settings"
NAME="drivetweaks"
NR_REQUESTS=8192
DRIVES=()
RAID_DRIVES=()

do_drive_tweaks()
{
    for i in `ls /sys/block`; do
        # filter out all loop devices
        [[ "$i" == loop* ]] && continue
        DATA=$(file -b -s /dev/$i | cut -d' ' -f 2)
        # echo $i: $DATA
        if [ "xx$DATA" != "xxempty" ]; then
            # if file returns something other than sticky empty, we have a device with data attached
            eval $(blkid -o export /dev/$i)
            if [ "xx$TYPE" = "xxlinux_raid_member" ]; then
                DRIVES+=($i)
                RAID_DRIVES+=($i)
                # echo "got raid member: $i"
            elif [ "xx$TYPE" == "xx" ]; then
                # regular devices
                DRIVES+=($i)
                # echo "got regular block: $i"
            fi
        fi
        unset TYPE
    done
    # return;

    for i in ${RAID_DRIVES[*]} ; do
        # echo raid drive: $i
        # setup timeouts on raid members, default 30
        echo 120 > /sys/block/$i/device/timeout
        # disable NCQ
        echo 1 > /sys/block/$i/device/queue_depth
        # tweak queue settings, defaults are 128
        echo $NR_REQUESTS > /sys/block/$i/queue/nr_requests
        # disable disk power management
        hdparm -q -B 255 /dev/$i
    done

    for i in ${DRIVES[*]} ; do
        # echo drive: $i
        # setup drive io scheduler
        echo deadline > /sys/block/$i/queue/scheduler
    done
}

undo_drive_tweaks()
{
    # nothing to do here for now
    echo "" > /dev/null
}

case "$1" in
    start)
        log_begin_msg "Tweaking drive settings"
        do_drive_tweaks
        log_end_msg 0
        ;;
    stop)
        log_begin_msg "Untweaking drive settings"
        undo_drive_tweaks
        log_end_msg 0
        ;;
    restart)
        $0 stop
        sleep 3
        $0 start
        ;;
    force-reload)
        $0 restart
        ;;
    status)
        log_success_msg "Ok!"
        ;;
    *)
        log_success_msg "Usage: /etc/init.d/drivetweaks {start|stop|restart|force-reload|status}"
        exit 1
        ;;
esac

exit 0
It all tends to help a little. Setting deadline helps with RAID and SSDs, and setting the device timeout on consumer HDDs is rather important.
Since I plan on benchmarking the raid before moving my data I needed the rebuild to be complete.
Quote from: Lazybones on June 18, 2013, 05:35:22 PM
Since I plan on benchmarking the raid before moving my data I needed the rebuild to be complete.
True enough.
Oh, have you turned up the min and max rebuild speed? You should!
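For reference, those limits live under /proc and are expressed in KB/s; the values below are just examples, not a recommendation:
# check the current md resync/rebuild speed limits
cat /proc/sys/dev/raid/speed_limit_min /proc/sys/dev/raid/speed_limit_max
# raise them (KB/s)
echo 50000  > /proc/sys/dev/raid/speed_limit_min
echo 500000 > /proc/sys/dev/raid/speed_limit_max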
I looked into it; it does not impact the initial build. My initial build was already running at more than double the rebuild limit.
However, we are talking about 3x 3TB disks that need to have the array structure written from one end to the other when using traditional RAID.
It should already be done; I kicked it off last night.
Slow-ass transfer under way; not that I have many options, because the old unit is so slow.
You know what takes a while? Transferring 5TB over the LAN, even GbE.
Quote from: Lazybones on June 18, 2013, 08:56:14 PM
Slow-ass transfer under way; not that I have many options, because the old unit is so slow.
Did you do your benchmarking? Is it fast, or
fast! ?
I didn't do an internal drive-to-drive test, but my over-the-network write speed was 90-99MB/s and reads were topping out at 110MB/s (bytes, not bits), so as far as gigabit file serving goes it is performing very well.
I will do some internal tests to the SSD and back later.
My old unit tops out at 30MB/s over the network, and I have 1.5-2TB of data to transfer, so I have started that and it is just going to have to run a long time.
That sounds pretty damn fast. 90MB/s for 2TB (2,048GB, or 2,097,152MB) means about 23,300 seconds, or roughly 388 minutes, or about 6.5 hours. The old unit is limited to one-third of that speed, so about 19.5 hours. Yikes. My Drobo's slow like that; I think it runs maybe 20-22MB/s. But it sure is easy to set up and add drives to without having to rebuild arrays or anything.
Just make sure you build your array large enough initially so you won't have to extend it for a year or three. :D my new nas is over twice the size of my old data share, and I have 5TB or more free. And I don't think I'll fill it up any time soon.
By the time I do need to expand it. I'll probably replace all of the drives outright with 3 or 4TB (or larger) disks. So a little rebuilding won't bug me much :)
(the system fits 7 3.5" HDDs).
Quote from: Tom on June 18, 2013, 11:23:36 PM
Just make sure you build your array large enough initially so you won't have to extend it for a year or three. :D my new nas is over twice the size of my old data share, and I have 5TB or more free. And I don't think I'll fill it up any time soon.
By the time I do need to expand it. I'll probably replace all of the drives outright with 3 or 4TB (or larger) disks. So a little rebuilding won't bug me much :)
(the system fits 7 3.5" HDDs).
Old capacity was 2TB, new capacity is 6TB; the threefold increase should last a while... Also, I almost went with 4TB drives, as they were on sale for the same cost per TB, but that would have driven the system cost up even more.
Quote from: Lazybones on June 18, 2013, 11:30:16 PM
Quote from: Tom on June 18, 2013, 11:23:36 PM
Just make sure you build your array large enough initially so you won't have to extend it for a year or three. :D my new nas is over twice the size of my old data share, and I have 5TB or more free. And I don't think I'll fill it up any time soon.
By the time I do need to expand it. I'll probably replace all of the drives outright with 3 or 4TB (or larger) disks. So a little rebuilding won't bug me much :)
(the system fits 7 3.5" HDDs).
Old capacity was 2TB, new capacity is 6TB; the threefold increase should last a while... Also, I almost went with 4TB drives, as they were on sale for the same cost per TB, but that would have driven the system cost up even more.
That's why I went with 2TB disks for my NAS. Also, I had a few because I'd been stockpiling them whenever a really good sale was on. I also have three 3TB disks just sitting around; I'll eventually add them to my old array and turn it into the backup for the new one, at some point.
Data transferred, share links updated. VMs converted to KVM.
Only issue so far is that the easiest KVM tool, virt-manager, only runs under Linux. Investigating various web UI options for remote management.
FYI, for converting the VMs, the simplest solution turned out to be booting from CloneZilla, saving to a share, then booting a new empty VM and restoring. Under Linux, CloneZilla updated all of the IDE boot devices to virtual devices.
Much better than attempting a direct image conversion followed by a manual edit.
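For the curious, the direct-conversion route being dismissed here would look roughly like the following. A sketch only; the VMware .vmdk source is my assumption, since the original disk format isn't stated in the thread:
# untested sketch: convert a VMware disk image into qcow2 for KVM
qemu-img convert -f vmdk -O qcow2 oldvm.vmdk oldvm.qcow2
# ...after which you typically still have to fix the guest's boot/disk driver config by hand,
# which is exactly the manual-edit step the CloneZilla approach avoided.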
Finished the switch late last night; I haven't done any speed or other tests yet.
Performance of the software raid
/dev/md0:
Timing cached reads: 25406 MB in 2.00 seconds = 12717.42 MB/sec
Timing buffered disk reads: 432 MB in 3.00 seconds = 143.88 MB/sec
http://www.storagereview.com/seagate_barracuda_3tb_review_1tb_platters_st3000dm001
Average read - 156 MB/s
Max read - 210 MB/s
So the software RAID is performing just under average for this model of drive..
Performance of the SSD
/dev/sda:
Timing cached reads: 25928 MB in 2.00 seconds = 12979.79 MB/sec
Timing buffered disk reads: 1164 MB in 3.00 seconds = 387.93 MB/sec
http://www.anandtech.com/show/5710/the-adata-xpg-sx900-128gb-review-maximizing-sandforce-capacity/2
I seem to be on the money or better for SSD performance.
An rsync copy from the RAID to the SSD was only about 99MB/s, but that isn't a very good test.
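For anyone wanting to reproduce these numbers, they look like hdparm output; something along these lines, run as root:
# -T measures cached (buffer) reads, -t measures buffered reads from the device itself
hdparm -Tt /dev/md0
hdparm -Tt /dev/sda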
Increase the stripe cache; that may help your streaming speeds. That is quite low for what you have... you have three 3TB drives in RAID 5?
I'd kind of expect 150% of a single drive's speed with that array. But there are lots of tweaks you can make to improve things. Take a look at that script I posted earlier; it's an init.d script.
As for rsync speeds, that's pretty decent; rsync has a lot of overhead.
I can setup my 3tb drives as a test to compare if you want.
Ya, I haven't done any optimization. The data is all transferred now, so I plan on only doing some minor tweaks.
Sorry, what's the point of the timing of the cached read? It doesn't actually show how fast the drive is getting accessed, does it?
Nice access speeds, though, especially compared to my (what, now six year old?) machines sitting at home.
Quote from: Thorin on June 23, 2013, 03:08:50 AM
Sorry, what's the point of the timing of the cached read? It doesn't actually show how fast the drive is getting accessed, does it?
Nice access speeds, though, especially compared to my (what, now six year old?) machines sitting at home.
Cached reads show the speed of the interface; usually it tops out at the speed of the SATA bus, but it looks like it's reading out of memory in his setup.
For instance, here's my newer raid array:
/dev/md0:
Timing cached reads: 12232 MB in 2.00 seconds = 6119.60 MB/sec
Timing buffered disk reads: 2190 MB in 3.00 seconds = 729.94 MB/sec
As a comparison to Lazy's SSD, here are my two fast ones:
ADATA SX900 256GB in laptop
/dev/sda:
Timing cached reads: 22814 MB in 2.00 seconds = 11420.22 MB/sec
Timing buffered disk reads: 1186 MB in 3.00 seconds = 395.08 MB/sec
Kingston HyperX 3K 240GB in desktop:
/dev/sda:
Timing cached reads: 9480 MB in 2.00 seconds = 4743.41 MB/sec
Timing buffered disk reads: 1388 MB in 3.00 seconds = 462.16 MB/sec
Dammit, I knew I should not have benchmarked it...
Now I will likely find the speed issue is due to my block size or something, and I can't change that.
Quote from: Lazybones on June 23, 2013, 10:50:00 AM
Dammit, I knew I should not have benchmarked it...
Now I will likely find the speed issue is due to my block size or something, and I can't change that.
Nah, I think the default stripe unit size is fine for a NAS. They upped it significantly a few revisions (of mdadm) back; it's 512K now, I think, rather than the old default of 64K, which is great for random access but horrible for contiguous streaming.
root@mrbig:~# mdadm -D /dev/md0 | grep "Chunk Size"
Chunk Size : 512K
Yours should be the same.
Also, I think it might be possible to change it on the fly with a reshape command.
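If anyone ever wants to try it, the reshape would be something like the following. An untested sketch: a RAID 5 chunk reshape is slow, and depending on the mdadm version and layout it may require a backup file for the stripes being rewritten:
# untested sketch: reshape md0 to a 256K chunk
mdadm --grow /dev/md0 --chunk=256 --backup-file=/root/md0-reshape.bak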
Quote from: Lazybones on June 23, 2013, 10:50:00 AM
Dammit, I knew I should not have benchmarked it...
Now I will likely find the speed issue is due to my block size or something, and I can't change that.
Ignorance is bliss, and all that, eh?
Still having issues getting the RAID to perform, but now my SSD flies:
/dev/sda:
Timing buffered disk reads: 1544 MB in 3.00 seconds = 514.03 MB/sec (up from 387.93 MB/sec now getting full performance)
/dev/md0:
Timing buffered disk reads: 742 MB in 3.00 seconds = 247.19 MB/sec (up from 143.88 MB/sec still have a problem here)
247 is pretty decent for a 3 drive raid5 to be honest. I'm not sure how much more you can expect, I assume you're using the motherboard's built in ports. My system has a LSI based 8 port PCIE 2 x8 SAS card.
Quote from: Tom on June 23, 2013, 02:18:52 PM
247 is pretty decent for a 3 drive raid5 to be honest. I'm not sure how much more you can expect, I assume you're using the motherboard's built in ports. My system has a LSI based 8 port PCIE 2 x8 SAS card.
My main board is brand new and is supposed to support 6.0Gb/s SATA across all 6 ports. I will post my final results when done.
With some additional testing I was able to get speeds between 243 and 270 MB/s out of the software RAID... it is a bit inconsistent, since I have services running on it. Given there are only 3 disks and this is significantly faster than a single disk, that isn't bad.
I almost missed the fact that I needed to change the start-up scripts to make the values persist.
Found this script handy for testing values:
http://ubuntuforums.org/showthread.php?t=1916607
Quote from: Lazybones on June 23, 2013, 02:27:58 PM
Quote from: Tom on June 23, 2013, 02:18:52 PM
247 is pretty decent for a 3 drive raid5 to be honest. I'm not sure how much more you can expect, I assume you're using the motherboard's built in ports. My system has a LSI based 8 port PCIE 2 x8 SAS card.
My main board is brand new and is supposed to support 6.0Gb/s SATA across all 6 ports. I will post my final results when done.
Onboard controllers aren't quite as speedy as a dedicated controller. My LSI card is technically a HW RAID card, but I'm using it in JBOD mode; it should still be speedier than onboard SATA. It is also SATA III / 6Gb/s.
Quote from: Tom on June 23, 2013, 04:26:26 PM
Onboard controllers aren't quite as speedy as a dedicated controller. My LSI card is technically a HW RAID card, but I'm using it in JBOD mode; it should still be speedier than onboard SATA. It is also SATA III / 6Gb/s.
Generally that is true, but only if the board has a dedicated processor and memory.
You will note I am getting basically full spec speed out of my SSD, and it is using the same set of onboard ports.
Quote from: Lazybones on June 23, 2013, 07:16:18 PM
Quote from: Tom on June 23, 2013, 04:26:26 PM
Onboard controllers aren't quite as speedy as a dedicated controller. My LSI card is technically a HW RAID card, but I'm using it in JBOD mode; it should still be speedier than onboard SATA. It is also SATA III / 6Gb/s.
Generally that is true, but only if the board has a dedicated processor and memory.
You will note I am getting basically full spec speed out of my SSD, and it is using the same set of onboard ports.
My card can even do RAID 5 with a special dongle. But yeah, the only other thing to worry about is total bandwidth across the ports. I think your new speeds are just fine, though. With some tweaks (if you haven't done them all) you could get better streaming reads, though that usually hurts small random reads quite a lot.
Full list of my tweaks:
Add the following lines to your /etc/rc.local:
## SSD (sda) optimization: large read-ahead
blockdev --setra 8192 /dev/sda
## MD RAID optimizations
# bigger stripe cache for md0 (uses more RAM, helps streaming reads)
echo 16384 > /sys/block/md0/md/stripe_cache_size
# small read-ahead on the member disks, large read-ahead on the array itself
blockdev --setra 128 /dev/sd[dcb]
blockdev --setra 8192 /dev/md0
# cap per-request size on the members
echo 256 > /sys/block/sdd/queue/max_sectors_kb
echo 256 > /sys/block/sdc/queue/max_sectors_kb
echo 256 > /sys/block/sdb/queue/max_sectors_kb
# disable NCQ on the members
echo 1 > /sys/block/sdd/device/queue_depth
echo 1 > /sys/block/sdc/device/queue_depth
echo 1 > /sys/block/sdb/device/queue_depth
# raise the command timeout (default 30s)
echo 120 > /sys/block/sdd/device/timeout
echo 120 > /sys/block/sdc/device/timeout
echo 120 > /sys/block/sdb/device/timeout
# allow more queued requests (default 128)
echo 256 > /sys/block/sdd/queue/nr_requests
echo 256 > /sys/block/sdc/queue/nr_requests
echo 256 > /sys/block/sdb/queue/nr_requests
This won't really help performance, but its an option:
#disable hdd power management
hdparm -q -B 255 /dev/$i
This may help performance:
echo deadline > /sys/block/$i/queue/scheduler
You don't really want to be running the CFQ io scheduler on the raid members. For that matter, you may not want it running on your SSD either I don't think. You may optionally choose 'noop' over deadline.
Hm, It doesn't seem like I'm playing with the ra values at all... I can't remember why not, possibly I just forgot after I set it all up. :D
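For anyone following along, checking (or switching) the active scheduler per disk is just a sysfs read/write; the kernel brackets the one in use:
cat /sys/block/sdb/queue/scheduler
# prints something like:  noop [deadline] cfq
echo noop > /sys/block/sdb/queue/scheduler   # switch it if you want to experiment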
Quote from: Tom on June 23, 2013, 08:15:02 PM
This won't really help performance, but its an option:
#disable hdd power management
hdparm -q -B 255 /dev/$i
This may help performance:
echo deadline > /sys/block/$i/queue/scheduler
You don't really want to be running the CFQ io scheduler on the raid members. For that matter, you may not want it running on your SSD either I don't think. You may optionally choose 'noop' over deadline.
Hm, It doesn't seem like I'm playing with the ra values at all... I can't remember why not, possibly I just forgot after I set it all up. :D
I want the drives to spin down and save power, so I left power management alone.
I tested deadline, but performance went down, so I reverted to the defaults.
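(For reference, if you did want to force a spin-down timer rather than leave it to the drive, the knob would be hdparm's -S option; the value below is just an example:)
# ask sdb to drop to standby after 10 minutes idle (-S 120 = 120 x 5 seconds)
hdparm -S 120 /dev/sdb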
Quote from: Lazybones on June 23, 2013, 08:18:01 PM
Quote from: Tom on June 23, 2013, 08:15:02 PM
This won't really help performance, but its an option:
#disable hdd power management
hdparm -q -B 255 /dev/$i
This may help performance:
echo deadline > /sys/block/$i/queue/scheduler
You don't really want to be running the CFQ io scheduler on the raid members. For that matter, you may not want it running on your SSD either I don't think. You may optionally choose 'noop' over deadline.
Hm, It doesn't seem like I'm playing with the ra values at all... I can't remember why not, possibly I just forgot after I set it all up. :D
I want the drives to spin down and save power, so I left power management alone.
Sadly, that won't ever happen in Linux, not for very long. They'll constantly spin down, then back up, and wear the drive out. Linux has some kind of periodic flush that it does on any mounted FS, which causes the drive to do something every few minutes.
Quote from: Lazybones on June 23, 2013, 08:18:01 PM
I tested deadline, but performance went down, so I reverted to the defaults.
Huh. I seem to recall deadline helped. Weird.
So I am digging this old beast up because I have made changes:
1. Dumped the full Ubuntu/KVM host configuration.
2. Installed VMware ESXi 5.5 Update 2 (slipstreamed in the Realtek and consumer Intel drivers).
3. Remounted the existing Linux software RAID via an OpenMediaVault guest VM using raw device maps (it even sees the SMART values for the drives).
4. Just finished converting my PVR and Plex server VMs from KVM qcow2 to VMware VMDK disks and loaded the correct VMware Tools.
Time to do some benchmarks soon...
One item of note initially is that my network transfer speeds have improved from 80-105 to 90-115 MB/s doing some quick file transfers to OpenMediaVault. I am going to do some more detailed disk benchmarks later.
OLD results, with Ubuntu on direct hardware:
/dev/md0:
Timing buffered disk reads: 742 MB in 3.00 seconds = 247.19 MB/sec
OpenMediaVault (Debian 7.0) VM on VMware with a raw-mapped disk... SAME array of 3 disks:
/dev/md0:
Timing buffered disk reads: 804 MB in 3.00 seconds = 267.93 MB/sec
Newer kernels have had some work done on io and MD performance. :)
Recently had a drive fail, so while waiting for the RMA I purchased a replacement.
Then, when the RMA arrived, I expanded the RAID to 4 drives, which proves more spindles = more speed. FYI, on these 3TB drives the rebuild/expansion time was about 24 hours, which isn't bad.
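For reference, growing an mdadm RAID 5 from 3 to 4 disks goes roughly like this (a sketch; /dev/sde stands in for the new disk and the mount point is made up):
# untested sketch: add the new disk, reshape from 3 to 4 devices, then grow XFS
mdadm --add /dev/md0 /dev/sde
mdadm --grow /dev/md0 --raid-devices=4
# once the ~24h reshape finishes, expand the filesystem into the new space
xfs_growfs /srv/storage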
New low:
/dev/md0:
Timing buffered disk reads: 1178 MB in 3.00 seconds = 392.41 MB/sec
New High:
/dev/md0:
Timing buffered disk reads: 1210 MB in 3.00 seconds = 403.09 MB/sec
FYI, for those dealing with larger RAID sets: I have had a second Seagate ST3000DM001 fail on me. Not the worst record, but also not very good.
I think the high summer temps probably don't help... my setup is in the basement and well ventilated, but it can still get very warm in the house.
Quote from: Lazybones on August 08, 2015, 07:39:16 PM
FYI, for those dealing with larger RAID sets: I have had a second Seagate ST3000DM001 fail on me. Not the worst record, but also not very good.
I think the high summer temps probably don't help... my setup is in the basement and well ventilated, but it can still get very warm in the house.
I've had a few of those fail; they are known to be bad. HDDs should cope with 50°C+ internal temps just fine, so it's probably not the heat.
Had a third ST3000DM001 fail on me this week.
And thus the problem with building a home array: lots of drives cost a lot, so you end up purchasing cheap drives.
At this rate, cloud storage is starting to look affordable.
Yeaahhh, I've had several cheap Seagate drives die; they really don't rate them for 24/7 use. I've started to switch to WD Reds. More expensive, but they ought to last longer.
My fear is that I buy the same make/model to build an array every time, so if one drive goes it's likely the whole array isn't far behind.
I've been "lucky" in that most of my array failures have been on the controller side (and thus unrecoverable in most cases)
The problem is that for RAID to be efficient, the performance and latency of each drive needs to be near identical, or the system has to wait for things to sync up on reads and writes.
You are even supposed to ensure the drives have the same firmware level, which can be impossible for many consumer drives that don't provide update tools.
For software RAID you really don't need to care about most of those things. You're not in it for ultimate performance on most home NASes. I pity the foo who uses hardware RAID for a home NAS :o
Wow, you guys have bad luck. My Synology has 8 bays and has only lost two drives in 4 years. And I don't buy NAS drives, just regular old Seagates or WDs, whichever is cheaper. And since I don't use it as an NFS target, 5400-5900 RPM drives are good enough for 1080p streams :D
For me, I don't think it's luck any more. Maybe for a couple or a few drives, but 5+ in the past few years? Nah man, nah. It's a combination of tighter tolerances on newer hardware, lower MTBF/rated on-time, and lower prices. The drives just aren't made for what I was using them for.
Maybe your Synology actually knows how to spin the drives down when not in use. That alone could save drives, I bet.
Also, I'm pretty sure I got hit with bad firmware on a few of my drives.
Well, looking back at this thread, the recently failed disk had been in service for close to 3 years.
I think I am mostly disappointed that the cost of the drives has not dropped.