New Home media / File server build

Started by Lazybones, June 14, 2013, 10:29:52 AM


Lazybones

So this build rages on, mostly because my vision, budget and software have not jibed.

- Screw Hyper-V Server 2012; this legally free version can basically only be set up in a full domain. I spent hours trying to work out the workgroup security, and even with the aid of a utility to debug the setup I could not get Server Manager on my Win8 workstation to properly work with it. I also had to fudge the Win8 driver to get the onboard NIC to work.

ESXi does not support the onboard Intel i217 chip or the cheap Realtek secondary NIC I installed, so it is also dead.

On to Ubuntu... I've been running 12.04 LTS on my VMs, so I have been giving that a shot. Tried the onboard RAID, which it does detect but it reports the wrong volume size. Also tried ZFS because I wanted dedup, but I have given up on that due to the insane RAM requirements for it to work.

I also hit hangs on shutdown and reboot; tried a bunch of GRUB options to fix it but nothing worked. I think I will try Ubuntu 13.04 next, as I think it comes with specific Haswell chipset support.

I think I am going to end up with traditional Linux software RAID and KVM as my virtual machine platform. Or I will just consolidate all my Linux VMs onto one instance; however, there are benefits to segregating them.
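If I go that route, the array itself is just plain mdadm. A minimal sketch, assuming the three 3TB data drives show up as /dev/sdb through /dev/sdd (device names are a guess, and RAID5 is an assumption on my part):

# create a 3-disk RAID5 array from the data drives (device names assumed)
sudo mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sdb /dev/sdc /dev/sdd

# watch the initial sync, then save the layout so the array assembles at boot
cat /proc/mdstat
sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf
sudo update-initramfs -u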

Tom

Just be aware that Windows guests on KVM perform poorly. If you're interested in Windows guests, you would be better off with VirtualBox.
<Zapata Prime> I smell Stanley... And he smells good!!!

Lazybones

I am currently not running any windows guests, but that is good to know.

Tom

It's better if you can find the virtio drivers for Windows, but last I checked they didn't support all versions of Windows. You'll have to check for yourself. It's mostly disk I/O and video that's stupid slow.
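If you do end up putting a Windows guest on KVM, it helps to build it with virtio disk and network from the start. A rough sketch using virt-install (the guest name, image path, sizes and ISO location are placeholders, and you'd still need the virtio driver ISO attached so the Windows installer can see the disk):

# Windows guest with virtio disk and NIC (paths and sizes are placeholders)
sudo virt-install --name win7 --ram 2048 --vcpus 2 \
  --disk path=/var/lib/libvirt/images/win7.img,size=40,bus=virtio \
  --network network=default,model=virtio \
  --cdrom /iso/win7.iso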
<Zapata Prime> I smell Stanley... And he smells good!!!

Lazybones

Quote from: Tom on June 17, 2013, 08:30:31 AM
It's better if you can find the virtio drivers for Windows, but last I checked they didn't support all versions of Windows. You'll have to check for yourself. It's mostly disk I/O and video that's stupid slow.

Good to know... Going to scrub the server OS again and start fresh with Ubuntu 13.04, set up the software RAID, do some performance tests, and hopefully move on with the build.

Been stuck trying to perfect this before using it, as once it is full of data there will be little I can do to change it.
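For the performance tests I'll probably keep it simple, something like this (the device and mount point names are assumptions, and a single dd run is only a rough sequential number, not a real benchmark):

# raw sequential read straight off the array (device name assumed)
sudo hdparm -t /dev/md0

# sequential write through the filesystem, bypassing the page cache
dd if=/dev/zero of=/mnt/storage/testfile bs=1M count=4096 oflag=direct
rm /mnt/storage/testfile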

Thorin

Ah, the call of perfection.  I reinstalled Windows 7 on my laptop five times in a row because I was setting things up not quite right.  Now it hums, even as others with the same laptop complain of problems.  I totally hear you on wanting to have the perfect installation before copying data onto it.

Too bad that you're having so many problems getting drivers for hardware and setting up your software RAID and all that.
Prayin' for a 20!

gcc thorin.c -pedantic -o Thorin
compile successful

Tom

Some of his problems are purely due to the brand spanking new hardware he's using.
<Zapata Prime> I smell Stanley... And he smells good!!!

Lazybones

Quote from: Thorin on June 17, 2013, 10:36:37 AM
Ah, the call of perfection.  I reinstalled Windows 7 on my laptop five times in a row because I was setting things up not quite right.  Now it hums, even as others with the same laptop complain of problems.  I totally hear you on wanting to have the perfect installation before copying data onto it.

Too bad that you're having so many problems getting drivers for hardware and setting up your software RAID and all that.

Ya, this is all the bad stuff; the hardware itself seems to be great.

1. Case is small considering it has room for six 3.5" drives.
2. Case is reasonably quiet with 1 SSD and 3 3.5" drives in it right now. I believe the fan I hear is the stock CPU fan; I could swap that out if I wanted.
3. Boot time is VERY VERY fast, and that is a good thing considering how much rebooting I have been doing.
4. The 6Gb/s SATA ports seem to be performing well for both the SSD and the 3TB drives... I can easily get 100MB/s (800Mbit/s) file transfers out of this box vs the 30MB/s (240Mbit/s, max not average) transfers on the old single-drive NAS.
5. No DOA hardware.
6. No clear HW incompatibilities.

All my issues are with software and my design goals / budget. If I had more budget some problems would have been easy to solve; others are just plain software issues.

Interesting things I have learned due to design goals:
Block-level dedupe
- Saw claims that with Server 2012 some people were getting significant reductions even on media files, due to common media file headers and metadata like album art.
- Tested ZFS under Linux; could not get any dedup savings with media files alone, short of literal duplicate files, even going down to a 4K block size (see the sketch after this list).
- ZFS sounds amazing... but is too expensive to implement; it needs gigs of RAM for every TB of data, or throughput drops by a factor of 20x or more.
- OpenDedup is not very fast, and it's written in Java... Not mature, not ready for production.
- LessFS: even less mature than OpenDedup, if I am following the posts correctly.
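For anyone who wants to repeat the ZFS dedup test, this is roughly how I checked the savings (the pool and dataset names are placeholders; zdb -S only simulates dedup against data already in the pool):

# simulate dedup on the existing pool and print the would-be ratio and table size
sudo zdb -S tank

# or turn it on for one dataset, copy the media files in, and check the ratio afterwards
sudo zfs set dedup=on tank/media
sudo zpool get dedupratio tank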

Intel Haswell chipset support has issues in Linux
- Mostly has to do with video / 3D support, but overall there are issues; under Ubuntu you need to be running 13.04 to get any of the native functions.

Linux remote access for a Windows user:
- Although you can use VNC, and reconfigure it to run outside of someone being logged in, it is kind of clunky in most distros including Ubuntu.
- http://www.xrdp.org/ <- Use the Windows RDP client with Linux... just install the package and it works in Ubuntu...
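The whole xrdp setup on Ubuntu really is one package; the only wrinkle is that Unity's 3D desktop doesn't work over it, so pointing it at an Xfce session is a common workaround (treat the Xfce part as optional, it's my tweak rather than anything from the xrdp package itself):

# install xrdp; the service starts on its own
sudo apt-get install xrdp

# optional: give RDP sessions an Xfce desktop instead of Unity
sudo apt-get install xfce4
echo "xfce4-session" > ~/.xsession
sudo service xrdp restart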

Tom

Heh. My lean & mean NAS has 16GB of RAM and no GUI to speak of, running Debian sid.

It's currently using 2GB of RAM, and most of that is Plex and Deluge. Otherwise the cache is near full @ 13GB.

I have it running off of an old 30GB Vertex SSD. It's fast enough, but nowhere near as fast as a modern SSD. I think it tops out at 200-250MB/s. It is a SATA II drive as well.

The stupid awesome thing about my setup is the PCIe v2 x8 SAS/SATA card; what with RAID5 and 7x2TB HDDs, I get upwards of 800MB/s locally. So there's plenty of bandwidth left over for local stuff to be happening and remote access via NFS or Plex.

On topic: I'd go with XFS for bulk storage. Look up some of the tweaking options. The only time I have issues with it at all is when it gets near full; it starts slowing down quite significantly (or it did in the past, I haven't tested it in quite a while).
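For what it's worth, creating it is about two commands. A sketch for an XFS bulk volume on top of md (the 64K chunk and two-data-disk geometry are example numbers, not your actual layout; a recent mkfs.xfs on an md device will usually work the stripe geometry out itself):

# su/sw describe the RAID stripe: 64K chunk, 2 data disks in this example
sudo mkfs.xfs -d su=64k,sw=2 /dev/md0

# mount with the usual bulk-storage options
sudo mount -o noatime,inode64 /dev/md0 /mnt/storage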
<Zapata Prime> I smell Stanley... And he smells good!!!

Lazybones

Quote from: Tom on June 17, 2013, 12:16:50 PM
Heh. My lean & mean NAS has 16GB of RAM and no GUI to speak of, running Debian sid.

It's currently using 2GB of RAM, and most of that is Plex and Deluge. Otherwise the cache is near full @ 13GB.

I have it running off of an old 30GB Vertex SSD. It's fast enough, but nowhere near as fast as a modern SSD. I think it tops out at 200-250MB/s. It is a SATA II drive as well.

The stupid awesome thing about my setup is the PCIe v2 x8 SAS/SATA card; what with RAID5 and 7x2TB HDDs, I get upwards of 800MB/s locally. So there's plenty of bandwidth left over for local stuff to be happening and remote access via NFS or Plex.

On topic: I'd go with XFS for bulk storage. Look up some of the tweaking options. The only time I have issues with it at all is when it gets near full; it starts slowing down quite significantly (or it did in the past, I haven't tested it in quite a while).

I was looking at https://help.ubuntu.com/community/LinuxFilesystemsExplained and was wondering if ext4 would not be fine given my storage requirements vs XFS which is maintained but not standard. I am dialing back my advanced design to a stable design... Heck, I would stay on 12.04 LTS if I was sure I was getting my CPU / chipset performance and could sort out the hang on shutdown / reboot (went through all the GRUB fixes and it still doesn't work).

FYI, if you are into file systems and still interested in ZFS, don't bother with zfs-fuse; install the native ZFS-on-Linux kernel build... I still ran into the dedup memory wall, but it was many times faster than FUSE.

Also, performance-wise ext4 seems like a good choice:
http://www.phoronix.com/scan.php?page=article&item=linux_310fs_fourway&num=1
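If ext4 wins out, this is roughly what I'd run for a big media volume (the stride/stripe-width numbers assume a 64K-chunk RAID5 with two data disks, so they'd need to match whatever mdadm actually gets built with; -m 0 just drops the 5% root reservation, which is a lot of space on a multi-TB volume):

# ext4 on the array, geometry matched to the RAID layout (example numbers)
sudo mkfs.ext4 -m 0 -E stride=16,stripe-width=32 /dev/md0
sudo mount -o noatime /dev/md0 /mnt/storage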


Melbosa

ext4 works well for some multi-TB LUNs we have at work. Good performance.
Sometimes I Think Before I Type... Sometimes!

Tom

ext4 is good, but XFS is better long term for large filesystems with a lot of large files.

If it were between ext3 and XFS, it's XFS hands down, but since they added extents and such to ext4, it's a closer call.

I still think it's easier to tweak XFS into much better performance. But hey, you may not need better performance than ext4 gets you, so it's kind of moot.
<Zapata Prime> I smell Stanley... And he smells good!!!

Tom

Quote from: Lazybones on June 17, 2013, 01:35:13 PM
I was looking at https://help.ubuntu.com/community/LinuxFilesystemsExplained and was wondering if ext4 would not be fine given my storage requirements vs XFS which is maintained but not standard.
XFS is installed by default on every distro I know of; it's in the kernel as a supported driver. Sooo, not sure what you mean, other than distros pick ext for their main filesystems. XFS isn't /for/ your root drive, though you can use it for that quite successfully. Its main claim to fame is large volumes with lots of large files, and it excels there.

Historically, ext is designed for filesystems with LOTS of small files. ext4 does change that up a bit with extent support, so it can handle larger files much better, but it's still probably tuned for standard Linux root filesystems: lots and lots of tiny files.
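If you want to see the extent support in action, filefrag will show whether a big file is laid out as a handful of extents or scattered blocks (the filename here is just an example):

# show how many extents a large file uses on ext4 or XFS
filefrag -v /mnt/storage/some-large-video.mkv

# on ext4, lsattr shows the 'e' flag on files stored in extent format
lsattr /mnt/storage/some-large-video.mkv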
<Zapata Prime> I smell Stanley... And he smells good!!!

Thorin

Damn, now I want to spend a bunch of money on new computers to fill my house.  I have a large sum of money ($20k) just about to come my way.  Except I owe more than that on credit cards, so it's all spoken for.  Still, thanks for sharing your trials and tribulations, hopefully someday I can take advantage of what I'm learning here :)

100MB/s vs 30MB/s is a huge difference, especially when you're trying to copy files to your phone to take on the road with you.

Tom, 800MB/s?  not MBit?

And yes, SSDs boot up way, way, faster than HDDs.
Prayin' for a 20!

gcc thorin.c -pedantic -o Thorin
compile successful

Tom

Quote from: Thorin on June 17, 2013, 03:17:21 PM
Damn, now I want to spend a bunch of money on new computers to fill my house.  I have a large sum of money ($20k) just about to come my way.  Except I owe more than that on credit cards, so it's all spoken for.  Still, thanks for sharing your trials and tribulations, hopefully someday I can take advantage of what I'm learning here :)

100MB/s vs 30MB/s is a huge difference, especially when you're trying to copy files to your phone to take on the road with you.

Tom, 800MB/s?  not MBit?

Each of the 7 2TB drives is capable of at least 120+MB/s on its own (if not 140MB/s+). Now running them in RAID5 gets you striping, so you're reading from all of them simultaneously, and with a good enough pre-load cache, you can read off them at a pretty good clip.
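The back-of-the-envelope math checks out (ballpark numbers, not measurements):

7 drives x ~120MB/s ≈ 840MB/s of raw spindle throughput; even if you discount one drive's worth for parity, 6 x ~130MB/s ≈ 780MB/s, so 800MB/s sequential is right in the expected range.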

Quote from: Thorin on June 17, 2013, 03:17:21 PM
And yes, SSDs boot up way, way, faster than HDDs.
Yup. I have a speed demon of an SSD in my laptop, capable of 550MB/s read/write, with excellent latency and random access times. It boots in a few seconds. It actually starts faster from a cold boot than it does from suspend.
<Zapata Prime> I smell Stanley... And he smells good!!!