Speccing/Pricing out a home ESXi 5.5 box?

Started by Tom, May 04, 2014, 04:00:17 PM

Previous topic - Next topic

Tom

A decent-sized SSD cache in front of the RAID array could help. There are a few options there, assuming Linux.

Getting the absolute highest speed/performance is not required, just "decent". I was not at all unhappy with what I was seeing in my Win7+KVM test. But I imagine it could be better with ESXi, or if I tried the virtio drivers for Windows, though I don't know if those have been updated. I haven't looked into it lately.

When you set up your VM image storage, what size drives do you prefer? Lots of smaller ones, or fewer larger ones? I could get a boatload of cheap 500GB drives for more spindles (assuming equal total space compared to using larger drives), or load up on more 1TB drives (I already have 5-6). Of course the main limiting factor will be the number of bays I have available...
<Zapata Prime> I smell Stanley... And he smells good!!!

Melbosa

I've done 9x 600GB 10K SAS and been somewhat happy with the performance with 15-20 various VMs running on it.  I typically spec my space requirements, add 50%, then find the most spindles I can run in that total space and what my chassis can hold.  The best solution is of course a shared storage array, but for that you need a Synology or QNAP or something similar.
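That sizing rule (spec the space, add 50%, then maximize spindles) can be sketched as a quick calculation. The numbers below are illustrative assumptions, not vendor specs:

```python
# Sketch of the sizing rule above: spec space needs, add 50% headroom,
# then see how many spindles of a given size cover the target.
# Assumes RAID 5, so one drive's worth of space goes to parity.

def spindles_needed(required_tb, drive_tb, headroom=0.5):
    """Smallest RAID 5 drive count whose usable capacity covers
    the requirement plus headroom."""
    target = required_tb * (1 + headroom)
    n = 3  # RAID 5 minimum
    while (n - 1) * drive_tb < target:
        n += 1
    return n

# Example: 3 TB of VMs, small drives vs. large drives
print(spindles_needed(3, 0.5))  # -> 10 (more spindles, more IOPS)
print(spindles_needed(3, 1.0))  # -> 6
```

With equal total space, the smaller drives buy you more spindles (and so more aggregate IOPS) at the cost of bays and power.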
Sometimes I Think Before I Type... Sometimes!

Tom

One sad thing is that one of the 1TB drives I had in my old array was clunking :( and it was yanked out of the old array before I tore it apart and rebuilt it in another machine. I think I have another one or two sitting /somewhere/, but that dead one will need to be RMAed if its warranty is still good.

How busy are your VMs? Do they end up using a lot of I/O bandwidth and IOPS?
<Zapata Prime> I smell Stanley... And he smells good!!!

Lazybones

It is really going to depend on what those VMs are doing...

Melbosa

Yes, Lazy is right.  But from a small business perspective the hosts are doing everything, because usually you don't have distributed loads.

For you I'm sure 6 spindles at 7200 RPM will be more than enough.  So maybe RAID 5 with 7 spindles?
Sometimes I Think Before I Type... Sometimes!

Tom

That's doable. That is what I was figuring on going with. I'd have had 7 1TB drives, but one died :( I'll probably leave some room for growth if needed, but 6 1TB drives is a fair amount of space.
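For reference, usable space in a RAID 5 set is (N - 1) x drive size, so the 6- vs. 7-drive options work out like this (trivial sketch):

```python
# Usable capacity of a RAID 5 array: one drive's worth is lost to parity.
def raid5_usable_tb(drives, drive_tb):
    assert drives >= 3, "RAID 5 needs at least 3 drives"
    return (drives - 1) * drive_tb

print(raid5_usable_tb(6, 1.0))  # 6x 1TB -> 5.0 TB usable
print(raid5_usable_tb(7, 1.0))  # 7x 1TB -> 6.0 TB usable
```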

I'll try to do some actual testing sometime soon: spin up an install of the software I'm going to be using (buildbot), let it run for a while, and see if performance is adequate.
<Zapata Prime> I smell Stanley... And he smells good!!!

Melbosa

I have a whole bunch of 1TB SATAs doing nothing, so don't buy new ones if you need more.
Sometimes I Think Before I Type... Sometimes!

Tom

Oh, I might take you up on that. I haven't decided on the case yet; probably some standard tower case with lots of 3.5" bays. I imagine the board I get will of course have a bunch of SATA ports, and potentially a built-in LSI 2008 controller, and I have another IBM M1015 card (LSI 9211-like) I can use in it, so I am not in need of ports.
<Zapata Prime> I smell Stanley... And he smells good!!!

Lazybones

Quote from: Tom on May 05, 2014, 05:21:23 PM
Oh, I might take you up on that. I haven't decided on the case yet; probably some standard tower case with lots of 3.5" bays. I imagine the board I get will of course have a bunch of SATA ports, and potentially a built-in LSI 2008 controller, and I have another IBM M1015 card (LSI 9211-like) I can use in it, so I am not in need of ports.

Finding a properly rated PSU with that many direct SATA power connectors, or clean output to enough Molex connectors, is the key when you start to go over 4 or so drives... Drives are the most power-hungry devices after high-end GPUs...
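A rough power budget makes the point concrete. The wattage figures below are ballpark assumptions, not datasheet values:

```python
# Rough PSU budget for a many-drive build. Spin-up is the killer:
# a 3.5" drive can briefly pull ~25-30 W at power-on, mostly on
# the 12 V rail. Both wattages below are ballpark assumptions.

IDLE_W = 8     # typical 3.5" 7200 RPM drive, steady-state
SPINUP_W = 27  # brief surge at power-on

def drive_power_budget(drives, staggered_spinup=False):
    """Worst-case drive draw in watts. Staggered spin-up (a feature
    of many backplanes/HBAs) avoids all drives surging at once."""
    if staggered_spinup:
        return SPINUP_W + (drives - 1) * IDLE_W
    return drives * SPINUP_W

print(drive_power_budget(7))                         # -> 189 W, all at once
print(drive_power_budget(7, staggered_spinup=True))  # -> 75 W
```

Which is why a backplane that staggers spin-up takes a lot of pressure off the PSU rails.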

Tom

Quote from: Lazybones on May 05, 2014, 09:02:14 PM
Quote from: Tom on May 05, 2014, 05:21:23 PM
Oh, I might take you up on that. I haven't decided on the case yet; probably some standard tower case with lots of 3.5" bays. I imagine the board I get will of course have a bunch of SATA ports, and potentially a built-in LSI 2008 controller, and I have another IBM M1015 card (LSI 9211-like) I can use in it, so I am not in need of ports.

Finding a properly rated PSU with that many direct SATA power connectors, or clean output to enough Molex connectors, is the key when you start to go over 4 or so drives... Drives are the most power-hungry devices after high-end GPUs...

I'm still flip-flopping on components, including the case. There are some decent 4U cases that have a backplane, which makes it easier to manage power and data cables.

My last real concern is per-thread performance vs. number of cores... I can spec out a decent AMD-based box with 16-24 cores, but relatively lower per-thread performance vs. an 8-12 core Intel-based system. I've found it hard to find CPU comparisons for many of the Opteron and Xeon CPUs :(

The price premium you pay for Intel hardware is incredible.

I have not yet decided whether per-thread performance is more important than multi-threaded. Much of the work it does would benefit from more cores, except in some cases, like the Windows-based jobs, where there's really only going to be one or two cores available for a single job anyhow... so in that case per-thread performance does matter, but not a HUGE amount. I suppose I have a habit of answering my own questions, but I'm still waffling on this a little.
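The core-count vs. per-thread trade-off above is essentially Amdahl's law. A quick sketch; the 1.4x Intel per-thread advantage is a made-up illustration, not a benchmark result:

```python
# Amdahl's law, scaled by a per-core speed factor:
# throughput ~ speed / ((1 - p) + p / n), where p is the fraction
# of the workload that parallelizes across n cores.
# The per-core speed ratios here are assumptions for illustration.

def throughput(cores, per_core_speed, parallel_fraction):
    serial = 1 - parallel_fraction
    return per_core_speed / (serial + parallel_fraction / cores)

# Mostly-parallel build-farm work (90% parallel):
amd = throughput(16, 1.0, 0.9)    # many slower cores
intel = throughput(8, 1.4, 0.9)   # fewer faster cores
print(round(amd, 2), round(intel, 2))  # with these numbers they land close
```

The takeaway matches the waffling above: for highly parallel work the two land close together, while any serial (or one-core Windows job) portion tips it toward the faster cores.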
<Zapata Prime> I smell Stanley... And he smells good!!!

Tom

So hardware-wise I've decided on a Supermicro H8DG6-F motherboard, dual Opteron 6282 16-core 2.6GHz processors, and 64GB (8x8GB) DDR3-1333 ECC registered RAM. No definitive case yet. Might go Supermicro, might go full tower case with plenty of 3.5" bays, or I might go for a cheap 4U or tower server case that isn't a Norco. As mentioned previously, noise level is a definite concern, so a 1U or 2U case is right out.

It's looking like I'll end up spending $2k after all is said and done :( More than I wanted to, but I felt it was better to go with slightly better, more upgradable hardware with far more cores than older, less upgradable hardware with fewer cores (LGA1366 CPUs, or 4-8 core C32 AMDs).

Probably overkill, but it's better to be prepared for the future IMO. I really wish I had put 32GB of RAM in my current home server when I bought it. The price of RAM right now is incredible: $400+ for 32GB of ECC unbuffered RAM.
<Zapata Prime> I smell Stanley... And he smells good!!!

Lazybones

Quote from: Melbosa on May 05, 2014, 12:58:36 PM
If you have to buy a Windows Server license anyway, you might as well go Hyper-V. I could even set it up for you.  Hyper-V also supports Linux distros... I run a few on my Hyper-V installations.  Your hardware options expand with the Hyper-V option as well.

Are you running those in domains? I tried Hyper-V Server 2012 with local permissions and nearly tore my hair out... If you run it as a full Windows Server you can always RDP in, but the free server edition seems to depend heavily on domain permissions.

Melbosa

Hyper-V 2012 R2 is actually great, either as part of a domain or a workgroup.  But even if you strip the GUI out of it, you might as well attach it to an AD if you have one.

I found the trick was to think of them in terms of Windows, not VMware.  I've had to do the same thing with XenServer: think of it as a Linux distro rather than VMware or Hyper-V.  They are all very different in management interfacing and configuration.  In the Windows world, a workgroup means very little Kerberos authentication, so your management functions are limited to those that don't utilize Kerberos (live migrations, for example).  It's a similar problem to having VMware without a vMotion license and vCenter to manage it.  And XenServer... well, it's a beast all on its own in that regard.

So yeah, while I still say VMware is the #1 for sure, Hyper-V isn't that terrible anymore with 2012 R2... it's actually production-ready.  Anyone who started with Hyper-V 2008, rather... well, I can see where the hatred would come from /badtasteinmouth
Sometimes I Think Before I Type... Sometimes!

Darren Dirt

Reading this thread, and others like it over the last few years, I gotta say I feel like this:

...carry on, you crazy admin-types, you...
_____________________

Strive for progress. Not perfection.
_____________________

Tom

<Zapata Prime> I smell Stanley... And he smells good!!!