
Building ESXi 5 Whitebox Home Lab Servers

I recently decided it was time to graduate into a more robust home lab environment, as I’ve been pushing the boundaries of what a single Dell T110 running ESXi 5 can do. I’m no longer satisfied with nesting virtual ESXi 5 servers like a set of Russian nesting dolls, although we all have to start somewhere. To that end, I have decided to go forth with some whitebox builds to upgrade the Wahl Network vSphere 5 home lab.

Note: This is a rather old post focused on a Sandy Bridge design – you’re welcome to head over to my updated post that takes advantage of a Haswell design here.

For those interested in doing the same, this post outlines the additions I am making along with the overall design that I am working towards.

I just had to use this nesting doll photo, do you blame me?

Selecting ESXi 5 Host Hardware

The biggest conflict when picking a platform for a host is memory. Server memory is expensive, desktop memory is cheap, and pretty much any vendor you buy a prebuilt server from will charge a mint for memory. If it were just about buying sticks of RAM, I’d go with a desktop build. However, desktop builds lose out on many features that make life easier: IPMI, VT-x / VT-d (support varies by board), ECC, internal ports, extra network interfaces, and so on. So, I went with a build that uses server parts.

Let me briefly state that I don’t feel there is a wrong or right answer to what you ultimately choose to build with. As with any design, identify your functional requirements, the nice-to-haves, the budget, and then go forth.

I met the infamous @RootWyrm (Phillip Jaenke), who runs a tech website that contains the build list for a whitebox server called the Baby Dragon, at an HP Cloud Tech Day event last year. One thing I learned about Phil is that he’s very passionate about server builds and really hates noise and heat. He has since updated the Baby Dragon build to version 2, which is what I based my parts list on. I’ve made a few changes to suit my tastes, with the end result being (per server):

  • CPU: Intel Xeon E3-1230 “Sandy Bridge” – 3.2GHz, 4 Cores, 8 Threads, 8MB (Amazon)
  • Motherboard: Supermicro X9SCM-F – Intel C204, Dual GigE, IPMI w/Virtual Media, 2x SATA-3, 4x SATA-2 (Amazon)
  • RAM: 16GB (4 x 4GB) Kingston 240-Pin DDR3 SDRAM ECC Unbuffered DDR3 1333 (PC3 10600) Server Memory Model
  • Updated! RAM: 32GB (4 x 8GB) Kingston 240 PIN DDR3 SDRAM ECC Unbuffered 1600 (PC3 12800) Server Memory Model (Amazon)
  • Disk: Lexar Echo ZX 16GB (Amazon)
  • Case: LIAN LI PC-V351B Black Aluminum MicroATX Desktop Computer Case (Amazon)
  • Fans: 2 x Scythe SY1225SL12L 120mm “Slipstream” Case Fan (Amazon)
  • Power: Seasonic 400W 80 Plus Gold Fanless ATX12V/EPS12V Power Supply (Amazon)

Hey there, sexy, want to run some ESXi 5?

Cost per server (at time of writing) is about $850.

The end result is a small form factor box that produces nearly no noise (the case fans are only 10.7 dBA at 41 CFM), has no spinning disks (again, less heat and power), and has a dedicated out-of-band management port with the ability to mount virtual media. Each box also has a pair of GbE NICs.
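
To give an idea of what that out-of-band port buys you, here’s a rough sketch of checking power state and sensor readings remotely with ipmitool from a workstation – the BMC address and credentials below are placeholders for whatever you configure on the X9SCM-F’s IPMI interface:

ipmitool -I lanplus -H 192.168.1.50 -U ADMIN -P yourpassword chassis power status
ipmitool -I lanplus -H 192.168.1.50 -U ADMIN -P yourpassword sensor list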

Downside? It only has 16 GB of RAM – there is no financially viable option for 8 GB ECC UDIMMs at this time. I could have bought SSDs for local swap cache, and may do so in the future when the price of SSDs falls further.

Update 3/20/2012: Intel 82579LM Drivers

There does not seem to be an ESXi 5.x driver for the Intel 82579LM NIC at this time. In the meantime, use the other onboard port, which is an Intel 82574L, to install the hypervisor. You can then add a custom driver to enable the 82579LM port by following the instructions found in this thread:

Install your machine(s) with the vanilla ESXi 5.0 ISO.

Log on to the console (or via SSH) of one of the machines and install the VIB file using the following commands:

esxcli software acceptance set --level=CommunitySupported
esxcli software vib install -v http://files.v-front.de/net-e1001e-1.0.0.x86_64.vib

Reboot and configure all NICs.
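
Once the host is back up, both onboard ports should show up with their drivers loaded. A quick way to check from the console (vmnic numbering will vary by build):

esxcli network nic list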

Update 4/12/2012: VMware Site Survey (Fault Tolerance)

Per a request in the comments, I’ve run the VMware Site Survey report to verify that this hardware is compatible with Fault Tolerance.

Update 9/4/2013: ESXi Hardware Status (CIM data)

Per request, here is a look at the hardware status tab in vSphere to show you some of the data collected via CIM.

Supermicro CIM data shown in the vSphere hardware status tab
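
If you’d rather poke at some of the same hardware information from the shell instead of the vSphere client, the esxcli hardware namespace exposes a subset of it – not the full CIM sensor set, just the basics:

esxcli hardware platform get
esxcli hardware memory get
esxcli hardware cpu global get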

How About Some NAS?

I also decided to retire an older whitebox tower containing 6 x 300 GB SATA drives in favor of a Synology DS411. I heard a lot of good feedback on the Synology, and it supports SSDs! Not much else to say beyond that. 😉

  • Enclosure: Synology DS411 Diskless System 4-bay NAS Server (Amazon)
  • SSD: 2 x Intel 320 Series SSDSA2CW120G310 2.5″ 120GB SATA II MLC Internal Solid State Drive (Amazon)
  • SATA: 2 x Seagate Barracuda Green ST2000DL003 2TB 5900 RPM 64MB Cache SATA 6.0Gb/s 3.5″ Internal Hard Drive (Amazon)

Cost for the NAS with drives (at time of writing) is about $950.

Because I can tap into the SSDs for performance, I went with the green 5900 RPM drives to save on heat and power. I analyzed my lab environment using Xangati and found that I peak at about 50 IOPS in most cases.

An IOPS report from Xangati
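
Once the Synology is serving storage, mounting it on the ESXi hosts is a one-liner. As a sketch, assuming an NFS export (the IP address, share path, and datastore name below are made up for illustration):

esxcli storage nfs add --host=192.168.1.20 --share=/volume1/vmware --volume-name=synology-nfs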

Thoughts

I waffled for a long time on the storage, but ultimately the ability to use SSD won me over. I am confident that in about a year, SSD technology will be at a price point where putting 4 x 250 GB drives in a NAS box will be budget friendly. It would be neat to see a small form factor storage appliance come out from Tintri or Pure Storage as I enjoy both of their approaches to making flash sing. Wishful thinking? 🙂

You can also get another perspective on whitebox builds from Robert Novak, who has a very in-depth post on his build process using a Shuttle box.