Designing a Home Lab? Here Are My Three Favorite Tips

I’ve had countless conversations with technology enthusiasts and professionals who want to step into the world of the home lab. Typically, we’ll chat about design, use case, and cost, and a few questions come up again and again. I’ve been a staunch supporter of home labs ever since building one years ago to start down the path of becoming a VMware Certified Professional, and I have reaped the benefits many times over. Having a lab is a superb way to get your hands dirty with hardware and troubleshooting in a way that just can’t be experienced in a “cloud” environment. That’s no knock on remote labs, such as the VMware Hands-on Labs, which are great for sandboxing a software stack or application.

Here’s a list of three of my favorite tips. These come from the experience of building and rebuilding my lab many times, until I finally had things the way I wanted them. Keep in mind that you’ll pretty much never be satisfied with your home lab, because there’s always something bigger or better out there. Come to terms with that, make sure the lab is meeting your needs, and gaze appreciatively at the things on Amazon and Newegg until they come down in price. 🙂

#1 Choose a Specific Equipment Layout

I like a neat and efficiently designed lab. As such, there are a few “enterprise” rules that I break for the home lab layout. The first centers around switch placement. Unless you’re deploying a true data center cabinet of gear, which calls for a top-of-rack (TOR) switch, I like to sandwich my networking gear between storage (bottom) and compute (top).

This serves a few purposes:

  • Compute, which generates the most heat, is on top (heat rises)
  • Reduced cost of cabling to the gooey network center
  • Less cabling mess to the tiers

[Image: network cabling between the lab tiers]

Cost is reduced partly because you’ll use shorter cables, and partly because lab modifications get cheaper. Here’s why: rather than buying a lot of custom-length cables to reach the lower and middle levels of your lab, you’re buying roughly the same length of cable everywhere. Essentially, you’ve standardized on a cable length that works regardless of where you put the gear, which makes layout changes much less annoying. If you add new gear, you can stockpile a few extra cables without worry, instead of having to acquire various lengths to cover each tray or shelf height. I’ve also found that there are far more choices for short cables in the 1 to 7 foot range than beyond it.

I also mentioned less cabling mess. Beyond looking tidy, this can save dollars by eliminating the airflow problems that crop up in non-racked setups (like mine), where long cables would otherwise form a barrier somewhere. It also feeds into my next section on organization.

I use Monoprice exclusively for all of my cabling. They are awesome and have a huge Cat6 selection.

Another equipment layout trick is to mount power strips inverted at each level of your home lab shelving. This greatly reduces cabling complexity and makes your power grid modular. In my case, I have one strip for the compute tier at the top of my lab, another for the networking tier in the middle, and a third for the storage tier on the bottom. I’ve highlighted the one for my compute tier, which sits right below three of my lab servers.

[Image: inverted power strip mounted below the compute tier]

#2 Organization and Labeling are a Must

So you just bought that shiny new Widget 9000, gave it an IP address, cabled it up, and are ready to go to town. Fast forward a few weeks or months. Where did you plug that thing in, and which port is it using? I would normally have no idea, and unless your lab is super tiny, you probably won’t either.

Enter your good friend, marker ties! The bag of 100 ties I’m holding costs just about a dollar. That’s roughly a penny per tie. I think you can budget for these things. 😉

[Image: bag of 100 marker ties]

Normally you’d use these little guys to tie together a number of cables and mark them, but I use them on the ends of each cable to denote where it’s going. The switch end tells me what server or storage array is connected, and the opposite end tells me which switch and port is connected. Here’s a closeup of the marker ties on my switch cabling.

[Image: closeup of marker ties on the switch cabling]

I’m also a stickler about color coding the cables, but that’s optional. I use black for NAS attachments, orange for storage traffic to hosts, blue for guest traffic to hosts, grey for northbound links, and so on. I also keep a few spare neon pink cables for temporary runs until I can buy the right color, mostly because they are such an obnoxiously loud color. 🙂 If you’d like the same record in digital form, see the sketch below.
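
If you want a searchable copy of what the marker ties say (handy when you’re away from the lab), a tiny port map goes a long way. Here’s a minimal sketch in Python; the device names, ports, and colors are made-up examples following my scheme above, not a required format:

```python
# Minimal home lab port map: mirrors what the marker ties say on each cable end.
# Device names, ports, and colors below are illustrative examples only.

port_map = [
    # (switch port, far-end device, far-end port, cable color, traffic type)
    ("sw1:1",  "nas1",   "eth0",   "black",  "NAS attachment"),
    ("sw1:2",  "esx1",   "vmnic0", "orange", "storage traffic"),
    ("sw1:3",  "esx1",   "vmnic1", "blue",   "guest traffic"),
    ("sw1:24", "router", "lan1",   "grey",   "northbound"),
]

def lookup(device: str) -> None:
    """Print every cable touching a device, like reading its marker ties."""
    for sw_port, dev, dev_port, color, traffic in port_map:
        if dev == device:
            print(f"{sw_port} <-> {dev}:{dev_port}  [{color}, {traffic}]")

lookup("esx1")
# sw1:2 <-> esx1:vmnic0  [orange, storage traffic]
# sw1:3 <-> esx1:vmnic1  [blue, guest traffic]
```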

#3 Purchase Quality Storage and Backup

The final tip is to buy quality storage, especially if you plan to build a lab for virtualization. You can get away with desktop CPUs in your servers or a cheap consumer-grade network switch, but don’t cheap out on storage and assume a 2-bay NAS enclosure will run 25 virtual machines. It won’t. I’m a huge fan of Synology arrays and have heard good things about whitebox Nexenta builds. Design for performance first, then make up for capacity needs.

I’d advise doing three things in order of priority:

  1. Leverage flash storage (SSDs) either in the servers or in the array
  2. Make sure you can survive disk failures
  3. Ensure you have backups that can be restored

Flash is everywhere these days. Be it server-side caching via vSphere Flash Read Cache or a third-party product such as PernixData FVP, an SSD for your ZFS write log (ZIL/SLOG), or simply cramming a ton of SSDs into an array like my Synology DS2411+, you’ll need flash. It’s much cheaper in the long run than trying to pack an array with spindles, consumes less power, produces less heat, and provides the much-needed IOPS bling.
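
To see why spindles lose the IOPS fight, here’s a rough back-of-the-napkin sketch. The performance, price, and power figures are ballpark assumptions for illustration, not benchmarks:

```python
# Rough comparison: how much spinning rust does it take to match SSD IOPS?
# All figures are ballpark assumptions for illustration, not benchmarks.
import math

target_iops = 20_000                                 # assumed goal for a busy VM lab

hdd_iops, hdd_cost, hdd_watts = 80, 70.0, 8.0        # ~7.2K RPM SATA spindle
ssd_iops, ssd_cost, ssd_watts = 40_000, 180.0, 3.0   # consumer SATA SSD

hdds = math.ceil(target_iops / hdd_iops)
ssds = math.ceil(target_iops / ssd_iops)

print(f"Spindles: {hdds} drives, ~${hdds * hdd_cost:,.0f}, ~{hdds * hdd_watts:.0f} W")
print(f"SSDs:     {ssds} drive(s), ~${ssds * ssd_cost:,.0f}, ~{ssds * ssd_watts:.0f} W")
# Spindles: 250 drives, ~$17,500, ~2000 W
# SSDs:     1 drive(s), ~$180, ~3 W
```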

Make sure you can survive a disk failure. Don’t use RAID-0 for any data you care about, because failure happens. For my flash array, I use RAID-5 or Synology Hybrid RAID (SHR) to help spread out wear (the latest generation of Synology DSM, 4.3, supports TRIM with certain SSDs in specific RAID configurations). I doubt your lab will really need the double parity of RAID-6 or the heavy-handed approach of RAID 1+0 unless you have a significant number of disks.
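
For a quick gut check on those trade-offs, here’s a small sketch of usable capacity versus failures survived at each level (simplified: it assumes same-size disks, treats SHR like single parity, and ignores hot spares and rebuild windows):

```python
# Usable capacity and disk failures survived per RAID level (same-size disks).
# Simplified model: ignores hot spares, rebuild risk, and SHR mixed-disk math.

def raid_summary(n_disks: int, disk_tb: float) -> None:
    levels = {
        "RAID-0":   (n_disks * disk_tb,       0),
        "RAID-5":   ((n_disks - 1) * disk_tb, 1),  # single parity; SHR is similar
        "RAID-6":   ((n_disks - 2) * disk_tb, 2),  # double parity
        "RAID-1+0": (n_disks / 2 * disk_tb,   1),  # worst case: one per mirror pair
    }
    for name, (usable, survives) in levels.items():
        print(f"{name:8} usable {usable:5.1f} TB, survives {survives} failure(s)")

raid_summary(n_disks=6, disk_tb=4.0)
# RAID-0   usable  24.0 TB, survives 0 failure(s)
# RAID-5   usable  20.0 TB, survives 1 failure(s)
# RAID-6   usable  16.0 TB, survives 2 failure(s)
# RAID-1+0 usable  12.0 TB, survives 1 failure(s)
```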

And finally, have a backup. Something in your lab will be “production” – the domain controller, perhaps – and should be protected from site failure. Veeam is a great option; it offers NFR licenses to a slew of IT professionals, and there’s always the free edition. Then, once you have your backup plan in place, restore from it. Make sure the restore works, on some sort of regular basis. Otherwise, you’re just wasting your time.
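
A restore test doesn’t have to be fancy; even a scripted spot-check beats nothing. Here’s a minimal sketch that verifies a restored file matches the live copy by checksum. The paths are hypothetical, and the copy step is just a stand-in for your backup tool’s actual restore mechanism (Veeam has its own restore workflow):

```python
# Minimal restore spot-check: restore a file to scratch space and verify its
# checksum against the live copy. Paths are hypothetical, and the copy below
# is a stand-in for your backup tool's real restore step.
import hashlib
import shutil
from pathlib import Path

def sha256(path: Path) -> str:
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

live = Path("/lab/dc1/exports/systemstate.bak")      # hypothetical protected file
scratch = Path("/tmp/restore-test/systemstate.bak")  # scratch restore target

scratch.parent.mkdir(parents=True, exist_ok=True)
shutil.copy2(live, scratch)  # <-- replace with your tool's restore mechanism

assert sha256(live) == sha256(scratch), "Restore test FAILED: checksums differ"
print("Restore test passed: checksums match")
```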