Building a New Network Design for the Lab

Back in the old days, I created a network for my home lab. It was a single flat subnet with no VLANs, because I did not have a switch capable of such things and was not really focused on network design. Shame on me.

Fast forward to a few weeks ago, when I decided to change things and come up with a better design. I tickled Twitter to see if anyone might be interested in documentation on what I did, and some folks seemed to dig the idea, so here we go. At the very least, it’s nice to write this down for my own records in case I forget why I did something. 😉

At a high level, I wanted to address these functional requirements:

  • Stop using VLAN1 for all devices, both physical and virtual
  • Create discrete networks for major traffic types in the lab and at home
  • Eliminate network subnet collisions with my work VPN
  • Isolate wireless devices
  • Allow for greater growth, scale, and general “robustness” of the network
  • The design should be simple to understand and maintain
  • Use existing hardware (if at all possible)

I came up with these design elements to address the requirements:

  • Migrate all devices off VLAN 1
  • Create a series of unique VLANs for Home, Wireless, Servers, vMotion, NFS, and Other traffic
  • Migrate from 10.0.x.0/24 networks to a series of 172.16.x.0/24 networks
  • Put Wireless on its own subnet and VLAN
  • Split Home, Wireless, and Servers into discrete networks to significantly increase usable IP space

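The addressing plan above can be sketched with Python’s `ipaddress` module. A quick note on assumptions: VLANs 10, 15, and 50 appear later in this post, but the Servers/vMotion/NFS VLAN IDs and all of the third-octet values below are illustrative guesses, not the exact final assignments.

```python
import ipaddress

# Hypothetical VLAN-to-subnet plan. VLANs 10, 15, and 50 match the design
# notes; the remaining IDs and all third octets are illustrative only.
plan = {
    "Home":     (10, "172.16.10.0/24"),
    "Wireless": (15, "172.16.15.0/24"),
    "Servers":  (20, "172.16.20.0/24"),
    "vMotion":  (30, "172.16.30.0/24"),
    "NFS":      (40, "172.16.40.0/24"),
    "Other":    (50, "172.16.50.0/24"),
}

for name, (vlan, subnet) in plan.items():
    net = ipaddress.ip_network(subnet)
    usable = net.num_addresses - 2  # minus network and broadcast addresses
    print(f"VLAN {vlan:>2}  {name:<8} {net}  ({usable} usable hosts)")
```

Each /24 yields 254 usable addresses, so carving Home, Wireless, and Servers into their own subnets triples the space those devices used to share.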
Here’s a rough history of how it went down.

[symple_box color="red" fade_in="false" float="center" text_align="left" width=""]
Take a look at all of the hardware in my lab.
[/symple_box]

Sketching the Network Topology

I started with a rough sketch of what I wanted it to look like while keeping my functional requirements in mind. Whiteboards rock! Don’t mind the blurry part, it isn’t anything top secret … it just has nothing to do with my drawing, so I fuzzed it out.

Whiteboarding the Network Design

I kept things pretty simple. My Layer 3 switch, an HP V1910-24G, would be the VLAN aggregation point.

  • Pretty much everything outside of the lab will be VLAN 10 (wired) or VLAN 15 (wireless). The wireless AP operates in bridged mode, which really just extends the network segment to wireless devices.
  • Every VLAN except for vMotion has a switch virtual interface (SVI) to get from place to place.
  • VLAN 50 (the one marked Other) will just be for random things I want to play with in isolation. I might use it as a DMZ or as a lab zone – I’m not sure yet – but I like the idea of having a “something else” bucket.
  • Every upstream SVI or routed interface is using a .1 value for the fourth octet, making it easy to fill in default gateways.
  • I felt that creating additional VLANs for Fault Tolerance, ESXi management, and so on was overcomplicating the design for no real gain.
  • If needed, iSCSI can share the NFS network, especially since my Synology NAS devices are not capable of creating VIFs. Also, I don’t use iSCSI. 😉
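The “.1 for every gateway” convention from the bullets above is easy to demonstrate. This is just a sketch; VLAN 50 (“Other”) comes from the design, while the Wireless subnet value is an assumed example.

```python
import ipaddress

def svi_gateway(subnet: str) -> str:
    """Default gateway for a routed subnet: the .1 of the network."""
    net = ipaddress.ip_network(subnet)
    return str(net.network_address + 1)

# Subnet values here are illustrative assumptions, not the exact plan.
for subnet in ("172.16.15.0/24", "172.16.50.0/24"):
    print(subnet, "->", svi_gateway(subnet))
```

With every SVI at the .1, filling in a default gateway on any device is a no-brainer: take the device’s subnet and swap the fourth octet for 1.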

Here’s a Visio-fied version of the network diagram from a high level:

Wahl Network Lab Network

And here is a more physical look at the configuration:

Detailed Physical Network


[symple_box color="red" fade_in="false" float="center" text_align="left" width=""]
Note: All trunk ports are using native VLAN 999, which goes nowhere. I call this my black hole VLAN.
[/symple_box]

Routing Tables

To get traffic out of my network, I created a small /29 transit network between my switch and the router / firewall device. At the time of my design, it was an Untangle 10 whitebox, but I have plans to shift to a new Meraki MX60W in the near future (check out this YouTube video of me unboxing an MX60W). The switch default gateway is pointed at the router / firewall interface. Here’s the routing table on my layer 3 switch for clarification:

Routing Table

Each SVI also has an IPv6 address associated with it for some future fun.
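To illustrate why a /29 is plenty for the switch-to-firewall hand-off, here is a quick sketch. The transit prefix and the “firewall first, switch second” host assignment are my assumptions for illustration, not the exact addresses in use.

```python
import ipaddress

# Assumed transit prefix for illustration; the post only specifies "/29".
transit = ipaddress.ip_network("172.16.1.0/29")
hosts = list(transit.hosts())
print(f"{transit}: {len(hosts)} usable addresses ({hosts[0]} - {hosts[-1]})")

# One common convention: firewall takes the first usable host, the switch
# the second, and the switch's default route points at the firewall side.
firewall, switch = hosts[0], hosts[1]
print(f"switch default route: 0.0.0.0/0 via {firewall}")
```

Six usable addresses is more than enough for a point-to-point transit link, and keeping it a dedicated subnet means no host traffic ever lives between the switch and firewall.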


A few hours later, the network design above was implemented. I now have a lot of empty networks and have validated that routing works between them. I’ve also placed a test workload in each subnet and made sure it could hit the web, other networks, and other devices. Thumbs up.

DHCP, And Lots Of It

I hate managing IP addresses, and my lab is small enough that I rarely care to use a tool. I only have 40-something servers to deal with, plus various devices in the house. I decided to add a new goal to my list:

  • Shift to DHCP for IP assignment, use reservations when required, and rely on dynamic DNS updates for A and PTR records.
  • Use redundant DHCP High Availability in Windows 2012 (Eric Shanks has a good write-up on DHCP HA on his blog).

I created new DHCP scopes for each subnet that would need dynamic IP assignments. Each scope starts at .100 and runs through .199, giving me 100 addresses per subnet. I can also eyeball an IP and tell at a glance whether it came from DHCP.
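That eyeball test can be expressed as a one-liner: any fourth octet in the 100–199 range belongs to a DHCP pool. A minimal sketch (the example addresses are made up):

```python
import ipaddress

DHCP_POOL = range(100, 200)  # each scope hands out .100 through .199

def from_dhcp_pool(ip: str) -> bool:
    """True if the address's fourth octet falls in the dynamic pool."""
    last_octet = int(ipaddress.ip_address(ip)) & 0xFF
    return last_octet in DHCP_POOL

print(from_dhcp_pool("172.16.50.150"))  # dynamic lease territory
print(from_dhcp_pool("172.16.50.10"))   # static / reservation territory
```

Because the convention is identical across every subnet, the check never needs to know which VLAN an address lives in.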

I used Scope Options to set the router, NTP, DNS, and domain identities as shown below:

DHCP Configuration

[symple_box color="red" fade_in="false" float="center" text_align="left" width=""]
I am putting the DHCP High Availability requirement on hold until I migrate my domain controllers to Windows 2012 R2.
[/symple_box]

Rather than multi-homing my DHCP server, I used DHCP Relay settings on my layer 3 switch to pass along the DHCP discover messages to the appropriate server. The VLANs highlighted below are able to receive offers.

DHCP Relay Configuration

After a bit of effort, there are now a bunch of new networks and the ability to use DHCP to assign addresses within them. To validate, I plopped a device down on each VLAN and made sure it received an IP address. Once that was done, I moved on to the next step.

DNS Points the Way

DNS is important in many network situations, but in a DHCP-fueled environment, a healthy configuration is critical. I went through my AD-integrated DNS config and created reverse lookup zones for every VLAN that would need a namespace. The reverse lookup records, or PTR records, are also required by some software installations.

DNS PTR (Reverse Lookup) Zones

I also spent time validating that devices with DHCP addresses had matching DNS records with a recent update stamp.
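Reverse zone names can be derived mechanically from each /24, which makes it easy to double-check that none were missed. A small sketch; the VLAN 50 subnet used as the example is an assumed value:

```python
import ipaddress

def reverse_zone(subnet: str) -> str:
    """in-addr.arpa zone name for a /24 network."""
    net = ipaddress.ip_network(subnet)
    first, second, third, _ = str(net.network_address).split(".")
    return f"{third}.{second}.{first}.in-addr.arpa"

# Assumed subnet for the "Other" VLAN, for illustration only.
print(reverse_zone("172.16.50.0/24"))  # -> 50.16.172.in-addr.arpa
```

The octets simply reverse: 172.16.50.0/24 maps to the 50.16.172.in-addr.arpa zone, which is exactly the name the AD DNS console expects.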

Next Steps

At this point the infrastructure pieces are complete. The network underlay is ready to receive my various devices and servers. I’ll craft up an additional post that covers the migration of IP addresses to the new networks – including vCenter Server and my ESXi hosts – in the near future as time permits.

If you have questions, find an error, or just generally want to comment on this lab architecture, feel free to drop a note below. 🙂