I have spent several years working with Cisco’s Unified Computing System (UCS) as both a channel partner (design, deploy, configure, troubleshoot) and as an instructor on Pluralsight – specifically, as part of my CCNA Data Center course that covers the 640-916 exam. Recently, I’ve been watching as bits of data have been quietly emerging on the web around Cisco’s latest addition to the Fabric Interconnect family: the 6324.
A standard UCS design calls for a pair of 1U or 2U Fabric Interconnects (FIs) that form the aggregation point for all server (blade and rack), storage, network, and management connectivity. Each FI has complete awareness of the UCS domain and actively handles data traffic; for management, one FI acts as the active node while the other waits in standby.
The 6324 changes this up by shrinking the FIs into small, FEX-sized modules that slip into the slots normally reserved for an IO Module (IOM) on the back of a slightly modified 5108 chassis called the 5108-AC2. This offers a smaller footprint for Cisco UCS by removing the IOMs and replacing them with 6324 FIs. Below is a closeup of the new 6324 FI:

It sort of looks like a 2204 IOM with extra ports on it, right?
Note that the difference between the 5108 and 5108-AC2 chassis is minor: there’s a new backplane involved so that the two 6324 FIs can communicate with one another, a job normally handled by the L1/L2 interfaces on a 6200 series FI, which are missing from the 6324. The blade slots in the 5108-AC2 look and work like normal, so there’s no need to rip and replace your blades – which would be incredibly painful for most organizations. Besides, you need the modified chassis to house a 6324 FI anyway. 🙂
6324 Ports and Optics
But hey, what do the ports above actually do? Good question. There are five different types of ports on the 6324:
- Management
- Console
- USB
- 4x SFP+
- QSFP+
I’ve slightly altered a source graphic with some colors to make this fairly clear below:

While the management, console, and USB ports are rather self-explanatory, the others are worth a deeper look.
The bank of 4 SFP+ ports is similar to the unified ports you use today on a 6200 series FI. The difference is that they are not used to connect to a downstream IO Module (IOM). You can use them as uplinks out of your system (Ethernet, FC, or FCoE), connect them to C-series rack servers, or use them as appliance ports. Few customers I’ve worked with have gone down the appliance port route – I have an example post using them for NFS storage with NetApp.
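If you’re curious what turning one of those SFP+ ports into an Ethernet uplink looks like programmatically, here’s a minimal sketch using Cisco’s ucsmsdk Python library – my own tooling choice, not something from the 6324 documentation – with a hypothetical hostname and credentials:

```python
# Minimal sketch: mark an SFP+ port on fabric A as an Ethernet uplink.
# Assumes the ucsmsdk library (pip install ucsmsdk); host/credentials are placeholders.
from ucsmsdk.ucshandle import UcsHandle
from ucsmsdk.mometa.fabric.FabricEthLanEp import FabricEthLanEp

handle = UcsHandle("ucsm.example.com", "admin", "password")
handle.login()

# Creating a FabricEthLanEp under fabric/lan/A designates slot 1, port 1 as an uplink
uplink = FabricEthLanEp(parent_mo_or_dn="fabric/lan/A", slot_id="1", port_id="1")
handle.add_mo(uplink, modify_present=True)
handle.commit()

handle.logout()
```

The same pattern applies whether the port lives on a 6200 or a 6324; UCSM presents both through the same object model.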
The QSFP+ port can be broken out into 4 SFP+ ports with a breakout cable. The result is 4 more connections to C-series rack servers, appliance ports, or FCoE storage ports.
Today’s Configuration Maximum Design
When you’re all done, the end state configuration maximum is 8 blades (in the 5108-AC2 chassis) and 7 C-series rack mount servers. The reason it’s not 8 C-series rack mount servers is that you need to use at least one connection as an uplink to get traffic out of the system. Currently, only a short list of compute nodes is supported.
I would imagine this will change quickly once the various code revisions required are added to the system. The B200 M3 is incredibly popular for all types of businesses, so I don’t see this being a big deal based on who I see consuming this offering.
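For the math-inclined, here’s the back-of-the-napkin port budget behind those numbers, assuming one connection per rack server on each FI:

```python
# Port math for a single 6324 FI, using the figures from this post.
FIXED_SFP_PLUS = 4      # the bank of four SFP+ unified ports
BREAKOUT_SFP_PLUS = 4   # QSFP+ port split into 4 SFP+ via a breakout cable
MIN_UPLINKS = 1         # at least one port must carry traffic out of the system

external_ports = FIXED_SFP_PLUS + BREAKOUT_SFP_PLUS   # 8 per FI
max_rack_servers = external_ports - MIN_UPLINKS        # 7 C-series servers
max_blades = 8                                         # fixed by the 5108-AC2 chassis

print(f"External ports per FI: {external_ports}")
print(f"Max C-series rack servers: {max_rack_servers}")
print(f"Max total compute nodes: {max_blades + max_rack_servers}")
```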
Data Center + ROBO + UCS Central
One idea that struck me around future state architecture would be using UCS Central, which is a sort of global manager for UCS Manager (UCSM), to easily control UCS domains across the data center and remote office / branch office (ROBO) deployments. Something along the lines of 6200 series FIs in the data center for large scale compute projects and 6324 FIs in ROBOs for smaller deployments of mixed workloads of virtual desktops, servers, and anything else that might be required on site. Experience has shown me that data-intensive applications that can run remotely – such as database, analytics, or other data-heavy apps – might be a solid fit for a 6324 stack, shipping only the deltas back to the data center, such as with ETL workloads.
The real meat and potatoes of UCS has always been the intelligence baked into the FIs for near-stateless computing configuration (service profiles). At the very least, the 6324 is a much lower price point than getting into a pair of 6248s – especially if your remote office is lacking 10 GbE – while still letting you enjoy UCSM across the enterprise.
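To make the service profile point a bit more concrete, here’s a minimal sketch – again using ucsmsdk with a hypothetical host and credentials – that lists every service profile in a domain along with its association state. The same snippet works whether it’s pointed at a pair of 6248s in the data center or a 6324 out in a branch office:

```python
# Minimal sketch: list service profiles in a UCS domain via ucsmsdk.
# Host and credentials are hypothetical placeholders.
from ucsmsdk.ucshandle import UcsHandle

handle = UcsHandle("ucsm.example.com", "admin", "password")
handle.login()

# LsServer is the managed-object class UCSM uses for service profiles
for sp in handle.query_classid("LsServer"):
    print(sp.dn, sp.assoc_state)

handle.logout()
```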