I’ve spent several years working with folks to design, install, and configure Cisco UCS domains across the blade and rack-mount products, from videos on the UCS Emulator to a Pluralsight course that walks through UCS for the CCNA DC exams. So imagine my delight when I was asked to be part of Cisco’s launch event to talk about all sorts of new additions and updates to the UCS product family – glee! Heck, I even went and bought some special earphones just for the occasion.
In this post, I’ll walk through some of the announcements from Cisco’s UCS Grand Slam launch event in New York City, where other Tech Field Day delegates and I made a splash. It includes hands-on time with the gear and some thoughts on ways to design and consume the technology. Let’s dive in!
[symple_box color="yellow" fade_in="false" float="center" text_align="left" width=""]
Note: All travel and incidentals were paid for by Gestalt IT to attend this event. No other compensation was given.
[/symple_box]
Large Punch in a Small Package
I wrote about the 6324 series fabric interconnects a while back, but at the launch Cisco officially blessed the product with the name UCS Mini. I won’t go into the weeds because everything I wrote about here is still valid; it’s just that Cisco hadn’t made a formal announcement on the name or product until the Grand Slam launch.

Monster C-Series Storage Box
The C-Series servers are rack-mount boxes that come in various configurations and are usually great for more modular designs or for when drive space is a requirement. For example, a pair of C200 or C220 servers makes a great out-of-band Management Cluster for vSphere and is included in every Vblock as part of the Advanced Management Pod (AMP). This leaves room for blades as the workload landing pad, often called the Resource Clusters.
This model is being turned on its head a bit with the new C3160, which Cisco defines as a Capacity Optimized Server. I suggested it be renamed “a giant box with a bajillion hard drives” but was shot down. As you might have guessed from the name, it can hold 60 hard drives in a 3.5″ form factor. 56 of them are top loaders in a 14 x 4 layout, shown below:
Here’s another angle of the rows. Each drive is removed with a simple latch release and slides out of the top.
The remaining four drives are located on the rear of the unit, which is also where the PSUs, network interfaces, and flash devices reside. I’ve pulled out a spinning disk and an SSD in the photo below:
The idea behind this box is to offer a lot of capacity. Each drive was rated at 4 TB in the demo unit I looked at, which works out to 240 TB of raw storage in a 4U box. It wouldn’t be all that fast, since these are spinning disks, and you’d need a number of RAID groups to protect all this data (which eats into the raw capacity). But there are many ways to decouple performance from capacity using server-side flash and memory, or even in-line flash caches on servers and network devices. Or the workload may not require high-performance storage, such as large media files that are written infrequently and read repeatedly.
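To put rough numbers on that trade-off, here’s a quick back-of-the-napkin calculation. The RAID 6 group size below is my own assumption for illustration, not a Cisco-published configuration:

```python
# Back-of-the-napkin capacity math for a fully loaded C3160.
# Assumption: 60 x 4 TB drives carved into RAID 6 groups of 10
# (8 data + 2 parity) with no hot spares; group size is illustrative.

DRIVES = 60
DRIVE_TB = 4
GROUP_SIZE = 10          # drives per RAID 6 group
PARITY_PER_GROUP = 2     # RAID 6 spends two drives' worth on parity

raw_tb = DRIVES * DRIVE_TB
groups = DRIVES // GROUP_SIZE
usable_tb = (DRIVES - groups * PARITY_PER_GROUP) * DRIVE_TB

print(f"Raw:    {raw_tb} TB")         # Raw:    240 TB
print(f"Groups: {groups} x RAID 6")   # Groups: 6 x RAID 6
print(f"Usable: {usable_tb} TB")      # Usable: 192 TB
```

Even with that protection overhead, you still land somewhere around 192 TB usable in 4U, which is exactly the point of a capacity-optimized box.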
For now, this box cannot be managed by UCS Manager, but there are still plenty of ways to make the C3160 one piece of the puzzle in a number of different designs. The list price seems rather steep; according to Timothy Morgan over at EnterpriseTech:
The UCS 3160 server will be available in October. With two Xeon E5-2620 v2 processors, 128 GB of memory, two 120 GB SSDs for storage platform boot, and four power supplies, the system costs $35,396. That price does not include any disk drives.
M-Series with Modular Cartridges
A new letter is being added to the UCS family. We already had A reserved for Infrastructure Software (UCS Manager and such), B for blades, and C for rack-mount boxes. Meet M, the Modular Series of servers.
There are a few new terms here, so let’s review the architecture of the M-Series. First, there’s a new chassis called the M4308. It holds 8 cartridges, which look like very skinny blades. Each cartridge is logically divided into two unique nodes (servers). Here’s the math:
[8 Cartridges] x [2 Nodes per Cartridge] = 16 Nodes (Servers) per M4308 chassis
Here’s what one of the M142 compute cartridges looks like:
Each node holds an Intel Xeon E3 processor with up to 32 GB of RAM. There are no drives or VICs (Virtual Interface Cards) on the node, but we’ll get to that. The idea is that you have a very easy-to-manage compute node (via UCS Manager or UCS Director) that can be consumed by folks who need bare metal, such as application developers looking to use containers or some other technique. If you want to slap ESXi hypervisors on these, you’d be better off looking at the 5108 chassis with B-Series blades.
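Because the nodes surface through UCS Manager, you can inventory and automate them much like blades. Here’s a minimal sketch using Cisco’s ucsmsdk Python library; the host and credentials are placeholders, and the computeBlade class ID comes from the B-Series object model, so M-Series nodes may show up under a different class:

```python
# Minimal inventory sketch against UCS Manager using Cisco's ucsmsdk
# (pip install ucsmsdk). Host and credentials are hypothetical, and
# "computeBlade" is the B-Series class ID; M-Series nodes may appear
# under a different class in the object model.
from ucsmsdk.ucshandle import UcsHandle

handle = UcsHandle("ucsm.example.com", "admin", "password")
handle.login()

try:
    for node in handle.query_classid("computeBlade"):
        print(node.dn, node.serial, node.total_memory)
finally:
    handle.logout()
```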
On the rear of the M4308 sits a small bay of 4 SSDs. They are plumbed into a SCSI controller and made available to the nodes in a shared fashion. Here’s a shot of the SSD drive connection; note that the black heat sink toward the bottom is the VIC. It’s a very different form factor than what we’re used to seeing (such as the mezzanine card with thumb screws).
And here’s the SCSI controller card:
Pretty much all the components here are modular, except the VIC. There’s also an expansion slot shown in the picture above. The idea is that you can plumb this M4308 chassis into the network and divide up the interfaces much like you would with the UCS B-Series today, but using a new ASIC called System Link Technology (Jason Edelman has written about that here). This is a pretty big deal, as most designs give each node (server) discrete networking, which adds to the cabling and overall complexity.
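To make the interface-carving idea concrete, here’s a toy model of how one shared adapter’s bandwidth might be presented as per-node virtual NICs. The numbers and names are purely illustrative and are not how System Link actually allocates anything:

```python
# Toy model of presenting one shared adapter as per-node vNICs,
# in the spirit of (but not an implementation of) System Link.
# One cable into the chassis, sixteen logical NICs out to the nodes.
ADAPTER_GBPS = 40
NODES = 16

vnics = {
    f"node-{i:02d}": {"vnic": "eth0", "rate_gbps": ADAPTER_GBPS / NODES}
    for i in range(1, NODES + 1)
}

for node, cfg in vnics.items():
    print(f"{node}: {cfg['vnic']} @ {cfg['rate_gbps']:.1f} Gb/s")
```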
If the VIC fails, which is extremely rare to begin with, the idea is that your workload would restart on other nodes. This is the modern way to design applications: assume the hardware will fail, and plan to handle that failure in software.
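That philosophy is easy to sketch. Here’s a deliberately naive supervisor loop illustrating the pattern: probe each node, and if one stops responding, reschedule its workload onto a surviving node. Every name here (the node list, is_healthy, start_workload) is hypothetical scaffolding, not a real UCS API:

```python
# Naive restart-on-failure sketch: when a node stops answering health
# checks, move its workload to a healthy spare. All names are made up.

nodes = {"node-01": "web-app", "node-02": None, "node-03": None}

def is_healthy(node: str) -> bool:
    """Stand-in for a real health probe (ping, API call, etc.)."""
    return node != "node-01"  # simulate node-01 losing its VIC

def start_workload(node: str, workload: str) -> None:
    print(f"restarting {workload} on {node}")

for node, workload in list(nodes.items()):
    if workload and not is_healthy(node):
        spare = next(n for n, w in nodes.items() if w is None and is_healthy(n))
        nodes[node], nodes[spare] = None, workload
        start_workload(spare, workload)
```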
Thoughts
There’s a ton of new nerd knobs for us technical folks to turn with the new line of UCS products. I really enjoyed getting to put my hands on brand-new tech and hear stories about how customers are using the gear and how Cisco plans to build out new use cases. These are the first UCS products to really branch out beyond the standard server configuration: we’re now seeing small compute nodes that don’t even have local storage, and massive servers with modest processing power but gobs of storage.
More choice is good, and the fact that the products can tie back into UCS Manager and UCS Director adds further value for the people who actually have to build, configure, and administer the gear day to day.