Building a Monster SAN with the Cisco MDS 9710 Director

It’s difficult to spend an entire day in a room with a “mystery box” shrouded by a sheet, especially when it’s pointed out and you’re asked not to peek. That accurately describes the time I spent at Storage Field Day 3, all the while wondering what was in the mystery box without giving in to the temptation to pull back the sheet.

My patience was rewarded when the Cisco Data Center (Cisco DC) team arrived – it was the new Cisco MDS 9710 Director! Sitting at an impressive 14 RU, which is the same as its older brother the 9513, this new SAN Director has a lot of neat features that put it into the “Monster SAN” category in my mind.

Note: All travel and incidentals were paid for by Gestalt IT to attend Storage Field Day 3. No other compensation was given.

Notable MDS Improvements

From a hardware perspective, I immediately liked the fact that the MDS 9710 uses front-to-back airflow. This avoids burning up extra tile space in a colocation facility to accommodate side-to-side airflow in a four-post rack, or purchasing a special-purpose cabinet with an airflow cowling.

Additionally, the supervisor modules sit next to each other and each consume a half-width slot instead of eating up an entire full-width slot, which allows for a more compact design. They are also much beefier than the 9500 Series Sup-2As, packing 4x the memory (8GB instead of 2GB) and 4x the CPU cores (4 cores instead of 1) at a higher clock speed. This makes obvious sense (newer hardware gets released at price points palatable to both Cisco and the consumer), but it’s nice to see more than an incremental improvement: 4x is pretty beefy.

Cisco definitely had redundancy in mind with the 9710 chassis, as everything about it comes in N+1 or grid redundancy levels. For example, the bottom front of the chassis houses a total of 8 power supplies rated at 3000W each, which works out to 3 power supplies plus a spare for each power grid (power fabric). There’s also room for 3 fan trays on the rear, each housing 4 fans, which sit over the fabric modules.
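As a back-of-the-napkin check on what that redundancy buys you, here’s a quick sketch in Python. The 3-active-plus-1-spare split per grid comes from the numbers above; how the usable wattage is counted under grid redundancy is my own assumption:

```python
# Rough power budget for the 9710's grid-redundant PSU layout.
# Assumption: 8 PSUs split across two grids, 3 active + 1 N+1 spare per grid.
PSU_WATTS = 3000
ACTIVE_PER_GRID = 3   # the 4th PSU in each grid is the spare
GRIDS = 2

per_grid = ACTIVE_PER_GRID * PSU_WATTS  # 9000W of active capacity per grid
# Under grid redundancy, either grid must be able to carry the full load
# alone, so the usable budget is one grid's capacity, not the sum of both.
usable = per_grid
print(f"Usable power under grid redundancy: {usable}W")  # 9000W
```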

I’m including a video below of the components introduced with the 9710 at Storage Field Day 3, or you can watch any of the other videos from Cisco DC’s presentation.

The 9710 Provides 16Gb FC Line Rate … Everywhere

The MDS 9710 uses up to 6 crossbar switching fabric slots on the rear of the chassis. This really boils down to providing line rate for every port when at least 3 fabric modules are installed. I would imagine that a lot of storage engineers are excited to hear this, because the previous approach was to stagger connected ports across port groups so that each port could monopolize the bandwidth of the underlying ASIC, avoiding oversubscription. The 9710 supports 16Gb FC ports on its 48-port line card, arranged in 12 port groups of 4 ports each. Ports can be set to a variety of speeds, including 4, 8, 10, and 16 Gb FC.
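To put some numbers behind the line rate claim, here’s a quick sketch of the front-panel math for the 48-port card. The port group layout comes from Cisco’s figures above; the totals are simple arithmetic:

```python
# Front-panel bandwidth demand for the 48-port 16Gb FC line card.
PORT_GROUPS = 12
PORTS_PER_GROUP = 4
PORT_SPEED_GB = 16  # front-panel FC speed; ports can also run at 4/8/10 Gb

ports = PORT_GROUPS * PORTS_PER_GROUP       # 48 ports per card
front_panel_demand = ports * PORT_SPEED_GB  # 768 Gb per slot at line rate
print(f"{ports} ports x {PORT_SPEED_GB}Gb = {front_panel_demand}Gb of front-panel demand")
```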

Cisco also provided details on a new native FCoE line card to be released in the near future. It will also be a 48-port card, and it can be mixed with the 16Gb FC cards if that meets your design needs or use case.

What’s In A Speed? FC vs Ethernet Data Rates

One final item worth mentioning is the labeling of port speeds versus data bandwidth. It strongly smacks of the days when a controversy arose over how to measure CRT computer monitors: viewable size or physical size? (Spoiler: the larger number ultimately won.) In the case of a SAN, Ethernet speeds are true to their name, while FC speeds are misleading. If you read their presentation materials, Cisco has decided to “talk about FC bandwidth as front panel FC bandwidth, but fabric module bandwidth in actual data bandwidth.”

[Image: FCoE vs. FC bandwidth]

Note: 16Gb FC really operates at a data rate of only 13.6Gb. To put that in perspective, 8Gb FC has a data rate of 6.8Gb, while 10Gb Ethernet (FCoE) actually operates at 10Gb. See how this can get confusing? 🙂
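The gap comes from encoding overhead on the wire: 8Gb FC uses 8b/10b encoding, while 16Gb FC switched to the leaner 64b/66b scheme but runs at 14.025 Gbaud rather than a full 16+. Those encoding schemes and baud rates are standard FC spec values, not something from Cisco’s deck, so treat this as my own back-of-the-napkin math:

```python
# Why "16Gb" FC moves only 13.6Gb of data: encoding overhead on the wire.
# 8Gb FC uses 8b/10b (10 wire bits carry 8 data bits); 16Gb FC uses 64b/66b
# (66 wire bits carry 64 data bits) at a 14.025 Gbaud signaling rate.
def data_rate(baud_gbaud: float, payload_bits: int, wire_bits: int) -> float:
    """Usable data rate in Gb/s given the signaling rate and encoding ratio."""
    return baud_gbaud * payload_bits / wire_bits

print(f"8Gb FC:  {data_rate(8.5, 8, 10):.1f} Gb/s")      # 6.8
print(f"16Gb FC: {data_rate(14.025, 64, 66):.1f} Gb/s")  # 13.6
```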

I would tend to agree with Cisco. Now that we’re mixing Ethernet and FC, the fabric modules will have speeds represented in actual data bandwidth instead of front panel FC port speeds. Kudos to them. This also means that a fabric module may have a data bandwidth of 220 Gb per slot, which in “FC land” would be 256 Gb of front panel FC bandwidth.

Suffice it to say, 3 fabric modules are enough to ensure no overcommitment is necessary for line rate speeds on all front side 16Gb FC ports. Additional fabric cards provide redundancy and increased bandwidth for future port speeds and types.
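Putting Cisco’s per-slot figures together with the real data rates shows why 3 modules are enough – the comparison itself is just arithmetic:

```python
# Do 3 fabric modules cover 48 ports of 16Gb FC at line rate?
FABRIC_DATA_GB_PER_SLOT = 220  # actual data bandwidth per fabric module, per slot
PORT_DATA_GB = 13.6            # actual data rate of a "16Gb" FC port
PORTS = 48
MODULES = 3

needed = PORTS * PORT_DATA_GB                   # 652.8 Gb of real data
available = MODULES * FABRIC_DATA_GB_PER_SLOT   # 660 Gb
print(f"Need {needed:.1f}Gb, have {available}Gb -> line rate: {available >= needed}")
```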

Thoughts

There are many non-trivial improvements with the 9710 that really make it attractive for anyone who has a 9500 coming off maintenance soon, or for those interested in building out a SAN in a greenfield environment. Once the announced FCoE line cards are released, SAN engineers will have some very nifty tools at their disposal for designing a storage network that accomplishes their goals. Just make sure to have a friend or two help you rack this thing – it’s over 400 pounds fully loaded. 🙂