While working in my lab, I decided to try to simulate an iSCSI topology using iSCSI port binding on my Cisco 3550 switch. It’s a solid switch that I mainly use for learning Cisco IOS, but one major drawback is that its 48 ports are all Fast Ethernet, aka 100 megabit.
I know that iSCSI over 100 megabit is not a supported configuration, but I thought I’d set it up just to go through the motions in a lab environment. Interestingly enough, vSphere refuses to use the uplinks at all. The port binding itself works fine, and I am allowed to add the bindings to the software iSCSI adapter, but traffic refuses to flow. The user interface gives no warning about the 100 megabit link speed.
Everything looks fine from the binding perspective
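For reference, the same binding steps can also be done from the ESXi shell with esxcli. This is just a sketch of what I ran in my lab; the adapter name (vmhba33), the VMkernel ports (vmk1, vmk2), and the target address are placeholders from my setup, so substitute your own values:

    # Enable the software iSCSI adapter if it is not already enabled
    esxcli iscsi software set --enabled=true

    # List physical uplinks and VMkernel interfaces to confirm the names
    esxcli network nic list
    esxcli network ip interface list

    # Bind the iSCSI-dedicated VMkernel ports to the software adapter
    esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk1
    esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk2

    # Add the dynamic discovery target and rescan the adapter
    esxcli iscsi adapter discovery sendtarget add --adapter=vmhba33 --address=192.168.1.50
    esxcli storage core adapter rescan --adapter=vmhba33

    # Verify the bindings took
    esxcli iscsi networkportal list --adapter=vmhba33

Whether done in the UI or the shell, the bindings are accepted without complaint; the failure only shows up when traffic should start flowing.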
Digging through the hostd.log file, I found the only indication of a problem: an entry stating “vmk# is unable to connect to iqn.####”. Migrating those same uplinks to a gigabit switch immediately fixed the problem, and I was able to see and attach the iSCSI LUNs.
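If you run into the same symptom, a couple of quick checks from the ESXi shell can tell you whether link speed is the culprit before you start swapping hardware. The vmk1 interface and target IP below are just examples from my environment:

    # Check the negotiated speed on each physical uplink (100 vs 1000)
    esxcli network nic list

    # Test reachability of the iSCSI target from the bound VMkernel port
    # (this can still succeed even when the iSCSI session refuses to establish)
    vmkping -I vmk1 192.168.1.50

    # Watch for iSCSI connection errors while rescanning
    tail -f /var/log/vmkernel.log | grep -i iscsi
    grep -i "unable to connect" /var/log/hostd.log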
Thoughts
The major takeaway is that 100 megabit is not just unsupported, it simply won’t work. Those two things don’t always go together, since “unsupported” often just means “we don’t want you to do it” even though it would still function.
It would be nice if the iSCSI binding failed with an error citing that 100 megabit port speeds are not allowed, or if the logs contained something more descriptive than “unable to connect”. I think you’d be pretty crazy to use 100 megabit in production anyway, but it is a pain point in a home lab. As a resolution, I am going to migrate from the Cisco 3550 to an HP V1910-24G, as it seems like a great home lab multilayer switch with 24 gigabit ports.