Converged Networking – Current Thoughts, May 2015

In my current environment we predominantly run HP blade servers with dual-port 10Gb converged network adapters (CNAs) running in Fibre Channel over Ethernet (FCoE) mode. This lets us push all of our network and storage traffic down the same adapter and fabric.

I know that converged networking is a great idea, and it certainly has benefits in terms of reduced management, cost and so on; however, my feeling is that I would still prefer dedicated network and storage fabrics.

If one traffic type experiences a surge it can impact the other, and while dual 10Gb links sound like a lot of bandwidth, it's remarkably easy to eat into with things like multi-NIC vMotion and high consolidation ratios. Personally, I would love to keep the 10Gb NICs and have a dual-port 8Gb FC adapter thrown in for my storage traffic.
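To make that concrete, here's a rough back-of-the-envelope sketch. The traffic figures are purely illustrative assumptions rather than measurements from my hosts, but they show how quickly a pair of 10Gb ports gets consumed once storage, vMotion and VM traffic all land on the same adapter.

```python
# Illustrative only: hypothetical per-host traffic on a dual-port 10Gb CNA.
# None of these figures are real measurements; they are assumptions chosen
# to show how quickly converged bandwidth disappears during a surge.

total_gbps = 2 * 10  # dual-port 10Gb CNA, both ports active

# Assumed concurrent demand (Gb/s)
traffic = {
    "FCoE storage": 6.0,
    "VM network traffic": 4.0,
    "Multi-NIC vMotion (host evacuation)": 8.0,
    "Management and backup": 1.0,
}

used = sum(traffic.values())
for name, gbps in traffic.items():
    print(f"{name:40s} {gbps:5.1f} Gb/s")

print(f"{'Total demand':40s} {used:5.1f} Gb/s of {total_gbps} Gb/s available")
print(f"Headroom: {total_gbps - used:.1f} Gb/s "
      f"({(total_gbps - used) / total_gbps:.0%})")
```

And that's with both ports healthy; lose one uplink and the same demand has to squeeze into a single 10Gb link, which is exactly when a dedicated FC fabric for the storage traffic starts to look attractive.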

I have a third datacentre of sorts running HP rack-mount servers with 4 x 10Gb links per host, and this is where my argument starts to fall down, as there is so much bandwidth it's hard to saturate with the load on those hosts. It should be noted that those particular hosts are very lightly loaded, so it's hard to compare them to my main DCs, which have higher consolidation ratios and much larger VM virtual hardware allocations.

Obviously, as the adapters we deploy grow in capacity (I'm looking at you, 40Gb and 100Gb), the argument starts to go away, but I think there will always be a place for segregating workloads by traffic type. It's hard to beat a dedicated FC link to your storage, although hardware iSCSI has come on leaps and bounds and, with 10Gb networking, it is a real contender for FC's storage crown.


This is just a short post which I will probably come back to in the future, to see how experience with new deployments has altered my perspective on the matter.
