My home lab comprises two Supermicro SYS-5028D-TN4T servers, which utilise the Intel Xeon D system-on-chip (SoC) CPU. This chip has built-in 10Gb Ethernet support and provides two ports for cool stuff like storage, vMotion and VSAN traffic. We also have two 1Gb NICs, again provided through the SoC. Unfortunately the standard installation of VMware ESXi does not (currently) include a driver for the 10Gb NICs, so it is necessary to add one after install.
You can get details on the current driver version and compatibility from this link – VMware Compatibility Guide – Intel(R) Ethernet Connection X552/X557-AT 10GBASE-T
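Before installing anything it is worth checking which ixgbe VIB, if any, is already present on the host. Something along these lines should work from the ESXi shell:

```shell
# List installed VIBs and filter for the ixgbe driver package
esxcli software vib list | grep ixgbe
```

If the inbox driver is present you will see the VMware_bootbank_net-ixgbe entry that the install output later replaces.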
If we run the command 'esxcli network nic list' we can see that only the dual 1Gb NICs are recognised by ESXi.
[root@bsa-esxi02:~] esxcli network nic list
Name    PCI Device    Driver  Admin Status  Link Status  Speed  Duplex  MAC Address        MTU   Description
------  ------------  ------  ------------  -----------  -----  ------  -----------------  ----  -------------------------------------------------
vmnic0  0000:05:00.0  igb     Up            Up           1000   Full    0c:c4:7a:c2:a3:ac  1500  Intel Corporation I350 Gigabit Network Connection
vmnic1  0000:05:00.1  igb     Up            Up           1000   Full    0c:c4:7a:c2:a3:ad  1500  Intel Corporation I350 Gigabit Network Connection
[root@bsa-esxi02:~]
Now I will install the driver. In this instance I had a copy of the VIB (vSphere Installation Bundle) file stored on one of my datastores. You can also pull the file from a remote location, for example over HTTPS, so there are other deployment options than the one outlined below. The command is esxcli software vib install -v [path to VIB]
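For the remote option, the -v argument also accepts a URL, so the VIB can be pulled straight from a web server instead of a datastore. A quick sketch of that variant, where the HTTPS address below is a placeholder for wherever you host the file:

```shell
# Install the VIB directly from an HTTPS location (placeholder URL)
esxcli software vib install -v https://webserver.example.com/vibs/net-ixgbe-4.2.2-1OEM.550.0.0.1331820.x86_64.vib
```

Adding --dry-run to the install command previews the result without changing the host, which is handy before touching a lab you care about.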
[root@bsa-esxi02:~] esxcli software vib install -v /vmfs/volumes/BSA-ESXi02\ M.2\ NVME/net-ixgbe-4.2.2-1OEM.550.0.0.1331820.x86_64.vib
Installation Result
   Message: The update completed successfully, but the system needs to be rebooted for the changes to be effective.
   Reboot Required: true
   VIBs Installed: Intel_bootbank_net-ixgbe_4.2.2-1OEM.550.0.0.1331820
   VIBs Removed: VMware_bootbank_net-ixgbe_3.7.13.7.14iov-19vmw.600.0.0.2159203
   VIBs Skipped:
[root@bsa-esxi02:~] reboot
The output indicated a reboot was required. After a reboot we can run the 'esxcli network nic list' command again and jump for joy at the sight of our 10Gb NICs.
[root@bsa-esxi02:~] esxcli network nic list
Name    PCI Device    Driver  Admin Status  Link Status  Speed  Duplex  MAC Address        MTU   Description
------  ------------  ------  ------------  -----------  -----  ------  -----------------  ----  ---------------------------------------------------
vmnic0  0000:05:00.0  igb     Up            Up           1000   Full    0c:c4:7a:c2:a3:ac  1500  Intel Corporation I350 Gigabit Network Connection
vmnic1  0000:05:00.1  igb     Up            Up           1000   Full    0c:c4:7a:c2:a3:ad  1500  Intel Corporation I350 Gigabit Network Connection
vmnic2  0000:03:00.0  ixgbe   Up            Up           10000  Full    0c:c4:7a:c2:a6:fe  1500  Intel(R) Ethernet Connection X552/X557-AT 10GBASE-T
vmnic3  0000:03:00.1  ixgbe   Up            Up           10000  Full    0c:c4:7a:c2:a6:ff  1500  Intel(R) Ethernet Connection X552/X557-AT 10GBASE-T
[root@bsa-esxi02:~]
The 10Gb NICs are now available to bind to a VMware vSwitch.
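Binding can be done through the vSphere client, or from the shell with esxcli. A rough sketch of the shell route, assuming a standard vSwitch named vSwitch1 (adjust the names to suit your environment):

```shell
# Attach the newly-detected 10Gb NICs as uplinks on an existing standard vSwitch
esxcli network vswitch standard uplink add --vswitch-name=vSwitch1 --uplink-name=vmnic2
esxcli network vswitch standard uplink add --vswitch-name=vSwitch1 --uplink-name=vmnic3

# Confirm the uplinks are attached
esxcli network vswitch standard list
```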
In my case I have connected the 10Gb NICs from both servers directly into each other (back to back) so that I have a direct path between nodes. As you can imagine this gives me a huge amount of bandwidth to play with for things like vMotion and storage traffic. Whether I can get hold of enough flash storage to push this to the limit is another matter!
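With a back-to-back link there is no switch in the path, so each host simply needs a dedicated vSwitch, a port group and a VMkernel interface on a private subnet. A sketch of one side, where the names and addressing (vSwitch10G, PG-vMotion, 192.168.100.0/24) are all made up for illustration:

```shell
# Create a dedicated vSwitch backed by one of the 10Gb NICs
esxcli network vswitch standard add --vswitch-name=vSwitch10G
esxcli network vswitch standard uplink add --vswitch-name=vSwitch10G --uplink-name=vmnic2

# Port group and VMkernel interface for vMotion/storage traffic
esxcli network vswitch standard portgroup add --vswitch-name=vSwitch10G --portgroup-name=PG-vMotion
esxcli network ip interface add --interface-name=vmk1 --portgroup-name=PG-vMotion
esxcli network ip interface ipv4 set --interface-name=vmk1 --ipv4=192.168.100.1 --netmask=255.255.255.0 --type=static
```

The second host gets the mirror-image configuration with a different address (for example 192.168.100.2) on the same subnet.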
As always if you have any comments, questions or suggestions please drop them in the comments section below.