Home Lab Storage and Network Upgrade

I’m a huge advocate of home lab equipment – it’s one of the best ways to learn and grow as an IT engineer, or whatever your job title happens to be. Having kit you can intentionally break and then fix, or use to test something so you are ten times more confident doing it for real, is priceless.

Some companies provide lab kit for engineers to use, but in my case it is all self-funded from my own pocket. Obviously I’d rather have spent that money on Steam sales or collectables, but needs must. Then again, I love my work and the hardware, so I rather like having my own servers :)

My servers were documented in this older post (it still holds up) – https://www.bytesizedalex.com/supermicro-sys-5028d-tn4t-servers-first-look/

That was nearly a year ago and the lab has served me pretty well, but a few issues led me to upgrade the networking and storage. The Xeon-D chips are awesome because they have on-board 10Gb NICs – two of them, in fact. This matters because the microservers I use have only a single PCIe slot; since I wanted to make use of NVMe flash storage I needed both that slot and the M.2 slot for storage, and couldn’t afford to lose either to a 10Gb NIC.

Anyway you’re probably wondering what my upgrades look like and how sexy they are… let’s go take a look.

Network Upgrade

First off, the network has been upgraded to 10Gb using a Netgear “XS716T ProSAFE 16-Port 10-Gigabit Smart Managed Switch”, which is quite a mouthful. I would have loved to stick with Cisco kit, but budget and noise were factors. The Netgear was a bit of a change, but it simply required learning a new CLI and some other quirks, and all was good in the end. I still use my trusty Cisco SG300-20 switch for the IPMI interfaces and the dual gigabit NICs. It also handles any other connectivity needs in my home, so it isn’t going away anytime soon.

As you can see from the picture below, there are sixteen 10Gb ports, with two of the copper ports twinned with SFP+ combo ports. The Xeon-D on-board ports in my Supermicro servers are all copper, so this was exactly what I needed. There are some status lights on the left, along with reset buttons and a USB port which serves a few purposes outside the scope of this post.

XS716T Front View

The rear of the switch has a power socket, a slot for a Kensington lock, and a port that apparently is not for customer use. I am rather tempted to find out what it does, as that might be an interesting project. That said, I do rather need this expensive switch to keep working, so tinkering is probably best left until later.

XS716T Rear View

Looking at the switch from the side, we can see the two fan outlets – in my use they are very quiet once the device has finished initialising. During boot, though, the thing sounds like any switch you’d find in a DC, and is definitely not something you want running in your house. I live in a smallish terraced house in the UK with no garage or other space, so noise is a big concern for me.

XS716T Side View

I connected both 10Gb ports from each server to the switch and used LACP to bond them; I’m running ESXi Enterprise Plus edition at home, so I can leverage this with a Distributed Switch/NSX for some cool networking. Two more ports serve some other devices, and another LACP trunk runs to my Cisco SG300-20 switch. This link provides access to the rest of my home network, and to the outside world for those systems I allow out.
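On the ESXi side, the LAG itself is created on the Distributed Switch through vCenter, but the negotiated LACP state can be verified from the host shell. A minimal sketch of the sanity checks I run (output details will vary with your NIC names and switch config):

```shell
# List the physical NICs - the two on-board Xeon-D 10Gb ports
# show up here as vmnics with their link state and speed
esxcli network nic list

# Show LACP status for the LAG on the Distributed Switch:
# reports the partner (the XS716T), the negotiated mode and
# which vmnics are actively bundled into the channel
esxcli network vswitch dvs vmware lacp status get
```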


Storage Upgrade

The storage upgrade came about because I found the Samsung M.2 SSDs struggled under load at times and thermally throttled themselves. Latency was also far too unpredictable, and the drives generally became a headache. They are great little drives for booting a client system or running a client workload, but I always knew it was a stretch asking them to run vSAN under heavy load.

To resolve this I went all out and upgraded the write cache and capacity tier with something rather sexy…

Intel Optane SSD 900P and DC P3520 Upgrade

What you’re looking at is a pair of 280GB Intel Optane 900P drives for the write cache and 1.2TB DC P3520 cards for the vSAN capacity tier. That means I still have all-flash, all-NVMe storage, but with significantly better write endurance, reliability and capacity.
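For reference, claiming a cache and a capacity device into a vSAN disk group can also be done from the host shell rather than vCenter. A hedged sketch – the naa.* identifiers below are placeholders, not my actual device IDs:

```shell
# List storage devices and note the NAA identifiers of the
# Optane 900P and the P3520
esxcli storage core device list

# Claim the Optane 900P as the cache tier (-s) and the P3520
# as capacity (-d); substitute your own device identifiers
esxcli vsan storage add -s naa.CACHE_DEVICE_ID -d naa.CAPACITY_DEVICE_ID

# Confirm the disk group now shows both devices
esxcli vsan storage list
```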

For those of you who like figures, the links below will take you to the Intel ARK page for each of the above, but I’ll also list the headline values…

Intel Optane – https://ark.intel.com/products/123623/Intel-Optane-SSD-900P-Series-280GB-2_5in-PCIe-x4-20nm-3D-XPoint

  • Sequential Read (up to): 2,500 MB/s
  • Sequential Write (up to): 2,000 MB/s
  • Random Read (8GB Span) (up to): 550,000 IOPS
  • Random Read (100% Span): 550,000 IOPS
  • Random Write (8GB Span) (up to): 500,000 IOPS
  • Random Write (100% Span): 500,000 IOPS
  • Latency – Read: 10 µs
  • Latency – Write: 10 µs
  • Endurance Rating (Lifetime Writes): 5.11 PB Written

The 900P is basically a typical SSD form factor using a U.2 connector – in my case I actually connect it via the provided U.2-to-M.2 cable.

Intel DC P3520 – https://ark.intel.com/products/88722/Intel-SSD-DC-P3520-Series-1_2TB-12-Height-PCIe-3_0-x4-3D1-MLC

  • Sequential Read (up to): 1,700 MB/s
  • Sequential Write (up to): 1,300 MB/s
  • Random Read (100% Span): 320,000 IOPS
  • Random Write (100% Span): 26,000 IOPS
  • Latency – Read: 20 µs
  • Latency – Write: 20 µs
  • Endurance Rating (Lifetime Writes): 1,480 TBW

These cards make use of the single PCIe slot available in the microserver.
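To put those endurance figures in context, dividing lifetime writes by drive capacity over a warranty period gives drive-writes-per-day (DWPD). A quick back-of-the-envelope calculation – the 5-year window is my assumption, matching Intel's typical warranty on these parts:

```python
# Rough DWPD from the headline endurance figures above.
# Assumes a 5-year (1,825-day) warranty window and decimal units.

def dwpd(lifetime_writes_tb, capacity_tb, days=5 * 365):
    """Drive writes per day sustained over the warranty period."""
    return lifetime_writes_tb / (capacity_tb * days)

optane_900p = dwpd(5110, 0.280)   # 5.11 PB written, 280GB drive
dc_p3520 = dwpd(1480, 1.2)        # 1,480 TBW, 1.2TB drive

print(f"Optane 900P: ~{optane_900p:.1f} DWPD")   # ~10.0 DWPD
print(f"DC P3520:   ~{dc_p3520:.2f} DWPD")       # ~0.68 DWPD
```

Roughly 10 full drive writes per day on the Optane cache tier is why it is such a comfortable fit for vSAN's write-heavy caching role.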

My hope is that I will get some time to do some ‘proper’ testing on this setup, but I can tell you right now the latency is fantastic and the performance is consistent, blowing the previous setup out of the water. While I did shop around and worked to get good deals, these items still cost me some serious cash. That said, my blogging, speaking and whatever else I do with the lab happens in my own time/annual leave, so I want this kit to be as quick as possible so I don’t spend that time waiting on it. If you’re lucky enough to be given equipment or have kit donated to your lab, that is a very fortunate position to be in.

Once I finish deploying all the new vRealize appliances I will try to get some tests done that show how this setup performs. I do need to spin up a vSAN witness – or, if I win the lottery, buy a third node set up the same as these two, but there’s no chance of that in the near future I’m afraid!

5 thoughts on “Home Lab Storage and Network Upgrade”

    • Hi,

      I found it to be stable and didn’t have any issues. I made sure to install the latest Intel drivers, I’m in the process of rebuilding the whole setup so once it’s all up and running again I can do some further testing but for now I’d say it’s been fine.

  1. Hi – you have a very cool lab on your hands. I’m currently drawing up a shopping list for my future ESXi lab (only one tower with nested ESXi) and I must say your component choices are really helping me out. Have you kept the M.2 for a secondary datastore?

    • It’s a nice setup, that’s for sure! I still have the M.2 sticks but I don’t have any way of using them in those microservers. They have a single M.2 slot and a single PCIe slot, so with my new storage there was nowhere left for them. I’m using them in my PC, but if I upgrade my NAS I might use them as a cache on that.

