HPE 3PAR Want List 2017

I still remember being impressed with the design and architecture of the HPE 3PAR storage system all those years ago when going through a tender process for a replacement SAN. Over time new versions of the hardware and software have been released and again I’ve been impressed and excited to see the new possibilities. I figured I’d write down some of the things that interest me right now and that I hope to see in 3PAR arrays going forward.


Peer Persistence

Oh how I love Peer Persistence! When we implemented it all that time ago we struggled to get any customer references and it seemed like we were one of the few deploying it in production. These days I’m sure there are many more people running it – having been a reference myself and spoken with other customers, I think it’s safe to say it really took off. Back then I wanted it to run on my Hyper-V environment as well as the VMware setup, but it wasn’t yet qualified for Hyper-V so we had to stick with the HPE Cluster Extensions (CLX) model instead.

Moving forward I’d really like to see support for VMware Virtual Volumes (vVols) – it would be great to spin up a VM and, as part of provisioning it, define that it needs to be synchronously replicated and protected by Peer Persistence.

Non-Volatile Memory Express (NVMe)

NVMe is an exciting technology which helps us more fully realise the benefits of flash and other high-speed storage technologies. It is a new protocol stack which does away with many of the inefficiencies of older interfaces such as AHCI/SATA and SAS – in particular their shallow command queues and heavy per-command overhead. When combined with a high-speed interconnect such as PCIe it allows for high throughput/IOPS and very low latency; in fact it blows the previous methods right out of the water!
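To put some rough numbers on “does away with many of the inefficiencies”, the sketch below is a back-of-the-envelope comparison of the command queueing available to AHCI/SATA versus what the NVMe specification allows. It uses the commonly quoted specification maximums, not anything measured on a 3PAR:

```python
# Rough illustration of NVMe's queueing headroom versus AHCI/SATA.
# Figures are the commonly quoted specification maximums; real devices
# and drivers expose far fewer queues in practice.

AHCI_QUEUES, AHCI_QUEUE_DEPTH = 1, 32           # one queue, 32 outstanding commands
NVME_QUEUES, NVME_QUEUE_DEPTH = 65_535, 65_536  # NVMe specification maximums

ahci_outstanding = AHCI_QUEUES * AHCI_QUEUE_DEPTH
nvme_outstanding = NVME_QUEUES * NVME_QUEUE_DEPTH

print(f"AHCI max outstanding commands: {ahci_outstanding}")
print(f"NVMe max outstanding commands: {nvme_outstanding:,}")
print(f"Roughly {nvme_outstanding // ahci_outstanding:,}x the parallelism")
```

That parallelism, plus a much shorter command path, is what lets NVMe keep many flash dies busy at once rather than serialising everything through a single shallow queue.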

There are certainly some challenges in implementing a system with many PCIe-connected NVMe devices – the major one being the limited number of lanes available from the CPU(s). This can be overcome with PCIe switching, but that then adds its own overhead. Of course the 3PAR doesn’t just have Intel Xeon CPUs, there are the ASICs as well, so I’m sure HPE can find or engineer the lanes required.
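A quick bit of arithmetic shows why the lane budget is the sticking point. The sketch below is purely illustrative – the per-socket lane count and the reservation for other I/O are my own assumptions, not details of any 3PAR controller – but it shows how quickly x4 devices eat a node’s lanes, which is exactly where PCIe switching (or extra lanes hanging off the ASICs) would come in:

```python
# Back-of-the-envelope lane budget for direct-attached NVMe drives.
# The figures below are illustrative assumptions (40 PCIe 3.0 lanes per socket
# was typical of Xeon E5 parts; NVMe SSDs usually attach with an x4 link),
# not a statement about any particular 3PAR controller.

LANES_PER_CPU = 40       # assumed lanes exposed per Xeon socket
CPUS_PER_NODE = 2        # assumed dual-socket controller node
LANES_PER_DRIVE = 4      # typical x4 link per NVMe SSD
LANES_FOR_OTHER_IO = 32  # assumed reservation for host ports, cluster links, etc.

total_lanes = LANES_PER_CPU * CPUS_PER_NODE
lanes_for_drives = total_lanes - LANES_FOR_OTHER_IO
max_direct_drives = lanes_for_drives // LANES_PER_DRIVE

print(f"Total CPU lanes per node : {total_lanes}")
print(f"Lanes left for NVMe      : {lanes_for_drives}")
print(f"Directly attached drives : {max_direct_drives} (without a PCIe switch)")
```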

Remote Direct Memory Access (RDMA)

Whether you are running a hyperconverged setup or a central SAN, storage latency is incredibly important. RDMA is designed to reduce latency to the point where accessing remote storage is basically equal to accessing local storage, while also reducing the processing overhead on the systems involved. As the name implies, it allows direct memory access on a remote system, bypassing much of the hardware and software stack in between. There are many demos online of suppliers showing off RDMA, and in these they draw attention to the fact that they can push huge numbers of IOPS at basically the same latency as accessing the same storage locally. I’m keen to see how HPE integrate RDMA into the 3PAR product line as well as their fabric and compute systems. There is an HPE build which has a blade chassis directly connected to a 3PAR – it could be really interesting to see that setup leveraging RDMA and NVMe storage (NVMe over Fabrics) technologies.
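The sketch below is only a conceptual model of the two data paths – the stage names are my own simplification, not a description of any HPE or 3PAR implementation – but it shows where the latency and CPU savings come from: an RDMA transfer lands in the remote host’s memory without ever climbing that host’s software stack.

```python
# A conceptual (not measured) comparison of the stages an I/O passes through
# with a traditional TCP/IP-based block protocol versus RDMA.
# Each entry is (stage, does it involve host CPU/kernel software?).

tcp_path = [
    ("app buffer", False),
    ("kernel block layer", True),
    ("iSCSI initiator + TCP/IP stack", True),
    ("NIC", False),
    ("wire", False),
    ("remote NIC", False),
    ("remote TCP/IP stack + iSCSI target", True),
    ("remote kernel block layer", True),
    ("remote memory", False),
]

rdma_path = [
    ("app buffer", False),
    ("RDMA-capable NIC", False),
    ("wire", False),
    ("remote RDMA-capable NIC", False),
    ("remote memory", False),
]

def summarise(name, path):
    software_hops = sum(1 for _, sw in path if sw)
    print(f"{name}: {len(path)} stages, {software_hops} software-stack hops")
    print("  " + " -> ".join(stage for stage, _ in path))

summarise("TCP/IP block storage", tcp_path)
summarise("RDMA transfer", rdma_path)
```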

Storage Class Memory (SCM)

We already have SCM hardware in the form of Intel 3D XPoint caching modules for 3PAR All Flash Arrays (AFAs), used to further accelerate read requests. Demos of this in action show impressive reductions in latency and increases in IOPS. While the modules are currently just used as an extended system read cache, it will be really interesting to see whether HPE implement a high-speed tier sitting above the existing flash.
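To make the cache-versus-tier distinction concrete, here is a toy sketch of the read-cache model the modules are used for today: reads are served from the fast medium when possible and populated from flash on a miss, while writes still land on flash. It is a generic illustration of an extended read cache, not a description of how 3PAR’s caching is actually implemented.

```python
from collections import OrderedDict

class ExtendedReadCache:
    """Toy model of a read cache layered over a slower backing tier.

    Reads are served from the fast medium when the block is cached and
    populated from the backing tier on a miss; writes go straight to the
    backing tier, so the fast medium only ever accelerates reads.
    """

    def __init__(self, backing_tier, capacity_blocks):
        self.backing = backing_tier        # e.g. dict of block -> data ("flash")
        self.capacity = capacity_blocks
        self.cache = OrderedDict()         # LRU ordering: oldest entry first

    def read(self, block):
        if block in self.cache:
            self.cache.move_to_end(block)  # hit: refresh LRU position
            return self.cache[block]
        data = self.backing[block]         # miss: fetch from the backing tier
        self.cache[block] = data
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False) # evict the least recently used block
        return data

    def write(self, block, data):
        self.backing[block] = data         # writes bypass the read cache
        self.cache.pop(block, None)        # drop any stale cached copy

# Example: a tiny backing "flash" tier fronted by a two-block read cache.
flash = {n: f"data-{n}" for n in range(8)}
cache = ExtendedReadCache(flash, capacity_blocks=2)
for block in (0, 1, 0, 2, 1):
    cache.read(block)
print("cached blocks:", list(cache.cache))  # most recently used last
```

The interesting step up would be a true tier, where the SCM holds the only copy of the hottest data and absorbs writes as well, rather than just duplicating reads already held on flash.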

3PAR Simulator

I really appreciate vendors who provide product simulators; having an opportunity to interact with a system and use its management interfaces gives a much better feel for what it will be like to run one yourself. Obviously there are limitations to simulators – some suppliers provide very limited functionality while others give you something that is basically a complete product. I really hope HPE continue to develop the existing 3PAR simulator. It was great seeing it made publicly available for more people to experience the product, so fingers crossed they continue in this way.
