HP 3PAR Simulator Deployment on VMware vSphere ESXi

Today we will deploy the HP 3PAR simulator on VMware vSphere ESXi 5.5. You can also deploy the simulator on VMware Workstation, although the steps are slightly different. Please note this is quite a long post as it includes many screenshots and notes.

System Requirements

The 3PAR Simulator requires a VMware ESXi 5.x or a VMware Workstation 9 or 10 setup. The simulator configuration requires deploying three VMs: two VMs simulating the cluster nodes, a third VM simulating the enclosure, and a private network configuration to enable communication between the three VMs.
The minimum system resources required for each VM are:

  • One virtual CPU (single core)
  • Three virtual NICs
  • 2GB RAM
  • 50GB disk space (full or thin-provisioned)


Note
– If you are going to deploy the 3PAR Simulator with Remote Copy over IP (RCIP), the cluster node VMs will require more resources. HP suggest 4GB of RAM for each cluster node.

Step 1 – Deploying the Cluster Nodes

The first step is to deploy the OVF package to our virtual environment. The simulator comes with two OVF files, one for the cluster nodes and the other for the enclosure node. We will start by deploying the cluster node OVF.

Open vSphere and work your way through the ‘Deploy OVF Template’ wizard. I have taken screenshots from the vCenter web client interface for guidance –

Deploy OVF Template

  • Select the cluster node OVF – click ‘Local file’ and use the ‘Browse’ button to locate it

Select Local File

  • Review the OVF template details

Review OVF Details

  • Give the cluster node a name – choose something meaningful, for example I have named mine 3PARSIMCN01 (CN = cluster node)
    • You will also need to select a folder from within your vSphere inventory; I have had to blank out this section as it includes company-specific information

Select Name and Folder

  • Select the ESXi cluster and potentially a specific host for the VM to run on. Again I have had to blank some of this information out

Select Cluster or Host

  • Select a datastore and provisioning format

Select Datastore and Disk Format

  • Pick the network for the VM – in this case leave the setting on ‘VM Network’. The VMs have 3 vNICs and we do not want to configure them at this stage

Select VM Network
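As an aside, if you prefer to script the deployment rather than click through the wizard above, VMware’s ovftool can push the OVF straight into vCenter. The snippet below is only a sketch – the OVF file name, vCenter address, datastore and inventory path are placeholders from my lab, so substitute your own:

  # Deploy the cluster node OVF non-interactively (file, datastore and inventory names are examples only)
  ovftool --acceptAllEulas \
    --name=3PARSIMCN01 \
    --datastore=Datastore01 \
    --diskMode=thin \
    --network="VM Network" \
    ./3par-sim-cluster-node.ovf \
    'vi://administrator@vcenter.lab.local/Datacenter/host/Cluster01/'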

 

Once you have deployed the first cluster node you will need to deploy a second – if you are curious as to why then think about how the physical array works; it requires at least two physical controllers and the simulator is emulating this behaviour.

The process for deploying the second node is simple – either clone the node you have just deployed or work through the OVF deployment wizard a second time using the same settings as above but with a new VM name. Below I have included screenshots of the cloning process –

Clone Virtual Machine

  • Give the cloned VM a name – in this instance I will append 02 to help me identify the second cluster node

Select a Name and Folder

  • Select a cluster/host for the VM to run on

Select Cluster or Host

  • Choose a datastore for the VM

Select Datastore and Disk Format

  • We do not need to change any settings at this point

Select Clone Options

  • Review your settings and click Finish to begin the cloning process

Review Settings
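For reference, the same clone can be created in one line with VMware’s govc CLI – a sketch only, assuming you have govc configured against your vCenter (GOVC_URL and credentials set) and using the VM names from this guide:

  # Clone the first cluster node to create the second; -on=false keeps the clone powered off
  govc vm.clone -vm 3PARSIMCN01 -on=false 3PARSIMCN02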

Once the cloning process has completed we can move on to the next step – deploying the enclosure OVF.

Step 2 – Deploying the Enclosure Node

Now that we have two cluster nodes deployed, we can deploy the enclosure node OVF, which is the final VM required for the simulator.

Again we will use the wizard to deploy an OVF template. The steps are identical to those above so I don’t think we need another round of screenshots.
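If you used the ovftool approach shown earlier, the enclosure deployment is the same one-liner with the enclosure OVF and a new VM name – again, the file name, VM name and inventory path below are placeholders:

  ovftool --acceptAllEulas --name=3PARSIMESD01 --datastore=Datastore01 --diskMode=thin \
    --network="VM Network" ./3par-sim-esd-node.ovf \
    'vi://administrator@vcenter.lab.local/Datacenter/host/Cluster01/'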

Step 3 – Configuration

Now that we have our VMs deployed it’s time to start configuring. Make sure you complete the steps below BEFORE you power on the VMs.

VM Configuration

To begin with, if you intend to deploy a second simulator and link the two to test things like Remote Copy over IP, now is the time to increase the RAM allocated to the cluster nodes. HP recommend at least 4GB of RAM when utilising the RCIP feature. I would also suggest creating a DRS rule to keep the simulator VMs together.
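If you would rather make the memory change from the command line, something along these lines should work with govc – a sketch only, using the VM names from this guide and 4096MB per cluster node:

  # Increase the cluster node RAM to 4GB before the first power-on (value is in MB)
  govc vm.change -vm 3PARSIMCN01 -m 4096
  govc vm.change -vm 3PARSIMCN02 -m 4096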

Network Configuration

When we deployed the VMs we selected the ‘VM Network’ option for our networking. It is now time to configure the vSwitches in vSphere for the simulator. The simulator VMs each have three vNICs, and each vNIC has a specific use and requirement.

vNIC1

  • The first vNIC is for management – this is what we will use to connect via SSH, the 3PAR Management Console and any other management tools we may wish to use. Ensure you connect this vNIC to a vSwitch that is reachable from your desired management station.

vNIC2 (optional config)

  • The second vNIC is used for Remote Copy over IP and is dedicated to this purpose only. The second vNIC should only be configured on the cluster nodes; there is no need to configure the enclosure node’s second vNIC. If you do not intend to use the RCIP feature then you can skip configuring this adapter.

vNIC3

  • The final vNIC is used to connect the three virtual machines which make up the simulator. This is done on a private vSwitch with no pNICs (physical NICs) attached.

vSwitch Configuration

It is important to note that the following is a guide only and assumes you are configuring your environment in the same way as mine. My setup includes two ESXi hosts using standard vSwitches; if your setup differs you will need to modify the steps to fit.

We will start by deploying a standard vSwitch for vNIC3 – this is a private network and should not have any pNICs attached to it.

I will use the vSphere Web Client to complete the required steps –

  • Click ‘Add Host Networking’

Add Host Networking

  • Choose the ‘Virtual Machine Port Group for a Standard Switch’ option

Select Connection Type

  • Select ‘New Standard Switch’

Select Target Device

  • In my lab there are no spare adapters, so the listing shows as blank. If you do have spare adapters, remember that we do not want any pNICs attached to this vSwitch, so do not select any

Create a Standard Switch

  • vSphere warns us that we have not selected any pNICs – click OK to continue

Physical Network Adapters Warning

  • Pick a name for your ‘Network Label’ – in my case I left it with the auto-generated name of ‘VM Network 2’. Leave the ‘VLAN ID’ as ‘None (0)’

Connection Settings

  • Click Finish to deploy the new vSwitch

Review Settings - Ready to Complete

Note – If you have multiple VMware ESXi hosts in a cluster and you want your simulator VMs to vMotion then you will have to configure the same Standard Switch across all hosts.
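The same private switch can also be created from the ESXi shell, which is handy when you have to repeat the configuration on every host in the cluster as the note above describes. A rough sketch using the names from my lab:

  # Create an internal-only standard vSwitch (no uplinks) and a port group for the simulator
  esxcli network vswitch standard add --vswitch-name=vSwitch1
  esxcli network vswitch standard portgroup add --portgroup-name="VM Network 2" --vswitch-name=vSwitch1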

Once you have completed the above steps it’s time to configure networking for the simulator VMs.

Open the first cluster node VM settings and configure each vNIC –

  • vNIC1 – I have connected this vNIC to VLAN 500 on vSwitch0. This is my management network and is accessible by VMs running on this host and external hosts such as my laptop
  • vNIC2 and vNIC3 – both have been configured on vSwitch1. My reasoning is that I want to confine the RCIP traffic to this vSwitch rather than have it exit onto my physical network and route back into the host

Edit VM Network Settings

The traditional C# VMware vSphere client gives a better overview of the final network topology –

Network Topology Diagram

As you can see, we have the management vNICs for the simulator VMs in vSwitch0 connected to the port group VLAN 500. This vSwitch has uplinks via four pNICs. The simulator VMs also have vNICs in vSwitch1, which has no pNICs connected to it.

Next configure the second cluster node in the same way as above – vNIC1 on the management network on vSwitch0 and vNIC2/vNIC3 on vSwitch1.
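For reference, the vNIC reassignment for both cluster nodes can be scripted with govc – a sketch only, assuming the adapters enumerate as ethernet-0/1/2 and using my port group names:

  # Reconnect the three adapters on each cluster node
  for vm in 3PARSIMCN01 3PARSIMCN02; do
    govc vm.network.change -vm "$vm" -net "VLAN 500" ethernet-0
    govc vm.network.change -vm "$vm" -net "VM Network 2" ethernet-1
    govc vm.network.change -vm "$vm" -net "VM Network 2" ethernet-2
  done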

Finally configure the enclosure node.

As mentioned earlier, I suggest you create a DRS rule to keep the three simulator virtual machines together on the same host.
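A hedged sketch of that rule with govc is below – the cluster name and the enclosure VM name are placeholders, and it is worth checking govc cluster.rule.create -h for the exact flags on your govc version:

  # Create a DRS 'keep together' (affinity) rule for the three simulator VMs
  govc cluster.rule.create -cluster Cluster01 -name 3par-sim-keep-together -enable -affinity-rule \
    3PARSIMCN01 3PARSIMCN02 3PARSIMESD01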

Now that we have completed the network configuration we can move on to the next step.

Simulator Configuration

Make sure you have the following information to complete the configuration –

  • HP 3PAR StoreServ storage system name – an alphanumeric name up to 31 characters
  • Management IP address, subnet mask and gateway IP address – the IP address must be statically assigned; contact your network administrator for the address information
  • NTP server IP address (optional)
  • DNS server IP address (optional)

Power on your simulator VMs – depending on your hardware they may take a while to complete the boot process.

Once the VMs have started open the console to your enclosure VM – remember there is only one of these. You can tell you are on the correct VM as the CLI prompt will show ‘esdhost login’ –

Logon to ESD Host

Logon using the following credentials –

  • Username – root
  • Password – root

 

  • Enter the command createcages

Enter createcages Command

As you can see, a number of options are available to us. This is where we decide how the simulator’s virtual disk layout will look – review the options and pick the one you prefer. I will select option 2 –

  • 4 cages, 48 HDDs: 1 DCN1 cage with 6 FC drives and 6 SSD drives, 2 DCS2 cages with 6 FC drives and 6 SSD drives each

Enter a number to create cages

  • Now that we have chosen a disk layout it is time to start the esd daemon. To do so, enter the command esd

Enter ESD command
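Put together, the enclosure-side work is only a handful of commands – roughly the following, with the menu output trimmed:

  esdhost login: root      # password: root
  createcages              # pick a cage/disk layout from the menu – I chose option 2
  esd                      # start the esd daemon once the cages have been created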

Now it is time to configure the cluster node VMs.

Open a console for the first cluster node you deployed, logon to the node with the following credentials –

  • Username – console
  • Password – cmp43pd

Logon to Cluster Node

  • Choose Option 1 – Simulator Cluster Node Configuration – you will be prompted to accept the license agreement
  • Enter 0 for the Node ID
  • Enter a serial number for the cluster node. HP provide two serial numbers – 1699678 or 1699679. Make a note of which serial number you choose – we will need this when configuring the second cluster node
  • Confirm your settings

Configure Cluster Node

  • Choose Option 5 to Reboot the node – do this now

Open a console for the second cluster node you deployed, logon to the node with the following credentials –

  • Username – console
  • Password – cmp43pd

Complete the same steps as above but this time make sure to select 1 for the Node ID.

Note – it is VITAL that you enter the same serial number on Node 1 as you used for Node 0

Configure Cluster Node

  • Choose Option 5 to Reboot the node – do this now

Once both nodes have rebooted logon to the first cluster node.

  • Choose Option 2 – Out Of The Box Procedure

Configure Cluster Node Out of Box Procedure

  • Type yes
  • Type c

Configure Cluster Node

  • Select regional settings and configure the date and time

Configure Cluster Node

  • Enter the InServ name for this array

Configure Cluster Node

  • Verify the InForm OS versions

Configure Cluster Node

  • You will be prompted to confirm details a number of times – press c to continue on each
    • Note – this process may take a while
  • An SSL certificate will be generated – the system will present you with the Subject and SHA1 fingerprint, make a note of these for later verification

Configure Cluster Node - SSL Fingerprint

  • The system will prompt you to configure IP address details

Configure Cluster Node

  • I have chosen to only use an IPv4 address. The system will prompt for an IP, subnet mask and gateway address

Configure Cluster Node

  • I have chosen to use auto negotiation
  • Enter an NTP server IP if you have one
  • Enter a DNS server IP if you have one
  • Do not disable non-encrypted ports

Configure Cluster Node

  • Check your settings and confirm they are correct
    • The system will now complete configuration of the cluster nodes
  • You will be prompted to select a spare chunklet selection algorithm – I have chosen to use the minimal setting by entering Mi

Configure Cluster Node

  • You will be prompted to Continue and then asked whether you wish to enable the CIM-based management API. I chose to enable this functionality

Configure Cluster Node

  • After some scrolling the system will ask you to confirm your acceptance of the terms and conditions – do so now

Configure Cluster Node

That’s it!

We are finished and can test functionality by opening the 3PAR Management Console and trying to connect. Of course, you can also ping the management IP you defined earlier to confirm basic connectivity.

I created a static DNS record for this IP address to make life a little easier.
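Before opening the Management Console you can also sanity-check the array over SSH – a quick sketch using the DNS record I created (substitute your own hostname or management IP) and the default 3paradm credentials:

  # From the management station
  ping 3parsim01.lab.local              # hostname is my lab DNS record – use your management IP otherwise
  ssh 3paradm@3parsim01.lab.local       # default password: 3pardata

  # Once logged in to the 3PAR CLI
  showsys                               # system name and serial number
  shownode                              # both cluster nodes should report as OK
  showpd                                # the simulated physical disks from the cage layout you chose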

Open the 3PAR Management Console and use the following credentials to connect –

  • Username – 3paradm
  • Password – 3pardata

3PAR Management Console Logon

Assuming you have configured everything correctly you should be prompted with a certificate alert – remember we made a note of the certificate details earlier. Validate the certificate subject and SHA1 fingerprint –

SSL/TLS Certificate Warning

Accept the certificate and you will be connected to your brand new 3PAR simulator –

3PAR Management Console

If you need to shut down the simulator, the correct process is detailed below.

Power Off the Simulator

Gracefully shut down the HP 3PAR Simulator by first shutting down the cluster nodes, then stopping the esd daemon and shutting down the esd VM. Use the following steps:

    • Login to cluster node 0
      • Username –  3paradm
      • Password – 3pardata
    • Run the command – shutdownsys halt

Note – In certain cases, it might be necessary to power off the VMs using the vSphere client after issuing the command to shut down the system.

Once the cluster node VMs (node 0 and node 1) are powered down:

    • Login to the enclosure node
      • Username –  root
      • Password – root
    • Run the command – esd stop
    • Once the daemon is stopped, issue the command – shutdown -h 0
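As a quick reference, the full shutdown sequence looks roughly like this:

  # On cluster node 0, logged in as 3paradm (password 3pardata)
  shutdownsys halt

  # Once both cluster node VMs have powered off, on the enclosure node as root
  esd stop
  shutdown -h 0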


Powering On the Simulator

  • Start the enclosure VM first and then run the command esd – this will start the esd daemon. Remember there is only one enclosure node
  • Start the two cluster nodes

Remote Copy

If you want to make use of the RCIP feature you will need a second simulator deployed. This is why HP provide two serial numbers (remember we have only used one for this deployment). The process for the second simulator is the same as above: deploy the three VMs, configure them and test connectivity.

Configuring RCIP is covered in this post – https://www.bytesizedalex.com/hp-3par-simulator-remote-copy-rcip/


That’s it for this post – it was a long one but hopefully it covers everything you need to know when deploying the 3PAR simulator on ESXi. If I have missed anything or clarification is required then leave a comment and I will do my best to assist.

If you want to run the simulator on VMware Workstation then see my second post for a walkthrough – https://www.bytesizedalex.com/hp-3par-simulator-deployment-on-vmware-workstation/

    34 thoughts on “HP 3PAR Simulator Deployment on VMware vSphere ESXi”

      • Hi Trent,

        To gain access you need to contact Ivan Iannaccone (@IvanIannaccone) on Twitter – he is the HP 3PAR product manager and will provide you with a URL and credentials to gain access. It is a great product – I will be posting instructions for configuring RCIP between two simulators hopefully in the next week or so.

        Alex Bytes

    1. Hi. Just got the link from Ivan, downloaded and followed your instructions. But when I start up the cluster node 1, it says “3PAR Console Menu (none) 3.1.3.230 The InServ Simulator license has expired!”. Any ideas on what that is about?

      thanks

      • Hi Craig,

        I do know that the simulator indicates a license expiration date during the install. I tend to work on my live 3PAR arrays and only come to the simulator as and when required so I personally haven’t come across this issue. It might be worth checking with Ivan if there is one and if so how we get around it, possibly with a new download. I’ve also sent a message to Ivan, hopefully he can provide an answer.

        • Thanks Alex, as a temporary workaround I’ve discovered I can set the clock back to this time last year and it works now with a 1 year license. I’ll check with Ivan too if there’s a better solution.

        • Just to clarify my earlier post, the license expired issue appeared immediately after initial attempt to login and then you were immediately logged out. So there was no opportunity to enter the install stage and enter the supplied serial numbers into the appliance.

          • Hi Craig,

            You may have already heard back from Ivan but if not here is the response – “I recently found this out. We are uploading a new version. For compliance reasons we must have a time license”

            Sounds like you already have one workaround (the good old change the date/time)!

        • While installing the simulator today I also faced the expired license issue but found a workaround: I changed the BIOS date on all three VMs and set it back one year. After that, logging in using the console / cmp43pd credentials worked and I am able to use the 3PAR simulator without any issue.

      • Hi RN,

        Unfortunately this is a known issue confirmed by HP and Ivan. They are working to create a new simulator download which we will all have to grab a copy of. At the moment the best workaround is to modify the date/time in the VM BIOS as per some of the comments by other users.

        When I wrote the deployment guides the license expiration date had not yet been reached and I must confess I didn’t expect it to actually cease working as there was no mention of this in my talks with HP.

        Alex Bytes

    2. MONSTER !!! :) It’s working for me, but now I have one problem. I’m trying to export a volume to an ESX iSCSI adapter on a physical ESX host, but I can’t see anything; the 3PAR tells me: Host “ESX” has no active paths. Template will be created but no active VLUNS will be created. And of course, the host has an iSCSI initiator added. Any ideas? It’s just for a lab because this Saturday I have a job with a real 3PAR resizing LUNs and I don’t want to crash anything xD It’s a simple job but I want to go in trusting myself :)

      • Hi Miguel,

        Unfortunately you’re not going to be able to do that. If you log on with the management console and browse to the array ports (Systems -> Select System -> Select Ports -> Click iSCSI Tab) you will see there are no adapters. You will also see that the option to configure an iSCSI port is greyed out on the management console. Alternatively you can SSH to the array and use the showport -i command to see a list of ports. The only Ethernet ports are bound up for Remote Copy over IP. It is possible to create ‘fake hosts’ on the array but to be honest this isn’t going to help much in this situation.

        Exactly what are you doing? If it’s just a case of increasing the size of virtual volumes then you should be perfectly fine though of course I recommend checking host OS support for this sort of work. Recently I expanded 20 datastores presented to ESXi 5.5U2 from 2TB to 4TB on the fly without any downtime or IO interruption. I’ve also done the same with volumes presented to Microsoft servers directly through virtual fiber channel adapters, again without any impact. This was all done during primary production hours so that will give you an idea of my confidence in the system and process when done correctly.

        I agree with you though, it’s far nicer to be able to test this sort of thing in a lab setup first to be sure of your commands and process. Feel free to run this stuff by me and I’ll be happy to review.

        Alex Bytes

    3. Hi Alex, thanks for your fast reply !

      Well, this is the case: a datacenter environment. It has some LUNs, and I need to delete one of those LUNs (the development LUN), then increase the DB LUN and then create a new development LUN, because they don’t have any more free space on the 3PAR. Once that is done on the 3PAR, refresh/increase both LUNs in VMware. Then, with the DB LUN increased, increase the space on the 2008 VM’s second disk (HD DB), and finally increase the space in Disk Management on the Windows 2008 Server OS xD My boss saw the approach and he approved. Now I have a nice lab to try it in, but sadly I don’t have a physical FC and of course I don’t have iSCSI adapters, so Im f*cked xD

      Anyways, many thanks again, of course we can try all features at least :)

      PS: Sorry for my English, I know it’s not perfect but I think it’s enough to explain myself!

      • All sounds good to me. I’d suggest making sure you unmount and delete the datastore from VMware then remove it from the 3PAR – VMware should give you green ticks to show everything has been taken off the LUN and there are no issues removing it.

        As for adding and increasing sizes you should be fine with all of that and 2008 should allow you to increase the volume on the fly without any impact. It’s a shame when we don’t have proper lab kit to test all this on but unfortunately most places don’t. Your English is just fine, probably better than any attempts by myself at foreign language! My fingers are crossed for you but I’m sure it will go to plan :)

        Alex Bytes

    4. Hello,

      Just downloaded and deployed the HP 3PAR SIM. Everything is working OK, but I don’t have the Management Console in the TOOLS ISO. I only have the CLI. Any insight on this?

      Thanks

    5. Hi,

      When I get the confirm certificate prompt, the OK button is grayed out.

      I only have the Cancel and Help buttons.

      Any idea?

      • Hi Ang,

        I would suggest checking the date and time values on the 3PAR control nodes and the system you are accessing from. It may be that the date is wrong which could result in your system believing the certificate is either not yet valid or has expired. If the time is wrong then you need to SSH onto the control node with the account 3paradm (default password 3pardata) and use the setdate command.

        Let me know the results and if this isn’t the issue we can troubleshoot further.

        Regards,

        Alex

    6. Having an issue with configuration. I have configured cn01 following the above process. Once the node has rebooted and is up, I choose option 2 and it keeps coming back with the error ‘cluster not configured – choose option 1 to configure cluster node’.

      • Hi csypher,

        Can I just check that you completed the steps on cn02 as well as cn01? You have to do option 1 for both of them before you can continue. Once you’ve done that and rebooted both nodes you should be able to proceed.

        Regards,

        Alex

      • Hi Peter,

        I’ll answer both comments in one response. If you follow the guide through you will have one configured 3PAR array. Remember, you are emulating a multi-controller storage array so the ‘configuration’ is done once and shared between the nodes. If you want to have a second array you just do the whole job again but this time use the second serial number and give a different name/IP etc. At this point you should have two instances running made up of 6 virtual machines and you can then (if you want) set them up for ‘Remote Copy’ replication.

        I hope that answers your questions, if not drop another comment.

        Regards,

        Alex

    7. Sorry, another question: how can I export a volume to a host? The simulator only has FC adapters and not iSCSI. Can you help me with the steps?

      • Hi Peter,

        The HP notes indicate the simulator does not support export of volumes to hosts. This includes both FC and iSCSI so you won’t be able to do that. I know it’s frustrating but this is just a simulator and isn’t designed to provide full functionality.

        Regards,

        Alex

      • Hi Lior,

        I did have a contact but unfortunately I had to take down that information. The best thing I can suggest now is you liaise with your HP representative or reseller to see if they can be of assistance. If in the future I do get a contact at HP again I will update this comment.

        Regards,

        Alex

      • Hi Lior,

        I reached out to the HPE 3PAR product manager along with a few others to see what the current state of affairs is for the simulator. The good news is they are working on a public open version. They couldn’t give me a time frame for release but said to keep an eye out in the near future so fingers crossed people will have access to a version of the simulator soon.

        Alex

    8. After a restart of the 3PAR simulator node I am not able to log in again.

      I am getting the below error while logging in to the node:

      SIM: trying to bring up link to 1, addr

      trying to bring up link to node 1.

      Can someone check the issue?

      Steps followed to install the 3PAR simulator:

      1) Deployed the VMs [SIM_Node0, SIM_node1 and ESD_node] using the OVF files in VMware Workstation.

      2) Once deployed, we logged in to the ESD host using username root, password root, configured the cages and then started esd.

      3) We then set up node0 and restarted.

      After that we are facing this issue.

    9. Please, I need help!

      While starting the OOTB (Out Of The Box) procedure it shows an error like “No space left for scp – please refer ootb.log”.

      I’m stuck here, so please suggest a solution to get rid of this issue.

      • I assume you’re deploying the latest version downloaded from HPE? The first place to start is to delete the virtual machines and try to deploy them again, as sometimes that helps people with issues. I haven’t deployed the simulator again recently so I’d have to see if I encounter similar problems the next time I do an install of it.

