Monday, April 29, 2013

BMC - Out-of-Band Host Management on SCVMM 2012

Server Lights-Out Management (LOM) enables you to manage a physical host remotely, for actions such as Power On and Power Off. You can now do the same under SCVMM 2012 once you have defined the Baseboard Management Controller (BMC) settings under the managed host's hardware settings. A BMC is also required if you intend to use the new automatic power optimization feature in VMM: if host load falls below a certain threshold (e.g. over weekends), VMM can even power off the host and power it back on when demand returns.

Before the BMC settings are enabled, right-clicking on a host offers no power actions (they are greyed out), except restarting.

Once the host's BMC settings are defined, you can power the physical host on and off from the SCVMM console.

System Prerequisites
According to Microsoft Technet, the host must have a BMC installed that supports one of the following out-of-band management protocols:

  • Intelligent Platform Management Interface (IPMI) versions 1.5 or 2.0
  • Data Center Management Interface (DCMI) version 1.0
  • System Management Architecture for Server Hardware (SMASH) version 1.0 over WS-Management (WS-Man)

Example: Dell Remote Access Controller (DRAC)
Dell DRAC supports IPMI 2.0, but it is not enabled by default. Hence, you must first enable it and configure the appropriate OOB IP address in the BIOS/DRAC configuration utility during system start-up.

Once the VMM server has connectivity to the Dell DRAC, you can proceed to configure the BMC settings under the host's Hardware properties as shown below.

Try powering the host off and on!
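The same BMC configuration and power actions can also be scripted with the VMM PowerShell cmdlets. A minimal sketch; the host name, BMC address, and Run As account name below are placeholders, not values from this setup:

```powershell
# Sketch: configure a managed host's BMC in VMM, then power-cycle it out-of-band.
# Assumes the VMM console/cmdlets are loaded; all names and addresses are examples.
$runAs  = Get-SCRunAsAccount -Name "BMC-Admin"        # Run As account holding the DRAC credentials
$vmHost = Get-SCVMHost -ComputerName "hyperv01"

# Point VMM at the host's BMC (DRAC) using IPMI on the default port
Set-SCVMHost -VMHost $vmHost -BMCAddress "10.0.0.50" `
    -BMCProtocol "IPMI" -BMCPort 623 -BMCRunAsAccount $runAs

# Out-of-band power actions via the BMC
Stop-SCVMHost -VMHost $vmHost
Start-SCVMHost -VMHost $vmHost
```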

Thursday, April 25, 2013

How to setup Windows iSCSI with MPIO on Dual Controller Storage Target

Let's say you've acquired an iSCSI storage target with dual controllers (e.g. the Dell MD3xxx-i series), and you wish to configure the Windows iSCSI initiator with MPIO to enable multi-path connections for high availability and load balancing. How should you go about it?

iSCSI Port Configuration
Let's begin with the iSCSI storage controller setup. Each controller typically has four iSCSI ports, so you should configure four different VLANs on your iSCSI switches. Each port on each controller should connect to one VLAN (and hence one IP subnet). Jumbo frames (e.g. MTU 9000) are also recommended. Assign a valid IP address to each port. The port configuration should look something like this:
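As an illustration of such a plan (the VLAN numbers and addresses here are hypothetical, not from this setup), each controller presents one port in each of the four subnets:

```
VLAN 130  subnet 192.168.130.0/24  Controller 0 Port 0: .101   Controller 1 Port 0: .102
VLAN 131  subnet 192.168.131.0/24  Controller 0 Port 1: .101   Controller 1 Port 1: .102
VLAN 132  subnet 192.168.132.0/24  Controller 0 Port 2: .101   Controller 1 Port 2: .102
VLAN 133  subnet 192.168.133.0/24  Controller 0 Port 3: .101   Controller 1 Port 3: .102
```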

Create new LUN and assign the preferred path to either Controller
Typically, each LUN can be accessed via only one controller at any one time. The connections to the preferred controller are active, while the connections to the other controller remain on standby. Assign the new LUN to a preferred controller and note its iSCSI port addresses.

Enabling MPIO on Windows host
Similarly, configure the four (or more) iSCSI network connections (with Jumbo Frames enabled) on the Windows host. Install the necessary DSM/provider software supplied by the storage vendor. Add the Windows MPIO feature by executing "Install-WindowsFeature Multipath-IO -IncludeManagementTools" in PowerShell. Activate MPIO and restart the host by clicking on the red boxes as follows:
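On Windows Server 2012, the same activation can be scripted end-to-end. A sketch, assuming the in-box MPIO PowerShell module (older releases would use mpclaim.exe instead):

```powershell
# Install the MPIO feature, claim all iSCSI-attached devices for MPIO, then reboot.
Install-WindowsFeature Multipath-IO -IncludeManagementTools
Enable-MSDSMAutomaticClaim -BusType iSCSI   # MSDSM claims all iSCSI bus devices
Restart-Computer
```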

Windows iSCSI Initiator 
Start the Windows initiator by executing the command "iscsicpl". On the Discovery tab, add Target Portals pointing directly at the iSCSI target. Then go back to the Targets tab and click the "Connect" button.
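The portal discovery and login can also be done from the command line with the built-in iscsicli tool. A sketch; the portal address and target IQN below are placeholders for whatever your array reports:

```shell
REM Add a target portal on one of the controller ports (example address)
iscsicli QAddTargetPortal 192.168.130.101

REM List the discovered target IQNs
iscsicli ListTargets

REM Log in to a discovered target (substitute the IQN reported above)
iscsicli QLoginTarget iqn.1984-05.com.dell:powervault.example
```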

By default, only one session connection is made. Disconnect that session; we can then add more iSCSI sessions by clicking on the Properties button as above. Always check the "Enable multi-path" box whenever you see it.

Each session identifier represents one session to one controller. As you can see, I've already added two sessions, one to each controller. You may add more by clicking the "Add session" button. Check the "Enable multi-path" box and click the "Advanced" button, then assign the "Target portal IP" to the preferred controller's address.

Multiple Connections per Session (MCS) Policy
If you have more than one iSCSI NIC on the server for each session, click on the "MCS" button to add additional connections to the session. To add more connections, click the "Add" button and select the appropriate iSCSI initiator and target addresses.

You can also choose a load-balancing algorithm. By default, simple "Round Robin" is used to distribute the load evenly across the paths.

Verify Multi-Path for each connected LUN
Click on "Device" and then "MPIO" button. Verify that each connected LUN can be accessed by more than 1 session.

In summary, there are two levels of path redundancy. First, the sessions to each controller; second, within a session, you can have multiple connections defined under the MCS policy. You may define a different load-balancing algorithm at each level. In this setup, I clicked on the Device MPIO button and set "Fail Over Only" at the session level (remember, the LUN can be accessed via only one controller at a time). Within each session, I set "Round Robin" under the MCS policy to load-balance across the connections.
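On Windows Server 2012, the default MPIO policy that MSDSM applies to newly claimed devices can also be set from PowerShell. A sketch, assuming the in-box MPIO module ("FOO" is the Fail Over Only policy):

```powershell
# Set the default load-balance policy MSDSM applies to newly claimed devices.
# Typical values include FOO (fail over only), RR (round robin), and None.
Set-MSDSMGlobalDefaultLoadBalancePolicy -Policy FOO

# Verify the current default
Get-MSDSMGlobalDefaultLoadBalancePolicy
```

Per-LUN and per-path policies (such as Round Robin within a session) are still set in the GUI as described above.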

Thursday, April 18, 2013

Managing Dell MD Storage with SCVMM 2012

In the new SCVMM 2012, you can now centrally manage storage devices using SMI-S providers from various vendors. To date, only a small handful of storage devices are supported in SCVMM 2012. Among Dell's storage portfolio, only the expensive Compellent series is officially supported. There is actually an SMI-S provider (the MD Storage Array vCenter Plug-in) for the cheaper Dell MD storage, but it is meant for VMware vCenter. Fortunately, SMI-S being an open standard, you can use the same provider with SCVMM 2012. Download either the x86 or x64 program. As this provider is a proxy type, install it on a Windows Server 2003 or 2008 machine (WS2012 is not supported) that is reachable by the VMM server. Run the executable to install, and ignore any vCenter settings since we are using SCVMM here.

Post Installation of Dell MD SMI-S on Proxy Server
  1. Create a local user account on the proxy server. Run "cimuser -a -u username -w password" (where username is that local user account) using the local Administrator credential (not just an elevated prompt) to add it as a CIM user account. The command can be found under "C:\Program Files (x86)\Dell\pegasus\bin".
  2. Create/Edit ArrayHosts.txt in the directory "C:\Program Files (x86)\Dell\pegasus\providers\array". Add the management IP addresses of the storage array. Use a new line for each IP address.
  3. Restart the cimserver service using "services.msc". Verify TCP port 5988 is listening using "netstat -ano".
  4. Enable a host firewall rule to permit inbound TCP 5988.
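The four steps above can be sketched from an Administrator command prompt as follows. The user name, password, and array IP are placeholders, and the firewall rule uses the Windows Server 2008 netsh advfirewall syntax (2003 uses the older netsh firewall syntax):

```shell
REM 1. Register the local account as a CIM user
cd "C:\Program Files (x86)\Dell\pegasus\bin"
cimuser -a -u smisuser -w P@ssw0rd!

REM 2. Add the array's management IP to ArrayHosts.txt (one IP per line)
echo 10.0.0.60>> "C:\Program Files (x86)\Dell\pegasus\providers\array\ArrayHosts.txt"

REM 3. Restart the CIM server service and verify port 5988 is listening
net stop cimserver
net start cimserver
netstat -ano | findstr :5988

REM 4. Permit inbound TCP 5988 through the host firewall
netsh advfirewall firewall add rule name="SMI-S CIMXML (TCP 5988)" dir=in action=allow protocol=TCP localport=5988
```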

Configuration of SCVMM 2012 SP1 on VMM Server
  1. Create a Run As account using the same credentials that you created earlier with cimuser.
  2. Go to "Fabric". Right-click on "Providers" to add storage devices, and choose the "SMI-S" option.
  3. On the next page, choose the "SMI-S CIMXML" protocol. Specify the proxy server as the Provider IP address and select the Run As account.

Complete the remaining wizard steps, and the MD storage can now be managed as part of the SCVMM 2012 Storage Fabric. You can create new LUNs and assign them to VMs directly from the same console, seamlessly.
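The wizard steps can also be scripted with the VMM cmdlets. A sketch; the account name and proxy host name below are placeholders:

```powershell
# Create a Run As account matching the CIM user, then register the SMI-S provider.
$creds = Get-Credential                               # enter the cimuser credentials
$runAs = New-SCRunAsAccount -Name "MD-SMIS" -Credential $creds

# Register the proxy host as an SMI-S CIMXML storage provider on TCP 5988
Add-SCStorageProvider -Name "DellMD" -RunAsAccount $runAs `
    -NetworkDeviceName "http://smis-proxy.contoso.local" -TCPPort 5988
```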