Saturday, August 25, 2012

Virtual Shared Storage in VMware Workstation

As we know, Windows clustering is an important part of Windows Server, and it formally requires shared storage and a minimum of two nodes to configure the cluster service. These days VMware Workstation is widely used to set up test environments, but shared storage is not easily available for testing. This article shows how to meet the shared-storage requirement in VMware Workstation so you can build a highly available environment for testing purposes.
·         Let’s say we have cluster Node A and Node B on VMware Workstation, on which we will configure Windows clustering.
·         Complete the prerequisites for configuring the cluster service.
[Note: We assume you have installed the Windows Server operating system with all updates and joined both nodes to the domain, that a domain account with appropriate rights has been created for the cluster service, and that the network cards have been configured according to Microsoft best practices.]
·         Shut down both machines and leave them powered off.
·         Edit the settings of Cluster Node A and create two new SCSI hard disks, with the persistent options described below, in a new folder. Don’t use an existing folder in which one of the VMs resides.

Disk 1: This will be the quorum disk. Make this 0.5 GB and allocate all disk space now.

Disk 2: This will be the shared data disk. Make this at least 2.0 GB and allocate all disk space now.

When adding the disks, make sure to manually add the .vmdk file extension.
If you don't see the extension, delete the disk and re-create it, adding the extension.

·         Close VMware Workstation and edit this machine's VMware configuration (.vmx) file, making the following changes:

Add the following lines to create a second SCSI channel:
scsi1.present = "TRUE"
scsi1.virtualDev = "lsilogic"

Modify the additional hard drives so they attach to the new SCSI channel. Example:
scsi1:1.present = "TRUE"
scsi1:1.fileName = "C:\Ajay\VMs\SAN\Exchange2003\Quorum\Quorum.vmdk"
scsi1:1.mode = "independent-persistent"

scsi1:2.present = "TRUE"
scsi1:2.fileName = "C:\Ajay\VMs\SAN\Exchange2003\Data\Data.vmdk"
scsi1:2.mode = "independent-persistent"

Add the following lines to disable disk locking and caching:
disk.locking = "false"
diskLib.dataCacheMaxSize = "0"
·         Example, for better understanding: when you open the .vmx file after adding the disks through the wizard, you will typically find that Workstation attached them to the existing first SCSI channel, with auto-generated entries such as (slot numbers and paths will vary):

                scsi0:1.present = "TRUE"
                scsi0:1.fileName = "Quorum.vmdk"

                Delete these auto-generated entries and change the .vmx configuration as described above.
·         Add the same disks to Node B, attaching the existing .vmdk files created for Node A (add them as existing disks; do not create new ones).

·         Make the same changes in Node B's .vmx file.
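The .vmx edits above can be sketched as a small script that appends the shared-channel entries to a node's configuration file. This is a minimal sketch only: the .vmdk paths in SHARED_LINES are placeholders for your own disk locations, and patch_vmx is a helper name of my own, not a VMware tool.

```python
# Minimal sketch: append the shared-SCSI-channel and lock-disabling entries
# to a VMware Workstation .vmx file (run while the VM is powered off).
# The .vmdk paths below are placeholders -- substitute your own locations.

SHARED_LINES = """
scsi1.present = "TRUE"
scsi1.virtualDev = "lsilogic"

scsi1:1.present = "TRUE"
scsi1:1.fileName = "C:\\VMs\\SAN\\Quorum\\Quorum.vmdk"
scsi1:1.mode = "independent-persistent"

scsi1:2.present = "TRUE"
scsi1:2.fileName = "C:\\VMs\\SAN\\Data\\Data.vmdk"
scsi1:2.mode = "independent-persistent"

disk.locking = "false"
diskLib.dataCacheMaxSize = "0"
"""

def patch_vmx(path):
    """Append the shared-storage entries unless the file already has them."""
    with open(path, "r", encoding="utf-8") as f:
        text = f.read()
    if "scsi1.present" not in text:
        with open(path, "a", encoding="utf-8") as f:
            f.write(SHARED_LINES)
```

Run it once against each node's .vmx file; the presence check makes it safe to run twice on the same file.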
·         Start VMware Workstation and start Cluster Node A.
·         Configure the new disks as basic disks, format them with NTFS, and assign drive letters
(suggestion: use Q: for the 500 MB quorum disk).
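On recent Windows versions this formatting step can also be scripted with diskpart. A sketch only, to be run on Node A: the disk numbers 1 and 2 and the R: letter for the data disk are assumptions, so confirm them with `list disk` before running.

```
rem Save as shared_disks.txt and run:  diskpart /s shared_disks.txt
select disk 1
online disk noerr
attributes disk clear readonly noerr
convert mbr noerr
create partition primary
format fs=ntfs label=Quorum quick
assign letter=Q
select disk 2
online disk noerr
attributes disk clear readonly noerr
convert mbr noerr
create partition primary
format fs=ntfs label=Data quick
assign letter=R
```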
·         Start Cluster Node B and assign the same drive letters to the same disks.
·         Now you can configure your Windows Server cluster.
 

Comments:

  1. Good article; worth mentioning that you may need to re-order the VMs' boot sequence. When I followed this (very useful) guide, my VMs failed to boot. Took me a minute to realise they were trying to boot from the quorum drive first :D

  2. Very good article. I could successfully create shared disks between two virtual machines,
    but I still cannot use these shared disks as shared storage for Windows Server clustering; it persistently says that the disks should be iSCSI or iSNS and so on, and that normal SCSI storage is not acceptable.

    Can you describe how you installed Microsoft Clustering on your shared storage? How did you configure the Quorum and Data partitions?
    Thanks in advance.

  3. Thanks. I had a problem while creating cluster machines, and this worked.

    Thanks for your information!

    I want to share my experience:
    VMware Workstation 12, creating an Oracle Grid 11.2.0.1 RAC:
    the disks were created on Node 1 and simply cloned to create Node 2.
    To make the disks shareable I added these settings to my "VMware Virtual Machine Configuration" file (in the VMware machine folder), on both Node 1 and Node 2:

    Add the following lines to disable disk locking and caching:
    disk.locking = "false"
    diskLib.dataCacheMaxSize = "0"

    And the problem with running the cluster was fixed!
    The problem was like this:

    1. when I ran the command "olsnodes" it showed me only one node;
    2. the ASM instance on Node 2 could not start;
    3. I received these errors:

    PRCR-1079 : Failed to start resource ora.scan1.vip
    CRS-5005: IP Address: 192.168.2.106 is already in use in the network
    CRS-2674: Start of 'ora.scan1.vip' on 'rac2' failed
    CRS-2632: There are no more servers to try to place resource 'ora.scan1.vip' on that would satisfy its placement policy

    And the command "srvctl status scan" showed me that SCAN was enabled, but not running.

    All the problems were fixed after I made the disks shareable.
