As we know, Windows clustering is an important feature of Windows Server, and configuring the cluster service normally requires shared storage and a minimum of two nodes. Nowadays VMware Workstation is widely used to build test environments, but shared storage is usually not available for testing. This knowledge base article shows how to meet the shared-storage requirement so you can set up a highly available environment for testing purposes.
· Let’s say we have Cluster Node A and Node B running as VMware virtual machines, on which we want to configure Windows clustering.
· Complete the prerequisites for configuring the cluster service.
[Note: We assume you have installed the Windows Server operating system with all updates and joined both nodes to the domain, that a domain account has been created with the appropriate rights for the cluster service, and that the network cards have been configured per Microsoft best practices.]
· Shut down both machines and leave them powered off.
· Edit the settings of Cluster Node A and create two new SCSI hard disks, with the persistent options described below, in a new folder. Don’t use an existing folder in which one of the VMs resides.
Disk 1: This will be the quorum disk. Make it 500 MB and allocate all disk space now.
Disk 2: This will be the shared data disk. Make it at least 2.0 GB and allocate all disk space now.
If you don't see the .vmdk extension on the disk file you create, delete the disk and re-create it, adding the extension explicitly.
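If you prefer the command line, the two pre-allocated disks can also be created with VMware's `vmware-vdiskmanager` tool instead of the Workstation UI. A minimal Python sketch that assembles the commands (the flag meanings and the `C:\SharedDisks` paths are assumptions, so verify them against your Workstation version's usage text):

```python
# Sketch: build the vmware-vdiskmanager command lines that create the two
# preallocated shared disks. Flag meanings assumed from the tool's usage text:
# -c create, -s size, -a adapter, -t 2 = preallocated single-file disk.

def vdiskmanager_cmd(vmdk_path, size, adapter="lsilogic", disk_type=2):
    """Return the argv list for creating one preallocated virtual disk."""
    return ["vmware-vdiskmanager", "-c", "-s", size,
            "-a", adapter, "-t", str(disk_type), vmdk_path]

# Example paths (hypothetical; use a new folder, not an existing VM's folder):
quorum = vdiskmanager_cmd(r"C:\SharedDisks\Quorum\Quorum.vmdk", "500MB")
data = vdiskmanager_cmd(r"C:\SharedDisks\Data\Data.vmdk", "2GB")
print(" ".join(quorum))
print(" ".join(data))
```

Run the printed commands from the Workstation installation folder on the host.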
· Close VMware Workstation, then edit the VMware configuration (.vmx) file for this machine and make the following changes:
Add the following lines to create a second SCSI channel:
scsi1.present = "TRUE"
scsi1.virtualDev = "lsilogic"
Modify the additional hard drives to be attached to the new SCSI channel. Example:
scsi1:1.present = "TRUE"
scsi1:1.fileName = "C:\Ajay\VMs\SAN\Exchange2003\Quorum\Quorum.vmdk"
scsi1:1.mode = "independent-persistent"
scsi1:2.present = "TRUE"
scsi1:2.fileName = "C:\Ajay\VMs\SAN\Exchange2003\Data\Data.vmdk"
scsi1:2.mode = "independent-persistent"
Add the following lines to disable disk locking and caching:
disk.locking = "false"
diskLib.dataCacheMaxSize = "0"
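The edits above can also be applied with a small script rather than by hand. A minimal Python sketch that appends the settings to a .vmx file (the `C:\SharedDisks` .vmdk paths are examples, so point them at the disks you actually created):

```python
# Sketch: append the shared-storage settings to a node's .vmx file.
# The .vmdk paths are examples -- replace them with your own disk paths.
from pathlib import Path

SHARED_SETTINGS = '''scsi1.present = "TRUE"
scsi1.virtualDev = "lsilogic"
scsi1:1.present = "TRUE"
scsi1:1.fileName = "C:\\\\SharedDisks\\\\Quorum\\\\Quorum.vmdk"
scsi1:1.mode = "independent-persistent"
scsi1:2.present = "TRUE"
scsi1:2.fileName = "C:\\\\SharedDisks\\\\Data\\\\Data.vmdk"
scsi1:2.mode = "independent-persistent"
disk.locking = "false"
diskLib.dataCacheMaxSize = "0"
'''

def patch_vmx_text(text: str) -> str:
    """Return the .vmx text with the settings appended (idempotent)."""
    if "scsi1.present" in text:
        return text  # already patched, do nothing
    return text.rstrip("\n") + "\n" + SHARED_SETTINGS

def patch_vmx(vmx_path: Path) -> None:
    """Patch one node's .vmx file in place."""
    vmx_path.write_text(patch_vmx_text(vmx_path.read_text()))

# Usage (hypothetical paths) -- run once for each node while the VMs are off:
# patch_vmx(Path(r"C:\VMs\NodeA\NodeA.vmx"))
# patch_vmx(Path(r"C:\VMs\NodeB\NodeB.vmx"))
```

The idempotence check means the script is safe to run twice without duplicating entries.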
· Example, for better understanding: when you open the .vmx file after adding the disks, you will find that VMware Workstation has already written entries for the new disks. Delete those auto-generated entries and change the .vmx configuration according to this document.
· Add the same disks in Node B, pointing to the existing .vmdk files rather than creating new ones.
· Make the same changes in the .vmx file of Node B.
· Start VMware Workstation and start Cluster Node A.
· Configure the new disks as basic disks, format them with NTFS, and assign drive letters.
(Suggestion: use the Q: drive letter for the 500 MB quorum disk.)
· Start Cluster Node B. Assign the same drive letters to the same disks.
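The partitioning and formatting steps above can be scripted with diskpart instead of Disk Management. A Python sketch that emits such a script (the disk numbers, labels, and the S: letter are assumptions; check them with diskpart's `list disk` first):

```python
# Sketch: generate a diskpart script that formats the two shared disks and
# assigns drive letters. Disk numbers and the S: letter are assumptions --
# run "list disk" inside diskpart first and adjust. Requires a diskpart
# version that supports the in-script "format" command (Vista/2008 or later).

def diskpart_script(disk_number, label, letter):
    """Return diskpart commands for one disk: partition, NTFS format, letter."""
    return "\n".join([
        f"select disk {disk_number}",
        "create partition primary",
        f"format fs=ntfs label={label} quick",
        f"assign letter={letter}",
    ])

script = diskpart_script(1, "Quorum", "Q") + "\n" + diskpart_script(2, "Data", "S")
print(script)
# Save the output to a file and run on the node:  diskpart /s shared_disks.txt
```

On Node B you would skip the `create partition`/`format` lines and only assign the matching letters.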
· Now you can configure your Windows server cluster.
Good article, worth mentioning that you may need to re-order the VMs' boot sequence. When I followed this (very useful) guide, my VMs failed to boot. Took me a minute to realise they were trying to boot from the Quorum drive first :D
Very good article. I could successfully create shared disks between two virtual machines.
But I still cannot use these shared disks as shared storage for Windows Server clustering; it persistently says that the disks should be iSCSI or iSNS and so on, and that normal SCSI storage is not acceptable.
Can you describe how you installed Microsoft Clustering on your shared storage? How did you configure the Quorum and Data partitions?
Thanks in advance.
Thanks. I had a problem while creating the cluster machines, and this worked.
Thanks for your information!
I want to share my experience:
VMware Workstation 12, creating Oracle RAC with Grid 11.2.0.1:
The disks were created on Node 1 and simply cloned to create Node 2.
To make the disks shareable, I added this configuration to my VMware virtual machine configuration (.vmx) file (in the VMware machine folder), on both Node 1 and Node 2:
Add the following lines to disable disk locking and caching:
disk.locking = "false"
diskLib.dataCacheMaxSize = "0"
And the problem with running the cluster was fixed!
The problem was like this:
1. When I ran the command "olsnodes" it showed me only one node;
2. The ASM instance on Node 2 could not start;
3. I received these errors:
PRCR-1079 : Failed to start resource ora.scan1.vip
CRS-5005: IP Address: 192.168.2.106 is already in use in the network
CRS-2674: Start of 'ora.scan1.vip' on 'rac2' failed
CRS-2632: There are no more servers to try to place resource 'ora.scan1.vip' on that would satisfy its placement policy
And the command "srvctl status scan" showed me that SCAN was enabled, but not running.
All the problems were fixed after I made the disks shareable.
Great article