
SQL Server Clustering


Introduction

A server cluster is a group of independent servers running Cluster service and working collectively as a single system. Server clusters provide high-availability, scalability, and manageability for resources and applications by grouping multiple servers running Windows® 2000 Advanced Server or Windows 2000 Datacenter Server.

The purpose of server clusters is to preserve client access to applications and resources during failures and planned outages. If one of the servers in the cluster is unavailable due to failure or maintenance, resources and applications move to another available cluster node.

For clustered systems, the term high availability is used rather than fault-tolerant, as fault tolerant technology offers a higher level of resilience and recovery. Fault-tolerant servers typically use a high degree of hardware redundancy plus specialized software to provide near-instantaneous recovery from any single hardware or software fault. These solutions cost significantly more than a clustering solution because organizations must pay for redundant hardware that waits idly for a fault. Fault-tolerant servers are used for applications that support high-value, high-rate transactions such as check clearinghouses, Automated Teller Machines (ATMs), or stock exchanges.

While Cluster service does not guarantee non-stop operation, it provides availability sufficient for most mission-critical applications. Cluster service can monitor applications and resources, automatically recognizing and recovering from many failure conditions. This provides greater flexibility in managing the workload within a cluster, and improves overall availability of the system.

Cluster service benefits include:

  • High Availability. With Cluster service, ownership of resources such as disk drives and IP addresses is automatically transferred from a failed server to a surviving server. When a system or application in the cluster fails, the cluster software restarts the failed application on a surviving server, or disperses the work from the failed node to the remaining nodes. As a result, users experience only a momentary pause in service.
  • Failback. Cluster service automatically re-balances the workload in a cluster when a failed server comes back online.
  • Manageability. You can use the Cluster Administrator to manage a cluster as a single system and to manage applications as if they were running on a single server. You can move applications to different servers within the cluster by dragging and dropping cluster objects. You can move data to different servers in the same way. This can be used to manually balance server workloads and to unload servers for planned maintenance. You can also monitor the status of the cluster, all nodes and resources from anywhere on the network.
  • Scalability. Cluster services can grow to meet rising demands. When the overall load for a cluster-aware application exceeds the capabilities of the cluster, additional nodes can be added.

This paper provides instructions for installing Cluster service on servers running Windows 2000 Advanced Server and Windows 2000 Datacenter Server. It describes the process of installing the Cluster service on cluster nodes. It is not intended to explain how to install cluster applications, but rather to guide you through the process of installing a typical, two-node cluster itself.

********************************************************************************************************

 

What is Clustering?

 
A Microsoft SQL Server Cluster is nothing more than a collection of two or more physical servers with identical access to shared storage that provides the disk resources required to store the database files. 

These servers are referred to as “nodes”. Each node talks to the others over a private network, sending a heartbeat signal between them. Should one node fail to communicate its heartbeat to the other node in the cluster, the surviving node takes ownership of any dependent services being run by the node that lost communication. This process is referred to as “failover”.

A failover can occur either automatically (when a server’s heartbeat stops communicating) or manually. A manual failover is beneficial when patching or some other form of maintenance is required at the physical server level. You would typically implement clustering to ensure that if you ever encounter hardware failure on the physical server hosting your SQL instance, your databases remain available to dependent applications and their users.

Unlike other clustering technologies that are implemented for better performance or for increased processing power via load-balancing, SQL clusters are designed to provide highly available databases, eliminating the downtime associated with hardware failure. This architectural concept is referred to as “High Availability Clustering”, or “HA Clustering” for short. The service or group of services hosted on a clustered node are referred to as resources and resource groups, respectively. Since these resources must be available to all nodes in the cluster, they must reside on a shared disk array in the form of SAN/NAS storage. Each resource group is mapped to a logical drive that is physically hosted on the shared disk array and also has its own associated IP address and network name.
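
For reference, the Cluster service also ships with a command-line tool, cluster.exe, that reports the same node, group, and resource information that Cluster Administrator shows, and a manual failover is simply a group move. A minimal sketch, run from any node (the group name "SQL Server Group" and node name NODE2 are example names, not something your cluster necessarily uses):

    rem List the nodes and their states (Up, Down, Paused)
    cluster node /status
    rem List the resource groups and which node currently owns each one
    cluster group /status
    rem A manual failover is a group move to another node
    cluster group "SQL Server Group" /moveto:NODE2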

The SQL Server Installation Process on a Cluster

 
The SQL Server installation process detects when an installation is being attempted on a clustered node and prompts you to choose whether to configure the SQL instance as clustered or not. If you proceed with creating a clustered instance of SQL Server, the instance will be hosted on a “virtual” server. Resources such as data and log files will be created on the shared SAN/NAS disk for SQL Server, SQL Server Agent, and Full-Text Indexing.

If selected during the installation process, Notification Services and Analysis Services are also cluster-aware in SQL Server 2005. The associated program files for the instance, by contrast, are installed on the local drives of each clustered node in identical fashion, and registry values are set identically across all clustered nodes. Since the “virtual” server resides solely on the shared storage, it can be “owned” by any of the nodes you allow. Each node can run these resources in identical fashion because each physical server/node has the program files and identical registry settings necessary to run the SQL instance.

Furthermore, users are oblivious to the underlying fluidity of the server. They connect to it as they would any other physical server: by server name (the virtual server name, in this case) for the default instance, or by virtual server name\instance name for a named instance. This is key for application connectivity. Since the SQL instance simply changes ownership during a failover, the connection strings the applications rely on to connect to their databases need not be recoded; the physical server may become unavailable, but the virtual server persists after the failover.
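
As a quick illustration, a client or DBA connects with sqlcmd (the SQL Server 2005 command-line tool) using only the virtual name, and the instance answers the same way regardless of which physical node owns it at that moment. The virtual server name sqlcluster and instance name INST1 below are example names only:

    rem Default instance: connect by virtual server name; @@SERVERNAME returns that virtual name
    sqlcmd -S sqlcluster -E -Q "SELECT @@SERVERNAME"
    rem Named instance: virtual server name\instance name
    sqlcmd -S sqlcluster\INST1 -E -Q "SELECT @@SERVERNAME"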

 

Active/Active or Active/Passive Clustering

 
Clusters are often referred to as either Active/Active or Active/Passive. Just as you would expect from the name, in an Active/Active cluster there are two or more nodes, each one owning an instance of Microsoft SQL Server. If one node fails, the instance it owns fails over to the other node, running alongside (and contending for resources with) that node’s own instance. An Active/Passive architecture requires that, no matter how many nodes make up the cluster, at least one node does not own an instance of SQL Server. It is “passive” and exists only to accept a failover from a node hosting a SQL instance.

Current Microsoft licensing policies require you to only license the active nodes running Microsoft SQL Server.  The passive node need not be licensed.

 

How Many Nodes?


Today’s clustering technology under Windows 2003 and Microsoft SQL Server 2005 Enterprise Edition allows up to eight nodes to be combined into a single cluster. The release of Windows 2008 and Microsoft SQL Server 2008 Enterprise Edition will double that to sixteen nodes. (You are limited to two nodes if you use SQL Server Standard Edition.) Do you want to cluster multiple databases with varying SLAs across many nodes of a single cluster? Is it beneficial to dedicate a two-node cluster to a single database? The answer is: “It depends.” We look into this in detail in a future tip.
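
As a side note, on SQL Server 2005 you can list the nodes of the cluster hosting a given instance from the instance itself via the sys.dm_os_cluster_nodes view; a minimal sketch (the virtual server name sqlcluster is an example):

    rem Returns one row per cluster node that can host this clustered instance
    sqlcmd -S sqlcluster -E -Q "SELECT NodeName FROM sys.dm_os_cluster_nodes"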

 

 

Pros and Cons

 
While clustering protects you from hardware failure on the server hosting the SQL Server instance, it does not protect you from media failure. Unlike replication, database mirroring, or log shipping, there is only a single copy of your database; if the SAN/NAS encounters a failure, you could conceivably incur not only downtime but data loss. It is recommended that you incorporate SAN/NAS redundancy or database mirroring into your clustering configuration to protect against media failures. Hardware and licensing costs may also be high: in an Active/Passive clustering model you purchase hardware you hope never to need, and the cluster build is more complex than a standalone server setup (the physical build of the cluster is outside the scope of this discussion). On the plus side, clustering offers a straightforward SQL Server installation and ease of administration and maintenance.

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Setting up Networks

Note: For this section, power down all shared storage devices and then power up all nodes. Do not let both nodes access the shared storage devices at the same time until the Cluster service is installed on at least one node and that node is online.

Each cluster node requires at least two network adapters—one to connect to a public network, and one to connect to a private network consisting of cluster nodes only.

The private network adapter establishes node-to-node communication, cluster status signals, and cluster management. Each node’s public network adapter connects the cluster to the public network where clients reside.

Verify that all network connections are correct, with private network adapters connected to other private network adapters only, and public network adapters connected to the public network. The connections are illustrated in Figure 1 below. Run these steps on each cluster node before proceeding with shared disk setup.

Figure 1: Example of two-node cluster (clusterpic.vsd)

Configuring the Private Network Adapter

Perform these steps on the first node in your cluster.

  1. Right-click My Network Places and then click Properties.
  2. Right-click the Local Area Connection 2 icon. Note: Which network adapter is private and which is public depends upon your wiring. For the purposes of this document, the first network adapter (Local Area Connection) is connected to the public network, and the second network adapter (Local Area Connection 2) is connected to the private cluster network. This may not be the case in your network.
  3. Click Status. The Local Area Connection 2 Status window shows the connection status, as well as the speed of connection. If the window shows that the network is disconnected, examine cables and connections to resolve the problem before proceeding. Click Close.
  4. Right-click Local Area Connection 2 again, click Properties, and click Configure.
  5. Click Advanced. The window in Figure 2 should appear.
  6. Network adapters on the private network should be set to the actual speed of the network, rather than the default automated speed selection. Select your network speed from the drop-down list. Do not use an Auto-select setting for speed. Some adapters may drop packets while determining the speed. To set the network adapter speed, click the appropriate option such as Media Type or Speed.
    Figure 2: Advanced Adapter Configuration (advanced.bmp)

    All network adapters in the cluster that are attached to the same network must be identically configured to use the same Duplex Mode, Flow Control, Media Type, and so on. These settings should remain the same even if the hardware is different.

    Note: We highly recommend that you use identical network adapters throughout the cluster network.

  7. Click Transmission Control Protocol/Internet Protocol (TCP/IP).
  8. Click Properties.
  9. Click the radio-button for Use the following IP address and type in the following address: 10.1.1.1. (Use 10.1.1.2 for the second node.)
  10. Type in a subnet mask of 255.0.0.0.
  11. Click the Advanced button and select the WINS tab. Select Disable NetBIOS over TCP/IP. Click OK to return to the previous menu. Do this step for the private network adapter only. The window should now look like Figure 3 below. (A command-line sketch for assigning the static address follows this list.)


    Figure 3: Private Connector IP Address (ip10111.bmp)
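
If you prefer the command line, the same static address can be assigned and then verified with netsh and ipconfig. This is a sketch only; it assumes the Windows 2000 netsh interface ip context and the default connection name, and NetBIOS over TCP/IP still has to be disabled from the WINS tab as in step 11:

    rem Assign the private adapter's static address (node 1 shown; use 10.1.1.2 on node 2)
    netsh interface ip set address name="Local Area Connection 2" source=static addr=10.1.1.1 mask=255.0.0.0 gateway=none
    rem Verify the result
    ipconfig /all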

 

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

 

Configuring the Public Network Adapter

Note: While the public network adapter’s IP address can be automatically obtained if a DHCP server is available, this is not recommended for cluster nodes. We strongly recommend setting static IP addresses for all network adapters in the cluster, both private and public. If IP addresses are obtained via DHCP, access to cluster nodes could become unavailable if the DHCP server goes down. If you must use DHCP for your public network adapter, use long lease periods to assure that the dynamically assigned lease address remains valid even if the DHCP service is temporarily lost. In all cases, set static IP addresses for the private network connector. Keep in mind that Cluster service will recognize only one network interface per subnet. If you need assistance with TCP/IP addressing in Windows 2000, please see Windows 2000 Online Help.

Rename the Local Area Network Icons

We recommend changing the names of the network connections for clarity. For example, you might want to change the name of Local Area Connection 2 to something like Private Cluster Connection. The naming will help you identify a network and correctly assign its role.

  1. Right-click the Local Area Connection 2 icon.
  2. Click Rename.
  3. Type Private Cluster Connection into the textbox and press Enter.
  4. Repeat steps 1-3 and rename the public network adapter as Public Cluster Connection.
    Figure 4: Renamed connections (connames.bmp)
  5. The renamed icons should look like those in Figure 4 above. Close the Networking and Dial-up Connections window. The new connection names automatically replicate to other cluster servers as they are brought online.

Verifying Connectivity and Name Resolution

To verify that the private and public networks are communicating properly, perform the following steps for each network adapter in each node. You need to know the IP address for each network adapter in the cluster. If you do not already have this information, you can retrieve it using the ipconfig command on each node:

  1. Click Start, click Run and type cmd in the text box. Click OK.
  2. Type ipconfig /all and press Enter. IP information should display for all network adapters in the machine.
  3. If you do not already have the command prompt on your screen, click Start, click Run, and type cmd in the text box. Click OK.
  4. Type ping ipaddress where ipaddress is the IP address for the corresponding network adapter in the other node. For example, assume that the IP addresses are set as follows:
    Node   Network Name                  Network Adapter IP Address
    1      Public Cluster Connection     172.16.12.12
    1      Private Cluster Connection    10.1.1.1
    2      Public Cluster Connection     172.16.12.14
    2      Private Cluster Connection    10.1.1.2

In this example, you would type ping 172.16.12.14 and ping 10.1.1.2 from Node 1, and you would type ping 172.16.12.12 and ping 10.1.1.1 from Node 2.

To verify name resolution, ping each node from a client using the node’s machine name instead of its IP number. For example, to verify name resolution for the first cluster node, type ping hq-res-dc01 from any client.
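
These checks can also be scripted. Below is a minimal batch-file sketch run from Node 1, using the example addresses and machine name from this section (adjust for your own addressing):

    @echo off
    rem Ping Node 2's public and private addresses from Node 1
    for %%A in (172.16.12.14 10.1.1.2) do ping -n 2 %%A
    rem Verify name resolution using a node's machine name
    ping hq-res-dc01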

Verifying Domain Membership

All nodes in the cluster must be members of the same domain and able to access a domain controller and a DNS Server. They can be configured as member servers or domain controllers. If you decide to configure one node as a domain controller, you should configure all other nodes as domain controllers in the same domain as well. In this document, all nodes are configured as domain controllers.

Note: See More Information at the end of this document for links to additional Windows 2000 documentation that will help you understand and configure domain controllers, DNS, and DHCP.

  1. Right-click My Computer, and click Properties.
  2. Click Network Identification. The System Properties dialog box displays the full computer name and domain. In our example, the domain name is reskit.com.
  3. If you are using member servers and need to join a domain, you can do so at this time. Click Properties and follow the on-screen instructions for joining a domain.
  4. Close the System Properties and My Computer windows.

Setting Up a Cluster User Account

The Cluster service requires a domain user account under which the Cluster service can run. This user account must be created before installing Cluster service, because setup requires a user name and password. This should be a dedicated account, not the account of an existing user on the domain.

  1. Click Start, point to Programs, point to Administrative Tools, and click Active Directory Users and Computers.
  2. Click the + to expand Reskit.com (if it is not already expanded).
  3. Click Users.
  4. Right-click Users, point to New, and click User.
  5. Type in the cluster user name as shown in Figure 5 below and click Next.
    Figure 5: Add Cluster User (clusteruser.bmp)
  6. Set the password settings to User Cannot Change Password and Password Never Expires. Click Next and then click Finish to create this user. Note: If your administrative security policy does not allow the use of passwords that never expire, you must renew the password and update the cluster service configuration on each node before password expiration.
  7. Right-click Cluster in the left pane of the Active Directory Users and Computers snap-in. Select Properties from the context menu.
  8. Click Add Members to a Group.
  9. Click Administrators and click OK. This gives the new user account administrative privileges on this computer.
  10. Close the Active Directory Users and Computers snap-in.
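
The same account can be created from a command prompt if you prefer. This is a sketch only; it assumes the account name cluster (the name used later in this document) and the NetBIOS domain name RESKIT for the reskit.com domain, and the password options from step 6 still need to be set in Active Directory Users and Computers:

    rem Create the Cluster service account in the domain (you will be prompted for a password)
    net user cluster * /add /domain
    rem Give the account administrative rights on this computer
    net localgroup Administrators RESKIT\cluster /add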

Setting Up Shared Disks

Warning: Make sure that Windows 2000 Advanced Server or Windows 2000 Datacenter Server and the Cluster service are installed and running on one node before starting an operating system on another node. If the operating system is started on other nodes before the Cluster service is installed, configured and running on at least one node, the cluster disks will probably be corrupted.

To proceed, power off all nodes. Power up the shared storage devices and then power up node one.

About the Quorum Disk

The quorum disk is used to store cluster configuration database checkpoints and log files that help manage the cluster. We make the following quorum disk recommendations:

  • Create a small partition (a minimum of 50 MB) to be used as the quorum disk. We generally recommend a quorum disk of 500 MB.
  • Dedicate a separate disk for a quorum resource. As the failure of the quorum disk would cause the entire cluster to fail, we strongly recommend you use a volume on a RAID disk array.

During the Cluster service installation, you must provide the drive letter for the quorum disk. In our example, we use the letter Q.

Configuring Shared Disks

  1. Right-click My Computer, click Manage, and click Storage.
  2. Double-click Disk Management.
  3. Verify that all shared disks are formatted as NTFS and are designated as Basic. If you connect a new drive, the Write Signature and Upgrade Disk Wizard starts automatically. If this happens, click Next to go through the wizard. The wizard sets the disk to dynamic. To reset the disk to Basic, right-click Disk # (where # specifies the disk you are working with) and click Revert to Basic Disk.
  4. Right-click unallocated disk space, then:
    1. Click Create Partition…
    2. The Create Partition Wizard begins. Click Next twice.
    3. Enter the desired partition size in MB and click Next.
    4. Accept the default drive letter assignment by clicking Next.
    5. Click Next to format and create partition.

Assigning Drive Letters

After the bus, disks, and partitions have been configured, drive letters must be assigned to each partition on each clustered disk.

Note: Mount points are a feature of the file system that allows you to mount a file system using an existing directory without assigning a drive letter. Mount points are not supported on clusters. Any external disk used as a cluster resource must be partitioned using NTFS partitions and must have a drive letter assigned to it.

  1. Right-click the desired partition and select Change Drive Letter and Path.
  2. Select a new drive letter.
  3. Repeat steps 1 and 2 for each shared disk.
    Figure 6: Disks with Drive Letters Assigned (drives.bmp)
  4. When finished, the Computer Management window should look like Figure 6 above. Now close the Computer Management window.

Verifying Disk Access and Functionality

  1. Click Start, click Programs, click Accessories, and click Notepad.
  2. Type some words into Notepad and use the File/Save As command to save it as a test file called test.txt. Close Notepad.
  3. Double-click the My Documents icon.
  4. Right-click test.txt and click Copy.
  5. Close the window.
  6. Double-click My Computer.
  7. Double-click a shared drive partition.
  8. Click Edit and click Paste.
  9. A copy of the file should now reside on the shared disk.
  10. Double-click test.txt to open it on the shared disk. Close the file.
  11. Highlight the file and press the Del key to delete it from the clustered disk.

Repeat the process for all clustered disks to verify they can be accessed from the first node.
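
A quicker command-prompt equivalent of the Notepad test above, assuming F: is one of the shared drive letters you assigned (substitute each clustered drive letter in turn):

    rem Write, read, and delete a test file on the shared disk
    echo cluster disk test > F:\test.txt
    type F:\test.txt
    del F:\test.txt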

At this time, shut down the first node, power on the second node and repeat the Verifying Disk Access and Functionality steps above. Repeat again for any additional nodes. When you have verified that all nodes can read and write from the disks, turn off all nodes except the first, and continue with this guide.

Install Cluster Service software

Configuring the First Node

Note: During installation of Cluster service on the first node, all other nodes must either be turned off, or stopped prior to Windows 2000 booting. All shared storage devices should be powered up.

In the first phase of installation, all initial cluster configuration information must be supplied so that the cluster can be created. This is accomplished using the Cluster Service Configuration Wizard.

  1. Click Start, click Settings, and click Control Panel.
  2. Double-click Add/Remove Programs.
  3. Double-click Add/Remove Windows Components.
  4. Select Cluster Service. Click Next.
  5. Cluster service files are located on the Windows 2000 Advanced Server or Windows 2000 Datacenter Server CD-ROM. Enter x:\i386 (where x is the drive letter of your CD-ROM). If Windows 2000 was installed from a network, enter the appropriate network path instead. (If the Windows 2000 Setup splash screen displays, close it.) Click OK.
  6. Click Next.
  7. The window shown in Figure 7 below appears. Click I Understand to accept the condition that Cluster service is supported on hardware from the Hardware Compatibility List only.
    Figure 7: Hardware Configuration Certification Screen (hcl.bmp)
  8. Because this is the first node in the cluster, you must create the cluster itself. Select The first node in the cluster, as shown in Figure 8 below, and then click Next.
    Figure 8: Create New Cluster (clustcreate.bmp)
  9. Enter a name for the cluster (up to 15 characters), and click Next. (In our example, we name the cluster MyCluster.)
  10. Type the user name of the cluster service account that was created during the pre-installation. (In our example, this user name is cluster.) Leave the password blank. Type the domain name, and click Next. Note: You would normally provide a secure password for this user account.

    At this point the Cluster Service Configuration Wizard validates the user account and password.

  11. Click Next.

Configuring Cluster Disks

Note: By default, all SCSI disks not residing on the same bus as the system disk will appear in the Managed Disks list. Therefore, if the node has multiple SCSI buses, some disks may be listed that are not to be used as shared storage (for example, an internal SCSI drive.) Such disks should be removed from the Managed Disks list.

  1. The Add or Remove Managed Disks dialog box shown in Figure 9 specifies which disks on the shared SCSI bus will be used by Cluster service. Add or remove disks as necessary and then click Next.
    Figure 9: Add or Remove Managed Disks (manageddisks.bmp)

    Note that because logical drives F: and G: exist on a single hard disk, they are seen by Cluster service as a single resource. The first partition of the first disk is selected as the quorum resource by default. Change this to denote the small partition that was created as the quorum disk (in our example, drive Q). Click Next.

    Note: In production clustering scenarios you must use more than one private network for cluster communication to avoid having a single point of failure. Cluster service can use private networks for cluster status signals and cluster management. This provides more security than using a public network for these roles. You can also use a public network for cluster management, or you can use a mixed network for both private and public communications. In any case, make sure at least two networks are used for cluster communication, as using a single network for node-to-node communication represents a potential single point of failure. We recommend that multiple networks be used, with at least one network configured as a private link between nodes and other connections through a public network. If you have more than one private network, make sure that each uses a different subnet, as Cluster service recognizes only one network interface per subnet.

    This document is built on the assumption that only two networks are in use. It shows you how to configure these networks as one mixed and one private network.

    The order in which the Cluster Service Configuration Wizard presents these networks may vary. In this example, the public network is presented first.

  2. Click Next in the Configuring Cluster Networks dialog box.
  3. Make sure that the network name and IP address correspond to the network interface for the public network.
  4. Check the box Enable this network for cluster use.
  5. Select the option All communications (mixed network) as shown in Figure 10 below.
  6. Click Next.
    Figure 10: Public Network Connection (pubclustnet.bmp)
  7. The next dialog box shown in Figure 11 configures the private network. Make sure that the network name and IP address correspond to the network interface used for the private network.
  8. Check the box Enable this network for cluster use.
  9. Select the option Internal cluster communications only.
    Figure 11: Private Network Connection (privclustnet.bmp)
  10. Click Next.
  11. In this example, both networks are configured in such a way that both can be used for internal cluster communication. The next dialog window offers an option to modify the order in which the networks are used. Because Private Cluster Connection represents a direct connection between nodes, it is left at the top of the list. In normal operation this connection will be used for cluster communication. In case of a Private Cluster Connection failure, Cluster service will automatically switch to the next network on the list, in this case Public Cluster Connection. Make sure the first connection in the list is the Private Cluster Connection and click Next. Important: Always set the order of the connections so that the Private Cluster Connection is first in the list.
  12. Enter the unique cluster IP address (172.16.12.20) and Subnet mask (255.255.252.0), and click Next.
    Figure 12: Cluster IP Address (clusterip.bmp)

    The Cluster Service Configuration Wizard shown in Figure 12 automatically associates the cluster IP address with one of the public or mixed networks. It uses the subnet mask to select the correct network.

  13. Click Finish to complete the cluster configuration on the first node. The Cluster Service Setup Wizard completes the setup process for the first node by copying the files needed to complete the installation of Cluster service. After the files are copied, the Cluster service registry entries are created, the log files on the quorum resource are created, and the Cluster service is started on the first node.

    A dialog box appears telling you that Cluster service has started successfully.

  14. Click OK.
  15. Close the Add/Remove Programs window.

Validating the Cluster Installation

Use the Cluster Administrator snap-in to validate the Cluster service installation on the first node.

  1. Click Start, click Programs, click Administrative Tools, and click Cluster Administrator.
    Figure 13: Cluster Administrator (1nodeadmin.bmp)

    If your snap-in window is similar to that shown above in Figure 13, your Cluster service was successfully installed on the first node. You are now ready to install Cluster service on the second node.
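
As a complement to the snap-in check above, the installation can also be validated from a command prompt with cluster.exe, which is installed along with the Cluster service. A minimal sketch using the example cluster name MyCluster:

    rem Confirm the node is up
    cluster MyCluster node /status
    rem Confirm the groups and resources (including the quorum disk Q:) are online
    cluster MyCluster group /status
    cluster MyCluster resource /status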

Configuring the Second Node

Note: For this section, leave node one and all shared disks powered on. Power up the second node.

Installing Cluster service on the second node requires less time than on the first node. Setup configures the Cluster service network settings on the second node based on the configuration of the first node.

Installation of Cluster service on the second node begins exactly as for the first node. During installation of the second node, the first node must be running.

Follow the same procedures used for installing Cluster service on the first node, with the following differences:

  1. In the Create or Join a Cluster dialog box, select The second or next node in the cluster, and click Next.
  2. Enter the cluster name that was previously created (in this example, MyCluster), and click Next.
  3. Leave Connect to cluster as unchecked. The Cluster Service Configuration Wizard will automatically supply the name of the user account selected during the installation of the first node. Always use the same account used when setting up the first cluster node.
  4. Enter the password for the account (if there is one) and click Next.
  5. At the next dialog box, click Finish to complete configuration.
  6. The Cluster service will start. Click OK.
  7. Close Add/Remove Programs.

If you are installing additional nodes, repeat these steps to install Cluster service on all other nodes.

Verify Installation

There are several ways to verify a successful installation of Cluster service. Here is a simple one:

  1. Click Start, click Programs, click Administrative Tools, and click Cluster Administrator.
    Figure 14: Cluster Resources (clustadmin.bmp)

    The presence of two nodes (HQ-RES-DC01 and HQ-RES-DC02 in Figure 14 above) shows that a cluster exists and is in operation.

  2. Right-click the group Disk Group 1 and select the option Move. The group and all its resources will be moved to another node. After a short period of time, disks F: and G: will be brought online on the second node. If you watch the screen, you will see this shift. Close the Cluster Administrator snap-in. (An equivalent cluster.exe command is sketched after this list.)
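
The same move can be made from a command prompt; a sketch using the example names from this guide:

    rem Move Disk Group 1 to the second node, then confirm its new owner
    cluster MyCluster group "Disk Group 1" /moveto:HQ-RES-DC02
    cluster MyCluster group "Disk Group 1" /status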

Congratulations. You have completed the installation of Cluster service on all nodes. The server cluster is fully operational. You are now ready to install cluster resources such as file shares, printer spoolers, cluster-aware services like IIS, Message Queuing, Distributed Transaction Coordinator, DHCP, and WINS, or cluster-aware applications like Exchange or SQL Server.

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Clustering SQL Server 2005 (Installation)

 

When the installation process begins, the setup program recognizes all the nodes, and once you give it the go-ahead to install on each one, it does so automatically. SQL Server 2005 binaries are installed on the local drive of each node, and the system databases are stored on the shared array you designate.

The next section provides step-by-step instructions for installing a SQL Server 2005 instance in a cluster. The assumption for this example is that you will be installing this instance in a 2-node active/passive cluster. Even if you will be installing a 2-node active/active or a multi-node cluster, the steps in this section are virtually the same. The only real difference is that you will have to run SQL Server 2005 setup for every instance you want to install on the cluster, and you will have to specify a different logical drive on the shared array.

Clustering SQL Server

To begin installing your SQL Server 2005 cluster, you will need the installation CD or DVD. You can either install it directly from the media, or copy the install files from the media to the current active node of the cluster, and run the setup program from there.

To begin the installation, run Setup.exe. After an introductory screen, you will get the first install dialog box as shown in the figure below.

The Installing Prerequisites dialog box lists the prerequisites that need to be installed before installation of SQL Server 2005 can begin. The number of components may vary from the above figure, depending on what you have already installed on your nodes. What is interesting to note here is that these prerequisite components will only be installed immediately on the active node. They will be installed on the passive node later during the installation process. This is done automatically and you don’t have to worry about it.

Click Install to install these components. When completed, you will get a dialog box telling you that they were installed successfully, and you can then click Next to proceed. On occasion, I have seen these components fail to install correctly. If this happens, you will have to troubleshoot the installation. Generally speaking, try rebooting both nodes of the cluster and installing the components again. This often fixes whatever caused the first setup attempt to fail.

Once the prerequisite components have been successfully installed, the SQL Server Installation Wizard launches, as you can see in the figure below.

Click Next to proceed.

The next step is for the SQL Server Installation Wizard to perform a System Configuration Check. This is very similar to the check that was performed with clustering services when you installed Windows 2003 Clustering. Ideally, you want all checks to be successful, with a green icon. If you get any yellow warning or red error icons, then you need to find out what the problem is, and correct it before proceeding.

In some cases, yellow warning icons can be ignored, but red error icons cannot. If you have any yellow or red icons, you may have to abort the setup process, fix the problem, then restart the setup process. Assuming all is well, click Next to proceed.

The next dialog box is Registration, where you enter your company name and license key, if applicable.

Next, you must select the SQL Server 2005 components to install. See below.

I want to point out the options to “Create a SQL Server failover cluster” and to “Create an Analysis Server failover cluster” (currently grayed out). Since we are creating a SQL Server 2005 cluster, you must select the “Create a SQL Server failover cluster.” If you are going to install Analysis Services (not covered in this example) then you must select “Create an Analysis Server failover cluster.” Once you have selected all the components you need to include, click Next.

As with any install of SQL Server 2005, the next step is to select the name of the instance to be installed. You can choose between a default instance and a named instance. Click Next to proceed.

Now, here is a very important step. This is when you enter the name of the virtual SQL Server 2005 instance you are currently installing. This is the name that clients will use to connect to this instance. Ideally, you have already selected a name to use that makes the most sense to your organization. Click Next to proceed. If you ever need to change this virtual name, you will have to uninstall and then reinstall SQL Server 2005 clustering.

This is also a very important step. This is where you enter the virtual IP address for this instance of SQL Server 2005. Like the cluster virtual name, it is used by clients to connect to this instance of SQL Server 2005. The IP address must belong to the same subnet as the IP addresses used by all of the nodes.

In addition, in this dialog box you must select the network to be used for the public network—the network used by the clients to connect to this instance of SQL Server 2005. All of the available networks will be listed in the drop-down box next to Network to use. If you have named the public and private networks Public and Private, respectively, it will be very easy for you to select the correct network, as I have above.

Once you have entered the IP address and selected the public network, click on Add, so that the information you just selected is in the Selected networks and IP addresses box. Then click Next.

In this dialog box, select the SQL Server Group as the group where you want to create the SQL Server resources. In addition, be sure that the Data files will be created on the correct logical drive of the shared array using the folder name you choose. Click Next to proceed.

Now, you specify which nodes you want to install this instance of SQL Server on. Because our example is for only two nodes, the default setting works for us. Notice that under Required node is SQL2005A, which is the name of the physical node where we are running the setup program. And under Selected nodes is SQL2005B, the second physical node in our 2-node cluster. Click Next to proceed.

In this dialog box, we must select an account (with password) that has administrative rights on all of the nodes where we want to install this instance of SQL Server 2005. This can be any domain account that is a local administrator of all the nodes. Click Next to proceed.

The Service Account dialog box is identical to the one you see when you install SQL Server 2005 on a non-cluster, and it is configured the same. Click Next to proceed.

In this dialog box, you must select pre-existing global domain groups that are used to contain the startup account for each clustered service. You can choose to add all three services to the same global domain group, or to create separate global domain groups, one for each service, as has been done above. Once you have selected appropriate domain groups, click Next to proceed.

The next four dialog boxes of the Installation Wizard, not shown here, are the same as for any other installation of SQL Server 2005. After you have completed these steps, the installation of this instance of SQL Server 2005 begins, and you see the following dialog box.

The installation process will take some time as it is installing the binaries on both nodes of the cluster, and installing the system data files on the shared array. The Setup Progress step shows the status of the first node’s install. If you want to see the status of the second node’s install, you can change the drop-down box next to Node to the second node and watch its progress.

As the installation proceeds, you will want to see all green icons next to each installation step. If any step should fail, then the entire installation process will need to be rolled back, any problems fixed, and SQL Server 2005 installed fresh. In most cases, cancelling a bad installation will uninstall what has already been installed, but not always.

Sometimes, if the installation breaks, it just dies and a rollback of what has been done so far will not occur. If this is the case you can either choose to reinstall on top of the existing bad install (which often does not work), manually uninstall the failed installation (check Microsoft’s Web site for assistance in this area), or rebuild your cluster from scratch (starting with the operating system).

If the install was a success, you will see a final dialog box, where you can click Finish. SQL Server 2005 clustering has now been successfully installed on the two cluster nodes.

Clustering Analysis Services

SQL Server 2005 Analysis Services can be clustered just like SQL Server 2005, and in fact, is installed using the same setup program used to install SQL Server 2005. Below are some points to keep in mind if you should decide to cluster SQL Server 2005 Analysis Services.

  • SQL Server 2005 Analysis Services can be installed by itself, or with SQL Server 2005. Because some of the features of Analysis Services require components of the SQL Server 2005 database engine, it is generally a good idea to install both of them in your cluster.
  • SQL Server 2005 Analysis Services is installed using the same setup program as SQL Server 2005. When running the setup program, you select (or do not select) Analysis Services on the “Components to Install” screen.
  • Because SQL Server 2005 Analysis Services needs to store program files, data files, and shared files, you will have to specify the location of your shared array, where they will reside. These files must reside on a shared array if you want Analysis Services to run after a failover. To specify the location of the shared array, you must select the “Advanced” button from the “Components to Install” screen in the setup wizard.

Other than the above, installing SQL Server 2005 Analysis Services in a cluster is virtually identical to installing SQL Server 2005 in a cluster.

Installing the Service Pack and Hot Fixes

Once you have installed SQL Server 2005 clustering, your next step is to install the latest SQL Server 2005 service pack and hot fixes, which can be downloaded from Microsoft’s Web site. Installing a service pack or hot fix is fairly straightforward because they are cluster-aware. Once the service pack or hot fix setup program is started, it detects that you have a cluster and will upgrade all nodes simultaneously. Once setup is complete, you may need to reboot your servers and fail over the nodes. Generally, once I have run the service pack, I like to reboot the active node first. Once it has rebooted, I reboot the passive node. This way, failover and failback are automatic.

Checking the SQL Server 2005 Installation From Cluster Administrator

Once an instance of SQL Server 2005 clustering is installed, you can view its cluster resources by going to Cluster Administrator and opening up the SQL Server Group resource, as shown below.

This figure shows the cluster resources for the SQL Server 2005 cluster we just built. We see all of the names of the resources, their state, and which node the resources are running on. As I have already mentioned, Cluster Administrator is a great tool for seeing if all the resources are up and running and which node is the current active node.

Here is a brief rundown on each of the SQL Server 2005 cluster resources:

  • Disk F: This is the shared disk array where the SQL Server data files and logs are stored.
  • SQL Network Name (sqlcluster): This is the virtual SQL Server name used by clients to connect to this clustered instance of SQL Server 2005. The name “sqlcluster” is the name I have assigned this cluster instance, and will not be the same as your cluster, unless you name yours the same as mine.
  • SQL IP Address (sqlcluster): This is the virtual SQL Server IP address used by clients to connect to this clustered instance of SQL Server 2005. Again, the name “sqlcluster” is the name of the virtual server, and is the one I have used for this cluster. Your name will most likely be different.
  • SQL Server: This is the SQL Server service.
  • SQL Server Agent: This is the SQL Server Agent service.
  • SQL Server FullText: This is the SQL Server FullText service. Even though you may not use this service, it is automatically installed as a cluster resource.
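
As a quick sanity check from any client, you can also ask the instance itself whether it is clustered and which physical node currently owns it. The virtual name sqlcluster below is the example name used in this article:

    rem IsClustered should return 1; ComputerNamePhysicalNetBIOS reports the current owner node
    sqlcmd -S sqlcluster -E -Q "SELECT SERVERPROPERTY('IsClustered') AS IsClustered, SERVERPROPERTY('ComputerNamePhysicalNetBIOS') AS OwnerNode"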

Manually Initiate a Failover in Cluster Administrator

This test is also performed from Cluster Administrator. Select any of the resources found in the SQL Server Group resource group (not the group itself), right-click it, and select Initiate Failure. Because the cluster service always tries to recover from a failure up to three times, if it can, you will have to select this option four times before a test failover is initiated.

As above, after the first failover, check for any problems, then failback using the same procedure. Then check again for problems.
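
If you want to script these tests, cluster.exe offers rough command-line equivalents of Initiate Failure and of moving the group back. A sketch, using the resource, group, and node names from this article's examples:

    rem Simulate a resource failure (repeat to exceed the restart threshold, as described above)
    cluster resource "SQL Server" /fail
    rem Fail the whole group over (or back) to a specific node
    cluster group "SQL Server Group" /moveto:SQL2005B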

Manually Failover Nodes by Turning Them Off

Turn off the active node. Once this happens, watch the failover in Cluster Administrator and the clients. As before, check for any problems. Next, turn the node back on and wait until it boots up successfully. Then power off the node that is now active, again turning it off hard, and watch the failover in Cluster Administrator and the clients, checking for problems. Turn the node back on when done.

Manually Failover Nodes by Breaking the Public Network Connections

Unplug the public network connection from the active node. This will cause a failover to a passive node, which you can watch in Cluster Administrator and the clients. Check for any problems. Now, plug the public network connection back into the server. And unplug the public network connection from the now active node. This will cause a failover to the current passive node, which you can watch in Cluster Administrator. And again, watch the failover in Cluster Administrator and the clients, and check for problems. Once the testing is complete, plug the network connection back into the server.

Manually Failover Nodes by Breaking the Shared Array Connection

From the active node, remove the shared array connection. This will cause a failover, which you can watch in Cluster Administrator and the clients. Check for any problems. Next, reconnect the broken connection from the now active node, and remove the shared array connection. Watch the failover in Cluster Administrator. And again, watch the failover in Cluster Administrator and the clients, and check for problems. When done, reconnect the broken connection.

If you pass all of these tests the first time, it would almost be a miracle. But I do believe in miracles. If you run into problems, you have to figure them out.

Ready for Production

Once you have successfully tested your production SQL Server 2005 cluster, you are ready to go into production. If you have time, you might want to consider running the cluster in test mode for a while, “playing” with it to learn more about how it works under various conditions. But even if you don’t have any extra time to “play” with your cluster, it should be ready to go into production.
