Use the guidelines in this section when planning your cluster solution for Dynamic Storage Technology shadow volumes. In addition to the installation requirements described in Section 4.1, Installation Requirements for Dynamic Storage Technology, consider the following guidelines when setting up DST shadow volumes in a Novell Cluster Services cluster environment.
Each node that hosts shadow volumes in the cluster must be running OES 2 Linux (or later).
In a mixed-platform cluster (such as for rolling cluster upgrades or conversions), make sure that you specify only OES 2 Linux nodes as failover candidates for the shadow volume cluster resource.
The NCP (NetWare Core Protocol) Server and the Dynamic Storage Technology software are not cluster aware. They must be installed on every OES 2 Linux node in the cluster where you plan to migrate or fail over the cluster resource that contains shadow volumes. You do not cluster the NCP Server or DST services themselves; instead, you cluster the DST shadow volume pair as a cluster resource.
Dynamic Storage Technology supports shadow volumes created with pairs of shared Novell Storage Services (NSS) volumes. Install NSS on each node in the cluster. For information, see the OES 2 SP2: NSS File System Administration Guide.
You must create the two NSS pools and volumes on separate shared disks before you create the shadow volume relationship for the two volumes. The primary pool must be cluster-enabled. You can also cluster-enable the secondary pool, but its Cluster objects and IP address are not used while the two NSS volumes are in the shadow relationship.
Novell Samba is not cluster-aware and is not clustered by default. You must install and configure Novell Samba and ShadowFS for each node in the cluster. For information about setting up CIFS/Samba access on each node, see Section 5.0, Installing and Configuring Shadow File System (ShadowFS) for CIFS/Samba Users.
Additional commands for managing FUSE for the resource must be added manually to the cluster load and unload scripts.
Add the following lines to the load script of the primary NSS pool cluster resource to allow time for ShadowFS to start:

    # If shadowfs is used, wait for shadowfs to start
    for (( c=1; c<=10; c++ )); do
        if [ ! -d /media/shadowfs/VOLUME/._NETWARE ]; then
            sleep 5
        fi
    done
Add the following lines to the unload script of the primary NSS pool cluster resource to unload the volume in FUSE:

    # Unload the volume in FUSE
    ignore_error fusermount -u /media/shadowfs/VOLUME
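For context, the fusermount command runs alongside the other teardown steps in the primary pool's unload script. The following sketch shows one plausible ordering; the pool names (POOL1, POOL2), volume name (VOLUME), and IP address are placeholders, and the command forms follow typical Novell Cluster Services unload scripts, so compare them against the script generated for your resource:

    #!/bin/bash
    . /opt/novell/ncs/lib/ncsfuncs
    # Unbind the NCP virtual server name and dismount the shadow volume pair
    ignore_error ncpcon unbind --ncpservername=CLUSTER-POOL1-SERVER --ipaddress=10.10.10.41
    ignore_error ncpcon dismount VOLUME
    # Unload the volume in FUSE (ShadowFS)
    ignore_error fusermount -u /media/shadowfs/VOLUME
    # Deactivate the secondary pool, then the primary pool
    ignore_error nss /pooldeact=POOL2
    ignore_error nss /pooldeact=POOL1
    # Remove the resource IP address
    ignore_error del_secondary_ipaddress 10.10.10.41
    exit 0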
When using Novell Remote Manager for Linux to manage policies for the shadow volume, you typically connect to the IP address of the cluster resource for the primary storage location in the shadow volume. You can also connect to the IP address of the server node where the cluster resource is currently mounted.
The devices and pools that contain the primary volume and secondary volume in a DST shadow volume must be marked as shareable for clustering. The primary pool must be cluster-enabled for Novell Cluster Services. You can cluster-enable the pool that contains the secondary volume, but its individual pool resource IP address and Cluster objects are not used in the load and unload scripts for the DST shadow volume. The devices, pools, and volumes that are used for the clustered DST shadow volume are managed in the primary pool cluster resource load script and unload script, which allows the two pools (and their volumes) to be failed over together.
Make sure that the same global policies are configured on each node where you want to fail over the cluster resource. Set up the shadow volume and its DST policies on the first node in the cluster, then copy that shadow volume’s configuration and policy information from the /etc/opt/novell/ncpserv.conf file and the /etc/opt/novell/ncp2nss.conf file to the configuration files on each cluster node where you want to fail over the cluster resource. You do not copy the entire file contents, because each server’s configuration files contain information specific to the server and might also contain definitions for other shadow volumes that reside on or fail over to the server. For instructions, see Section 11.2, Preparing the Nodes to Support DST in a Cluster Environment.
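As a sketch of the copy step, the shadow volume definition is a single line in /etc/opt/novell/ncpserv.conf (with a matching entry in /etc/opt/novell/ncp2nss.conf), and only that line is copied to the other nodes. The SHADOW_VOLUME keyword syntax, the volume names (VOL1, ARCVOL), and the /tmp paths below are illustrative assumptions; verify the exact entries in the files on your first node:

```shell
#!/bin/sh
# Illustrative only: stand-in copies of ncpserv.conf from two nodes.
# The SHADOW_VOLUME line format is an assumption; check your actual file.
mkdir -p /tmp/node1 /tmp/node2
cat > /tmp/node1/ncpserv.conf <<'EOF'
NCP_FILE_SERVER_NAME NODE1
SHADOW_VOLUME VOL1 /media/nss/ARCVOL
EOF
cat > /tmp/node2/ncpserv.conf <<'EOF'
NCP_FILE_SERVER_NAME NODE2
EOF
# Copy only the shadow volume definition, not the server-specific lines
grep '^SHADOW_VOLUME' /tmp/node1/ncpserv.conf >> /tmp/node2/ncpserv.conf
cat /tmp/node2/ncpserv.conf
```

The same selective-copy approach applies to the shadow volume's entry in /etc/opt/novell/ncp2nss.conf.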
When working with DST shadow volumes in a cluster, the individual shadow volume policies need to be able to fail over with the volume. You should create separate individual policies for each shadow volume, or make sure that the policy applies only to the shadow volumes that exist in a given cluster resource. A given policy can apply to multiple shadow volumes in the cluster resource. You can have multiple policies associated with a given shadow volume in the cluster resource.
Dynamic Storage Technology does not support using remote volumes in DST shadow volumes in a cluster. Both of the devices for the primary and secondary volumes must be attached as local drives (such as Fibre Channel storage or iSCSI storage) on the same OES 2 Linux server. They must be able to fail over or cluster migrate together to other OES 2 Linux nodes in the cluster. Thus, a single cluster resource is used to manage both volumes. The load script and unload script for the resource includes commands that manage both the primary and secondary devices, pools, and volumes.
In the cluster resource load script for the shadow volume pair, the shadow volume is mounted by mounting its primary volume for NCP Server. The secondary pool must be activated and the secondary volume must be mounted in NSS before you issue the command to mount the shadow volume pair.
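A minimal load-script sketch of this ordering follows. The pool names (POOL1, POOL2), volume names (VOL1, ARCVOL), virtual server name, volume ID, and IP address are placeholders, and the command forms follow typical Novell Cluster Services load scripts, so compare them against the script generated for your resource:

    #!/bin/bash
    . /opt/novell/ncs/lib/ncsfuncs
    # Activate the primary pool, then the secondary pool;
    # the secondary volume must be mounted in NSS before the shadow mount
    exit_on_error nss /poolact=POOL1
    exit_on_error nss /poolact=POOL2
    # Mount the shadow volume pair for NCP Server via the primary volume
    exit_on_error ncpcon mount VOL1=254,SHADOWVOLUME=ARCVOL
    # Bind the resource IP address and NCP virtual server name
    exit_on_error add_secondary_ipaddress 10.10.10.41
    exit_on_error ncpcon bind --ncpservername=CLUSTER-POOL1-SERVER --ipaddress=10.10.10.41
    exit 0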
IMPORTANT: If the secondary volume is not available when the shadow volume pair is mounted, the cluster load script does not fail and does not provide a warning. The DST shadow volume is created and appears to be working when viewed from Novell Remote Manager. However, until the secondary volume is mounted, the files on it are not available to users and appear to be missing in the merged file tree view. After the secondary volume has successfully mounted, the files automatically appear in the merged file tree view.
If you observe that the pools are slow to mount, you can add a wait time to the load script before the mount command for the shadow volume pair.
For example, add a sleep command with a delay of a few seconds, such as:
sleep 10
Increase the sleep value as needed to allow sufficient time for the pools to be activated and the volumes to be mounted in NSS before the script continues.
IMPORTANT: If wait times are added to the load script or unload script, make sure that you increase the script timeout settings accordingly. Otherwise, the script might time out before the action completes.