Before you add OES 11x nodes to an OES 2 SP3 cluster, you must take all of the CSM-based resources offline on the OES 2 SP3 nodes and modify their scripts to run on OES 11x nodes by adding csmport commands to activate, deactivate, or check the status of the CSM container. After you modify the scripts, the resources can no longer be used on OES 2 SP3 nodes; they can be mounted successfully only on OES 11x nodes in the cluster.
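For quick reference, the csmport commands that these scripts add behave as follows. The roles are taken from the script comments shown later in this section; replace container_name with your CSM container name:

# csmport commands used in the cluster scripts in this section
csmport -i container_name   # activate the CSM container (load script)
csmport -e container_name   # deactivate the container (unload script)
csmport -c container_name   # check the container status (monitor script)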
If the CSM container has a segment manager on it, you must first convert the volumes in the container to compatibility volumes and deport them before you take them offline and modify their scripts for OES 11 and later. See Section 14.3, Deporting the CSM Containers with Segment Managers.
IMPORTANT: If the CSM container does not have a segment manager on it, follow the instructions in Section 14.4, Modifying the Scripts for CSM Resources without a Segment Manager.
If the CSM container has a segment manager on it, the container can have one or more volumes in it. The volume device name for each volume is based on its order in the container and whether the device is multipathed. The partitions for the volumes in the container are named by adding a sequential number to the end of the container name. For example, a container named mycsm would have partitions named:

mycsm1
mycsm2

and so on.
If the container name ends with a number, the partitions are named by adding a p before the sequential number. For example, the sample container csm44 used in this section has partitions named:

csm44p1
csm44p2

and so on.
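To make the naming concrete, the following is an illustrative sketch of the device mapper entries after the sample container csm44 and its partitions are activated. The names come from this section's sample values; your entries will differ:

# illustrative only: device mapper entries after activating the sample container
# (csmport -i csm44 creates the container device; kpartx -a maps its partitions)
ls /dev/mapper
# csm44      the activated CSM container
# csm44p1    partition that holds the first volume
# csm44p2    partition that holds the second volume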
In addition to adding the csmport commands, you must modify the definition fields in the scripts so that there is one entry for each of the volumes.
The sample scripts in this section use the following sample parameters. They assume that the CSM container has a DOS segment manager on it and contains two Linux POSIX volumes with different file systems. Ensure that you replace the sample values with your own values.
| Parameter | Sample Value |
|---|---|
| Volume name for Linux POSIX volume 1 | lxvolext3 |
| Volume name for Linux POSIX volume 2 | lxvolxfs |
| RESOURCE_IP | 10.10.10.44 |
| MOUNT_FS1 | ext3 |
| MOUNT_FS2 | xfs |
| CONTAINER_NAME | csm44 |
| VOLUME_DEV1 | csm44p1 |
| VOLUME_DEV2 | csm44p2 |
| MOUNT_DEV1 | /dev/mapper/$VOLUME_DEV1 |
| MOUNT_DEV2 | /dev/mapper/$VOLUME_DEV2 |
| MOUNT_POINT1 | /mnt/lxvolext3 |
| MOUNT_POINT2 | /mnt/lxvolxfs |
IMPORTANT: Perform the following tasks to prepare the CSM resources for OES 11x nodes in a mixed-mode cluster. Do not add the OES 11x nodes at this time.
Section 14.5.1, Offlining the CSM Cluster Resources with a Segment Manager
Section 14.5.2, Configuring the Scripts for a CSM Cluster Resource with a Segment Manager
Section 14.5.3, Sample Load Script for a CSM Resource with a Segment Manager
Section 14.5.4, Sample Unload Script for a CSM Resource with a Segment Manager
Section 14.5.5, Sample Monitor Script for a CSM Resource with a Segment Manager on It
Offline every OES 2 SP3 cluster resource that manages a Linux POSIX file system on a CSM container with a segment manager:
1. In iManager, select Clusters > My Clusters, select the cluster, then click Cluster Manager.
2. Select the check box next to each of the CSM-based cluster resources that you want to manage, then click Offline.
3. Wait until the resources report an Offline status.
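As an alternative to iManager, you can take the resources offline from a terminal on any cluster node with the cluster command. A brief sketch, assuming a hypothetical resource name csmres44:

# take a CSM-based resource offline and verify its state
# (csmres44 is a hypothetical resource name; substitute your own)
cluster offline csmres44
cluster status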
Continue with Section 14.5.2, Configuring the Scripts for a CSM Cluster Resource with a Segment Manager.
1. In iManager, select Clusters > My Clusters.
2. Select the cluster you want to manage.
3. Select Cluster Options.
4. Select the check box next to the CSM resource, then click the Details link.
You can also click the CSM resource’s name link to view its properties.
5. Click the Scripts tab to view the Load Script page.
6. On the Load Script page, modify the script to handle the CSM container and the multiple volumes on it.
For an example, see Section 14.5.3, Sample Load Script for a CSM Resource with a Segment Manager.
a. Add a file system definition entry for each file system used by the volumes.
For example, if you have two volumes, each with a different type of file system, create a definition for each:

# define the file system types
MOUNT_FS1=ext3
MOUNT_FS2=xfs
b. Add a device definition for each volume:

# define the volume devices
VOLUME_DEV1=csm44p1
VOLUME_DEV2=csm44p2
c. Add a mount device definition for each volume:

# define the devices
MOUNT_DEV1=/dev/mapper/$VOLUME_DEV1
MOUNT_DEV2=/dev/mapper/$VOLUME_DEV2
d. Add a mount point definition for each volume:

# define the mount points
MOUNT_POINT1=/mnt/lxvolext3
MOUNT_POINT2=/mnt/lxvolxfs
e. Add a csmport command to activate the CSM container:

# activate the container
exit_on_error csmport -i $CONTAINER_NAME
f. Add a kpartx command to activate the partitions. The command should follow the csmport command in the load script.

# activate the partitions
exit_on_error /sbin/kpartx -a /dev/mapper/$CONTAINER_NAME
g. If you use a mkdir command, create one for each mount point:

# if the mount path does not exist, create it
ignore_error mkdir -p $MOUNT_POINT1
ignore_error mkdir -p $MOUNT_POINT2
h. Add a mount command for each volume:

# mount the file systems
exit_on_error mount_fs $MOUNT_DEV1 $MOUNT_POINT1 $MOUNT_FS1
exit_on_error mount_fs $MOUNT_DEV2 $MOUNT_POINT2 $MOUNT_FS2
i. Click Apply to save the load script changes.
7. Click the Unload Script link to go to the Unload Script page, then modify the script to handle the CSM container and the multiple volumes on it.
For an example, see Section 14.5.4, Sample Unload Script for a CSM Resource with a Segment Manager.
a. Modify the definitions as described in Step 6.a through Step 6.d of the load script changes:

# define the file system types
MOUNT_FS1=ext3
MOUNT_FS2=xfs

# define the container name
CONTAINER_NAME=csm44

# define the volume devices
VOLUME_DEV1=csm44p1
VOLUME_DEV2=csm44p2

# define the devices
MOUNT_DEV1=/dev/mapper/$VOLUME_DEV1
MOUNT_DEV2=/dev/mapper/$VOLUME_DEV2

# define the mount points
MOUNT_POINT1=/mnt/lxvolext3
MOUNT_POINT2=/mnt/lxvolxfs
b. Add an unmount command for each volume:

# unmount the volumes
exit_on_error umount_fs $MOUNT_DEV1 $MOUNT_POINT1 $MOUNT_FS1
exit_on_error umount_fs $MOUNT_DEV2 $MOUNT_POINT2 $MOUNT_FS2
c. Add a kpartx command to deactivate the partitions. The command should come before the csmport command in the unload script.

# deactivate the partitions
exit_on_error /sbin/kpartx -d /dev/mapper/$CONTAINER_NAME
d. Add a csmport command to deactivate the CSM container:

# deactivate the container
exit_on_error csmport -e $CONTAINER_NAME
e. Click Apply to save the unload script changes.
8. Click the Monitor Script link to go to the Monitor Script page, then modify the script to handle the CSM container and the multiple volumes on it.
For an example, see Section 14.5.5, Sample Monitor Script for a CSM Resource with a Segment Manager on It.
a. Modify the definitions as described in Step 6.a through Step 6.d of the load script changes:

# define the file system types
MOUNT_FS1=ext3
MOUNT_FS2=xfs

# define the container name
CONTAINER_NAME=csm44

# define the volume devices
VOLUME_DEV1=csm44p1
VOLUME_DEV2=csm44p2

# define the devices
MOUNT_DEV1=/dev/mapper/$VOLUME_DEV1
MOUNT_DEV2=/dev/mapper/$VOLUME_DEV2

# define the mount points
MOUNT_POINT1=/mnt/lxvolext3
MOUNT_POINT2=/mnt/lxvolxfs
b. Add a check for each volume:

# check the volumes
exit_on_error status_fs $MOUNT_DEV1 $MOUNT_POINT1 $MOUNT_FS1
exit_on_error status_fs $MOUNT_DEV2 $MOUNT_POINT2 $MOUNT_FS2
c. Add a csmport command to check the status of the CSM container:

# check the container
exit_on_error csmport -c $CONTAINER_NAME
d. Click Apply to save the monitor script changes.
9. Repeat Step 5 to Step 8 for each of the resources that you took offline in Section 14.5.1, Offlining the CSM Cluster Resources with a Segment Manager.
Do not bring the CSM cluster resources online again until OES 11x nodes have joined the cluster and each resource’s Preferred Nodes list has been modified to use only OES 11x nodes.
Continue with Section 14.6, Configuring and Adding OES 11x Nodes to the OES 2 SP3 Cluster.
Use the following sample load script to complete the fields for your CSM cluster resource on OES 11x:
#!/bin/bash
. /opt/novell/ncs/lib/ncsfuncs

# define the IP address
RESOURCE_IP=10.10.10.44

# define the file system types
MOUNT_FS1=ext3
MOUNT_FS2=xfs

# define the container name
CONTAINER_NAME=csm44

# define the volume devices
VOLUME_DEV1=csm44p1
VOLUME_DEV2=csm44p2

# define the devices
MOUNT_DEV1=/dev/mapper/$VOLUME_DEV1
MOUNT_DEV2=/dev/mapper/$VOLUME_DEV2

# define the mount points
MOUNT_POINT1=/mnt/lxvolext3
MOUNT_POINT2=/mnt/lxvolxfs

# if the mount path does not exist, create it
ignore_error mkdir -p $MOUNT_POINT1
ignore_error mkdir -p $MOUNT_POINT2

# activate the container
exit_on_error csmport -i $CONTAINER_NAME

# activate the partitions
exit_on_error /sbin/kpartx -a /dev/mapper/$CONTAINER_NAME

# mount the file systems
exit_on_error mount_fs $MOUNT_DEV1 $MOUNT_POINT1 $MOUNT_FS1
exit_on_error mount_fs $MOUNT_DEV2 $MOUNT_POINT2 $MOUNT_FS2

# add the IP address
exit_on_error add_secondary_ipaddress $RESOURCE_IP

exit 0
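After a resource that uses this load script comes online, you can spot-check the result on the node where it loaded. A quick sketch using this section's sample values:

# verify that the container and its partition mappings exist
ls -l /dev/mapper/csm44 /dev/mapper/csm44p1 /dev/mapper/csm44p2

# verify that both file systems are mounted and the secondary IP is bound
mount | grep -E 'lxvolext3|lxvolxfs'
ip addr show | grep 10.10.10.44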
Use the following sample unload script to complete the fields for your CSM cluster resource on OES 11x:
#!/bin/bash
. /opt/novell/ncs/lib/ncsfuncs

# define the IP address
RESOURCE_IP=10.10.10.44

# define the file system types
MOUNT_FS1=ext3
MOUNT_FS2=xfs

# define the container name
CONTAINER_NAME=csm44

# define the volume devices
VOLUME_DEV1=csm44p1
VOLUME_DEV2=csm44p2

# define the devices
MOUNT_DEV1=/dev/mapper/$VOLUME_DEV1
MOUNT_DEV2=/dev/mapper/$VOLUME_DEV2

# define the mount points
MOUNT_POINT1=/mnt/lxvolext3
MOUNT_POINT2=/mnt/lxvolxfs

# del the IP address
ignore_error del_secondary_ipaddress $RESOURCE_IP

# unmount the volumes
exit_on_error umount_fs $MOUNT_DEV1 $MOUNT_POINT1 $MOUNT_FS1
exit_on_error umount_fs $MOUNT_DEV2 $MOUNT_POINT2 $MOUNT_FS2

# deactivate the partitions
exit_on_error /sbin/kpartx -d /dev/mapper/$CONTAINER_NAME

# deactivate the container
exit_on_error csmport -e $CONTAINER_NAME

# return status
exit 0
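When the unload script completes cleanly, the partition mappings and the container device should be removed again. A quick check, with the same sample names:

# after a clean unload, no csm44 entries should remain in the device mapper
ls /dev/mapper | grep csm44 || echo "container and partitions are deactivated"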
Use the following sample monitor script to complete the fields for your CSM cluster resource on OES 11x. To use the script, you must also enable monitoring for the resource. See Section 10.7, Enabling Monitoring and Configuring the Monitor Script.
#!/bin/bash
. /opt/novell/ncs/lib/ncsfuncs

# define the IP address
RESOURCE_IP=10.10.10.44

# define the file system types
MOUNT_FS1=ext3
MOUNT_FS2=xfs

# define the container name
CONTAINER_NAME=csm44

# define the volume devices
VOLUME_DEV1=csm44p1
VOLUME_DEV2=csm44p2

# define the devices
MOUNT_DEV1=/dev/mapper/$VOLUME_DEV1
MOUNT_DEV2=/dev/mapper/$VOLUME_DEV2

# define the mount points
MOUNT_POINT1=/mnt/lxvolext3
MOUNT_POINT2=/mnt/lxvolxfs

# check the IP address
exit_on_error status_secondary_ipaddress $RESOURCE_IP

# check the volumes
exit_on_error status_fs $MOUNT_DEV1 $MOUNT_POINT1 $MOUNT_FS1
exit_on_error status_fs $MOUNT_DEV2 $MOUNT_POINT2 $MOUNT_FS2

# check the container
exit_on_error csmport -c $CONTAINER_NAME

# return status
exit 0
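The status_fs and status_secondary_ipaddress helpers come from /opt/novell/ncs/lib/ncsfuncs. If you want to approximate the same checks by hand while debugging, the following sketch runs roughly parallel checks with standard tools; it is not the helpers' actual implementation:

# manual spot-checks that roughly parallel the monitor script (sample values)
ip addr show | grep -q 10.10.10.44 && echo "secondary IP is present"
mountpoint -q /mnt/lxvolext3 && echo "ext3 volume is mounted"
mountpoint -q /mnt/lxvolxfs && echo "xfs volume is mounted"
csmport -c csm44 && echo "CSM container is active"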