Removing a clustered DST volume pair removes the shadow relationship between the primary and secondary storage areas. You also remove the commands that you added to the primary pool’s cluster scripts to manage the secondary pool and volume. Removing the shadow relationship does not remove the underlying volumes themselves. The files remain on whichever storage area they occupy at the time you remove the shadow relationship.
Section 15.10.1, Planning to Remove the Shadow Relationship for a Clustered DST Volume Pair
Section 15.10.3, Removing the Shadow Definition and NCP/NSS Bindings Exclusion on All Nodes
Section 15.10.4, Preparing the Primary Pool Cluster Resource for Independent Use
Section 15.10.5, Preparing the Secondary Pool and Volume for Independent Use
As you plan to remove the shadow relationship for a clustered DST volume pair, consider the following short service outages that are involved:
The data on the DST volume pair will be unavailable to users while you remove the shadow relationship as described in Section 15.10.3, Removing the Shadow Definition and NCP/NSS Bindings Exclusion on All Nodes, and until you have performed the tasks necessary for the two volumes to function independently.
An NDSD restart is required after you remove the shadow volume and NCP/NSS bindings information on each node in turn. This results in a brief service outage for all NSS volumes and NCP volumes that are mounted on the node at that time.
The primary volume is available to users as an independent volume after you perform the procedure described in Section 15.10.4, Preparing the Primary Pool Cluster Resource for Independent Use.
The secondary volume is available to users as an independent volume after you perform one of the procedures described in Section 15.10.5, Preparing the Secondary Pool and Volume for Independent Use.
Removing a shadow relationship does not automatically move files in either direction between the two volumes. The files remain undisturbed. The volumes function independently after the shadow relationship is successfully removed. Ensure that the files are distributed as desired before you remove the shadow relationship.
To move files between the two volumes and achieve the desired distribution of files:
In OES Remote Manager for Linux, log in as the root user.
Select View File System > Dynamic Storage Technology Options, locate the volume in the list, then click the Inventory link next to it.
View the volume inventory for the shadow volume to determine the space in use and the available space for both the primary and the secondary areas of the shadow volume. Ensure that there is sufficient free space available in either location for the data that you plan to move to that location.
Use any combination of the following techniques to move data between the two areas:
Shadow Volume Policies: Run an existing shadow volume policy by using the Execute Now option in the Frequency area of the policy. You can also create a new shadow volume policy that moves specific data, and run the policy by using the One Time and Execute Now options in the Frequency area of the policy.
For information about configuring policies to move data between the primary and secondary areas, see Section 11.0, Creating and Managing Policies for Shadow Volumes.
Inventories: Use the detailed inventory reports or customized inventories to move specific files to either area.
For information about using the volume customized inventory options to move data between the primary and secondary areas, see Section 14.6, Generating a Custom Inventory Report.
(Optional) While the DST pool cluster resource is online, you can delete volume-specific policies as described in Section 11.8, Deleting a Shadow Volume Policy.
Shadow volume policies that you configured for the DST volume do not run after you remove the shadow volume relationship. The policy information is stored in the /media/nss/<primary_volumename>/._NETWARE/shadow_policy.xml file. Policy information is not automatically removed from the file when you remove the shadow relationship. If you later define a new shadow volume relationship for the primary volume, the policies apply to it.
Continue with Section 15.10.3, Removing the Shadow Definition and NCP/NSS Bindings Exclusion on All Nodes.
You must remove the shadow definition for the DST shadow volume pair and the NCP/NSS bindings exclusion for the secondary volume on each node in turn. This requires a restart of NDSD and NCP2NSS on each node, which creates a short service outage for all NSS volumes on the node. You can minimize the impact by cluster migrating the pool cluster resources for the other NSS volumes to other nodes while you are modifying the configuration files on a given node.
Log in as the root user to the node where the primary pool cluster resource is online, then open a terminal console.
Offline the DST pool cluster resource that is managing the clustered shadow volume.
cluster offline resource_name
This unloads the cluster resource and deactivates the cluster pools and their volumes so that the cluster is not controlling them. Do not bring the primary resource or secondary resource online, and do not locally mount the volumes on any node at this time.
Remove the shadow volume and NCP/NSS bindings exclusion information from each node in the cluster:
Log in to the node as the root user.
In a text editor, open the /etc/opt/novell/ncp2nss.conf file, remove the EXCLUDE_VOLUME line for the secondary volume from the file, then save the file.
EXCLUDE_VOLUME secondary_volumename
For example:
EXCLUDE_VOLUME ARCVOL1
In a text editor, open the /etc/opt/novell/ncpserv.conf file, remove the SHADOW_VOLUME line for the shadow volume from the file, then save the file.
SHADOW_VOLUME primary_volumename secondary_volume_path
For example:
SHADOW_VOLUME VOL1 /media/nss/ARCVOL1
Restart the eDirectory daemon by entering the following commands:
rcndsd stop (or) systemctl stop ndsd.service
rcndsd start (or) systemctl start ndsd.service
Restart the NCP/NSS IPC daemon to synchronize the changes you made to the /etc/opt/novell/ncp2nss.conf file. At the terminal console prompt, enter
systemctl restart ncp2nss.service
Repeat these steps for each node in the cluster.
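The two configuration-file edits in the steps above can also be scripted. The following sketch runs against copies of the files in a temporary directory so that it is safe to try anywhere; on a real node you would edit the files in /etc/opt/novell directly (and keep backups), and the volume names VOL1 and ARCVOL1 are the examples used above.

```shell
#!/bin/sh
# Sketch: remove the DST shadow and bindings entries from copies of
# the configuration files. Real files live in /etc/opt/novell and
# contain additional lines that must be preserved.
TMP=$(mktemp -d)
printf 'EXCLUDE_VOLUME ARCVOL1\n' > "$TMP/ncp2nss.conf"
printf 'SHADOW_VOLUME VOL1 /media/nss/ARCVOL1\n' > "$TMP/ncpserv.conf"

# Remove the EXCLUDE_VOLUME line for the secondary volume.
sed -i '/^EXCLUDE_VOLUME ARCVOL1$/d' "$TMP/ncp2nss.conf"
# Remove the SHADOW_VOLUME line for the DST pair.
sed -i '/^SHADOW_VOLUME VOL1 /d' "$TMP/ncpserv.conf"

# Confirm both lines are gone.
if ! grep -q 'ARCVOL1' "$TMP/ncp2nss.conf" "$TMP/ncpserv.conf"; then
    echo "shadow and bindings entries removed"
fi
```

After the edits, restart NDSD and the NCP/NSS IPC daemon as described above so the running services pick up the changed files.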
After the shadow and bindings information has been removed from all nodes, continue with Section 15.10.4, Preparing the Primary Pool Cluster Resource for Independent Use.
In the primary pool cluster resource scripts, remove (or comment out) the lines for the management of the secondary pool, secondary volume, and shadowfs. This allows the pool cluster resource to function independently.
In iManager, select Clusters, then select My Clusters.
Select the name link of the cluster you want to manage.
On the Cluster Manager page, click the name link of the primary cluster resource to view its Cluster Pool Properties page, then click the Scripts tab.
On the Scripts > Load Script page, modify the load script of the primary pool cluster resource:
Remove or comment out the activation command for the secondary pool and the sleep command you added for the pool activation:
#exit_on_error nss /poolact=ARCPOOL1
#sleep 10
Remove or comment out the ncpcon mount command for the shadow volume:
#exit_on_error ncpcon mount VOL1=254,shadowvolume=ARCVOL1
Add (or uncomment) a command to mount the NSS volume:
exit_on_error ncpcon mount <volume_name>=<volume_id>
Replace volume_name with the primary NSS volume name, such as VOL1.
Replace volume_id with a number that is unique across all nodes in the cluster, such as 254.
For example:
exit_on_error ncpcon mount VOL1=254
If shadowfs was used, remove or comment out the wait time for shadowfs to start.
# If shadowfs is used, wait for shadowfs to start
#for (( c=1; c<=10; c++ )) do
#    if [ ! -d /media/shadowfs/VOLUME/._NETWARE ]; then sleep 5; fi
#done
Click Apply to save your changes.
The changes do not take effect until the cluster resource is brought online.
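Taken together, the edits above leave a load script along these lines. This is a sketch only: the pool name POOL1, the IP address, and the NCP server name are illustrative placeholders, and the exit_on_error helper comes from the Cluster Services ncsfuncs library that the generated script already sources.

```shell
#!/bin/bash
. /opt/novell/ncs/lib/ncsfuncs

# Activate the primary pool and mount its volume independently.
exit_on_error nss /poolact=POOL1
exit_on_error ncpcon mount VOL1=254

# Secondary pool and shadow volume commands, now commented out:
#exit_on_error nss /poolact=ARCPOOL1
#sleep 10
#exit_on_error ncpcon mount VOL1=254,shadowvolume=ARCVOL1

# Resource IP address and NCP server binding (placeholder values).
exit_on_error add_secondary_ipaddress 10.10.10.41
exit_on_error ncpcon bind --ncpservername=CLUSTER-POOL1-SERVER --ipaddress=10.10.10.41

exit 0
```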
On the Scripts > Unload Script page, modify the unload script of the primary pool cluster resource:
Remove or comment out the deactivation command for the secondary pool:
#ignore_error nss /pooldeact=ARCPOOL1
If shadowfs was used, remove or comment out the fusermount -u command.
# If shadowfs is used, unload the volume in FUSE
#ignore_error fusermount -u /media/shadowfs/VOL1
Click Apply to save your changes.
The changes do not take effect until the cluster resource is brought online.
On the Scripts > Monitor Script page, modify the monitor script of the primary pool cluster resource:
Remove or comment out the status command for the secondary pool:
# Check the status of the secondary pool
#exit_on_error status_fs /dev/pool/ARCPOOL1 /opt/novell/nss/mnt/.pools/ARCPOOL1 nsspool
Click Apply to save your changes.
The changes do not take effect until the cluster resource is brought online.
Click OK to return to the Cluster Manager page.
Online the revised pool cluster resource. On the Cluster Manager page, select the check box next to the pool cluster resource, then click Online.
The resource comes online as an independent pool and volume on a node in the resource’s preferred nodes list.
If the resource goes comatose instead of coming online, take the resource offline, check the scripts, then try again.
Continue with Section 15.10.5, Preparing the Secondary Pool and Volume for Independent Use.
When you defined the clustered DST shadow volume pair, you might have used a clustered pool or a shared-but-not-cluster-enabled pool.
Apply one of the following methods to use the pool and volume independently:
If you used a clustered secondary pool cluster resource, ensure that the volume ID is unique across all nodes in the cluster before you bring the pool resource online as an independent pool.
IMPORTANT: If you deleted the secondary pool cluster resource after you merged its information in the primary pool cluster resource scripts, the secondary resource no longer exists. You can cluster-enable the shared-but-not-cluster-enabled pool as described in Cluster-Enabling a Shared Secondary Pool, or you can unshare the pool as described in Unsharing the Secondary Pool to Use It Locally on the Node.
In iManager, select Clusters, then select My Clusters.
Select the name link of the cluster you want to manage.
Go to the secondary resource’s load script and verify that the volume ID is unique for the secondary volume.
If you have assigned the volume ID to another clustered volume while the secondary resource was unused, the duplicate volume ID will cause the secondary resource to go comatose when you try to bring it online.
On the Cluster Manager page, click the name link of the secondary cluster resource to view its Cluster Pool Properties page, then click the Scripts tab.
On the Scripts > Load Script page, check the volume ID to ensure that it is unique:
exit_on_error ncpcon mount ARCVOL1=253
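One way to check for a duplicate ID is to collect the ncpcon mount lines from all of the cluster load scripts and look for repeated IDs. The following sketch uses sample script files in a temporary directory as illustrative stand-ins; on a real cluster you would first copy each resource's load script from iManager into a local file.

```shell
#!/bin/sh
# Gather volume IDs from "ncpcon mount NAME=ID" lines and report
# any ID that appears more than once across the load scripts.
TMP=$(mktemp -d)
cat > "$TMP/primary.load" <<'EOF'
exit_on_error ncpcon mount VOL1=254
EOF
cat > "$TMP/secondary.load" <<'EOF'
exit_on_error ncpcon mount ARCVOL1=253
EOF

dups=$(grep -h 'ncpcon mount' "$TMP"/*.load \
  | sed 's/.*=\([0-9][0-9]*\).*/\1/' \
  | sort | uniq -d)
if [ -z "$dups" ]; then
    echo "volume IDs are unique"
else
    echo "duplicate volume IDs: $dups"
fi
```

If the check reports a duplicate, assign the secondary volume a new ID that is unused across all nodes before bringing the resource online.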
Click OK to save your changes and return to the Cluster Manager page.
The changes do not take effect until the cluster resource is brought online.
Online the secondary pool cluster resource. On the Cluster Manager page, select the check box next to the secondary pool cluster resource, then click Online.
The resource comes online as an independent pool and volume on a node in the resource’s preferred nodes list.
If the resource goes comatose instead of coming online, take the resource offline, check the scripts, then try again.
If the resource goes online successfully, you are finished.
You can cluster-enable the shared pool and volume as an independent pool cluster resource under the following conditions:
If you used a shared-but-not-clustered pool as the secondary pool.
In this case, the Pool object name and Volume object name contain the name of the node where they were originally created.
If you used a cluster-enabled pool as the secondary and deleted the secondary pool cluster resource after you copied its commands into the DST pool cluster resource scripts.
In this case, the Pool object name and Volume object name contain the cluster name, because the objects were recreated when you cluster-enabled them.
Before you attempt to cluster-enable the shared pool, you must update the Pool object and Volume object in eDirectory to use the hostname of the server where you took the primary pool cluster resource offline.
In NSSMU, update the eDirectory object for the shared pool and volume.
You can alternatively use the Storage plug-in for iManager to update the eDirectory objects. Select the server where you took the clustered DST pool resource offline. Ensure that you dismount the volume and deactivate the pool after you have updated their objects.
Log in as the root user on the node where you took the primary pool cluster resource offline, then open a terminal console.
Launch NSSMU. At the command prompt, enter
nssmu
Activate the pool and update its eDirectory object to create a Pool object that is named based on the hostname of the current node.
In the NSSMU menu, select Pools, then press Enter.
Select the secondary pool (ARCPOOL1), then press F7 to activate it.
Select the secondary pool, press F4 (Update NDS), then press y (Yes) to confirm that you want to delete the old Pool object and add a new Pool object.
Press Esc to return to the NSSMU menu.
Mount the volume, update its eDirectory object to create a Volume object that is named based on the hostname of the current node, then dismount the volume.
In the NSSMU menu, select Volumes, then press Enter.
Select the secondary volume (ARCVOL1), then press F7 to mount it.
Select the secondary volume, press F4 (Update NDS), then press y (Yes) to confirm that you want to delete the old Volume object and add a new Volume object.
Select the secondary volume, then press F7 to dismount it.
Press Esc to return to the NSSMU menu.
Deactivate the pool.
In the NSSMU menu, select Pools, then press Enter.
Select the secondary pool (ARCPOOL1), then press F7 to deactivate it.
Press Esc twice to exit NSSMU.
In iManager, select Clusters > My Clusters.
Select the name link of the cluster you want to manage.
Cluster-enable the shared pool.
For detailed instructions, see Cluster-Enabling an Existing NSS Pool and Its Volumes in the OES 2018 SP1: Novell Cluster Services for Linux Administration Guide.
Click the Cluster Options tab, then click New.
On the Resource Type page, select Pool, then click Next.
On the Cluster Pool Information page:
Browse to select the secondary pool, such as <hostname>_ARCPOOL1_POOL.
Specify a unique IP address.
Select the NCP, AFP, or CIFS check boxes for the advertising protocols that you want to enable for the volume.
NCP is selected by default and is required to support authenticated access to data via the OES Trustee Model. If Novell CIFS or Novell AFP is not installed, selecting its check box has no effect.
If you enable CIFS, verify the default name in the CIFS Server Name field.
You can modify this name. The name must be unique and can be up to 15 characters, which is a restriction of the CIFS protocol.
Online Resource After Creation is disabled by default. This allows you to review the settings and scripts before you bring the resource online for the first time.
Define Additional Properties is enabled by default. This allows you to set resource policies and preferred nodes before the resource is brought online.
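The 15-character ceiling on the CIFS server name comes from the NetBIOS naming rules that the CIFS protocol inherits. A quick way to check a candidate name before applying it (the name below is an illustrative placeholder):

```shell
#!/bin/sh
# Check a candidate CIFS server name against the 15-character
# NetBIOS limit. The name is an illustrative placeholder.
CIFS_NAME="CL1-ARCPOOL1-W"
if [ "${#CIFS_NAME}" -le 15 ]; then
    echo "OK: ${#CIFS_NAME} characters"
else
    echo "too long: ${#CIFS_NAME} characters"
fi
```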
Click Next.
On the Resource Policies page, configure the policies for the start, failover, and failback mode, then click Next.
On the Resource Preferred Nodes page, assign and rank order the preferred nodes to use for the resource, then click Finish.
(Optional) Enable monitoring for the pool cluster resource.
On the Cluster Options page, select the name link for the resource to open its Properties page.
Click the Monitoring tab.
Select Enable Resource Monitoring, set the Polling Interval, Failure Rate, and Failure Action, then click Apply.
Click the Scripts tab, then click Monitor Script.
View the script settings and verify that they are as desired.
If you modify the script, click Apply.
Click OK.
Bring the pool cluster resource online. Click the Cluster Manager tab, select the resource check box, then click Online.
If the resource goes online successfully, you are finished.
You can unshare the shared pool and volume and use them locally on the node under the following conditions:
If you used a shared-but-not-clustered pool as the secondary pool.
In this case, the Pool object name and Volume object name contain the name of the node where they were originally created.
If you used a cluster-enabled pool as the secondary and deleted the secondary pool cluster resource after you copied its commands into the DST pool cluster resource scripts.
In this case, the Pool object name and Volume object name contain the cluster name, because the objects were recreated when you cluster-enabled them.
Before you attempt to use the pool and volume locally, you must update the Pool object and Volume object in eDirectory to use the hostname of the server where you took the primary pool cluster resource offline.
To mount the secondary volume as an independent local volume:
Log in as the root user on the node where you took the primary pool cluster resource offline, then open a terminal console.
Launch NSSMU. At the command prompt, enter
nssmu
In NSSMU, update the eDirectory object for the shared pool and volume.
You can alternatively use the Storage plug-in for iManager to update the eDirectory objects. Select the server where you took the clustered DST pool resource offline.
Activate the pool and update its eDirectory object to create a Pool object that is named based on the hostname of the current node.
In the NSSMU menu, select Pools, then press Enter.
Select the secondary pool (ARCPOOL1), then press F7 to activate it.
Select the secondary pool, press F4 (Update NDS), then press y (Yes) to confirm that you want to delete the old Pool object and add a new Pool object.
Press Esc to return to the NSSMU menu.
Mount the volume, then update its eDirectory object to create a Volume object that is named based on the hostname of the current node.
In the NSSMU menu, select Volumes, then press Enter.
Select the secondary volume (ARCVOL1), then press F7 to mount it.
Select the secondary volume, press F4 (Update NDS), then press y (Yes) to confirm that you want to delete the old Volume object and add a new Volume object.
Press Esc to return to the NSSMU menu.
In the NSSMU menu, select Devices, then press Enter.
Before you unshare the device, ensure that the device contains only the pool that you are changing to local use. It must not contain other shared pools or SBD partitions.
Disable sharing for the device. Select the device, press F6 to unshare the device, then press y (Yes) to confirm.
If NSSMU does not allow you to unshare the device, use the SAN management software to ensure that the device is allocated only to the current server, and then try again.
In the NSSMU menu, select Pools, then press Enter.
Select the pool and verify that it is unshared.
Press Esc twice to exit NSSMU.