Ensure that you offline the cluster resource before attempting to delete either the cluster resource or the clustered pool.
WARNING: If you attempt to delete a cluster resource without first taking it offline, deletion errors occur, and the data associated with the clustered pool is not recoverable.
To delete a resource and create a new one with the same name, you must wait to create the new one until eDirectory synchronizes all of the objects in the tree related to the deleted resource.
We strongly recommend that when you need to delete a cluster resource, you do so only from the master node in the cluster. If the resource cannot be migrated to the master node, follow the procedure in Section 10.14.2, Deleting a Cluster Resource on a Non-Master Node.
You might want to delete the shared storage area if you no longer need the data.
WARNING: Deleting a pool or Linux POSIX volume destroys all data on it.
If the resource is on a non-master node in the cluster, migrate it to the master node.
If the cluster resource is online, offline it before continuing by using one of the following methods:
Enter the following at the terminal console prompt as the root user:
cluster offline resource
In iManager, go to Clusters > Cluster Manager, specify the cluster you want to manage, select the cluster resource, then click Offline.

Delete the resource on the master node by using the appropriate storage management tool:
For shared NSS pools and volumes, use NSSMU or the Storage plug-in to iManager. Deleting the pool automatically deletes the volumes on it.
For shared Linux POSIX volumes, use evmsgui.
In eDirectory, look at the Cluster Resource objects in the Cluster container to verify that the resource has been deleted from the Cluster container.
If necessary, you can delete the Cluster Resource object and its virtual server object manually. In iManager, go to Directory Administration > Delete Object, select the objects, then click OK.

We strongly recommend that when you need to delete a cluster resource, you do so only from the master node in the cluster. If the resource can be migrated, migrate it to the master node and follow the procedure in Section 10.14.1, Deleting a Cluster Resource on a Master Node.
You might want to delete a cluster resource on a non-master node when deleting NSS pool and volume resources (by using NSSMU).
If you must delete a cluster resource while it resides on a non-master node, use the following procedure:
Log in as the root user to the non-master node where the cluster resource currently resides, then open a terminal console.
If the cluster resource is online, offline it by entering
cluster offline resource
At the terminal console prompt on the non-master node, enter
/opt/novell/ncs/bin/ncs-configd.py -init
Look at the file /var/opt/novell/ncs/resource-priority.conf to verify that it has the same information (REVISION and NUMRESOURCES) as the file on the master node.
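Comparing the two files by eye is error-prone. The following is a minimal sketch of how you might check the two copies programmatically; it assumes the REVISION and NUMRESOURCES values appear in /var/opt/novell/ncs/resource-priority.conf as simple `KEY value` or `KEY=value` tokens, and the sample contents below are hypothetical:

```python
import re

def conf_summary(text):
    """Extract the REVISION and NUMRESOURCES values from the text of
    resource-priority.conf. The exact file layout is not specified here;
    this assumes each key is followed by its numeric value."""
    summary = {}
    for key in ("REVISION", "NUMRESOURCES"):
        m = re.search(key + r"\D*(\d+)", text)
        if m:
            summary[key] = int(m.group(1))
    return summary

# Hypothetical contents copied from the two nodes for comparison.
master_copy = "REVISION 7\nNUMRESOURCES 3"
nonmaster_copy = "REVISION 7\nNUMRESOURCES 3"

print(conf_summary(master_copy) == conf_summary(nonmaster_copy))  # True when the nodes agree
```

If the two summaries differ, rerun the ncs-configd.py command on the node that is out of date before continuing.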
Delete the resource on the non-master node by using the appropriate storage management tool:
For shared NSS pools and volumes, use NSSMU or the Storage plug-in to iManager.
For shared Linux POSIX volumes, use evmsgui.
In eDirectory, look at the objects in the Cluster container to verify that the resource has been deleted from the Cluster container.
On the master node, log in as the root user, then open a terminal console.
At the terminal console prompt on the master node, enter
/opt/novell/ncs/bin/ncs-configd.py -init
Look at the file /var/opt/novell/ncs/resource-priority.conf to verify that it has the same information (REVISION and NUMRESOURCES) as that of the non-master node where you deleted the cluster resource.
In iManager, select Clusters > Cluster Options, then browse to select the Cluster object.

Click Properties, select the Priorities tab, then click Apply on the Priorities page.

At the terminal console, enter
cluster view
The cluster view should be consistent.
Look at the file /var/opt/novell/ncs/resource-priority.conf on the master node to verify that the revision number increased.
If the revision number increased, you are done. Do not continue with Step 14.
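To check whether the revision actually increased, you can record the REVISION value before deleting the resource and compare it afterward. The sketch below assumes the value appears in resource-priority.conf as a `REVISION <n>` or `REVISION=<n>` token; the sample contents are hypothetical:

```python
import re

def revision_of(text):
    """Pull the REVISION number out of resource-priority.conf text.
    Returns None if no REVISION token is found."""
    m = re.search(r"REVISION\D*(\d+)", text)
    return int(m.group(1)) if m else None

before = "REVISION 7\nNUMRESOURCES 3"  # file contents saved before the delete (hypothetical)
after = "REVISION 8\nNUMRESOURCES 2"   # file contents re-read after the delete (hypothetical)

print(revision_of(after) > revision_of(before))  # True: the revision increased
```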
If the deleted resource was the only resource in the cluster, the priority change does not force the update, and a phantom resource might appear in the interface. You need to restart Novell Cluster Services to force the update, which also removes the phantom resource.
If the revision number did not automatically update in the previous steps, restart Novell Cluster Services by entering the following on one node in the cluster:
cluster restart [seconds]
For seconds, specify a value of 60 or more. The cluster leave process begins immediately; the value is the time to wait before the cluster join begins. For example:
cluster restart 120