Updates were made to the following section. The changes are explained below.
Location | Change
---|---
Device names support node names (sdc and mpatha), full Linux path names (/dev/sdc and /dev/mapper/mpatha), and keywords anydisk or anyshared (for commands that support keyword use). |
|
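The accepted device-name forms above can be sketched as a small normalization routine. This is an illustrative helper only, not part of NLVM; the resolution rules (node names under /dev, multipath node names under /dev/mapper) are assumptions for demonstration.

```python
# Illustrative sketch: normalize the device-name forms NLVM accepts.
# NLVM resolves names internally; this mapping is an assumption.

KEYWORDS = {"anydisk", "anyshared"}

def normalize_device(name: str) -> str:
    """Return a full Linux path for a node name; pass keywords and paths through."""
    if name in KEYWORDS or name.startswith("/dev/"):
        return name
    # Multipath node names (e.g. mpatha) live under /dev/mapper;
    # plain node names (e.g. sdc) live directly under /dev.
    if name.startswith("mpath"):
        return "/dev/mapper/" + name
    return "/dev/" + name

print(normalize_device("sdc"))      # /dev/sdc
print(normalize_device("mpatha"))   # /dev/mapper/mpatha
print(normalize_device("anydisk"))  # anydisk
```

A command that supports keyword use would accept `anydisk` unchanged, while either the node name or the full path identifies the same device.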
NLVM options can appear in any order in the command after nlvm. Previously, the NLVM options had to follow immediately after nlvm. |
|
-m option in Section 6.2, NLVM Options |
This option prevents unmounted pools from being remounted. Pools are automatically mounted by design; therefore, running the nssmu utility, or running most nlvm commands without the -m option, can cause an unmounted pool to be remounted if the underlying devices and partitions still exist. To execute an nlvm command without mounting the unmounted pools, you must include the -m option. The nlvm mount command internally sets the -m flag, so only the specified pool is mounted. |
In a cluster, if the SBD does not exist or fails, you can use the -s option with NLVM commands to prepare a device and create an SBD partition. To minimize the risk of corruption, you must ensure that nobody else is changing any storage on any nodes at the same time. |
|
-t, --terse option in Section 6.2, NLVM Options |
This NLVM option is new. You can use the --terse option with nlvm list commands to display output in a format for parsing. |
This section is new. You can use the more or all options with nlvm list commands to display additional or detailed information about storage objects. |
|
You can check the status of a pool move by using the nlvm list move <move_name> command. You can issue the nlvm complete move <move_name> command to finalize the move. Other NSS utilities might also complete the move. For information, see |
|
If you use the ncp option, the volume name used for the name option must comply with the name limitations described in Section 5.2.4, NCP Volume Names. |
|
You can use the volid option in combination with the shared and ncp options to assign an NCP volume ID to a clustered LVM volume that is unique across all nodes in every peer cluster of a Business Continuity Cluster. |
|
You can use the part=<partition_name> option instead of the device and size options to specify an existing partition as the location for a non-clustered Linux volume. |
|
Specify an unshared initialized device. For OES 11 SP2 and later, you can alternatively specify a shared device with no data partitions or an uninitialized device. For a cluster-enabled LVM volume, issue the command from the master node in the cluster. |
|
The nlvm unmount <poolname> command also removes the Device Mapper object for the pool, the link to the Device Mapper object, and the mount point for the pool. This allows you to gracefully log out the server from an iSCSI device that contains a pool. |
|
Because a physical partition must end on a cylinder boundary, its size might be slightly different than the size you specify. |
|
Added: type=8e (partition type for Linux LVM) and type=1ac (partition type for snapshots). |
|
A physical partition size might be rounded up or down to the next nearest cylinder boundary depending on the partition type, the specified size, and the amount of free space. |
|
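The rounding behavior described above can be illustrated with a short calculation. The classic MSDOS geometry of 255 heads x 63 sectors x 512-byte sectors is an assumption here; the actual cylinder size is device-specific, and NLVM performs this rounding internally.

```python
# Sketch of rounding a requested partition size to a cylinder boundary,
# assuming 255 heads x 63 sectors x 512-byte sectors (device-specific).

SECTOR = 512
CYLINDER = 255 * 63 * SECTOR  # 8,225,280 bytes per cylinder

def round_to_cylinder(size_bytes: int, round_up: bool) -> int:
    """Round a requested size to the nearest cylinder boundary."""
    full, rem = divmod(size_bytes, CYLINDER)
    if rem == 0:
        return size_bytes
    return (full + 1) * CYLINDER if round_up else full * CYLINDER

# A 100 MB request does not land on a cylinder boundary, so the
# resulting partition is slightly smaller or larger than requested:
req = 100 * 1024 * 1024
print(round_to_cylinder(req, round_up=False))  # 98703360 (rounded down)
print(round_to_cylinder(req, round_up=True))   # 106928640 (rounded up)
```

This is why the created partition size can differ slightly from the size you specify, in either direction depending on the partition type and available free space.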
Before you create a Novell Cluster Services SBD partition, you must take the cluster down, and stop Novell Cluster Services from running on all nodes. |
|
For a cluster-enabled pool, issue the command from the master node in the cluster. |
|
When mirroring a pool that consumes an entire MSDOS partitioned disk, you can use an MSDOS or GPT partitioned device of the same size. |
|
The type option is optional for mirroring existing partitions. The name option is optional for mirroring an existing SBD partition. |
|
Because a physical partition must end on a cylinder boundary, its size might be slightly different than the size you specify. |
|
For Novell type partitions, the physical partition size might be rounded down to the next nearest cylinder boundary. When you mirror an existing partition, the type option must precede the part option in the command. Before you create a Novell Cluster Services SBD RAID 1, you must take the cluster down, and stop Novell Cluster Services from running on all nodes. Added examples for creating a mirrored SBD RAID 1 device and for mirroring an existing SBD partition. |
|
The part option for the nlvm create snap command allows you to specify a snap partition (type 1AC) as the target of the snapshot. The minimum snapshot size was increased from 1 MB to 50 MB. |
|
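The two constraints stated above (the snap target must be a type 1AC partition, and the snapshot must be at least 50 MB) can be expressed as a simple validation sketch. The function name and error messages are assumptions for illustration, not NLVM internals.

```python
# Hypothetical validation of the documented nlvm create snap constraints.

SNAP_PARTITION_TYPE = 0x1AC  # snap partition type from the documentation
MIN_SNAP_MB = 50             # minimum size, raised from 1 MB to 50 MB

def validate_snap(size_mb: int, part_type: int) -> None:
    """Raise ValueError if the snapshot request violates a constraint."""
    if part_type != SNAP_PARTITION_TYPE:
        raise ValueError(f"snap target must be a type 1AC partition, got {part_type:X}")
    if size_mb < MIN_SNAP_MB:
        raise ValueError(f"snapshot must be at least {MIN_SNAP_MB} MB, got {size_mb} MB")

validate_snap(50, 0x1AC)  # smallest request allowed
```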
You can use the volid option in combination with a clustered NSS pool to assign an NCP volume ID to a clustered NSS volume that is unique across all nodes in every peer cluster of a Business Continuity Cluster. |
|
Before you delete a Novell Cluster Services SBD partition, you must take the cluster down, and stop Novell Cluster Services from running on all nodes. |
|
Before you delete a Novell Cluster Services SBD RAID 1, you must take the cluster down, and stop Novell Cluster Services from running on all nodes. |
|
This command is new. |
|
This command is new. |
|
This command is new. |
|
The more option prints additional information about the storage object. The all option prints detailed information about the storage object. This is the same information as for the specific nlvm list command for an object. The -t or --terse NLVM option can be used with nlvm list commands to print the output in a format for parsing. |
|
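A parse-friendly format like the one the --terse option produces is typically consumed by a small script. The sample output below is invented for illustration (one object per line, field=value pairs separated by whitespace); the real nlvm --terse layout may differ.

```python
# Hedged sketch: parse machine-readable "terse" list output.
# The sample text is an assumption, not actual nlvm output.

sample = """\
name=sdc size=931.51GB used=12.00GB free=919.51GB shared=No
name=sdd size=1.82TB used=0KB free=1.82TB shared=Yes
"""

def parse_terse(text: str) -> list:
    """Return one dict per line, mapping field names to string values."""
    rows = []
    for line in text.splitlines():
        if not line.strip():
            continue
        rows.append(dict(field.split("=", 1) for field in line.split()))
    return rows

devices = parse_terse(sample)
print(devices[0]["name"])    # sdc
print(devices[1]["shared"])  # Yes
```

Field=value output keeps the parser trivial compared with scraping the human-readable column layout of the default list commands.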
The nlvm mount command internally sets the -m flag, so only the specified pool is mounted. |
|
You can pause and resume a pool move. |
|
This command is new. |
|
This command is new. |
|
This command is new. |
|
Use the unmount command to temporarily unload a pool in order to manage its underlying devices. Pools are automatically mounted by design; therefore, running the nssmu utility, or running most nlvm commands without the -m option, can cause an unmounted pool to be remounted if the underlying devices and partitions still exist. To execute an nlvm command without mounting the unmounted pools, you must include the -m option. The nlvm mount command internally sets the -m flag, so only the specified pool is mounted. |
Location | Change
---|---
Section 7.4, Logging Out of an iSCSI Device that Contains an NSS Pool |
This section is new. |
Location | Change
---|---
We recommend that you do not use Linux software RAIDs (such as MD RAIDs and Device Mapper RAIDs) for devices that you plan to use for storage objects that are managed by NSS management tools. The Novell Linux Volume Manager (NLVM) utility and the NSS Management Utility (NSSMU) list the Linux software RAID devices that you have created by using Linux tools. Beginning with Linux Kernel 3.0 in OES 11 SP1, NLVM and NSSMU can see these devices, initialize them, and allow you to create storage objects on them. However, this capability has not yet been fully tested. IMPORTANT: In OES 11, a server hang or crash can occur if you attempt to use a Linux software RAID when you create storage objects that are managed by NSS management tools. |
|
This section is new. |
Location | Change
---|---
With the latest OES 11 SP1 patches, the NSS utility provides an /err switch that can be used from the command prompt to view error messages for NLVM command error codes:
nss /err=<error_code_number>
|
|
Section 9.6, Error 20897 - This node is not a cluster member |
This section is new. |
|
This issue is fixed in OES 11 SP2, and was fixed for OES 11 and OES 11 SP1 in the November 2012 Scheduled Maintenance Patches. |
Location | Change
---|---
This section is new. |