A storage area network (SAN) solution provides a separate, dedicated network of storage media interconnected by high-speed connections. Instead of sharing the normal network bandwidth, data queries travel between servers and the storage media on these connections. Because SANs create a neighborhood in which vital corporate data resides, a secure SAN should be a gated community with restricted and verifiable access.
The top reasons for implementing SANs are as follows:
- Improving backup and restore
- Improving disaster recovery
- Consolidating existing data capacity, typically as a result of server consolidation (for example, with the Server Consolidation Utility)
- Supporting data sharing and collaboration
- Improving data access performance
- Managing data growth
- Improving storage management
Unlike a breach on a conventional IP network, a security breach in a SAN can have a permanent and devastating effect. Corruption of current data on disk or tape is absolute; the data is recoverable only to the latest snapshot or backup version. For the highest degree of data integrity, synchronous data replication at least ensures that a current copy of real-time data is secured elsewhere.
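The guarantee behind that recommendation can be shown in a toy model: a synchronous replicator commits the remote copy before acknowledging a write, so the secondary site is always current, whereas an asynchronous replicator can acknowledge writes that are then lost in a crash. The `Replicator` class below is a hypothetical sketch, not a real replication API.

```python
class Replicator:
    """Toy model contrasting synchronous and asynchronous replication."""

    def __init__(self, synchronous: bool):
        self.synchronous = synchronous
        self.local = {}     # primary copy
        self.remote = {}    # stands in for the secondary site
        self.pending = []   # async writes not yet shipped

    def write(self, block: int, data: bytes) -> str:
        self.local[block] = data
        if self.synchronous:
            self.remote[block] = data          # remote commit BEFORE the ack
        else:
            self.pending.append((block, data))  # shipped some time later
        return "ack"

    def crash_primary(self) -> dict:
        """Simulate losing the primary; in-flight async writes are gone."""
        self.pending.clear()
        return self.remote

sync = Replicator(synchronous=True)
sync.write(7, b"order #1001")
print(sync.crash_primary().get(7))    # b'order #1001' -- survives the crash

async_ = Replicator(synchronous=False)
async_.write(7, b"order #1001")
print(async_.crash_primary().get(7))  # None -- acknowledged, but lost
```

The cost of the synchronous guarantee is that every write waits on the remote commit, which is why it is typically reserved for the most critical data.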
Access Control Lists (ACLs) are another option for providing rudimentary verification. For example, you can prevent a newly introduced server from automatically logging on to the SAN fabric.
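Conceptually, such an ACL is an allow-list keyed on an initiator's identity (for Fibre Channel, its WWPN). In practice the fabric switch or storage controller enforces this, not application code; the `ALLOWED_INITIATORS` set and `can_log_on` helper below are hypothetical names used only to illustrate the check.

```python
# Illustrative sketch only: real SAN ACLs are enforced by the fabric
# switches and storage controllers. Names and WWPNs are hypothetical.

ALLOWED_INITIATORS = {
    "10:00:00:05:1e:7a:b2:c4",  # production server HBA (WWPN)
    "10:00:00:05:1e:7a:b2:c5",  # backup server HBA (WWPN)
}

def can_log_on(initiator_wwpn: str) -> bool:
    """Admit an initiator to the fabric only if its WWPN is on the ACL."""
    return initiator_wwpn.lower() in ALLOWED_INITIATORS

# A newly introduced server is denied until an administrator adds it:
print(can_log_on("10:00:00:05:1e:7a:b2:c4"))  # True
print(can_log_on("20:00:00:25:b5:00:00:01"))  # False
```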
Cryptographic techniques (authentication and data encryption) add an incremental level of security for data in transmission and at rest, but cannot provide an absolute safeguard for storage. For data in transmission, authentication and encryption can ensure that sniffing the SAN transport does not yield usable data. This is especially applicable to IP storage environments, where data might be traveling over untrusted local or wide area network segments.
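The authentication half of this can be illustrated with a short sketch: a keyed message authentication code lets the receiver reject any frame that was altered in flight. This is a minimal illustration using Python's standard `hmac` module, not a real SAN transport protocol; the `seal`/`verify` helpers and the shared key are hypothetical.

```python
import hashlib
import hmac
import os

# Hypothetical shared key, assumed to be negotiated out of band
# between initiator and target.
KEY = os.urandom(32)

def seal(payload: bytes) -> tuple[bytes, bytes]:
    """Attach an HMAC-SHA256 tag so the receiver can verify integrity and origin."""
    tag = hmac.new(KEY, payload, hashlib.sha256).digest()
    return payload, tag

def verify(payload: bytes, tag: bytes) -> bool:
    """Recompute the tag; constant-time compare defeats timing attacks."""
    expected = hmac.new(KEY, payload, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)

block, tag = seal(b"LUN 5, sector 2048: payload")
print(verify(block, tag))             # True: frame is intact
print(verify(b"tampered data", tag))  # False: modified in transit
```

Note that a MAC provides authentication and integrity only; preventing a sniffer from reading the payload additionally requires encryption.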
For OES, the Linux operating system supports booting the server directly with the operating system on a local hard drive or on a Fibre Channel SAN if the hardware supports booting from a SAN. Booting from a Fibre Channel SAN allows administrators to immediately swap out server hardware in the event of a disaster and directly boot without reinstalling the operating system. The automatic hardware detection in the operating system allows for a new server to have updated or different controllers when booting from a Fibre Channel SAN.
For Linux, the file system for the system volume must be one that can host the root (/) and /boot partitions, such as Ext3, ReiserFS, or XFS. You cannot use NSS or OCFS2 file systems for booting Linux.
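As an illustration, a bootable layout with Ext3 for both /boot and the root volume might look like this in /etc/fstab. The device names and mount options here are hypothetical examples, not values from the guide:

```
/dev/sda1   /boot   ext3   defaults          1 2
/dev/sda2   /       ext3   acl,user_xattr    1 1
```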
The SAN interconnect is typically a technology that offers faster transmission (bigger pipes) than direct-attached-storage buses or the LAN bandwidth. In addition to Fibre Channel, NSS supports iSCSI (Internet SCSI). An iSCSI SAN typically uses Gigabit Ethernet adapters, switches, and IP routing to connect storage devices. At present, Fibre Channel equipment costs many times more than standard Ethernet equipment that can support iSCSI traffic.
A Novell iSCSI SAN can operate at standard Fast Ethernet speeds, or you can implement a higher-speed infrastructure for the SAN. Typically, high-speed Gigabit Ethernet devices are necessary to meet SAN performance requirements. An iSCSI SAN can be a low-cost alternative SAN solution. It provides the long-distance storage connectivity for multiple applications, including disaster recovery for business continuity, storage consolidation, data migration, and remote mirroring.
The Linux iSCSI solution uses a YaST interface to manage iSCSI resources, so you can manage the SAN from anywhere without a separate management console or disk controller. Administrators use the same well-known methods in eDirectory for granting trustee rights and user file access.
For more information, see "Mass Storage over IP Networks" in the SUSE Linux Enterprise Server 10 SP4 Installation and Administration Guide.