TSATEST is used to determine bottlenecks in the backup systems. For more information, see Section 5.2, Troubleshooting Backup Performance. Using this information, you can adjust the following tunable switches to identify sweet spots that improve the throughput of your backup systems.
Configure the following basic tunable parameters to enhance SMS performance. For more information about configuring the switches, see Section 3.5, Configuring the Target Service Agent for File System.
Table 5-1 Basic Tunable Parameters to Enhance SMS Performance

| Task | Purpose | Field Name in the iManager Interface | Command |
|---|---|---|---|
| Set the number of read-ahead threads for a backup job | This enables the TSA to read data ahead of the engine request during backup. Base this switch on the number of processors in the system and the system load from other processes. The default value is 4; the valid range is 1 to 32. Set the read threads to a higher value if you have more processors or a lighter system load during backup. Also, monitor disk I/O performance, try higher values to see whether disk I/O improves, and strike a balance between backup performance and system utilization. | Read Threads Per Job | TSAFS /readthreadsperjob=value |
| Set the read buffer size | This is the number of data bytes read from the file system by a single read operation. Base this switch on the buffer sizes requested by the engine. For example, if the engine requests 64 KB of data for each read operation, set the buffer size to 64 KB so that the TSA can service the engine better. Also consider the mean size of the data set being backed up. For example, if the mean size of the data set is 55 KB, set the buffer size to 64 KB so that additional buffer space is available beyond the mean data set size; this is required for backup of file characteristics and SIDF encoding. The default value is 65536 bytes; the valid range is 32 KB to 256 KB. | Read Buffer Size | TSAFS /readbuffersize=value |
| Set the percentage of the server's free memory used to store cached data sets | This specifies the maximum percentage of the server's free memory that the TSA can use to store cached data sets. The default value is 10% of the total server memory. Set it to a higher value to let the TSA cache more data sets and improve backup performance. | Cache Memory Threshold | TSAFS /cachememorythreshold=value |
| Enable or disable caching based on engine usage and the workload being backed up | This option specifies whether the TSA performs predictive caching during backups. Caching improves backup performance on certain workloads by prefetching files in memory. The default value is cachingMode. If the data sets are not requested in the order in which they were prefetched, backup performance may degrade for some engines and workloads. To determine whether caching will improve backup performance, enable caching and load the TSA with the following TSA debug options: smsdebug=800003c and smsdebug2=fffff100. The TSA debug log file displays the number of data sets opened by the engine and by the TSA. If the difference between the two values is significant (>50%), it is recommended that you disable caching for optimal performance. For information on enabling debug logging, see Section B.0, Creating SMS Debug Logs. | Enable Caching | TSAFS /CachingMode \| noCachingMode |
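As a starting point for tuning, the read-thread count can be derived from the processor count and clamped to the documented 1-32 range. The sketch below does this and prints a load command; the smsconfig invocation and switch spellings are assumptions based on common OES usage, so verify the exact syntax against Section 3.5 for your release.

```shell
#!/bin/sh
# Sketch: derive a starting read-thread count from the CPU count,
# clamped to the documented 1-32 range (the default is 4).
cpus=$(nproc 2>/dev/null || echo 4)   # fall back to the default if nproc is absent
threads=$cpus
[ "$threads" -gt 32 ] && threads=32
[ "$threads" -lt 1 ] && threads=1

# Assumed load command -- check Section 3.5 for the exact syntax on your release.
cmd="smsconfig -l tsafs --ReadThreadsPerJob=$threads --ReadBufferSize=65536"
echo "$cmd"
```

Treat the printed command as a starting point only: re-run your TSATEST measurements after each change before settling on a value.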
Configure the following advanced tunable parameters to enhance SMS performance. For more information about configuring the switches, see Section 3.5, Configuring the Target Service Agent for File System.

Table 5-2 Advanced Tunable Parameters to Enhance SMS Performance

| Task | Purpose | Field Name in the iManager Interface | Command |
|---|---|---|---|
| Set the percentage of read threads that process a data set | This sets the maximum number of read threads that process a data set at a given time, expressed as the percentage of readthreadsperjob allocated to a data set before the TSA proceeds to cache another data set. This enables the TSA to build a cache of data sets in a nonsequential manner. Engines that read data sets simultaneously benefit from improved performance if the TSA builds a nonsequential cache rather than a sequential one. The default value is 100, which makes all read threads completely process one data set before proceeding to another. Set this value lower than 100 if the backup engine reads multiple data sets from the TSA simultaneously. | Read Thread Allocation | TSAFS /readthreadallocation=value |
| Set the maximum number of data sets that can be processed simultaneously | This sets the maximum number of data sets that the TSA caches simultaneously. It prevents the TSA from caching parts of data sets and enables complete caching of data sets instead. Use this switch along with the readthreadallocation switch. Set this value to reflect the number of data sets that the backup engine processes simultaneously. The default value is 2. | Read Ahead Throttle | TSAFS /readaheadthrottle=value |
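The two advanced switches interact: readthreadallocation decides what share of the per-job read threads serve one data set, and readaheadthrottle caps how many data sets are cached at once. A small worked example, assuming readthreadsperjob=8 and a 50% allocation:

```shell
#!/bin/sh
# Worked example: with 8 read threads per job and a 50% allocation,
# 4 threads serve each data set, so the default readaheadthrottle of 2
# lets two data sets be cached in parallel.
read_threads_per_job=8
read_thread_allocation=50   # percent of the job's threads given to one data set
read_ahead_throttle=2       # data sets cached simultaneously (the default)

threads_per_dataset=$(( read_threads_per_job * read_thread_allocation / 100 ))
echo "threads per data set: $threads_per_dataset"
echo "data sets cached at once: $read_ahead_throttle"
```

Matching the allocation to the number of streams the engine actually reads avoids idle read threads on one data set while another waits to be cached.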
Configure the following additional tunable parameters to enhance SMS performance.

Table 5-3 Additional Tunable Parameters to Enhance SMS Performance

| Task | Purpose | Parameter | Command |
|---|---|---|---|
| Changing the I/O scheduler to the deadline scheduler | The deadline scheduler sets a cap on per-request latency and ensures good disk throughput. Service queues are prioritized by deadline expiration, making this a good choice for real-time applications, databases, and other disk-intensive applications. In a multipath environment, change the I/O scheduler on the multipath DM object. The I/O scheduler setting must be modified per device and per server. | deadline | echo deadline > /sys/block/{DEVICE-NAME}/queue/scheduler. This setting is not persistent; rebooting the server resets it to the default. |
| Modifying the slice_idle parameter of the CFQ scheduler | The Completely Fair Queuing (CFQ) I/O scheduler provides a good compromise between throughput and latency by treating all competing processes fairly. Each process is given a separate request queue and a dedicated time slice of disk access. When a task has no more I/O to submit in its time slice, the I/O scheduler waits for a while before scheduling the next thread, to improve the locality of I/O. In a multipath environment, disable idling on the CFQ queues on the multipath DM object. The slice_idle setting must be modified per device and per server. | slice_idle | echo 0 > /sys/block/{DEVICE-NAME}/queue/iosched/slice_idle. The default value is 8 milliseconds; set it to 0 to improve backup performance. This setting is not persistent; rebooting the server resets it to the default. |
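The two sysfs writes above are alternatives: either switch the device to the deadline scheduler, or keep CFQ and disable idling. The sketch below wraps each in a small helper that checks the sysfs file is writable first (both require root, and neither setting survives a reboot). Note that slice_idle only exists while CFQ is the active scheduler for the device.

```shell
#!/bin/sh
# Minimal sketch of the two non-persistent tunings from Table 5-3.
# In a multipath setup, pass the multipath DM object (e.g. dm-0)
# instead of the underlying disks.

use_deadline() {   # option 1: switch the device to the deadline scheduler
    f=/sys/block/$1/queue/scheduler
    [ -w "$f" ] || { echo "cannot write $f (need root, or no such device)" >&2; return 1; }
    echo deadline > "$f"
}

cfq_no_idle() {    # option 2: keep CFQ but disable slice idling
    f=/sys/block/$1/queue/iosched/slice_idle
    [ -w "$f" ] || { echo "cannot write $f (need root, or CFQ not active)" >&2; return 1; }
    echo 0 > "$f"
}

# Example (as root), with a placeholder device name:
#   use_deadline sda
#   cfq_no_idle sda
```

To make either setting persistent across reboots, add the corresponding call to a boot script or set the scheduler via the kernel command line, per your distribution's documentation.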