You should consider the following tasks for monitoring and tuning the PostgreSQL database:
A backup is a full or partial copy of the information in a database, held in a physically separate location. You should make regular backups of the ZENworks database. If the database becomes unavailable, you can restore it from the backup.
Backing up a running database provides a snapshot of the database where the data is in a consistent state, even though other users are modifying the database.
If the operating system or database server fails, or if the database server does not shut down properly, then the database must be recovered. On database startup, the database server checks to see if the database was shut down cleanly at the end of the previous session. If it was not, the database server runs an automatic recovery process to restore all changes up to the most recently committed transaction.
The rest of this section describes the different types of backups and how to back up the database:
Consider these tips when defining your database backup strategy:
Back up your database before and after every ZENworks upgrade.
Ensure that your database backup has the same name as your current database.
Check available disk space. It can influence which backup strategy is practical for your zone.
Events and notifications can be scheduled through the database server itself.
An online backup is performed against a running database. Backing up a running database provides a snapshot of the database where the data is in a consistent state, even though other users are modifying the database. This is referred to as a logical backup because the data is backed up, not the database files.
You can use the PostgreSQL pg_dump command to do a full online backup. This command extracts the PostgreSQL database into a script file or other archive file.
pg_dump -U zenadmin -p 54327 -W -d zenworks > zenworks_dump.sql
For more information about pg_dump, see pg_dump on the PostgreSQL website.
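Because the command above produces a plain SQL dump, one way to restore it (shown as an example only; the target database must already exist) is with psql:
psql -U zenadmin -p 54327 -W -d zenworks -f zenworks_dump.sql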
An offline database backup requires the database server to be stopped. Once the database is not running, you can make a backup by copying the database files to another location. You should only perform an offline backup when the database is not running and when the database server has shut down properly.
This method can be used in conjunction with a scheduling mechanism, such as Windows Task Scheduler or crontab, to automate the process. Offline backups can be incremental (transaction logs only) or full; incremental backups are quicker to perform and are typically used between full backups taken at less frequent intervals.
For information about copying the database files, see File System Level Backup on the PostgreSQL website.
PostgreSQL does not include built-in commands for database validation. However, to help maintain the validity of the database, we recommend that active production databases be vacuumed frequently (at least nightly) in order to remove dead rows.
After adding or deleting a large number of rows, it is a good idea to issue a VACUUM ANALYZE command for the affected table. This command updates the system catalogs with the results of all recent changes and allows the PostgreSQL query planner to make better choices in planning queries.
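For example, to update statistics for a heavily modified table (the table name here is only an illustration):
VACUUM ANALYZE zzenobject;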
For information about using the VACUUM commands, see Table Fragmentation.
The amcheck module provides functions that allow you to verify the logical consistency of the structure of relations. If the structure appears to be valid, no error is raised.
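For example, a minimal sketch that checks a single B-tree index (the index name is illustrative, and the extension must be created by a superuser):
CREATE EXTENSION IF NOT EXISTS amcheck;
SELECT bt_index_check('zzenobject_pkey');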
The pageinspect module provides functions that allow you to inspect the contents of database pages at a low level, which is useful for debugging purposes. All of these functions may be used only by superusers.
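For example, a minimal sketch that inspects the line pointers and tuple headers on the first page of the pg_class system catalog:
CREATE EXTENSION IF NOT EXISTS pageinspect;
SELECT * FROM heap_page_items(get_raw_page('pg_class', 0));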
pg_resetwal resets the write-ahead log and other control information of a PostgreSQL database cluster. For more information, see pg_resetwal on the PostgreSQL website.
Tuning the database parameters listed below can improve the performance of ZENworks. These parameters are set in the postgresql.conf file, which is located in:
Linux: /var/opt/novell/pgsql/data/
Windows: %ZENWORKS_HOME%\database\pgsql\data
Restarting the PostgreSQL service is necessary after changing some parameters. If a restart is necessary, it is called out in the parameter description.
This parameter designates the amount of shared memory dedicated to the server for caching data.
Default Value: 128MB
Recommended Value: Set as follows:
Below 32GB memory, set the value of shared_buffers to 25% of total system memory.
Above 32GB memory, set the value of shared_buffers to 8GB.
Restart Required: Yes
For more information about shared_buffers, see Resource Consumption on the PostgreSQL website.
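For example, on a dedicated server with 16 GB of RAM (an assumed size for illustration), the 25% guideline gives:
shared_buffers = 4GB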
This parameter specifies the amount of memory to be used by internal sort operations and hash tables before writing to temporary disk files. If a lot of complex sorts are happening, and you have enough memory, then increasing the work_mem parameter allows PostgreSQL to do larger in-memory sorts which will be faster than disk-based equivalents.
Default Value: 4MB
Recommended Value: Set as follows:
Start with a low value: 32 to 64MB.
Look for ‘temporary file’ lines in the logs.
Set the parameter to 2 to 3 times the largest temp file.
Restart Required: No
For more information about work_mem, see Resource Consumption on the PostgreSQL website.
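For example, if the largest temporary file reported in the log is about 20 MB (an illustrative figure), two to three times that size gives:
work_mem = 48MB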
This parameter specifies the maximum amount of memory used by maintenance operations such as VACUUM, CREATE INDEX and ALTER TABLE ADD FOREIGN KEY. Since only one of these operations can be executed at a time by a database session and a PostgreSQL installation doesn’t have many of them running concurrently, it is safe to set the value of maintenance_work_mem significantly larger than work_mem.
Default Value: 64MB
Recommended Value: Set as follows:
Set the value to 10% of system memory, up to 1GB.
Set the value higher if you are having VACUUM problems.
Restart Required: No
For more information about maintenance_work_mem, see Resource Consumption on the PostgreSQL website.
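For example, on a server with 16 GB of RAM (an assumed size), 10% of memory exceeds the 1GB cap, so:
maintenance_work_mem = 1GB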
The effective_cache_size should be set to an estimate of how much memory is available for disk caching by the operating system and within the database itself. This is a guideline for how much memory you expect to be available in the operating system and PostgreSQL buffer caches, not an allocation.
Default Value: 4GB
Recommended Value: Set as follows:
Set the value to the amount of file system cache available.
If you don’t know the amount of available file system cache, set the value to 50% of the total system memory.
Restart Required: No
For more information about effective_cache_size, see Query Planning on the PostgreSQL website.
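For example, if about half of a 16 GB server (an assumed size) is expected to be available as file system cache:
effective_cache_size = 8GB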
This parameter sets the maximum number of temporary buffers used by each database session. The session local buffers are used only for access to temporary tables. These will be cleared when the connection is closed.
Default Value: 8MB
Recommended Value: 64MB; increase the value based on database size and usage
Restart Required: No
For more information about temp_buffers, see Resource Consumption on the PostgreSQL website.
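For example, applying the recommended starting value:
temp_buffers = 64MB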
The max_locks_per_transaction value indicates the number of database objects that can be locked simultaneously. By default, it is set to 64, which means that PostgreSQL is prepared to track up to 64 locks multiplied by the number of open transactions. The reason to have a limit is to avoid consuming dedicated shared memory if you don't need more locks than that.
Default Value: 64
Recommended Value: In most cases, the default value of 64 is sufficient. However, when loading a large number of datasets (for example, several thousand) at once, the number of concurrent object locks for the transaction can exceed 64.
To see if you need to increase this value, check for "ERROR: out of shared memory HINT: You might need to increase max_locks_per_transaction" messages in the following PostgreSQL log files:
On Linux:
/var/lib/pgsql/data/pg_log/*.log
/var/lib/pgsql/data/log/*.log
/var/lib/pgsql/data/*.log
On Windows:
%ZENWORKS_HOME%\database\pgsql\data\*.log
%ZENWORKS_HOME%\database\pgsql\data\pg_log\*.log
%ZENWORKS_HOME%\database\pgsql\data\log\*.log
Restart Required: Yes
For more information about max_locks_per_transaction and max_pred_locks_per_transaction, see Lock Management on the PostgreSQL website.
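For example, if the out-of-shared-memory hint appears during large dataset loads, you might double the limit (the value shown is only an illustration; restart PostgreSQL afterward):
max_locks_per_transaction = 128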
PostgreSQL logging is enabled and customized via postgresql.conf parameters. The file is located in:
Linux: /var/opt/novell/pgsql/data/
Windows: %ZENWORKS_HOME%\database\pgsql\data
Restarting the PostgreSQL service is necessary after changing some parameters. If a restart is necessary, it is called out in the parameter description.
For more information about these and other logging parameters, see Error Reporting and Logging on the PostgreSQL website.
This parameter must be enabled in order to log any activity in PostgreSQL. The backend process responsible for logging database activity is called logger and it gets started when logging_collector is set to ON. Changing this parameter requires a PostgreSQL restart.
Restart Required: Yes
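For example, to turn the logger on:
logging_collector = on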
This parameter helps you customize every log line being printed in the PostgreSQL log file. You can log the process id, application name, database name and other details for every statement as required. The following log_line_prefix may be helpful in most scenarios:
log_line_prefix = '%t [%p]: [%l-1] user=%u,db=%d,app=%a,client=%h'
The above setting records the following for every statement being logged:
%t : Time stamp without milliseconds
%p : Process id
%l-1 : Number of the log line for each session or process, starting at 1
%u : User name
%d : Database name
%a : Application name
%h : Remote host name or IP address
Restart Required: No
This parameter records the duration of every completed statement in PostgreSQL log, irrespective of any duration limit enforced by the log_min_duration_statement parameter. Note that, as with log_min_duration_statement, enabling this parameter may increase log file usage and affect the server’s general performance. For this reason, if you already have log_min_duration_statement enabled it is often suggested to disable log_duration unless you have a specific need to keep track of both.
Restart Required: No
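For example, to leave duration logging off when log_min_duration_statement is already in use:
log_duration = off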
This parameter defines the size limit for each log file; once it reaches this threshold the log file is rotated. The following example limits the size of each log file to 500 MB:
log_rotation_size = '500MB'
Restart Required: No
This parameter determines the maximum life span for a log file, forcing its rotation once this threshold is reached. This parameter is usually set in terms of hours or days; the minimum granularity is a minute. However, if log_rotation_size is reached first, the log gets rotated anyway, irrespective of this setting. The following example sets the rotation age to one day:
log_rotation_age = 1d
Restart Required: No
This parameter controls what types of SQL statements are logged. The recommended setting is DDL, which logs all DDL statements that are executed. Tracking DDLs allows you to later audit when a given DDL was executed and by whom. Monitor the amount of information this setting writes to the log file; if it is excessive, consider modifying the setting.
Other possible values are none, mod (includes DDLs plus DMLs), and all.
Restart Required: No
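For example, to apply the recommended setting:
log_statement = 'ddl'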
This parameter defines the directory in which log files are created. Note that if you have enabled detailed logging, it is recommended to allocate a separate disk (different from the data directory disk) for log_directory. Example:
log_directory = '/this_is_a_new_disk/pg_log'
Restart Required: No
On a Linux server, if you need to move the database files, for example to a new separate physical disk, the operation is as simple as a folder copy. After the folder has been moved, take a backup, and then modify the zenpostgresql file's POSTGRES_DATADIR parameter to point to the new location. The file is located in /opt/novell/zenworks/share/pgsql/sysconfig/
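For example, if the data folder was copied to a new disk mounted at /newdisk (an illustrative path), the entry in the zenpostgresql file would look like this:
POSTGRES_DATADIR=/newdisk/pgsql/data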
On a Windows server, unregister the PostgreSQL service by using following command:
"C:\Program Files (x86)\Novell\ZENworks\share\postgres\bin\pg_ctl.exe" unregister -N "Novell ZENworks Embedded Datastore - PostgreSQL"
Re-register the service with the new data folder location:
"C:\Program Files (x86)\Novell\ZENworks\share\postgres\bin\pg_ctl.exe" register -N "Novell ZENworks Embedded Datastore - PostgreSQL" -U "NT AUTHORITY\NetworkService" -D "C:\Program Files
The primary tool for monitoring database activity and analyzing performance is the PostgreSQL statistics collector. The statistics collector provides a rich set of views and functions for collecting and reporting information about server activity. This information falls into two main categories:
Dynamic statistics about the system’s current activity
Collected statistics (gathered since the statistics collector subsystem was last reset)
This section lists a few of the ways you can use the statistics collector. However, for detailed information, you should refer to Monitoring Database Activity on the PostgreSQL website. The website also explains how to use regular Linux monitoring programs such as ps, top, iostat, and vmstat. In addition, you can use the pg_stat_statements module to track execution statistics for all SQL statements executed by the server.
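As a sketch only: pg_stat_statements must first be added to shared_preload_libraries in postgresql.conf (which requires a restart) and enabled with CREATE EXTENSION pg_stat_statements; after that, a query such as the following lists the most frequently executed statements:
SELECT query, calls FROM pg_stat_statements ORDER BY calls DESC LIMIT 10;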
On Windows, you can use performance monitoring tools such as Process Monitor, Process Explorer, and FileMon. For more information about both Windows and Linux tools, see Performance Analysis Tools on the PostgreSQL Wiki site.
The PostgreSQL statistics collector is a subsystem that supports collection and reporting of information about server activity. The collector tracks the total number of rows in each table, and information about vacuum and analyze actions for each table. It can also count calls to user-defined functions and the total time spent in each one.
PostgreSQL also reports dynamic information about exactly what is going on in the system right now, such as the exact command currently being executed by other server processes, and which other connections exist in the system. Two useful predefined views that you should be aware of are pg_stat_activity and pg_stat_progress_vacuum.
For these views to be available, make sure that the system configuration parameter track_activities is on. To enable these views for all server processes, set the parameter in the postgresql.conf file. The file is located in:
Linux: /var/opt/novell/pgsql/data/
Windows: %ZENWORKS_HOME%\database\pgsql\data
You can also turn on the parameter for individual sessions by using the SET command.
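For example, to turn the parameter on for the current session only (superusers only):
SET track_activities = on;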
You can use pg_stat_activity to view the current activity for the various backend processes. A sample SQL query is:
SELECT pid, usename, application_name, client_addr, backend_start, xact_start, query_start,
       state, backend_xid, backend_xmin, query, backend_type
FROM pg_stat_activity
WHERE usename = 'zenadmin' AND state = 'active';
By adding the wait_event_type and wait_event columns to the query, pg_stat_activity can also be very helpful in determining blocked queries.
You can use pg_stat_progress_vacuum to view one row for each backend process that is currently vacuuming. A sample SQL query is:
SELECT p.pid,
       now() - a.xact_start AS duration,
       coalesce(wait_event_type || '.' || wait_event, 'f') AS waiting,
       CASE WHEN a.query ~ '^autovacuum.*to prevent wraparound' THEN 'wraparound'
            WHEN a.query ~ '^vacuum' THEN 'user'
            ELSE 'regular' END AS mode,
       round(100.0 * p.heap_blks_scanned / p.heap_blks_total, 1) AS scanned_pct,
       round(100.0 * p.heap_blks_vacuumed / p.heap_blks_total, 1) AS vacuumed_pct,
       p.index_vacuum_count,
       round(100.0 * p.num_dead_tuples / p.max_dead_tuples, 1) AS dead_pct
FROM pg_stat_progress_vacuum p
JOIN pg_stat_activity a USING (pid)
ORDER BY now() - a.xact_start DESC;
PostgreSQL provides several built-in views that provide access to commonly used queries on the system catalogs. The pg_locks view provides real-time information about the current locks held by active processes in the system.
If you think that long lock waits are impacting performance, you can:
Set the log_lock_waits parameter for PostgreSQL logging to generate a log message whenever a session waits longer than the deadlock_timeout to acquire a lock. See Enabling PostgreSQL Logging, and Error Reporting and Logging on the PostgreSQL website.
Use the following SQL query to identify the queries that are causing the locks:
SELECT a.datname, l.relation::regclass, l.transactionid, l.mode, l.granted,
       a.usename, a.query, a.query_start,
       age(now(), a.query_start) AS "age", a.pid
FROM pg_stat_activity a
JOIN pg_locks l ON l.pid = a.pid
ORDER BY a.query_start;
Collect the following logs to send to Micro Focus Customer Support:
On Linux:
/var/lib/pgsql/data/pg_log/*.log
/var/lib/pgsql/data/log/*.log
/var/lib/pgsql/data/*.log
On Windows:
%ZENWORKS_HOME%\database\pgsql\data\*.log
%ZENWORKS_HOME%\database\pgsql\data\pg_log\*.log
%ZENWORKS_HOME%\database\pgsql\data\log\*.log
Fragmented and bloated indexes are a top reason for performance degradation of the ZENworks database. A bloated index contains many empty or nearly-empty pages. This can occur with B-tree indexes in PostgreSQL under certain uncommon access patterns.
Micro Focus recommends that you check the indexes and rebuild in the following scenarios:
After a ZENworks System Update
After enabling ZENworks Patch Management in the zone
After adding a large number of devices
After adding a large number of bundles or bundles with lots of content defined
You can use the REINDEX command to rebuild the indexes. For detailed information, see REINDEX on the PostgreSQL website.
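For example, to rebuild all indexes on a single table (the table name is illustrative; REINDEX blocks writes to the table while it runs):
REINDEX TABLE zzenobject;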
PostgreSQL uses Multi-Version Concurrency Control to manage concurrent access to data. With this approach, reads don’t block writes because INSERT and UPDATE operations create a new version of the row every time. But these operations don’t immediately remove the old version of the row. Instead, old versions of rows are eventually removed by the VACUUM operation.
To check for table fragmentation that can be caused by this process, use the following query:
SELECT * FROM pgstattuple('public.zzenobject');
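If the function is not available, note that pgstattuple ships as a contrib extension and may need to be enabled once in the database:
CREATE EXTENSION IF NOT EXISTS pgstattuple;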
For more information about the pgstattuple command, see pgstattuple on the PostgreSQL website.
When fragmentation exists, you can use the VACUUM command to reclaim space still occupied by data that has been updated or deleted. In PostgreSQL, old row versions (tuples) are not physically removed from tables when rows change, so the VACUUM command should be run occasionally to do this.
VACUUM can be run on its own, or with ANALYZE. Common commands and examples are:
Frees up space for reuse.
Example: VACUUM tablename
Locks the database table, and reclaims more space than VACUUM.
Example: VACUUM(FULL) tablename
Performs a FULL VACUUM and gathers new statistics on query execution paths using ANALYZE.
Example: VACUUM(FULL, ANALYZE) tablename
Performs a FULL VACUUM, gathers new statistics on query execution paths using ANALYZE, and provides VERBOSE progress output.
Example: VACUUM(FULL, ANALYZE, VERBOSE) tablename
For more information about the VACUUM command, see VACUUM on the PostgreSQL website.
At its basic level, PostgreSQL is one giant append-only log. When you insert a new record, the new record is appended. When you delete a record, the record is simply flagged as invisible, it’s not actually removed from disk immediately. When you update a record, the old record is flagged as invisible and a new record is written.
If fragmentation exists on numerous tables and the database is large, a faster method for fixing fragmentation is to rebuild the entire database by using the following PostgreSQL applications:
A few tips when rebuilding the database:
Make sure to stop the PostgreSQL service before starting.
Use the “-j njobs” option to execute the vacuum or analyze commands in parallel by running njobs commands simultaneously. This option reduces processing time, but it also increases the load on the database server. See the example after this list.
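As an illustration only (assuming vacuumdb is one of the applications referred to above), a parallel analyze run against the ZENworks database might look like this:
vacuumdb -U zenadmin -p 54327 -W -d zenworks --analyze -j 4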
If you encounter problems with your database, collect the following information before engaging with Micro Focus Support:
The output from the log file that is enabled and specified in the postgresql.conf file.
The version of the database that you are running. Use “select version()” to find the version.
The date of your last backup.
Optional: Deadlock information, connection information, and profiling statistics from the logs.
PostgreSQL supports database mirroring as a method to implement high availability and fault tolerance. This technique requires multiple database servers and is therefore not set up by default with ZENworks.
NOTE: Mirroring should not be implemented as a means to distribute the database work to remote locations. Mirrored database servers should be connected over a high-speed link.
For more information, see High Availability, Load Balancing, and Replication on the PostgreSQL website.