The following sections provide tuning information for the service:
To modify JVM-related options for the ZENworks Patch Management (ZPM) microservice, edit the patchsettings.sh file.
This file is located at:
On Linux: /etc/opt/microfocus/zenworks/settings
All the configuration files for the ZPM microservice are located under:
On Windows: %ZENWORKS_HOME%/conf/patch-management
On Linux: /etc/opt/microfocus/zenworks/patch-management
This directory primarily contains the following configuration files:
application.properties: This file contains the configuration for PLR processing and other important patch activities.
batch.properties: This file contains the Spring Batch configuration.
log4j2.xml: This file contains all the logging-related configuration.
patch-c3p0.properties: This file contains the connection pooling configuration used for persistence.
patch-ehcache.xml: This file contains the caching configuration used to improve performance.
patch-hibernate-configuration.properties: This file contains the Hibernate configuration used by the microservice.
The following sections explain all the files in detail:
When agents check in for the very first time, they send a large amount of metadata; once that metadata is processed, the number of operations drops. Even with a higher load, we recommend using the following default configuration:
# ThreadPoolExecutor configuration for metadata processing job
patch.metadata.batchjob.threadpool.maxPoolSize=1
patch.metadata.batchjob.threadpool.corePoolSize=1
patch.metadata.batchjob.threadpool.queueCapacity=20

# Poller configuration for metadata processing
poller.metadata.frequency=40000
poller.metadata.maxMessagesPerPoll=100
poller.metadata.wait.beforeRelease=100

# Max no. of signatures that should be inserted/updated in one job
max.metadata.per.iteration=5000
By default, only one thread processes metadata for all the PLR files. This avoids unnecessary locks, and a ZooKeeper lock mechanism at the Primary Server level prevents database lock issues. If metadata processing needs to be slowed down, reduce poller.metadata.maxMessagesPerPoll so that fewer files are picked up per poll. You can also reduce the max.metadata.per.iteration value to decrease the processing speed. Increasing the poller.metadata.frequency value (the polling interval in milliseconds) further reduces how often the patchlink folder under the collection directory is polled.
NOTE: Do not increase both maxMessagesPerPoll and frequency at the same time, as this might not yield optimal results. In general, the above configuration is sufficient.
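For example, a zone that needs to throttle metadata processing could lower the poll size and the per-iteration limit. The values below are illustrative assumptions only and should be validated in your environment:
poller.metadata.maxMessagesPerPoll=50
max.metadata.per.iteration=2500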
Once the metadata is processed, the PLR files are moved to a status folder under patchlink for further processing of patch and CVE statuses.
The following are the default configurations:
# ThreadPoolExecutor configuration for status processing job
patch.status.batchjob.threadpool.maxPoolSize=10
patch.status.batchjob.threadpool.corePoolSize=5
patch.status.batchjob.threadpool.queueCapacity=20

# Max no. of PLR files processed in 1 job execution
patch.status.batchjob.statefile.batchsize=10

# Patch state file poller spec configuration
# frequency - fixed rate for PeriodicTrigger to poll for files
# maxMessagesPerPoll - max no. of messages (PLR files) to receive for each poll
# wait.beforeRelease - group timeout for aggregatorSpec

# Poller configuration for status processing
poller.status.frequency=60000
poller.status.maxMessagesPerPoll=100
poller.status.wait.beforeRelease=100
However, if the zone has fewer devices, you can tune these settings to distribute the load accordingly. For example, for a zone using PostgreSQL with 5000 clients, the following configuration can be used:
patch.status.batchjob.threadpool.maxPoolSize=2 //Only two threads will process status in parallel
patch.status.batchjob.threadpool.corePoolSize=1
patch.status.batchjob.threadpool.queueCapacity=4

patch.status.batchjob.statefile.batchsize=10

poller.status.frequency=90000 //Instead of polling every 1 minute, we poll every 1.5 minutes
poller.status.maxMessagesPerPoll=20 //For each poll, we pick up 20 files instead of 100
poller.status.wait.beforeRelease=100
This configuration can process up to 20 files per 90000 ms poll (roughly 13 per minute), which works out to about 800 PLR files processed for status in one hour, provided all 20 files are processed within the 90-second interval. This rate can be decreased or increased based on the distribution.
We do not recommend changing anything in this file. However, if you increase the status maxPoolSize in application.properties, make sure to increase the following parameter proportionally:
batch.datasource.max-pool-size=25
Spring Batch, which is used for PLR processing, relies on an embedded H2 database internally. If the number of status processing threads is increased, we have observed that the H2 connections also need to be increased; otherwise, files will be skipped.
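As an illustration only, if the status maxPoolSize were doubled from 10 to 20 in application.properties, the batch data source pool could be scaled in the same proportion. The exact ratio is an assumption and should be validated in your environment:
# application.properties
patch.status.batchjob.threadpool.maxPoolSize=20
# batch.properties
batch.datasource.max-pool-size=50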
We use the following configuration in log4j2.xml for the appenders' log levels:
<Properties>
    <Property name="LOG_PATTERN">[%-5p] [%d{yyyy-MM-dd HH:mm:ss}] [%t] [%pid] [Patch-Management] [%T] [%c{1}] [%m]%n</Property>
    <Property name="APP_LOG_ROOT">${sys:zenworks.log.directory}/patch-management</Property>
    <Property name="PATCH_LOG_LEVEL">DEBUG</Property>
    <Property name="HIBERNATE_LOG_LEVEL">WARN</Property>
    <Property name="SPRING_LOG_LEVEL">DEBUG</Property>
    <Property name="C3P0_LOG_LEVEL">WARN</Property>
    <Property name="TOMCAT_LOG_LEVEL">WARN</Property>
</Properties>
Increase the log verbosity only when needed; setting the DEBUG level increases verbosity and produces more logging, which also impacts I/O and might slow down processing.
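For example, if the default DEBUG levels produce too much output in a stable zone, the patch and Spring levels in log4j2.xml could be lowered. This is an illustrative change, not a recommended default:
<Property name="PATCH_LOG_LEVEL">INFO</Property>
<Property name="SPRING_LOG_LEVEL">INFO</Property>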
This file contains settings that override the Hibernate configuration. Changing it is not recommended unless you need to scale up processing significantly, and even with higher processing requirements the default values are adequate.
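If an override were ever required, a standard Hibernate property could be placed in this file, for example the JDBC batch size. The value shown is illustrative only; as noted above, the defaults are adequate:
hibernate.jdbc.batch_size=30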
This file contains the caching configuration and can be used to improve performance.
The following caches are configured:
cvestatusps-cleanup-days-cache
device-repository-cache
cveid-within-ndays-cache
all-patch-cve-details
patch-signature-info-for-status-processing
patchlist-service-cache
existing-patch-lastmodifed-info
zone-guid-cache //Used while fetching opaque data entries.
patch-signature-repository-cache //Used for custom patches while deserializing PLR.
system-setting-repository-cache //Used to cache system settings by name & object ID.
custompatch-requirement-repository-cache //used in the construction of the DAU bundle.
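If a particular cache needs tuning, the corresponding entry in patch-ehcache.xml can be adjusted. The snippet below is a minimal sketch that assumes the file follows the standard Ehcache cache-element schema; the size and TTL values are illustrative assumptions, not recommended settings:
<cache name="device-repository-cache"
       maxEntriesLocalHeap="10000"
       timeToLiveSeconds="600"/>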
Prefetching content gives you control over when content is downloaded from the web to the zone. During prefetch, the content is downloaded to at least one of the OCM servers. Patch remediation on agents is then relatively faster, because agents download the content from Primary Servers and not from the web.
You can also choose to precache the content on a few selected content servers. This way, the content is already available on the content servers, and patch remediation is faster than when the content is neither prefetched nor precached. Prefetching and precaching are recommended for larger patches.
There are several options for when to prefetch the content: the manual Pre-fetch Patch Content action, patch policy rebuild, and patch remediation deployment. Prefetching the content during patch policy rebuild is recommended, as it speeds up the content download when the patches are remediated on the agent side.
It is recommended that you enable only the required languages, as each enabled language consumes additional disk space.
Whitelisting of external URLs: https://forums.ivanti.com/s/article/URL-exception-list-for-Ivanti-Security-Controls?language=en_US
Setting a config value
On managed devices, tuning parameters or configuring settings can be done in either of the following ways:
Setting a system variable that applies to the device. The variable can be set at the device level (overriding the zone-level variable) or at the zone level.
Creating a registry key on the device under HKLM/SOFTWARE/Novell/ZCM/ with the required key and value pair, as shown in the example below.
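For example, the scan.software.installers setting described in the Scan section below could be applied through the registry by creating a key/value pair such as the following. The exact value type expected by the agent is an assumption; verify it before applying the change in production:
HKLM/SOFTWARE/Novell/ZCM/
    scan.software.installers = true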
Scan
During scanning, software installers are not scanned by default, because they would increase the number of patches; hence, they are excluded. However, if you want to include software installers so that applications can be installed through ZENworks Patch Management, enable them by setting the following configuration value:
scan.software.installers - true
For metadata processing, superseded patches older than 2 years are filtered out. This setting can be changed by tuning the following parameter:
superseded-years=2
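For example, to filter out only superseded patches older than 5 years instead of 2, the parameter could be raised as shown below; the value is illustrative only:
superseded-years=5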
The remaining settings work best with the default configuration; no other parameters need to be tuned.
Deployment
A patch can be deployed using a patch policy or a remediation bundle. If certain patches should not be installed during remediation, either of the following actions can be performed:
Create a config key through a system variable or the registry, with ExcludedPatchesList as the key and the patch IDs separated by spaces as the value (for example, "ExcludedPatchesList", "patch1 patch2 patch3").
Create a registry key under HKLM/SOFTWARE/Novell/ZCM/ExcludedPatches/ with the patch ID as the key and any non-null value as the value, as shown in the example below.
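For example, to exclude two patches through the registry, entries such as the following could be created. The patch IDs are placeholders, and the value 1 is just an illustrative non-null value:
HKLM/SOFTWARE/Novell/ZCM/ExcludedPatches/
    patch1 = 1
    patch2 = 1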
It is recommended to spread out the bundles (remediation or patch policy) so that remediations are not triggered in parallel. This reduces cases where msiexec waits or fails because another instance is already running. Hence, schedule the remediation rather than remediating during refresh, and even in schedule-based deployments, schedule the bundles to run in sequence rather than in parallel.
For patches that require a reboot, it is always recommended to reboot the managed device after remediation, as a pending reboot can affect the remediation of subsequent patches.
Download
Content can be downloaded during both scanning and remediation. During scanning, the patch catalog content is downloaded on demand, and during remediation the actual patch content is downloaded. The patch catalog is relatively small (approximately 20 MB), whereas patch content can be a very large file. Hence, it is recommended to distribute the content before performing the remediation; this can be done by setting a distribution schedule against the patch policy or remediation bundle. While downloading content, similar to the ZENworks Agent, busy retries are handled with an incremental sleep. By default, retries are handled when the server returns 503 (BUSY).
The following are the default settings:
max-busy-retries: 10 //Max retries per server
max-sleep-between-retries: 2 //This will sleep for 2 seconds, incremented after all URLs are exhausted
The default settings are sufficient for downloading content. If the server is busy and returns 503, you can tune the above parameters.
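For example, in a zone where the server frequently returns 503 during peak hours, the retry budget could be increased as shown below; these values are illustrative assumptions, not recommendations:
max-busy-retries: 20
max-sleep-between-retries: 5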
Cleanup
Content under the ZPM directory is cleaned up when it is older than 7 days. For a more aggressive cleanup, you can configure it using the following setting:
zpm-log-retain-days=7 //Content older than 7 days will be cleaned up.
This parameter can be tuned as needed.
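For example, a more aggressive cleanup could shorten the retention period; the value below is illustrative only:
zpm-log-retain-days=3 //Content older than 3 days will be cleaned up.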
By default, the results folder is not included in the cleanup, to aid debugging. However, this can be overridden (not recommended) using the following setting:
delete-results-folder=true //Includes the results folder in the cleanup.