We have also modified our internal Audit Sink Policy to send us alerts via email. This gives us a central policy for processing audit events and faster notification when services are failing.
Good afternoon. When it comes to auditing the system, we recommend auditing selectively to reduce both the overhead on the gateway and disk usage.
1) We have seen several different models deployed, depending on the systems available, including:
a) Local database, with the audit purge script deployed for regular clean-ups
Pros: Does not require additional external components and keeps the database self-contained
Cons: Disk space is finite, and the audit purge itself can cause slowdowns. (Note: We are tracking a development incident to review this script for performance impact.)
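As a sketch of what such a regular clean-up might do, here is a minimal Python version of a batched purge. The `audit_main` table name, `time` column, and retention settings are illustrative assumptions; the actual audit purge script and Gateway schema differ:

```python
import sqlite3
import time

def purge_old_audits(conn, retention_days=7, batch_size=500):
    """Delete audit records older than the retention window in small
    batches, so each delete transaction stays short and locks briefly.
    Uses a hypothetical audit_main(time, message) table for illustration."""
    cutoff = time.time() - retention_days * 86400
    total = 0
    while True:
        cur = conn.execute(
            "DELETE FROM audit_main WHERE rowid IN "
            "(SELECT rowid FROM audit_main WHERE time < ? LIMIT ?)",
            (cutoff, batch_size),
        )
        conn.commit()
        total += cur.rowcount
        if cur.rowcount < batch_size:  # nothing (or little) left to purge
            break
    return total
```

Batching is the key design choice here: deleting millions of rows in one statement is exactly the kind of long transaction that causes the slowdown noted above.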
b) Write to a local syslog, with a syslog forwarder pushing entries to a central syslog environment where triggers can fire based on log entries
Pros: Central monitoring and triggering capabilities along with longer retention periods
Cons: Requires additional third party components to implement and maintain.
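The triggering side of such a central syslog environment could look like the following minimal Python sketch. The alert patterns and the `notify` callback are illustrative assumptions, not part of any shipped forwarder:

```python
import re

# Hypothetical patterns worth alerting on in forwarded gateway syslog.
ALERT_PATTERNS = [
    re.compile(r"\bSEVERE\b"),
    re.compile(r"audit logging (has )?stopped", re.IGNORECASE),
]

def scan_syslog_lines(lines, notify):
    """Call notify(line) for every forwarded syslog entry that matches
    an alert pattern; returns the number of alerts raised."""
    alerts = 0
    for line in lines:
        if any(p.search(line) for p in ALERT_PATTERNS):
            notify(line)
            alerts += 1
    return alerts
```

In practice the `notify` callback would be an email, pager, or ticketing hook wired into the central syslog server.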
c) External database using the Audit Sink Policy
Pros: A centralized audit database lets multiple environments push to a single DB, which allows for longer retention periods
Cons: Requires additional components to implement and maintain.
d) JMS server using the Audit Sink Policy
Pros: Push any format to a JMS server using custom policy in the Audit Sink Policy as a one-way sync. The back-end system can then pull from the JMS queue into any type of holding environment.
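The one-way sync described above can be sketched as follows, using Python's in-process `queue.Queue` as a stand-in for a real JMS queue and JSON as one example of "any format"; the function names and record shape are hypothetical:

```python
import json
import queue

def push_audit_to_queue(q, record):
    """Audit Sink side: format the audit record (JSON here, standing in
    for 'any format') and push it one-way onto the queue."""
    q.put(json.dumps(record))

def drain_queue_to_store(q, store):
    """Back-end side: pull everything currently on the queue into any
    holding environment (a plain list stands in for it here)."""
    while True:
        try:
            store.append(json.loads(q.get_nowait()))
        except queue.Empty:
            return store
```

The producer never reads back from the queue, which is what makes the sync one-way: the gateway's only responsibility is to publish, and retention becomes the back-end system's problem.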
2) To control which audits are created:
a) Add an Audit assertion with the Record Audit Event level set to INFO at the end of the happy path through the policy, so that successful policy executions do not write audit records
b) Remove any extra auditing assertions that are not required before promoting the policy to production.
c) If you need debug output, include dedicated policy branches that can be toggled on, or use the Debug Tracing option for the service on the Service Properties window.
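Putting the ideas in section 2 together, here is a minimal Python sketch of selective auditing. The WARNING threshold, the level ordering, and the record shapes are assumptions made for illustration, not the Gateway's actual implementation:

```python
# Records below the audit threshold (WARNING in this sketch) are not
# persisted, so an INFO-level audit on the happy path keeps successful
# executions out of the database while failures are still recorded.
LEVELS = {"INFO": 0, "WARNING": 1, "SEVERE": 2}

def run_policy(succeeded, debug=False, threshold="WARNING"):
    persisted = []

    def audit(level, message):
        if LEVELS[level] >= LEVELS[threshold]:
            persisted.append((level, message))

    if debug:
        audit("WARNING", "debug: request details")  # debug-only branch
    if succeeded:
        audit("INFO", "policy completed")  # happy path: below threshold
    else:
        audit("WARNING", "policy failed")  # failures are persisted
    return persisted
```

The debug branch shows why point c) matters: a debug audit left enabled writes a record on every request, which is exactly the overhead this section is trying to avoid.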
3) To prevent the internal audit database from filling up and halting message processing:
a) Deploy both the audit purge script, to keep the number of audit records in the database down, and the manage_binlogs script, to ensure the replication logs on the hard disk are kept cleaned up.
b) Version 9.1 introduced a feature that allows the gateway to keep processing even if the database filespace fills, by setting the following:
By default, the CA API Gateway stops processing messages when the database reaches a certain threshold. Now, you can specify that the Gateway stop writing audit messages once the threshold is reached but continue message processing.
To enable the bypass, modify the audit.managementStrategy cluster property (see "Audit Cluster Properties" in the CA API Gateway 9.1 documentation).
Specify how the Gateway should respond when the database exceeds the threshold defined in the audit.archivershutdownthreshold cluster property:
STOP: The Gateway stops processing requests and terminates audit logging.
BYPASS: The Gateway continues processing requests but terminates audit logging. Internal Gateway logging continues, with a SEVERE-level message that audit logging has stopped.
Note: The value is case sensitive.
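The two values can be illustrated with a small Python sketch of the decision logic; this models only the documented semantics above, and is not Gateway code:

```python
def handle_request(strategy, db_full):
    """Return (processed, audited, internal_log) to illustrate the
    audit.managementStrategy behavior once the audit database has
    exceeded the shutdown threshold (db_full=True)."""
    if not db_full:
        return (True, True, None)
    if strategy == "STOP":
        # Gateway stops processing requests and terminates audit logging.
        return (False, False, "SEVERE: audit logging has stopped")
    if strategy == "BYPASS":
        # Requests keep flowing; only audit writes stop. Internal
        # logging continues with a SEVERE-level message.
        return (True, False, "SEVERE: audit logging has stopped")
    # The property value is case sensitive, so e.g. "bypass" is invalid.
    raise ValueError("use STOP or BYPASS (case sensitive)")
```

The sketch also makes the case-sensitivity note concrete: anything other than the exact strings STOP or BYPASS is rejected rather than silently treated as one of them.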
Director, CA Support