With the publication of PTF RO72667, CA SMF Director 12.7 can now use the ARCHIVE directive of IFASMFDL for SMF logstream dumping. IFASMFDL’s ARCHIVE directive allows customers to delete SMF records from the SMF logstream after first copying them from the logstream to the output SMF file(s) named in the IFASMFDL directives. In the case of CA SMF Director, only one output file is named: the one on the SMFDOUT DD statement.
In our original SMF logstream dump process solution, the records were not deleted once the data was dumped, because this capability was not delivered in the initial implementation of IFASMFDL. IFASMFDL did not initially have this feature because it was expected that customers would set up the policies for their SMF logstreams with appropriate retention periods, so that the data could be copied from the logstreams on a regular basis. The System Logger would then delete the data when it expired, presumably after the data had been copied. With ARCHIVE, IFASMFDL deletes the data from the logstream once the copy operation has completed successfully.
With RO72667 applied, there is a new ARCHIVE control statement for the SMFDLS program (the program that initiates the logstream dump process); it tells CA SMF Director to have IFASMFDL use ARCHIVE when copying the SMF data. To use ARCHIVE, simply add an ARCHIVE statement to the SMFDLS control statements in SMFDLSIN in your logstream dump process. For example:
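A minimal sketch of what the SMFDLSIN input might look like: your SMFDLSIN will contain your existing SMFDLS control statements, with the new ARCHIVE statement added (the other statements shown here are hypothetical placeholders for whatever your current setup uses):

```jcl
//SMFDLSIN DD *
* Existing SMFDLS control statements go here, unchanged.
* The new statement below requests ARCHIVE processing:
  ARCHIVE
/*
```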
These control statements would tell CA SMF Director to invoke IFASMFDL with the ARCHIVE directive when reading the logstream, so that records are deleted once they have been dumped.
This seems like a simple change, but there are other changes and considerations as well due to the requirements for using ARCHIVE in IFASMFDL:
1. The SMFDOUT file in the SMF logstream dump process JCL must be an actual file that is written out, and all of the SMF records must be written to that file in addition to being written to the CA SMF Director History Files. The file can be allocated to DASD or tape, but it cannot be a DUMMY file (IFASMFDL detects that and aborts the run). If the file is on DASD, it needs to be large enough to accommodate all of the records that are copied, and unfortunately that amount won’t be known until the operation completes. If the dump process fails due to there being insufficient space on the output file, it can be started again, but all of the data from the first dump will be dumped again along with any new data that has since been added to the logstream.
2. When ARCHIVE is in use with IFASMFDL, the time range for the records is determined only by IFASMFDL: it copies and (if the copy is successful) deletes almost all of the available data in the logstream, leaving some data behind after setting a boundary at which it stops. For this reason, the ARCHIVE control statement in SMFDLS is mutually exclusive with the STARTDATE, STARTTIME, ENDDATE and ENDTIME control statements.
3. IFASMFDL’s implementation of ARCHIVE requires that all records be written out regardless of their originating system/LPAR. CA SMF Director’s data management architecture is based on managing the SMF data with the logstream data assigned to a single system. If you use multi-system logstreams for SMF data, then the ARCHIVE control statement cannot be used in the SMF logstream dump process: records from the systems not being dumped are not selected, which causes IFASMFDL to terminate, indicating that not all of the records are being copied to a file. IBM has recommended that SMF logstreams contain data from only one system, and also that ARCHIVE not be used with a multi-system SMF logstream.
4. Dumping with ARCHIVE incurs more overhead, and logstream dumps will likely run slower, for two reasons:
a. The requirement to write all of the SMF records to the output file.
b. The need for IFASMFDL to go back and remove all of the records from the logstream when the dump completes successfully.
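As an illustration of point 1, the SMFDOUT DD in the dump-process JCL might be allocated to DASD along these lines (the dataset name and space values are placeholders; size the file generously, since the amount of data to be copied is not known in advance):

```jcl
//SMFDOUT  DD DSN=YOUR.SMF.ARCHIVE.OUTPUT,
//            DISP=(NEW,CATLG,DELETE),
//            UNIT=SYSDA,
//            SPACE=(CYL,(500,500),RLSE)
//* Do not code DD DUMMY here: with ARCHIVE in effect,
//* IFASMFDL detects a dummy output file and aborts the run.
```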
The ARCHIVE solution is being provided for data centers that are constrained on DASD space and might not have room to store at least two full days of SMF data for all of their active systems, which is the maximum that would be needed with a logstream retention period of 1 day. If DASD is not so constrained, and two days of SMF data can be held, then we recommend that ARCHIVE not be used with CA SMF Director logstream dumping. Dumps run much faster without it, and the chances for errors in the dump process are reduced.
Thanks to the CA SMF Director technical team and management-types for providing this critical enhancement.
This enhanced LOGR / SMF logstream support with ARCHIVE specified significantly reduces the amount of SMF data that must be retained by LOGR due to the inherent RETPD(n) limitation of 1 day (that being current day plus yesterday, actually). At one site I support, that could be up to 2 TB of SMF data.
Now if we could only get IBM to own up to the poor judgment in not allowing an ARCHIVE request to have total control of the output generated, in this case with the SMF Director exit processing. Apparently, in their infinite wisdom, IBM decided that even though a DUMP request can have total control of SMF-record output handling, an ARCHIVE request must return control so that each SMF record is written out by the IBM-supplied code. That is why the //SMFDOUT DD is required, which we direct to a temporary virtual tape allocation.