Not long ago one of our clients opened a Customer Support ticket related to the subject of this discussion. Once a response was provided and the case was closed, I considered it important to share it with the rest of the community.
As you are probably aware, the SSMAUDIT parameter controls whether changes to SSM resource tables and the directory table are logged to OPSLOG using the trace message OPS7914T. This message includes the SQL operation, the table name, the key, and the job name and program name of the address space that issued the SQL command. For SQL updates, the first monitored column name changed and the first 16 characters of the new value are also displayed.

In addition, there are five parameters named SSMMONITOREDCOL1 through SSMMONITOREDCOL5. They are used in conjunction with SSMAUDIT, when it is set to YES, for OPSLOG recording purposes. Each SSMMONITOREDCOLn parameter can specify an eighteen-character SQL column name. Columns specified in these parameters are actively monitored by System State Manager for all SSM-managed tables. User-defined exits may be written to react to the global events generated by changes to these columns. Changes to these columns are logged to the OPSLOG using the trace message OPS7914T if SSMAUDIT is set to YES.
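As a minimal sketch of how these parameters fit together (using the OPSPRM_Set syntax shown later in this thread; the column names below are examples only, not a recommendation), enabling the audit trail and monitoring two columns from an OPSSPAxx member might look like this:

```
/* Sketch: enable SSM auditing in an OPSSPAxx member.            */
/* SSMAUDIT turns on OPS7914T logging; the SSMMONITOREDCOLn      */
/* column names below are illustrative examples only.            */
T = OPSPRM_Set("SSMAUDIT","YES")
T = OPSPRM_Set("SSMMONITOREDCOL1","DESIRED_STATE")
T = OPSPRM_Set("SSMMONITOREDCOL2","CURRENT_STATE")
```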
Hey Cesar, can OPS7914T be captured by a rule and written to a dataset, or does the message have to be modified?
It is not recommended, as it could have side effects, but yes, it is possible to fire a rule on a CA OPS/MVS product-specific message.
You just need to change the severity level to O or J. Since message OPS7914 is modifiable, you can do this by adding the following code in the OPSSPAxx member:
T = OPSPRM_Set("OPS7914","J")
Then use a sample rule to fire upon it and do what you require.
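As a hedged sketch of what such a rule could look like (the rule structure follows standard AOF MSG rule conventions; the dataset name is a placeholder, and the inline ALLOC/EXECIO handling is an assumption — in a production rule you would likely offload the I/O to a server address space instead):

```
)MSG OPS7914T
)PROC
/* Sketch: capture the SSM audit message and append it to a      */
/* dataset. 'YOUR.SSM.AUDIT.LOG' is a placeholder name; it is    */
/* allocated MOD so each captured message is appended.           */
line.0 = 1
line.1 = Date('S') Time() msg.text
Address TSO
"ALLOC F(AUDIT) DA('YOUR.SSM.AUDIT.LOG') MOD REUSE"
"EXECIO 1 DISKW AUDIT (STEM line. FINIS"
"FREE F(AUDIT)"
return
```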
Hope this helps, Chris, and thanks for looking at our tech tips.
In case you are OK with not getting this message traffic into a dataset in real time via a MSG rule, you could instead print all occurrences of the message to a dataset after you profile your OPSLOG.
Here is a step-by-step procedure for how you could do that:
1. Go to OPSVIEW option 1 (OPSLOG Browse) and profile it by the message ID in question.
2. Turn on the Message Number and Message ID columns (DI MSGNO MSGID).
3. Type PP over the message number.
4. Find the ending time that you desire and type PP there as well.
5. Type LIST from OPSLOG and take note of the data set specified (the list data set).
6. Select option 3; this will write the information to the list data set.
Please note: If more lines are needed, you will need to increase the BROWSEPRINTLIM parameter via 4.1.1
Hope this gives you better ideas on how to obtain this info.
I had actually just thought about doing this and started implementing a solution. What I would like to do is monitor the changes being made to the tables and record them in an audit file. We have multiple staff members who can make changes to the tables, and I want a way to research who changed what and when, in case something gets messed up. I was able to change my message severity code, set up a rule which snagged the message, and even wrote a little REXX to process it and place it in the data set.
One issue I ran into, however, is that the audit messages do not cover every column. As you stated, I went and found the SSMMONITOREDCOLn parameters, but there are only five of them. I'd like to be able to see the changes for each and every column, and the tables have more than five columns that are not monitored natively. Is there any way to accomplish this without comparing tables all the time? I guess one option is to just pick the five columns that I feel are most important to monitor and go with them, but I would still like to monitor more columns, ideally the whole table under SSM control. In other words, if it is an SSM table and someone makes a change to it, I want to know about it.
P.S. I also noted that the SSMAUDIT message says SSMv2 even though I have SSMVERSION set to SSMv3. I guess that value is not inserted dynamically into the message text.
The best solution here is to set up SECurity rules that fire when updates (or all non-SELECT SQL actions) are made by a 'user' (OPSVIEW 2.6/4.11.x) to an SSM-managed table (or any RDF table, if desired). The 'audit file' could be another RDF table, such as STCTBL_HIST, into which the SEC rule simply inserts the update info (user ID, date, time, SQL command). Alternatively, you can generate a unique SSM audit message to the OPSLOG so that you can also filter on this message when debugging SSM-related issues, for example: OPSSMAUDIT - Userxx Updated ACTMODE to 'COLDSTART' on 22 Feb 17 12:33pm.

The tricky part of SQL security rules, specifically if triggered from OPSVIEW 2.6, is getting to the SQL variables used by the driving OPS/REXX table editor program (ASOTEAPI). The good news is that a few releases back, changes were added so that you can dump out the exact variable value rather than just ending up with the SQL variable name (:VAR001, :VAR002, etc.) used by ASOTEAPI. I have some sample code for dumping these variables and creating this type of running audit trail (or at least enough to get you going, to groom as you feel needed). Let me clean it up so it makes sense, and I'll forward it to you.
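A hedged sketch of the idea, assuming the STCTBL_HIST history table mentioned above; the sec.* event variable names and the history table's column layout are placeholders, not the product's actual names, so check the SEC rule documentation before basing anything on this:

```
)SEC SQL*
)PROC
/* Sketch only: sec.userid and sec.sqltext are PLACEHOLDER names  */
/* for the event variables exposed to SQL SEC rules; the history  */
/* table columns (AUDITKEY, USERID, SQLCMD) are assumptions too.  */
user    = sec.userid                      /* placeholder variable */
sqltext = sec.sqltext                     /* placeholder variable */
If Word(Translate(sqltext),1) \= 'SELECT' Then Do
  stamp = Date('S') Time()
  Address SQL
  "INSERT INTO STCTBL_HIST (AUDITKEY, USERID, SQLCMD)",
    "VALUES('"stamp"','"user"','"sqltext"')"
End
return 'ACCEPT'                     /* allow the original request */
```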
Ah, security rules, I didn't think of that. That's a good idea. I will have to play around with them and see what I can come up with. Thanks.
Thank you for the example code. I am working through it now. The one thing I am noticing is that the )SEC SQL* specification doesn't seem to catch updates made via Option 4.11.2. I did find that )SEC OPSSMTBL* does capture these events but it really doesn't seem to give enough information to create a useful audit record. Maybe I am just missing something. Any thoughts?
OK... did all preliminary 'testing' from the Table Editor. It looks like the driving SSM editor program (OP411002) does not use SQL host variables like the table editor, so I have to alter the sample code. Looking into it...
Emailing over an 'addition' to the SSM History SEC rule that was originally sent. This should address updates from 4.11.2 as well as the straight table editor.
I would be interested in getting a copy of your sample rules and code for this as well.
Would it be possible to get a copy of this code too? Sounds like just what we are looking for.
Hi Andy – Let me know if you receive this email and I’ll send you over the sample sec rule….
This is actually Matt here, but I think I did request a copy of this code at some point in the past. I am still interested in it, if it is possible to send.
I have uploaded the 'sample' SEC rule (text and binary) as discussed in this thread. As stated in the comments, the sample SECSQLH rule of your opsmvs.CCLXRULB must be enabled.
I would like to have a copy of this code too. Thank you!
Look here: Sample SEC rule in binary format to create audit trail of SSM table updates
or here: Sample SEC rule in text format to log history of SSM table updates
Marcel van Ek
Your rule is very similar to one I have created to keep a log (in an RDF table) of all tables that have been accessed and by what method (I don't record 'Browse' accesses from the RDF Table Editor) so that we can try to start eliminating old, disused tables.
However, one of my remote colleagues was very concerned about the potential overhead of this rule. What are your thoughts? I did add a 'FIRELIMIT' of 30, just in case.
Stephen, first of all, it is not my code; I simply linked to a post from member DG_BNYM, who shared the code.
Secondly, we did not activate this permanently, only periodically on our test environment.
We added some code to limit the registration to just a few tables (e.g., SSM tables, data info tables, etc.)
We skipped all SELECT requests
We added some data to recognize x-sys requests as well
But the main concern was the vast amount of data going into that RDF log table. It tends to get big, even though we discard all entries older than one year...
I'm not logging all SSM accesses like an audit log, just the table name, whether it was 'accessed' or 'updated', and by what method (Address SQL, RDF table editor, etc.), so a maximum of three entries per table.
Agreed, but if you're dealing with tables that have no primary key (which we have a number of), every update (insert/delete row) will cause a full re-creation of the table internally, generating SQL security events in large quantities, and thus noticeable performance degradation in your TSO session.
That's why we at least exclude non-keyed RDF tables here.
So you could hit performance problems if the tables you are monitoring don't have keys? My logging table has a key consisting of the accessed table concatenated with the access type and method. Do you think I'll be OK with a FIRELIMIT of 30?
Yes, especially if they are large tables. As I said, they're deleted and recreated completely if they are edited (via OPSVIEW 2.6), resulting in a SQL security event for each row in that table, all of which will cause updates to your logging table.
A FIRELIMIT will prevent that, but it will also prevent your logging from being complete. Having a key in your logging table like the one you described will also limit the log to containing only the last (keyed) event, so again it will not be complete.