We are working on installing the Feature 5 PTF for SYSVIEW.

About the logstream changes: in case of backout, does the logstream need to be deleted and redefined, or will the prior release, R14.1, just read past or over the logstream data created while it was under Release 15.0?

Also, about multi-block logstream processing, can you briefly explain exactly what that means and how it is different from the prior logstream setups in, say, R14.1?

What types of performance improvements should we expect to see, or are these simply functionality improvements? Will this provide any drastic performance improvements for extremely large volumes of data, specifically the CICS Transaction Detail logstream?

Please contact me directly if you need specific details about our site.
The log stream does not need to be deleted and redefined. There are, however, a few points that should be noted.
Can you please answer my one outstanding question about STG_DATACLAS, above? #SYSVIEW #logstream
We are running some tests now in a sandbox before setting this up in production. Another question came up while looking at this.
Most of what I read refers to the setting for "LS_DATACLAS".
Should we also apply the dataclas change to "STG_DATACLAS", or is that not needed?
Thanks for the explanation. Not the best scenario in case we need to back off, having to delete monitoring data, but at least it does appear to be limited in scope.

As you know, our CICS transaction detail is enormous and only houses two to three days of production data for that reason, where a best-case scenario would be three to five days, and preferably offloadable as you can do with a REPRO on a KSDS file. Still, we'll see where this leaves us.

In your write-up, it seems that the larger MAXBUFSIZE is more of a requirement than an option. Is that correct? We are currently defined at the smaller size from our original installs in 2013, so we would need to change that.
Apologies Peter. There was no notification for your response.
LS_DATACLAS should be changed to a data class with a CISIZE of 24K, as discussed previously.
STG_DATACLAS should NOT be changed to CISIZE 24K. In fact, you will get errors if you do. The staging data sets must have a CISIZE of 4K (the default), or the log stream will fail to allocate when SYSVIEW starts. This is an IBM system logger requirement.
I used the word "required" too strongly there. Neither the CI Size change nor the MAXBUFSIZE change is strictly required, but we strongly recommend them because they help SYSVIEW read and write much more quickly. SYSVIEW will still write multi-record blocks with PTF5 to a CI Size 4K/MAXBUFSIZE 32K log stream with no performance detriment compared to your 14.1 install, but a CI Size 24K/MAXBUFSIZE 64K log stream is more efficient/faster in comparison.
If there is something else I need to do to trigger a notification, please let me know.
It’s hard enough to find it every time, but once I got in there, I looked and could not find a setting to do that.
Peter T. Brown
We were looking at some of this and noticed that the dataclas settings being used are blank in ISMF. A LISTCAT on the logstream shows the CI-SIZE as 4K. Do you know if or where we can check to determine if this is some type of default? Can CI-SIZE be specified via the logstream definition, or is this purely a function of DataClass? Some of our LS_SIZE and STG_SIZE values are well over 4K, actually approaching 4GB. I am taking it that CI-SIZE is different?
I don't believe there is anything for you to do on your end to trigger a notification. On our end, we need to "follow" the associated Communities area or the specific thread.
I believe Jason DOES. That’s why this was odd.
If a default is not specified for CI-SIZE in ISMF, you will see a blank; however, the default is always 4K, as you observed with LISTCAT.
The default CI-SIZE of 4K is documented in a few different places in the IBM documentation, and it also appears in the help for the "CISIZE DATA" column in ISMF.
CI-SIZE cannot be specified via the log stream definition. It must be specified via an SMS data class definition whose CI-SIZE is set to the desired size.
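If it helps, one way to see the CI size the offload data sets actually received is an IDCAMS LISTCAT against them. This is just a sketch: IXGLOGR is the system logger's default high-level qualifier for offload data sets, but substitute whatever HLQ/EHLQ your log stream definitions use.

```
//LISTC    EXEC PGM=IDCAMS
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
  LISTCAT LEVEL(IXGLOGR) ALL
/*
```

The CISIZE appears in the DATA component attributes of the listing; 4096 indicates the default is in effect.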
LS_SIZE is the size of an offload data set. For most log streams, there are a maximum of 168 of these.
STG_SIZE is the size of the in-memory (and on-disk in many cases) staging area logger uses to buffer data before it offloads the data to an aforementioned offload data set.
CI-SIZE has much less to do with the log stream and more to do with how the operating system asks for data from disk: it is basically the amount of data the OS will ask the disk for at a time. In other words, we are recommending that the CI-SIZE be changed from 4K to 24K to reduce the number of times the system logger needs to go to disk and ask for data. For example, reading a 64K block of log data takes sixteen I/Os at a 4K CI-SIZE but only three at 24K. This is important for improving the rate of reads from the log stream. It has no effect on how much data the log stream can store; it only affects how the log stream moves data around, in this case more efficiently.
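To make the distinction concrete, here is a sketch of where each of these knobs lives on a DASD-only log stream definition in IXCMIAPU. The stream name, data class names, and sizes are illustrative only, not recommendations for your site; for a CF-structure log stream, MAXBUFSIZE is specified on the structure definition instead.

```
  DATA TYPE(LOGR) REPORT(NO)
  DEFINE LOGSTREAM NAME(SYSVIEW.CICS.TRANLOG)
         DASDONLY(YES)
         MAXBUFSIZE(65532)
         STG_SIZE(9000)
         LS_SIZE(16384)
         LS_DATACLAS(CISZ24K)
         STG_DATACLAS(CISZ4K)
```

LS_SIZE and STG_SIZE are counts of 4K blocks (so LS_SIZE(16384) here would be a 64MB offload data set), while the CI size rides along on whatever SMS data class LS_DATACLAS names. That is why the two can look unrelated in size.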
We probably need to make the dataclas change separately. Can this be done via an override, or does the logstream need to be deleted and rebuilt? If a rebuild is not required, will the data with the old 4K CI-Size still be accessible, since the newer data will be at the new size? While our largest logstreams regularly rebuild after about 3 days due to the amount of activity, smaller ones, such as the Auditlog, might sit for a year or two before old data is purged.
The log stream can be updated dynamically with an UPDATE command in the IXCMIAPU utility, detailed here:
There is no DELETE/DEFINE required.
After you update the log stream's LS_DATACLAS() definition, it will take effect the next time your log stream needs to allocate another offload data set. For your CICS transaction log stream, this might occur within hours or minutes. For your other low-volume log streams, it could take some time, as you suggested. (Also note, all connections to the log stream need to be broken in order for the UPDATE to take effect. You can verify this from the LGCONN command. You will need to stop SYSVIEW for this to occur, and ensure no TSO users are on a log stream command as well.)
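A sketch of the dynamic change, assuming a hypothetical stream name and data class (substitute your own, and run it only after all connections to the stream are broken):

```
//LOGRUPD  EXEC PGM=IXCMIAPU
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
  DATA TYPE(LOGR) REPORT(NO)
  UPDATE LOGSTREAM NAME(SYSVIEW.CICS.TRANLOG)
         LS_DATACLAS(CISZ24K)
/*
```

The change is cataloged immediately but only materializes when the next offload data set is allocated.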
The old 4K CI-SIZE data is compatible with the 24K CI-SIZE data. The CI-SIZE is simply how much data is taken from disk at one time; it does not affect how the end-point application sees the data, nor does it affect the system logger.