I don't understand compression, but I have one area that reports as compressed on my Analyzer report (PGM=USNDRVR). I am saving a whopping 2% with compression (or maybe that means 2% of my records are compressed?). I wonder, though, if compression might be a quick way to resolve a problem when I run out of space in an area. If compression would save me anything, maybe it would buy me time to prep an unload/reload and expand the area. I guess this assumes that compression is done at an area/file level.
In IDMS, all occurrences of a record type need to be compressed (unlike DB2); you cannot have a mix. To change a record type to compressed, you will need to do an unload/reload.
To avoid the unload/reload you have two options. You can run expand page, which still requires reading through the entire area to copy the records to the new page size. Or you can use extend space to add a spill-over at the end of the area; however, there need to be available page numbers after the highest page number for the area. The tradeoff is a potential performance hit.
You can run expand page only once on an area, but you can extend space indefinitely as long as there is an available range of page numbers.
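The extend-space prerequisite above boils down to simple page-number arithmetic: the spill-over pages must come from a range immediately above the area's current high page that no other area already occupies. A rough sketch of that check follows — this is plain Python illustration, not IDMS syntax, and the area names and page ranges are made up:

```python
# Illustrative sketch of the extend-space prerequisite: the new pages
# must be free page numbers immediately above the area's high page.
# Area names and page ranges here are hypothetical.

areas = {
    "CUST-AREA": (70001, 75000),
    "ORDR-AREA": (75001, 80000),
    "HIST-AREA": (90001, 95000),
}

def can_extend(area: str, extra_pages: int) -> bool:
    """True if extra_pages page numbers are free just above the area's high page."""
    low, high = areas[area]
    new_low, new_high = high + 1, high + extra_pages
    for name, (lo, hi) in areas.items():
        if name == area:
            continue
        # Standard interval-overlap test against every other area's range
        if new_low <= hi and lo <= new_high:
            return False
    return True

print(can_extend("CUST-AREA", 2000))  # False: ORDR-AREA starts at 75001
print(can_extend("HIST-AREA", 2000))  # True: pages above 95000 are free
```

The same arithmetic explains why extend space can be repeated only until the area runs into the next allocated page range.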
Tommy and Paul,
Compression does not require an unload/reload; it can be accomplished by running a restructure. I have done it this way many times to hold off the need to reorganize an area that had a lot of dead space in its records. If you have PressPack, you have the option of either using the built-in compression or using a custom DCT. Some offline testing will show you the best way to go.
A word to the wise: if you choose to go with PressPack and a custom DCT versus the built-in one, be sure to keep several copies of your DCT source in several places.
This is because, if you lose the source to your DCT, you have lost your data; there is nothing anyone can do to get it back. The reason is that custom DCTs are created by running the PressPack utilities to analyze your data and build the custom DCT from it.
Charles (Chuck) Hardee
Senior Systems Engineer/Database Administration
EAS Information Technology
Thermo Fisher Scientific
300 Industry Drive | Pittsburgh, PA 15275
Phone +1 (724) 517-2633 | Mobile +1 (412) 877-2809 | Fax +1 (412) 490-9230
Chuck.Hardee@ThermoFisher.com | www.thermofisher.com
John, you are correct. Is it too late to plead temporary insanity?
No sweat, just a small oversight. It happens. Chuck's comments were good also. As long as you don't lose the load module, the source shouldn't come into play after implementation of a custom DCT, but we keep the source in two PDS datasets as well as in Librarian, just in case. Also, the load module starts out in a secured DBA loadlib and is then copied to the runtime loadlibs as needed, so we have a spare copy of that too. One more thing worth mentioning: if the record(s) in question have keys buried somewhere deep in the record, you can't expect much compression bang, but testing will bear that out. I only go with a custom DCT when it makes a significant improvement over the built-in compression.
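The point about embedded keys is easy to demonstrate with any general-purpose compressor. A quick, hedged sketch — Python's zlib standing in here purely for illustration (it is not PressPack), and the record layout is invented — showing how high-entropy key bytes buried in a record cap the achievable ratio:

```python
import os
import zlib

def compressed_ratio(data: bytes) -> float:
    """Compressed size as a fraction of original size (smaller is better)."""
    return len(zlib.compress(data)) / len(data)

RECORD_LEN = 1000
filler = b"\x40" * RECORD_LEN  # repetitive filler (EBCDIC spaces) compresses well

# The same record, but with a 200-byte high-entropy "key" buried mid-record
key = os.urandom(200)
with_key = filler[:400] + key + filler[600:]

print(f"all filler: {compressed_ratio(filler):.2f}")
print(f"deep key  : {compressed_ratio(with_key):.2f}")
# The random key bytes are essentially incompressible, so the second
# ratio is dominated by those 200 bytes no matter how well the filler
# around them compresses.
```

The same reasoning applies to any dictionary-based scheme: bytes the dictionary cannot match pass through roughly as-is, which is why offline testing against real records is the only reliable guide.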