Datacom


 Potentially obsolete application code

Steve Roy posted May 17, 2024 08:36 AM

In some of our older applications, I find code that is called "force delete."  Today I finally saw a comment explaining the reason for it.  I assume that the problem described is something ancient and no longer the case.

Could anyone tell me how old this is, from what release timeframe?

If it actually was a problem, it was probably fixed a couple decades ago.  Just a guess.

******************************************************************
* FORCE-DELETE(DXX) - WHEN THE DELETE OCCURS, ALL POINTERS IN OUR*
* DXX BLOCK(S) ARE DELETED.  HOWEVER, DATACOM/DB DOES NOT COMMIT *
* THAT DXX BLOCK UNTIL AN ATTEMPT IS MADE TO READ THE DXX BLOCK  *
* FROM WHICH THE POINTERS WERE DELETED.  BY DOING A READ FOR     *
* UPDATE WILL CAUSE DATACOM/DB TO RECLAIM THE INDEX SPACE.       *
******************************************************************

Broadcom Employee Kenneth Eaton

Steve

To support rollback processing, Datacom needs to keep the low-level index blocks available for that purpose.  After a commit or a close (which acts like a commit), the DXX blocks are available for reuse.  Reuse of the DXX blocks will occur, but what percentage at what time is quite complex.  Datacom does not spend resources cleaning out the DXX pointers after the fact, but as stated, if the block or an entry can be reused, we will do that.

For sites processing what are called mass deletes, or purges of many rows, we built DBUTLTY DEFRAG, which is meant to run after a mass-delete or purge process.  That will clean up the blocks of interest.


Datacom chooses not to add overhead to the deletion of the rows by cleaning up the DXX blocks as part of any transaction boundary.  That is as true today as it was yesterday.


As part of the DBA function, DEFRAGs are like REORGs: they run on demand, when needed, as determined by the DBA.  If you are running DEFRAGs as part of your normal operations, then the comment above carries very little value.

regards

Ken Eaton

Broadcom Employee Kevin Shuma

Roy,

The overall statement is correct.  When all the rows that have index entries in an IXX or DXX block are deleted, the block cannot be freed (back to the free pool) because we need to make sure the task that did the deletes has committed them.  If it fails for some reason, the rows are reestablished and we need the IXX or DXX block to still be there.  When the commit happens (possibly hours later), we would have to scan the index looking for every block that needs to be deleted.  (The reason your comments only talk about DXX blocks is that they are much more likely to have all of their index entries deleted.)

So instead of wasting all that CPU scanning the index, we allow the next task after the commit that touches that index block (typically a sequential read) to trigger the return of the IXX/DXX block to the free pool.  The reason for the UPDAT command is that MUF must have the base in UPDAT mode to alter the index, so a READ for UPDATE guarantees it is open in MUF for update.

For most databases this is efficient enough to keep the number of "empty but not freed" index blocks to a minimum.  However, some sites may have a database table that is only read by exact key and has a periodic DELETE process that deletes large numbers of rows.  Since an exact-key read only uses index blocks where entries exist, the "empty but not freed" blocks never get touched, so they lie dormant, taking up space in the index.
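The behavior Kevin describes can be sketched with a toy model.  This is plain Python, not Datacom internals; the block layout, the "empty but not freed" state, and the reclaim-on-touch trigger are simplified assumptions for illustration only:

```python
# Toy model of index blocks holding key entries.  After a mass delete,
# blocks whose entries are all gone stay allocated ("empty but not
# freed") until some access actually touches them.
class ToyIndex:
    def __init__(self, blocks):
        self.blocks = [list(b) for b in blocks]
        self.free_pool = 0  # count of blocks returned to the free pool

    def delete_keys(self, keys):
        # Delete entries but deliberately leave empty blocks in place,
        # mirroring "no cleanup at the transaction boundary".
        keys = set(keys)
        for b in self.blocks:
            b[:] = [k for k in b if k not in keys]

    def read_exact(self, key):
        # An exact-key read only visits blocks that still hold entries,
        # so empty blocks are never touched and never reclaimed.
        for b in self.blocks:
            if key in b:
                return key
        return None

    def read_sequential(self):
        # A sequential scan touches every block; touching an empty
        # (committed) block triggers its return to the free pool.
        kept, entries = [], []
        for b in self.blocks:
            if b:
                entries.extend(b)
                kept.append(b)
            else:
                self.free_pool += 1  # reclaim the empty block
        self.blocks = kept
        return entries

idx = ToyIndex([[1, 2], [3, 4], [5, 6]])
idx.delete_keys([3, 4])           # middle block now empty but not freed
assert idx.read_exact(5) == 5     # exact reads never touch the empty block
assert idx.free_pool == 0         # ... so it stays dormant
idx.read_sequential()             # a scan touches it and frees it
assert idx.free_pool == 1
```

The two asserts in the middle are the whole point: under an exact-key-only workload the empty block is simply never visited, which is why such tables accumulate dead index space until a sequential touch (or a DEFRAG) comes along.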

The easiest way to recognize this is an index (IXX) that is 90% in use before an offline REORG and 10% in use after the REORG.

We recognized this as a possible issue back in Release 11, when we delivered an early version of a tool that in R12 became DBUTLTY DEFRAG.

DEFRAG will free all the "empty but not freed" DXX blocks.  It also has the added value of compressing two (or more) lightly populated DXX blocks into one.  This keeps the number of in-use DXX blocks to a minimum and improves performance by keeping more index entries available in memory (buffers), avoiding I/Os.
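A minimal sketch of that compression idea, continuing the toy model above (the helper, its block capacity, and the packing order are assumptions for illustration, not the real utility's algorithm):

```python
def toy_defrag(blocks, capacity=4):
    """Drop empty blocks and pack the surviving entries into as few
    blocks of size `capacity` as possible, preserving order."""
    entries = [k for b in blocks for k in b]
    packed = [entries[i:i + capacity] for i in range(0, len(entries), capacity)]
    freed = len(blocks) - len(packed)  # blocks returned to the free pool
    return packed, freed

# Five blocks, two empty and three lightly populated ...
blocks = [[1], [], [2, 3], [], [4]]
packed, freed = toy_defrag(blocks)
assert packed == [[1, 2, 3, 4]]   # ... collapse into one full block
assert freed == 4
```

Fewer, fuller blocks means the same entries fit in fewer buffer pages, which is the performance benefit Kevin mentions.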

If you are regularly running DEFRAG on the tables being hit by the READ NEXT UPDATE program you mentioned, then that program is no longer needed.  If you are not using DEFRAG, I would consider replacing the READ NEXT UPDATE program with a DBUTLTY DEFRAG.  DEFRAG is an online utility with very light CPU usage, and it will perform all the work the READ NEXT UPDATE is doing (plus more) for less CPU and I/O.

In addition, DEFRAG reports the different key values and how much has been freed and compressed.  If you see that cycles of DEFRAGs are regularly freeing up 10-20%, I might consider running it more frequently.  If you see that DEFRAG only frees a few percent, you may decide to run it less frequently.

I know some shops that just schedule a rotating DEFRAG across all bases; once a rotation completes (a couple of days), they start another one.  I think that might be overkill, but if it makes things easier to manage and has minimal cost, it is an easy way to run the process.

Sorry for the long response, but hopefully details help ...

Kevin

Steve Roy

Ken and Kevin,

Thank you so much for your detailed explanations.

We implemented DEFRAG pretty quickly after it was made available, so we have the cleanup of index files pretty well controlled.

We perform the DEFRAG on some heavy add/delete tables daily, others weekly, monthly or quarterly, depending on need.

Steve