Our RH table on Oracle holds 12 million records and is rather fragmented.
Does anyone know of a way to defragment (a.k.a. reorg, a.k.a. compact) the RH table on a production system without requiring downtime? RH is linked via a primary key to RT, which has 221 million records, because every line of every database-homed report is a single record in RT (a peculiar design decision, this ...). Our DBA tells me he can't compact RH in online mode: it runs for three hours, then fails because it cannot get a lock on RT.
Later, we will need to compact/reorg RT as well, which we suspect will run into the same sort of issues, just with many more hours before failing.
How do you check the fragmentation level, and what threshold have you decided on?
As Carsten_Schmitz already explained, we are using a four-step method to get rid of the fragmentation in these large tables. As far as I know this is the only working method for such heavily used and large tables, since the typical one-liner does not provide any way to change the timeout parameter. The documentation says that redef_table() only passes more or less fixed arguments on to the individual procedures shown in step 4 below.
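(For context, the one-liner in question would presumably be a plain redef_table() call; the exact parameters here are my assumption:)

```sql
-- Hypothetical example of the single-call variant; it exposes no
-- dml_lock_timeout parameter, which is why it fails on busy tables.
exec dbms_redefinition.redef_table(uname => '<SCHEMANAME>', tname => 'RH');
```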
The only thing you additionally need is a temporary table that has the same column and LOB definitions as the original table.
1. Create the DDL of the table you are going to reorg:
select dbms_metadata.get_ddl('TABLE', 'RH', '<SCHEMANAME>') from dual;
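Since dbms_metadata.get_ddl() returns a CLOB, the SQL*Plus session may need a few settings so the output is not truncated; a sketch (the exact values are arbitrary):

```sql
-- SQL*Plus settings so the full CLOB is displayed / spooled
SET LONG 200000
SET LONGCHUNKSIZE 200000
SET PAGESIZE 0
SET LINESIZE 32767
SELECT dbms_metadata.get_ddl('TABLE', 'RH', '<SCHEMANAME>') FROM dual;
```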
2. Edit the DDL and remove all constraints and "not null" definitions. The edited DDL might look like this:
CREATE TABLE "<SCHEMANAME>"."RH" (
    "RH_CLIENT" NUMBER(*,0),
    "RH_AH_IDNR" NUMBER(*,0),
    "RH_TYPE" VARCHAR2(4 CHAR),
    "RH_TIMESTAMP1" DATE,
    "RH_TIMESTAMP4" DATE,
    "RH_DELETEFLAG" NUMBER(*,0),
    "RH_ARCHIVEFLAG" NUMBER(*,0),
    "RH_BLKSIZE" NUMBER(*,0),
    "RH_TITLE" VARCHAR2(200 CHAR),
    "RH_XMLFLAG" NUMBER(*,0),
    "RH_FILEFULLPATH" VARCHAR2(512 CHAR),
    "RH_FILEONAGENT" NUMBER(*,0),
    "RH_FILEINDB" NUMBER(*,0),
    "RH_USERSLOGIN" NUMBER(*,0)
) SEGMENT CREATION IMMEDIATE
PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255 NOCOMPRESS LOGGING
STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT
FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
TABLESPACE "<TABLESPACENAME>"
!!! Caution: When reorganizing tables containing LOBs, you need to take care with the LOB definition, because the generated DDL might say "STORE AS SECUREFILE" even when you want BASICFILE to be used !!!
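To see how the LOB columns are currently stored before editing the DDL, something like this should work (DBA_LOBS has a SECUREFILE column showing YES/NO):

```sql
-- Check whether the existing LOB segments are SECUREFILE or BASICFILE
SELECT table_name, column_name, securefile
  FROM dba_lobs
 WHERE owner = '<SCHEMANAME>' AND table_name = 'RH';
```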
3. Rename the table within your DDL and create it. It will be used as the interim table in step 4 (e.g. RH_GUIDE_$TEST).
4. Run the following PL/SQL block (and don't forget to change schema and tablespace names):
exec dbms_redefinition.can_redef_table (uname => '<SCHEMANAME>', tname => 'RH', options_flag => dbms_redefinition.cons_use_pk);
exec dbms_redefinition.start_redef_table (uname => '<SCHEMANAME>', orig_table => 'RH', int_table => 'RH_GUIDE_$TEST', options_flag => dbms_redefinition.cons_use_pk);
set serverout on size 1000000
declare
  nr_errors number;
begin
  nr_errors := 0;
  dbms_redefinition.copy_table_dependents (uname => '<SCHEMANAME>', orig_table => 'RH', int_table => 'RH_GUIDE_$TEST', num_errors => nr_errors);
  dbms_output.put_line ('ERRORS := ' || to_char (nr_errors));
end;
/
exec dbms_redefinition.sync_interim_table (uname => '<SCHEMANAME>', orig_table => 'RH', int_table => 'RH_GUIDE_$TEST');
-- Here comes the part where the regular dbms_redefinition.redef_table() exits with an error every time, so we set the timeout to a higher value:
exec dbms_redefinition.finish_redef_table (uname => '<SCHEMANAME>', orig_table => 'RH', int_table => 'RH_GUIDE_$TEST', dml_lock_timeout => 300);
--- Update: You can drop the interim table later on
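If one of the steps fails halfway, the redefinition can, as far as I know, be cleaned up with abort_redef_table() before retrying; and once it has succeeded, the interim table can simply be dropped:

```sql
-- Clean up a failed redefinition attempt (then retry from start_redef_table)
exec dbms_redefinition.abort_redef_table (uname => '<SCHEMANAME>', orig_table => 'RH', int_table => 'RH_GUIDE_$TEST');

-- After finish_redef_table has succeeded, the interim table can go
DROP TABLE "<SCHEMANAME>"."RH_GUIDE_$TEST";
```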
how do you check the fragmentation level
Well, it's the DBA who tells me. We have a dedicated DBA, and I honestly don't know exactly how he determined the fragmentation level, but he told me about the most fragmented tables, with RH/RT being towards the top. I suppose he did something like this. It would also make sense that RT is highly fragmented, because almost all of its content is rotated all the time, i.e. old reports are wiped by the UC4 reorganisation utility while new reports are constantly written.
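I don't know his exact query, but a common rough check (my assumption, not necessarily the DBA's actual method) compares the space allocated below the high-water mark with the space the rows should actually need, based on optimizer statistics:

```sql
-- Rough fragmentation estimate from statistics (gather stats first)
SELECT table_name,
       round(blocks * 8 / 1024) AS allocated_mb,              -- assumes 8 KB blocks
       round(num_rows * avg_row_len / 1024 / 1024) AS actual_mb
  FROM dba_tables
 WHERE owner = '<SCHEMANAME>' AND table_name IN ('RH', 'RT');
```

A large gap between allocated_mb and actual_mb suggests the table is a reorg candidate.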
In the meantime, he also resolved the problem by (this is what he told me) running the table reorganisation in four individual steps, which allowed him to get the required lock in the end.
Sorry I don't have any more details at this time.