Service Operations Insight

Tech Tip: HOTFIX for ‘Data Migration takes too long due to the large data volume in the SERVER_FACT table’

Mar 21, 2017 07:14 AM

Problem: The database migration takes a long time to complete because of the huge amount of data stored in the SERVER_FACT table. The queries currently used for migration are resource-intensive and inefficient.

Solution: The following manual update fixes the Load_Metrics package and enhances the LOAD_SERVER_METRICS procedure. After you deploy the updated package on your system, the migration process runs faster. Follow these steps:

  1. Open SQL Developer.
  2. Go to Package and open the Load_Metrics PACKAGE body.
  3. Go to the LOAD_SERVER_METRICS procedure.
  4. Comment out (disable) the following query by enclosing it entirely in comment markers. Note the extra opening comment marker (/*) after the optimizer hint; without it, the */ that closes the hint would terminate the enclosing comment early:

    /*
    SELECT   count(*) into dupCount
             from (
             select /*+  INDEX(a server_fact(GMT_METRIC_DATE)) use_hash(a b)*/  /* 1
             from server_fact a
             where a.gmt_METRIC_DATE BETWEEN
               (select min(gmt_metric_date) from server_fact_tmp)
               AND
               (select max(gmt_metric_date) from server_fact_tmp)
             and exists
             ( select 1
               from server_fact_tmp b
               where a.gmt_metric_date = b.gmt_metric_date
               and a.entity_id = b.entity_id
             )
             ) where rownum = 1;
    */

  5. Manually add or copy and paste the following query:

    SELECT   count(1) into dupCount
             from server_fact a
             where exists ( select 1 from server_fact_tmp b
                            where a.entity_id = b.entity_id
                              and a.metric_id = b.metric_id
                              and a.source_type = b.source_type
                              and a.gmt_metric_date = b.gmt_metric_date);

    To make the change easy to track, consider adding a clarifying comment above the new query.

  6. Comment out (disable) the following query by enclosing it entirely in comment markers, again keeping the extra opening comment marker (/*) after the optimizer hint:

    /*   delete /*+  INDEX(a server_fact(GMT_METRIC_DATE))  use_hash(a b) */  /*  from server_fact a
         where a.gmt_METRIC_DATE BETWEEN
           (select min(gmt_metric_date) from server_fact_tmp)
           AND
           (select max(gmt_metric_date) from server_fact_tmp)
         and exists
         ( select 1
           from server_fact_tmp b
           where a.entity_id = b.entity_id
             and a.gmt_metric_date = b.gmt_metric_date
             and a.metric_id = b.metric_id
             and a.source_type = b.source_type
         ); */

  7. Manually add or copy and paste the following query:

    delete from server_fact a
           where exists ( select 1 from server_fact_tmp b
                          where a.entity_id = b.entity_id
                            and a.gmt_metric_date = b.gmt_metric_date
                            and a.metric_id = b.metric_id
                            and a.source_type = b.source_type
                        );

    To make the change easy to track, consider adding a clarifying comment above the new query.

  8. Open the Compile tool and compile the updated package.

  9. Verify that the deployment was successful (a SQL sketch for this check follows this list):

  • Connect to SQL Developer.
  • Go to Package and open the Load_Metrics PACKAGE body.
  • Go to the LOAD_SERVER_METRICS procedure and check that the updated SQL queries are present.
  • Verify that Load_Metrics compiled without errors.
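
If you prefer to compile and check the package from a SQL prompt rather than through the SQL Developer UI, the following is a minimal sketch using the standard Oracle data dictionary; it assumes the package was created under the default, uppercase name LOAD_METRICS in the connected schema:

    -- Recompile only the package body (this hotfix does not change the specification).
    ALTER PACKAGE load_metrics COMPILE BODY;

    -- Confirm that the package and its body show VALID after the change.
    SELECT object_name, object_type, status
      FROM user_objects
     WHERE object_name = 'LOAD_METRICS';

    -- If the body shows INVALID, list the compilation errors.
    SELECT line, position, text
      FROM user_errors
     WHERE name = 'LOAD_METRICS'
       AND type = 'PACKAGE BODY';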

 

Written By:

   Pradeep Rai 

Comments

Oct 25, 2018 08:22 AM

Hi Tomas

 

SERVER_FACT_TMP is a temporary table used during migration - it should never grow large enough for the lack of indexing to cause performance issues.

 

Are you running Oracle SE or Oracle EE? I have seen issues with poor execution plans in Oracle SE with the temp tables (this was ENTITY_FACT though, not SERVER_FACT).

Oct 25, 2018 08:17 AM

SERVER_FACT_TMP has no indexes at all. Is that normal, a mistake, or an area for potential performance improvement?
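
(For reference, one way to confirm which indexes exist on the temp table is a quick data dictionary lookup; a minimal sketch, assuming the table sits in the schema you are connected as:)

   -- List any indexes defined on SERVER_FACT_TMP in the current schema.
   SELECT index_name, uniqueness, status
     FROM user_indexes
    WHERE table_name = 'SERVER_FACT_TMP';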

Mar 28, 2017 10:07 AM

Hi Guys

 

Could this fix also help with the migration issues I see when I enable disk collection metrics on the eHealth data sources?

 

Regards

David

Mar 24, 2017 08:31 AM

Thanks for your valuable comments!

The two queries provided in the hotfix have been tested in our development environment. The queries were consuming too many resources during the action:

‘Action “Load Fact” performed on the table “SERVER_FACT”. Status of action is “Delete Fact”’

The Server_Fact table in the development environment had around 33 million records. Before the optimization the migration process took around 3 hours; after the optimization it took less than 2 hours.

 

Thanks,

Pradeep.

Mar 24, 2017 03:27 AM

Sorry for the delayed response; I have responded.

 

Thanks,

Pradeep.

Mar 24, 2017 01:35 AM

I don't think it will help with the long-running analysis. There are many possible enhancements, but we can work on them according to the priorities decided by product management. For now we can add the request to the product backlog and proceed accordingly.

 

Thanks,

Pradeep Rai 

Mar 24, 2017 01:24 AM

Thanks for your comment.

Mar 24, 2017 01:20 AM

Hi Adrian,

 

We didn't find any such scenario. If you have found a scenario where this process takes a long time, please share it with us and we will optimize those queries.

 

Thanks,

Pradeep.

Mar 24, 2017 01:07 AM

Thanks for your input; I have updated the document.

 

Regards,

Pradeep.

Mar 23, 2017 10:53 AM

Hi David

 

On closer inspection, the code in the original article has been updated as per the suggestion (adding the extra opening comment after the hint), and it has been confirmed to work in my test lab.

 

Adrian

Mar 23, 2017 10:49 AM

One more thing.

 

Adrian commented twice on this post; so far I have seen no reply to his concerns.

 

David

Mar 23, 2017 10:48 AM

Wouldn't this HOTFIX be more meaningful if you provided an example of what you mean by "Long Running" migrations?

Also, isn't the amount of time saved by this improvement important to understanding the benefit of this fix?

 

If the time saved by this fix is only 5 seconds on a long-running job that takes 120 minutes, why should I install it?

David

Mar 23, 2017 10:42 AM

What about Rollups delayed by the long-running "Analyzing data tables" step in "Precalculations and Rollups"?

--- Server Fact is especially long running ---

 

When Rollups are started each night the LOG file entries do NOT capture the time required by ANALYZING DATA TABLES. 

This can require hours!   Why are there not records of how log analysis runs?

 

Should the Log file be updated PRIOR to Analysis and then updated again when Rollup actually starts?

 

Will this query change help the long running analysis? 

 

David
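
One way to see when the fact tables were last analyzed, independently of the rollup log file, is the LAST_ANALYZED column in the Oracle data dictionary; a minimal sketch, with the table names taken from this thread:

   -- Check when optimizer statistics were last gathered on the fact tables.
   SELECT table_name, num_rows, last_analyzed
     FROM user_tables
    WHERE table_name IN ('SERVER_FACT', 'SERVER_FACT_TMP');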

Mar 23, 2017 05:33 AM

Does this slow Migration of Server_Fact happen when the Server_Fact table is partitioned?

Yes, the migration process will be slow even if the Server_Fact table is partitioned. Partitioning mainly improves performance when a huge volume of data is involved. In the migration scenario, the huge volume of data in Server_Fact is only one of the reasons for slow migration, so partitioning alone might not improve performance. Beyond that, the weak performance of the query itself delays the completion of the migration process.

We identified the relevant parts of the procedure and provided an update for them.

 

Is this Slow Migration only an issue on Oracle databases where Partitioning has NOT been used?

Yes, this can be one of the reasons for slow migration on an Oracle database. For better performance the query itself needs to be efficient, and that is required in both scenarios, i.e. with or without partitioning.
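
For readers who want to check whether their own SERVER_FACT table is partitioned before judging how much of this applies to them, a minimal data dictionary sketch (assuming the table is in the connected schema):

   -- Is SERVER_FACT partitioned, and if so, into how many partitions?
   SELECT t.table_name,
          t.partitioned,
          (SELECT count(*)
             FROM user_tab_partitions p
            WHERE p.table_name = t.table_name) AS partition_count
     FROM user_tables t
    WHERE t.table_name = 'SERVER_FACT';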

Mar 21, 2017 12:04 PM

Does this slow Migration of Server_Fact happen when the Server_Fact table is Partitioned?

Is this Slow Migration only an issue on Oracle databases where Partitioning has NOT been used?

Thank you,

David

Mar 21, 2017 08:21 AM

An additional question: Some sites also load a lot of data into DISK_FACT and PROCESS_FACT - are there similar efficiencies to be gained in LOAD_DISK_METRICS and LOAD_PROCESS_METRICS?  I can see that the dedup queries in those two procedures have a different form - are they already optimized?

 

Thanks

Mar 21, 2017 08:14 AM

I can see how the new queries could be more efficient, so that looks like a good thing.  However, I question whether the ‘commenting out’ in steps 4 & 6 will work as written – note that the index hints will cause the comments to terminate early.

 

The following should work better:

4. Comment out (disable) the following query by entirely enclosing it in comment signs - add the extra opening comment after the hint:

/*
SELECT   count(*) into dupCount
         from (
         select /*+  INDEX(a server_fact(GMT_METRIC_DATE)) use_hash(a b)*/ /* 1
         from server_fact a
         where a.gmt_METRIC_DATE BETWEEN
           (select min(gmt_metric_date) from server_fact_tmp)
           AND
           (select max(gmt_metric_date) from server_fact_tmp)
         and exists
         ( select 1
           from server_fact_tmp b
           where a.gmt_metric_date = b.gmt_metric_date
           and a.entity_id = b.entity_id
          )
          ) where rownum = 1;
*/

6. Comment out (disable) the following query by entirely enclosing it in comment signs - add the extra opening comment after the hint:

/*   delete /*+  INDEX(a server_fact(GMT_METRIC_DATE))  use_hash(a b) */ /* from server_fact a
     where a.gmt_METRIC_DATE BETWEEN
       (select min(gmt_metric_date) from server_fact_tmp)
       AND
       (select max(gmt_metric_date) from server_fact_tmp)
     and exists
     ( select 1
       from server_fact_tmp b
       where a.entity_id = b.entity_id
         and a.gmt_metric_date = b.gmt_metric_date
         and a.metric_id = b.metric_id
         and a.source_type = b.source_type
     ); */
