Let me share an effective performance tuning tip (or bug workaround, depending on your point of view) that I haven't seen mentioned here or in the docs. Since the performance of any SQL query in Oracle is only as good as the optimizer statistics available to it, we were facing significant inefficiency in Data Load / Staging with the subsequent post-etl queries configured at the Data Source, which can lead to Data Loads running for many hours or even days while burning the DB server's CPU.
The daily purge job can optionally compute complete optimizer statistics (for the whole Oracle schema?) right after the staging tables are purged. The next staging job then inserts possibly millions of rows into those staging tables, and the post-etl queries that run on them immediately afterwards end up with badly suboptimal execution plans. The statistics were collected on empty tables!
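You can see the mismatch for yourself right after a staging load. This is only a sketch: the table name is taken from the example below, and I'm assuming you run it as the schema owner.

```sql
-- What the optimizer believes about the table (gathered post-purge):
SELECT table_name, num_rows, last_analyzed
  FROM user_tab_statistics
 WHERE table_name = 'SERVER_STAGING';
-- num_rows will show ~0 even though the table was just loaded.

-- What is actually in the table after the staging load:
SELECT COUNT(*) FROM server_staging;
```

If num_rows is near zero while the real count is in the millions, the post-etl queries are being planned against stale statistics.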
The fix: for all large staging tables touched by the post-etl queries, gather the statistics beforehand.
Inspect the data source definition XML file, for example:
C:\Program Files\CA\Capacity Command Center 2.9.4\ApacheTomcat\webapps\DM\WEB-INF\classes
In the <dataSource_parameters> <parameter> section,
add a CALL to DBMS_STATS.GATHER_TABLE_STATS to gather table, column and index statistics for every table affected by the post_etl queries. For the CA UIM Physical DA, for example, these can be the SERVER_STAGING and SERVER_METRICS_STAGING tables:
<param name="post_etl_query0" value="CALL DBMS_STATS.GATHER_TABLE_STATS('your_CCCUSER','SERVER_STAGING')" />
<param name="post_etl_query1" value="CALL DBMS_STATS.GATHER_TABLE_STATS('your_CCCUSER','SERVER_METRICS_STAGING')" />
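If you want to verify the effect before editing the XML, the same gather can be run by hand after a staging load. A sketch, assuming your_CCCUSER is the placeholder schema from the example above; the explicit CASCADE makes sure index statistics are refreshed too (recent Oracle versions default to AUTO_CASCADE, which usually does this anyway):

```sql
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname => 'your_CCCUSER',   -- placeholder schema owner
    tabname => 'SERVER_STAGING',
    cascade => TRUE);            -- also gather index statistics
END;
/
```

Then re-run one of the post-etl queries and compare the execution plan and elapsed time.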
and renumber the remaining post_etl_query parameters so the sequence stays contiguous and the stats calls run first.
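A sketch of how the renumbered block could look. The last two entries are placeholders standing in for whatever post_etl queries your data source already had; the point is that the two stats calls become query0/query1 and everything else shifts up:

```xml
<param name="post_etl_query0" value="CALL DBMS_STATS.GATHER_TABLE_STATS('your_CCCUSER','SERVER_STAGING')" />
<param name="post_etl_query1" value="CALL DBMS_STATS.GATHER_TABLE_STATS('your_CCCUSER','SERVER_METRICS_STAGING')" />
<param name="post_etl_query2" value="...your original post_etl_query0..." />
<param name="post_etl_query3" value="...your original post_etl_query1..." />
```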
Overall Data Load throughput (rows/s) can improve anywhere from 10x-100x up to 1000x.
Share your opinions and experience.