Hello Poornima,
What exact release version did you switch to, and which patch set (e.g. the JUL2019 Bundle Patch)? I am asking because a lot of performance issues have been reported in connection with the parameters "optimizer_adaptive_features" and "optimizer_dynamic_sampling". In later patch sets (patch 22652097), and from 12c R2 on, the "optimizer_adaptive_features" parameter has been deprecated and replaced by "optimizer_adaptive_plans" and "optimizer_adaptive_statistics" (Oracle Doc ID 2187449.1).
Why did they split these parameters? In most cases the adaptive statistics are responsible for the performance issues. Even with up-to-date statistics, the optimizer might gather additional statistics during the parse phase of a SQL command. Depending on the objects being queried, this statistics gathering can take several minutes!
How to find out if you are affected?
You can, for example, check your AWR reports (if you have the Diagnostics Pack licensed) or Statspack reports, as well as traces, and look for high parse times. If you are monitoring your systems with Nagios, Zabbix or another performance monitoring tool, you should see a (much) higher CPU load after the switch to 12c R1.
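If you have SQL access to the database, a quick first check is to look at the current parameter settings and at statements with a conspicuously high number of parse calls. A minimal sketch (which parameter names you see depends on your patch level, as described above):

```sql
-- Show the adaptive optimizer parameters currently in effect.
-- Unpatched 12.1 shows optimizer_adaptive_features; after patch
-- 22652097 (or on 12.2+) you see the two split parameters instead.
SELECT name, value, isdefault
FROM   v$parameter
WHERE  name LIKE 'optimizer_adaptive%'
   OR  name = 'optimizer_dynamic_sampling';

-- Statements with the highest parse activity (12c syntax)
SELECT sql_id, parse_calls, cpu_time, elapsed_time, sql_text
FROM   v$sql
ORDER  BY parse_calls DESC
FETCH FIRST 10 ROWS ONLY;
```

High parse counts combined with high CPU time on otherwise simple statements are a typical symptom of the adaptive statistics overhead.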
What possibilities do you have to fix this?
It depends on the patch set you have installed. If you have not applied patch 22652097 yet, you can simply set "optimizer_adaptive_features=FALSE" in your SPFILE and in memory. This parameter can be changed online, without restarting the database. If the patch has already been applied, you can try different values for the parameters "optimizer_adaptive_statistics" and "optimizer_adaptive_plans". In our case we had to set both to "FALSE" to get the best results. In addition we had to set "optimizer_dynamic_sampling=0", but I can't remember why (just keep that option in mind :-) ), sorry!
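For reference, the online changes described above can be sketched like this (run as a DBA user; SCOPE=BOTH changes both the SPFILE and the running instance, so no restart is needed):

```sql
-- Unpatched 12.1: disable all adaptive features with one parameter
ALTER SYSTEM SET optimizer_adaptive_features = FALSE SCOPE=BOTH;

-- With patch 22652097 applied (or on 12.2+): the split parameters
ALTER SYSTEM SET optimizer_adaptive_statistics = FALSE SCOPE=BOTH;
ALTER SYSTEM SET optimizer_adaptive_plans = FALSE SCOPE=BOTH;

-- What we additionally needed in our environment
ALTER SYSTEM SET optimizer_dynamic_sampling = 0 SCOPE=BOTH;
```

Test the effect on a representative workload before rolling this out; disabling adaptive plans can also change execution plans for the better or worse.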
(If you speak German, you can read the following blog post by Herrmann & Lenz, a German database consultancy. This is where I got most of the information:
https://blog.hl-services.de/2017/01/16/optimizer-adaptive-features-in-der-oracle-database-version-12-1/)
You can also find lots of community and support threads on support.oracle.com.
Have a nice day.
Greets
Andreas
Original Message:
Sent: 07-30-2019 02:07 PM
From: Kanneganti
Subject: Metadata migration
Hello Andreas,
Thanks much for all the details! We used parallel=32 for the export and imported the same into the new DB (12c R1). It is working fine now.
Original Message:
Sent: 07-30-2019 07:15 AM
From: Andreas Biesenbach
Subject: Metadata migration
Hello Poornima,
as Carsten already mentioned we tried both ways:
1. Upgrade of an existing database instance (11g to 12c)
2. Upgrade via expdp and impdp into an existing 12c instance (schema export, please see the example at the end of my message)
From the database perspective there is nothing additional to consider. Since the 12c database is compatible even with 10g clients, there is no need to upgrade the existing clients on the Automic Application Server (unless otherwise prescribed by CA/Broadcom).
So your approach should work without any additional steps. But for better support from the database vendor I would recommend upgrading the clients to 12c/18c as well.
BTW: If you are using an Oracle Enterprise Edition database, you can use the "parallel=n" clause to speed up the export and import. Setting the number of parallel workers to 4 worked best for us on a single 8-core processor.
expdp system directory=YOURDIRECTORY schemas=AESCHEMA dumpfile=DUMP.FILE logfile=LOG.FILE parallel=4
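The matching import could look like the sketch below (not from my original notes; directory and schema names are the placeholders from the export example). With parallel > 1 it is usually better to put the %U wildcard into the dump file name so each worker writes to its own file:

```shell
# Export: %U lets each of the 4 workers write its own dump file
expdp system directory=YOURDIRECTORY schemas=AESCHEMA dumpfile=DUMP%U.FILE logfile=EXP.LOG parallel=4

# Import the same file set into the new 12c database
impdp system directory=YOURDIRECTORY schemas=AESCHEMA dumpfile=DUMP%U.FILE logfile=IMP.LOG parallel=4
```

Data Pump prompts for the password when it is not given on the command line, which avoids leaving it in the shell history.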
If you have any further questions do not hesitate to contact me.
Greets
Andreas
Original Message:
Sent: 07-26-2019 06:38 PM
From: Kanneganti
Subject: Metadata migration
Hello,
Did anyone perform a metadata DB migration from Oracle 11g to Oracle 12c R1 for Automic? I am looking for documentation on this, and the vendor has confirmed that there is no documentation available at the moment.
Here is the approach we are trying to follow (we are currently on AE 12.2.0):
Shut down the services --> Export the schema in a consistent state --> Change the ini files with the new DB connection --> Import the schema into the new DB --> Start the services
Is there anything else we need to do in this process apart from these steps?