Automic Workload Automation


Metadata migration

  • 1.  Metadata migration

    Posted 07-26-2019 06:38 PM

    Did anyone perform a metadata DB migration from Oracle 11g to Oracle 12cR1 for Automic? I am looking for documentation on this, and the vendor confirmed that none is available at the moment.
    Here is the approach we are planning to follow. Currently, we are on AE 12.2.0:
    Shut down the services --> Export the schema in a consistent state --> Change the ini files to the new DB connection --> Import the schema into the new DB --> Start the services

    Is there anything else we need to do in the above process apart from these steps?
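    The five steps above could be sketched as a shell script. This is an illustration only, under assumptions: the ServiceManager CLI call (ucybsmcl), the service name, the ucsrv.ini connection entry, and all paths and credentials are placeholders for whatever your installation actually uses.

    ```shell
    # Sketch of the migration flow described above. All names are placeholders;
    # adjust paths, service names, and credentials for your environment.

    # 1. Stop the AE server processes (example: via the ServiceManager CLI)
    ./ucybsmcl -c STOP_PROCESS -h localhost:8871 -n AUTOMIC -s "AE WP1"

    # 2. Export the AE schema from the 11g instance in a consistent state
    expdp system directory=DP_DIR schemas=AESCHEMA dumpfile=ae.dmp logfile=ae_exp.log

    # 3. Point the AE server at the new database: edit the connect string
    #    in ucsrv.ini (ODBC section) to reference the 12c instance.

    # 4. Import the dump into the 12c instance
    impdp system directory=DP_DIR schemas=AESCHEMA dumpfile=ae.dmp logfile=ae_imp.log

    # 5. Start the AE server processes again
    ./ucybsmcl -c START_PROCESS -h localhost:8871 -n AUTOMIC -s "AE WP1"
    ```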

  • 2.  RE: Metadata migration

    Posted 07-29-2019 06:25 AM
    We went from 11g to 12c using the process you describe. Shut down the Automic server processes, dump the db, load the db, modify the connection string if needed, start the Automic server processes. We didn't have to do any additional steps, IIRC.


  • 3.  RE: Metadata migration

    Posted 07-29-2019 12:54 PM
    Thanks for the clarification, Carsten.
    "Load db" means loading the dump that was taken from the old schema (on Oracle 11g) into the new schema (on Oracle 12c), right?

  • 4.  RE: Metadata migration

    Posted 07-30-2019 03:36 AM
    Hi Poornima Kanneganti,

    yes, correct. I spoke to our DBA recently and he told me he made the dump from 11g with "export datapump" ("expdp") and imported it into 12c with "import datapump" ("impdp").

    Hope this helps, best regards,

  • 5.  RE: Metadata migration

    Posted 07-30-2019 05:36 AM
    Hi Carsten Schmitz,
    what kind of dump did your DBA do?
    -Full export
    -Schema export
    -Table export
    -Tablespace export

    Do you use the Oracle client? If so, did you still use the 11g client after import or did you switch to 12c?

    best regards
    Stefan Fast

  • 6.  RE: Metadata migration

    Posted 07-30-2019 06:43 AM

    > what kind of dump did your DBA do?

    Schema export.

    > Do you use the Oracle client? If so, did you still use the 11g client after import or did you switch to 12c?

    It depends on the component, but for most practical uses, where the client libraries are pulled in via LD_LIBRARY_PATH, I switched to the 12c client at the time.
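    If you are unsure which client libraries a given component actually picks up, one way to check on Linux is to resolve its shared-library dependencies. The binary path and instant client location below are placeholders, not the actual layout from this thread:

    ```shell
    # Make the desired client visible to the dynamic loader first
    # (path is a placeholder for your 12c client installation).
    export LD_LIBRARY_PATH=/opt/oracle/instantclient_12_1:$LD_LIBRARY_PATH

    # Show which Oracle client library (libclntsh) the AE binary resolves to;
    # the binary path is a placeholder for your installation.
    ldd /opt/automic/bin/ucsrvwp | grep -i clntsh
    ```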


  • 7.  RE: Metadata migration

    Posted 07-30-2019 06:50 AM
    Also, in case you're wondering about the in-place upgrade from the other thread :)

    IIRC, we did an in-place upgrade in one environment and a dump/load in another. We also did another, unrelated Oracle move to new hardware with no Oracle update, so my memory admittedly is a bit sketchy as to when exactly we used which procedure. So take it with a grain of salt, but we definitely did both a dump/load and an in-place upgrade.

  • 8.  RE: Metadata migration

    Posted 07-30-2019 07:15 AM
    Hello Poornima,

    As Carsten already mentioned, we tried both approaches:

    1. Upgrade of an existing database instance (11g to 12c)
    2. Upgrade via expdp and impdp into an existing 12c instance (schema export, please see the example at the end of my message)

    From the database perspective there is nothing additional to consider. Since the 12c database is backward compatible even with 10g clients, there is no reason to upgrade the existing clients on the Automic application server (unless otherwise prescribed by CA/Broadcom).

    So your approach should work without any additional steps, but for better support from the database vendor I would recommend upgrading the clients to 12c/18c as well.

    BTW: If you are using an Oracle Enterprise Edition database, you can use the "parallel=n" clause to speed up the export and import. Setting the number of parallel workers to 4 worked best for us on a single 8-core processor.

    expdp system directory=YOURDIRECTORY schemas=AESCHEMA dumpfile=DUMP.FILE logfile=LOG.FILE parallel=4
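    For completeness, the matching import on the 12c side might look like the sketch below. The directory, dump file, and schema names mirror the export example; remap_schema is only needed if the target schema name differs:

    ```shell
    # Import the schema dump into the 12c instance, using the same
    # degree of parallelism as the export.
    impdp system directory=YOURDIRECTORY schemas=AESCHEMA dumpfile=DUMP.FILE logfile=IMPORT.LOG parallel=4

    # Only if the target schema should have a different name:
    # impdp system directory=YOURDIRECTORY remap_schema=AESCHEMA:NEWSCHEMA dumpfile=DUMP.FILE logfile=IMPORT.LOG parallel=4
    ```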

    If you have any further questions do not hesitate to contact me.


  • 9.  RE: Metadata migration

    Posted 07-30-2019 02:07 PM
    Hello Andreas,

    Thanks for all the details! We used parallel=32 for the export and imported the dump into the new DB (12cR1). It is working fine now.

  • 10.  RE: Metadata migration

    Posted 08-05-2019 03:18 AM
    Hello Poornima,

    what exact release version did you switch to, and which patch set (e.g. the JUL2019 Bundle Patch)? I am asking because a lot of performance-related issues have been reported regarding the parameters "optimizer_adaptive_features" and "optimizer_dynamic_sampling". In later patch sets (patch 22652097) and from 12cR2 on, the "optimizer_adaptive_features" parameter has been deprecated and replaced by "optimizer_adaptive_plans" and "optimizer_adaptive_statistics" (Oracle Doc ID 2187449.1).

    Why did they split these parameters?
    In most cases the adaptive statistics are responsible for the performance issues. Even with up-to-date statistics, the optimizer might gather new statistics during the parse phase of a SQL command. Depending on the objects being queried, the statistics gathering can take several minutes!

    How to find out if you are affected?
    You can, for example, check your AWR reports (if you have the performance pack licensed) or Statspack reports, as well as traces, and look for high parse times. If you are monitoring your systems with Nagios, Zabbix, or another performance monitoring tool, you should see (much) higher CPU load after the switch to 12cR1.

    What possibilities do you have to fix this?
    It depends on the patch set you have installed. If you have not applied patch 22652097 yet, you can simply set "optimizer_adaptive_features=FALSE" in your SPFILE and in memory. This parameter can be changed online, without restarting the database. If the patch has already been applied, you can try different values for the parameters "optimizer_adaptive_statistics" and "optimizer_adaptive_plans". In our case we needed to set both to "FALSE" to get the best results. In addition, we needed to set "optimizer_dynamic_sampling=0", but I can't remember why (just keep that option in mind :-) ), sorry!

    (If you speak German, you can read the following blog post by Herrmann & Lenz, a German database consulting agency; this is where I got most of the information:

    You can also find lots of community and support threads on

    Have a nice day.