We have two sets of Clarity instances that we are merging, and as part of that we are migrating a huge set of data using XOG, including projects and all related data, ideas and all custom object data, OBS, and partitions.
XOG is taking a long time even when we divide the work into smaller runs, and it frequently times out in the middle.
Just want to know if there are any recommendations or best practices at the server capacity, database, or configuration level to improve XOG performance considerably and reduce the deployment time of the data migration.
Any information is really appreciated.
As you may know, the design of XOG changed after PPM 13.3.
The XOG response is sent back to the client in chunks using pagination.
Please check the following topic in the manual:
XOG Governor Node Limit
There were presentations given in UK UG meetings on improving performance with large data sets.
The improvement was obtained by running the integration with spawned processes in order to reduce the time spent just waiting. Another improvement was obtained by reducing writes to the log.
Thank you for your responses!
Could you please elaborate on the improvement obtained by running the integration with spawned processes? What exactly needs to be done for this?
I have attached a copy of the presentation I made to the UK User Group back in 2015.
For my approach to work, you will need to convert your loading scripts to run as GEL scripts in a process. These scripts will run on the source system, build the XOG files, and then call XOG on the destination system.
Also, the XOG data should be batched into multiple object instances in a single XOG file, e.g. 10/50/100 Projects/Resources/Custom Instances/etc. per XOG file. Your mileage will vary, so you will need to instrument your code and find the sweet spot for your PPM system configuration.
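To illustrate the batching idea, here is a minimal Python sketch (not from the presentation; the NikuDataBus wrapper and Project attributes are assumptions based on the standard XOG project write format, so adjust them for your object type) that groups instance XML fragments into XOG documents of a configurable batch size:

```python
# Minimal sketch: batch N object instances into each XOG file.
# Header/footer shape is an assumption modelled on the standard
# XOG project write format; verify against your PPM version.

XOG_HEADER = (
    '<NikuDataBus xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">\n'
    '  <Header action="write" externalSource="NIKU" objectType="project" version="13.3"/>\n'
    '  <Projects>\n'
)
XOG_FOOTER = '  </Projects>\n</NikuDataBus>\n'

def batch_xog_files(instance_fragments, batch_size):
    """Group instance XML fragments into XOG documents of batch_size each."""
    docs = []
    for start in range(0, len(instance_fragments), batch_size):
        chunk = instance_fragments[start:start + batch_size]
        docs.append(XOG_HEADER + "".join(chunk) + XOG_FOOTER)
    return docs

# Example: 25 instances batched 10 per file -> 3 XOG documents.
fragments = [f'    <Project projectID="PRJ{i:04d}" name="Project {i}"/>\n'
             for i in range(25)]
docs = batch_xog_files(fragments, 10)
print(len(docs))  # 3
```

Instrumenting is then just a matter of timing each batch size (10, 50, 100, ...) and picking the fastest.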
I have also had some success with data loads where my GEL script ran outside of PPM/Clarity and used threads within the script to send the XOG files. Throughput was impressive, BUT getting all the threads to synchronize properly against a queue of available XOG files and to catch errors was intermittent. I feel sure this approach would work if converted to Java, but I have not had the time to investigate any further.
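The threads-pulling-from-a-queue pattern described above can be sketched in Python (a sketch only, not the original GEL/Java code; send_xog() is a hypothetical stub standing in for the real call to the XOG endpoint):

```python
import queue
import threading

def send_xog(xog_file):
    # Stub: replace with the actual POST to the XOG endpoint.
    if "bad" in xog_file:
        raise RuntimeError(f"XOG rejected {xog_file}")
    return "SUCCESS"

def run_senders(xog_files, num_threads=4):
    """Worker threads drain a queue of XOG files; failures are collected."""
    work = queue.Queue()
    for f in xog_files:
        work.put(f)
    errors = []
    lock = threading.Lock()

    def worker():
        while True:
            try:
                f = work.get_nowait()
            except queue.Empty:
                return  # queue drained, thread exits
            try:
                send_xog(f)
            except Exception as exc:
                with lock:  # error list is shared across threads
                    errors.append((f, str(exc)))
            finally:
                work.task_done()

    threads = [threading.Thread(target=worker) for _ in range(num_threads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return errors

errs = run_senders(["proj_001.xml", "proj_bad.xml", "proj_003.xml"])
print(len(errs))  # 1 failure captured
```

Capturing errors per file in one place (rather than letting a thread die silently) is what makes the intermittent failures mentioned above diagnosable.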
What all my experimentation has shown is that the XOG endpoint ('.../niku/xog') is capable of taking many simultaneous XOG files without choking; the limitation appears to be the raw processing speed of XOG and not any inherent limitation in the app code or its interaction with the database.
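For completeness, posting a XOG document to that endpoint looks roughly like this in Python (a hedged sketch: the SOAP envelope shape with an xog:Auth/SessionID header follows the documented XOG web-service format, but verify it against your PPM version, and obtain the session ID via the Login operation first):

```python
import urllib.request

def build_envelope(session_id, xog_body):
    """Wrap a NikuDataBus payload in the XOG SOAP envelope (assumed format)."""
    return (
        '<soapenv:Envelope'
        ' xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/"'
        ' xmlns:xog="http://www.niku.com/xog">'
        '<soapenv:Header><xog:Auth>'
        f'<xog:SessionID>{session_id}</xog:SessionID>'
        '</xog:Auth></soapenv:Header>'
        f'<soapenv:Body>{xog_body}</soapenv:Body>'
        '</soapenv:Envelope>'
    )

def post_xog(base_url, session_id, xog_body):
    """POST one XOG document to the /niku/xog endpoint (blocking call)."""
    data = build_envelope(session_id, xog_body).encode("utf-8")
    req = urllib.request.Request(
        base_url.rstrip("/") + "/niku/xog",
        data=data,
        headers={"Content-Type": "text/xml; charset=utf-8"},
    )
    with urllib.request.urlopen(req) as resp:
        return resp.read().decode("utf-8")

env = build_envelope("abc123", "<NikuDataBus/>")
print("<xog:SessionID>abc123</xog:SessionID>" in env)  # True
```

Each worker thread in the queue pattern above would call post_xog() once per file, which is where the simultaneous-request capacity of the endpoint pays off.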
Thank you very much!
Thank you very much for your reply!