Plex 2E


Plex Local Model extract duration

  • 1.  Plex Local Model extract duration

    Posted Apr 29, 2015 06:39 AM

    Hi,

    Does anyone else have a problem with long waits for Local Model extracts?

    We have our application split over 26 Plex models. One particular model includes all the other models as libraries. Additionally it includes the Plex “Class Libraries” (our application was born in 1995) and also the Plex “Pattern Libraries”.

    Extracting from those 26 models into an existing local model takes 1h 15m!!!

    We noticed that in each library extraction Plex takes a long time on “Synchronizing Objects and Triples” (shown in the status bar). It is in this step that Plex spends almost all of the extraction time. Even for an empty library (a recently created model without any objects) Plex takes the same two and a half minutes to extract as it takes for the other models!

     

    We also noticed that the “Synchronizing Objects and Triples” elapsed time is directly related to the number of objects that already exist in the local model. If we extract into a new local model, the first libraries extract very quickly (since there are few objects in the local model at the beginning). As the local model accumulates more objects, each library takes longer to extract than the previous one.

     

    This local model is approximately 300 MB. We know other Plex users whose local models are also around 300 MB and who don’t take 1h 15m to extract them.

    Why is this?

    Is it because we have our data split over many Plex group models?

    We work intensively with Plex versioning (levels and versions). Does the use of versioning have any impact on this?

    Please share your experience!

    Thanks!

    Bruno



  • 2.  Re: Plex Local Model extract duration

    Posted Apr 29, 2015 10:50 AM

    Is that 26 Host models? If there are 26 application models that you are actively working on, then there is no way around extracting them all.

     

    But presently the local I am working on has 27 libraries, of which I only extract from the host and one Standards model; the others are static libraries like the supplied Pattern and Class libraries.

     

    We create a template local with all the libraries already extracted, which we copy locally to become our new local; after that we only need to extract from 2 models (the Host and the Standards library).

     

    Have also just recently automated this using AutoIt:

     

    Copy template local from shared drive to local drive

    Open Local

    Login to Group model

    Extract Host

    Extract Standards Layer

    Close Group model login

    Save Local

     

    All while I am making my tea!
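
    For anyone who wants to try something similar, here is a rough AutoIt sketch of those steps. The paths, file names, executable location, window title and keystrokes are placeholders (not the actual script in use), so adjust them to your own installation:

    ; Sketch of the template-local workflow above. All paths, file names,
    ; window titles and keystrokes are placeholders to adapt to your setup.
    #include <FileConstants.au3>

    Local $sTemplate = "\\fileserver\plex\templates\MyApp_Local.mdl" ; assumed template location
    Local $sWorkCopy = "C:\PlexWork\MyApp_Local.mdl"                 ; assumed path on the local drive

    ; 1. Copy the pre-extracted template local to the local drive
    If Not FileCopy($sTemplate, $sWorkCopy, $FC_OVERWRITE + $FC_CREATEPATH) Then
        MsgBox(16, "Extract script", "Could not copy the template local model.")
        Exit 1
    EndIf

    ; 2. Open the local model in Plex (the executable path is an assumption)
    Run('"C:\Program Files (x86)\CA\Plex\Plex.exe" "' & $sWorkCopy & '"')
    WinWaitActive("CA Plex") ; window title may differ per Plex version

    ; 3. From here the script drives the IDE with keystrokes: log in to the
    ;    group model, extract the Host and the Standards library, close the
    ;    group model login and save the local. The exact Send() sequences
    ;    depend on the menu layout, so they are left out here.
    ; Send("...")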



  • 3.  Re: Plex Local Model extract duration

    Posted Apr 30, 2015 01:49 PM

    I didn't understand the strategy of using local model templates. When you try to update the group model with one of those local models, don’t you get that error saying that “a copy of local model was used to update this group model…”?

    However I understood the tea thing…



  • 4.  Re: Plex Local Model extract duration

    Posted Apr 30, 2015 01:53 AM

    But if you need to extract from 26 models there are some tips.

     

    Edge Forum Archive - [Edge] Advantage Plex Family - Memory Usage in Plex IDE

     

    Or, I have always liked the subversion route, although I never had bad performance myself:

     

    https://communities.ca.com/message/2262724#2262724



  • 5.  Re: Plex Local Model extract duration

    Posted Apr 30, 2015 01:50 PM

    Yes, we need to extract from 26 models.

    Regarding the tips in Edge Forum Archive - [Edge] Advantage Plex Family - Memory Usage in Plex IDE:

    1) Group model compression – already tested. We forced compression for all models and checked that the .dat files had a considerable size reduction (in some cases the final size was around 15% of the initial size!). However, the extract time remained the same.

    2) We never investigated the use of sub-models; we want all the model data. Nevertheless, I don’t think that could be a solution, because we had an empty library (no objects at all) and it took the same time as libraries with many objects.

    3) “Improve performance by extracting objects that were modified and not the entire model” – how can I do that? By selecting “Show all modified objects”? That only works for the host model…

    4) Increasing virtual memory - I changed my desktop last year from an Intel(R) Core(TM)2 Duo CPU 2.83 GHz with 3 GB RAM to an Intel Core i5 with Windows 7 (64-bit) and 4 GB RAM. The extract times were similar (less than 10% faster).



  • 6.  Re: Plex Local Model extract duration

    Posted Apr 30, 2015 02:27 AM

    How long does it take if you do this all locally and not over a server? That would probably be the quickest you could ever expect, removing virus checking and server comms.



  • 7.  Re: Plex Local Model extract duration

    Posted Apr 30, 2015 01:50 PM

    That was my first test: copy group models to my desktop and perform local model extraction. The extract time was the same.



  • 8.  Re: Plex Local Model extract duration

    Posted Apr 30, 2015 01:51 PM

    Thanks for all the help and tips!



  • 9.  Re: Plex Local Model extract duration

    Posted Apr 30, 2015 03:08 PM

    Good work, you have done your research.

     

    The local model template method is where you create a new local by extracting all from the attached libraries but do not extract anything from the host group model. You then save the local, most likely also creating the perfect build file and setting the correct configuration as well. Now developers can copy this local and build file and start work... they need to log in to the group model with their own login (if different) and Extract All from the Host group model (and any attached libraries that have changed, e.g. other application models, standards models, etc.). You don't get the “a copy of local model was used to update this group model…” error this way.



  • 10.  Re: Plex Local Model extract duration

    Posted May 03, 2015 05:28 AM

    Bruno,

    We are facing the same problem and our model is more than 1GB.

    If you want to do it from an existing local model with all the libraries already extracted, it will take the same amount of time for each extracted library (two and a half minutes in your case, half an hour in our case).

    When you extract, do you extract all libraries? If yes, you should only extract the modified libraries (check the message log window; it should tell you which libraries have been modified since the last time you logged in). Never re-extract the CA and third-party libraries (except if you installed a new version, of course :P).

     

    As Lucio mentioned, it can help to create templates for each group model containing only the CA or third-party libraries, then use them as a starting point to extract your group models.

     

    If you start from a brand new local model or a template, there is a way to improve the extraction process (based on my calculation, it should take you around 30 minutes).

    The rule is to extract the libraries (including the host) sorted by size, the smallest first. The reason is that when Plex is synchronizing the objects and triples, it is synchronizing all the objects and triples already extracted. So the more objects and triples already extracted, the longer it will take to synchronize them.

    The main issue with this solution is to know which group models to extract first. In our case, it's simple because we only have 9 group models.

    How do you know the size of a group model? In fact, it's the size of the lprop.dat, prop.dat and object.dat files.
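
    For reference, here is a rough AutoIt sketch along those lines: it sums the lprop.dat, prop.dat and object.dat sizes for each group model folder and sorts them ascending to suggest an extraction order. The folder names are examples only, not real model paths:

    ; Rank group model folders by the combined size of their .dat files so
    ; the smallest can be extracted first. Folder names are examples only.
    #include <Array.au3>

    Local $aModels[3] = ["C:\PlexModels\FieldsModel", _
                         "C:\PlexModels\ServicesModel", _
                         "C:\PlexModels\GuiModel"]

    Local $aSizes[UBound($aModels)][2]
    For $i = 0 To UBound($aModels) - 1
        ; size of a group model = lprop.dat + prop.dat + object.dat
        $aSizes[$i][0] = FileGetSize($aModels[$i] & "\lprop.dat") _
                       + FileGetSize($aModels[$i] & "\prop.dat") _
                       + FileGetSize($aModels[$i] & "\object.dat")
        $aSizes[$i][1] = $aModels[$i]
    Next

    _ArraySort($aSizes) ; ascending on size: the suggested extraction order
    _ArrayDisplay($aSizes, "Suggested extraction order (smallest first)")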

    In Plex, in the group model window, the libraries are sorted by group model name (the folder containing the group model).

    If you select all libraries in the group model window and do an "Extract all", it will extract all libraries in the grid sequentially. The only way to sort the libraries is to rename them: they should have a sequence id prefix, and this id should reflect the size of the group model (1 for the smallest group model, 26 for the largest). The problem is that you have to update all library dependencies (if any) and the names of the objects from these libraries will be affected.

     

    CA, it would be nice to be able to sort the group model grid based on different criteria, and size should be one of them.

     

    In our case, we extract our libraries one by one. In your case, it's a little bit more tricky.

    You could use this approach: select and extract a batch of small libraries, then a batch of medium libraries and finally the largest libraries.

     

    Because you have only one model containing all other models as libraries, you could easily automate the extraction process. You can use a keyboard macro. Open the group model window, position the cursor at the top of the grid and, using the arrow keys (up and down), select the smallest library, use the shortcut to extract it (Ctrl+Shift+E), then select the next smallest library and repeat this operation until the largest library has been extracted. To avoid any loss of focus, the message log window should be in quiet mode.

    It's not a perfect solution (it might fail for various reasons) but it should work in most cases.
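
    For what it's worth, the keystroke part could look roughly like this in AutoIt, assuming the libraries have already been renamed so that the grid order matches the size order; the window title, library count and per-library wait below are assumptions to tune:

    ; Keyboard macro sketch: assumes the cursor starts on the smallest
    ; library in the group model grid and that grid order matches size order.
    Local $iLibraries = 26       ; number of libraries to extract (assumption)
    Local $iWaitMs = 150 * 1000  ; ~2.5 minutes per library, as observed above

    WinActivate("CA Plex")       ; focus the IDE; the window title may differ
    WinWaitActive("CA Plex")

    For $i = 1 To $iLibraries
        Send("^+e")              ; Ctrl+Shift+E = extract the selected library
        Sleep($iWaitMs)          ; crude fixed wait; better to watch the status bar
        Send("{DOWN}")           ; move to the next library in the grid
    Next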

     

    Another solution is to use an automation tool and create a script that will reproduce the steps that you are doing manually (extracting the libraries in the correct order).

     

    Copying the group models locally will not help, because most of the process is done locally. Plex first loads the group model into memory, then the synchronization is done without accessing the network. The time taken depends mainly on the clock speed of your processor.

     

    I don't think that intensively using versioning has an impact on the extraction process.



  • 11.  Re: Plex Local Model extract duration

    Posted May 04, 2015 01:06 AM

    And here is the tool and script I use to automate it

     

    https://communities.ca.com/message/241781179#241781179



  • 12.  Re: Plex Local Model extract duration

    Posted May 04, 2015 06:03 AM

    Half an hour per model?!? With 9 models, you take 4.5 hours to extract. I thought I had a problem…

    We have 26 application libraries plus 17 Plex libraries (Class Libraries + Pattern Libraries). We only extract from our 26 application models.

    Because we have the data spread through many models, we have to update and extract regularly. For example, we have group models that only contain database definitions (fields and entities). If we create a new field in a table, we need to update that model and extract it into the model where the business logic that uses that table lives.

    The model that contains all the others as libraries is the model where the GUI of our application lives. We also use it to generate our versions/builds/PTFs (instead of generating from each host model) and to perform impact analysis.

     

    I can point out some disadvantages of extracting new local models instead of extracting into an existing local model:

     

    1) It is necessary to configure the local model (Model Configuration);

    2) It is necessary to configure the bld file (or copy from the previous local model);

    3) It is necessary to configure the List object for logging objects (in Models Options);

    4) since we are constantly discarding Local models, we have the risk of losing changes if we forget to update the local model before discarding it;

    5) We are starting to believe that Plex can't handle our "intensive" use of it... :-( (we have had open issues with CA Support for a long, long time…). On that matter, since we know that information about each local model is stored in the group model (at least for checking update attempts from copies of previous versions of the local model), we are afraid of starting to create many local models (every time anyone needs to extract from that model) and having problems in the future with that approach.

    Of course with that very nice tip of using templates some of these disadvantages are minimized.

     

    I don’t think the use of a macro would be very helpful. First of all, since we take 1h 15m extracting to that local model, it is very common to get group model locking errors due to updates/extracts from other team members. That messes with the macro steps. Second, we already accept that the extract of this local model takes 1h 15m and simply go do something else in the meantime. But we are afraid of the extract time increasing further.

     

    Checking messages in the message log to see which libraries have changed is also time-consuming and can lead to human error.

     

    Meanwhile we started the following approach: trying to clean unused objects out of our models.

    Every group model has obsolete objects from discontinued functionality, previous versions of reworked objects, etc…

    Cleaning these objects may not have a significant impact on the extract time. But triples are another matter.

    We use lists to gather the functions that are part of each version/build/PTF. Over the years we have created hundreds of lists with many thousands of triples (LST contains FNC). So we started to delete lists from old versions/builds. The reduction in the number of triples had a significant impact on extract time.

    The usage triples are also a problem. These triples should be inherited, but somewhere in time the triple “FNC generated call to FNC” stopped being inherited and Plex started to generate a large number of unnecessary triples. Almost all our functions inherit from a function named Function(T). This template function was changed last year and 7 called functions were replaced by another 7 new functions. This change caused 180,000 new triples (FNC generated call to FNC) to be generated.

    I recently opened a CA Support issue to solve this.

    Since we work with versions/levels, the generated call triples for the functions that stopped being used in the template remained (we have the generated call in one level and the …stop triple in another one). So we had to delete the function objects in order to automatically delete thousands and thousands of triples.

    The extract time is currently around 45 minutes (reduced from 1h 15m). I believe we can reduce this further when CA resolves the FNC generated call to FNC inheritance issue.

    Of course, I don’t know how this can be applied to your 1 GB model.

    In any case, CA should work on this “Synchronizing Objects and Triples” step.

     

    Thank you very much George and Sabastien for your detailed support and very nice tips!

     

    I’m still working on an experiment of merging configuration levels using XML export/import and level deletion. I will keep you informed of my findings.



  • 13.  Re: Plex Local Model extract duration

    Posted May 04, 2015 08:15 AM

    You are going to hate this, but if this is really affecting development, and the developers' belief in the tool, then I would not wait on CA to help; what can they really do?

     

    I would look at minimizing the number of models

     

    I have written extensively on merging and XML before, and in my opinion it will not end well.

     

    I would suggest being practical here and ceasing development in 25 of the models... and choosing one to continue development in, scoped nicely.


    I am sorry, but I will add this thread to my memory of why splitting the models is a bad idea. I used to split models and thought it great, but over the years I have come to see it as a problem and have come back to having just one model.


    I am sorry for you, since you have done what should be possible and what was advised in many circumstances, while the sites that took an arguably more monolithic approach are rewarded for what is seemingly a less decoupled, less abstract Plex approach.


    It is now more important, in my opinion, to get your developers working and not waiting and frustrated.



  • 14.  Re: Plex Local Model extract duration

    Posted May 05, 2015 09:49 AM

    Hi Bruno, the CM MatchPoint Model Manager uses model templates, too, to create model extracts. It is a great tool to standardize the extract process and have the correct model configuration and build files in place for each of your models. Message me if you would like more information about it.

     

    cheers,

    Christoph



  • 15.  Re: Plex Local Model extract duration

    Posted May 07, 2015 10:33 PM

    Working with fewer group models is always a good idea, but 25 group models seems high. What's the best practice? To split information into different group models, or to have everything related in one model?



  • 16.  Re: Plex Local Model extract duration

    Posted May 28, 2015 12:42 PM

    hi smrajibe

     

    One reason to have more than one group model is licence issues, for Websydian for example. But I don't see much reason in 2015 to have separate models, whereas I did in 2000. At a new site I'd probably advocate one model for the standards layer and one for everything else.

     

    At my present client there were the beginnings of a new group model for a service tier, consisting of 30 functions. Again I tried to export them to our main model, but in the end they were not good enough to save, so they were rewritten in the main model, and boy are we glad we did, as there are 250 services now and growing.



  • 17.  Re: Plex Local Model extract duration

    Posted May 13, 2015 03:17 AM

    Hi Bruno

     

    This issue may be of interest: Struggling with a duplicate group model. I think consolidating models into one model is what would help you.



  • 18.  Re: Plex Local Model extract duration

    Posted May 27, 2015 02:40 PM

    Hello all!

    Sorry for my long silence!

    But I was occupied in my (scarce, not to say nonexistent) spare time with the exercise of reducing levels by merging data via XML import, and I wanted to share that with you.

     

    In all our 26 models I did the following:

    1) configured the model on the last configuration level;

    2) exported all model data with XML export functionality;

    3) deleted all levels except the first (base level) and the last;

    4) imported the XML exported in step 2).

     

    That was very painful! The XML export takes a long time. The XML import takes even longer.

    I had 2 Plex crashes while importing 2 action diagrams. I was only able to proceed by deleting the large properties for those functions and repeating the export and import for those models. I also had several situations where Plex "hanged" while importing action diagrams; that inconceivable "Parsing expression" step that in some situations makes us split instructions like A = A + B + C into:

    A = A + B;

    A = A + C;

    Then the XML import generates lots of errors! Hundreds for each model! I didn't analyze any of those errors. Besides this, after the import I had information messages in the message log stating something like "x objects and y triples were imported". Sometimes I got these messages and sometimes I didn't, so I didn't quite understand whether the import ran to the end.

     

    After all this I tried extracting a local model from that group model that includes all others as libraries.

    Extracting a new local model, which used to take 31 minutes, started taking 1h 41m!

    Extracting into an existing model (extracting everything except the Plex Pattern Libraries and Class Libraries), which used to take 42 minutes, started taking 2h 13m!

    The local model that used to be 309 MB became a 515 MB file.

     

     

    With this exercise we concluded that:

    - We can't "merge" levels and expect to reduce extract time.

    - We can't have too much "faith" in the XML import/export functionality.

     

    Meanwhile we were able to delete some more functions (and hundreds of generated call triples with them), and the extract time for an existing model is now around 37 minutes. Not bad!!

     

    It's now important that CA solves the issue of inheritance of Generated call triples. With that we will be able to delete much more.

     

    Back to your comments, George,

    Yes, we stopped creating new models. But the “damage” is done. And if XML doesn’t work well, how can we “minimize the number of models”? Rework everything by “hand”?

    Regarding what CA Support can or cannot do: the fact that Plex stops for several minutes “synchronizing objects and triples” while extracting from a completely empty library model makes us wonder whether they couldn't do something about that, doesn't it?

     

     

    Thank you George and Christoph for your help!

    Regards!

    Bruno