I have always shied away from using doQuery because it obviously holds some resources on the server, but I was wondering exactly where the impact lies. Does the object list get stored in memory on the server, or is it kept in some database table temporarily? If it is in memory, which process's memory use should we be keeping an eye on if we felt we had to use doQuery?
To my knowledge, a doQuery result is comparable to a stored-query result. Internally it generates a so-called domset: an object that holds the result set, provides sortable access, caches data for faster access, and provides other internally used functionality.
The domset reference returned by doQuery remains allocated even after SOAP logout. That means you have to make sure to free the reference before ending your web API session; otherwise you will observe steadily growing resource usage on your server.
In fact, I am fairly sure that doSelect creates a domset as well, because this structure is a common element when fetching data from the database. But I assume that one gets destroyed once the API call completes; at least, that is what I hope and expect, and I have never observed growing resource usage related to doSelect.
Which method to choose is a matter of the expected amount of result data. If you need access to fewer than 100 records, you might use doSelect; if you need to handle millions of records, use doQuery and its paging capabilities. But take care of the references!
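To illustrate the pattern described above (page through a doQuery result and always free the handle), here is a minimal Python sketch. The method names doQuery, getListValues, and freeListHandles mirror the CA SDM SOAP API, but the client below is a local stub so the example is self-contained and runnable; the exact parameter shapes and index conventions of a real SDM server may differ.

```python
PAGE_SIZE = 250  # fetch the domset in chunks instead of all at once

class StubSdmClient:
    """Stand-in for a real SOAP client; simulates the server-side domset."""

    def __init__(self, rows):
        self._rows = rows
        self._handles = {}   # handle -> "domset" kept in memory, like domsrvr does
        self._next = 1

    def doQuery(self, sid, object_type, where):
        # Returns a list handle plus the row count, like the SDM method.
        handle = self._next
        self._next += 1
        self._handles[handle] = list(self._rows)
        return handle, len(self._rows)

    def getListValues(self, sid, handle, start, end, attrs):
        # Inclusive, 0-based indices in this stub.
        return self._handles[handle][start:end + 1]

    def freeListHandles(self, sid, handles):
        for h in handles:
            self._handles.pop(h, None)

def fetch_all(client, sid, object_type, where, attrs):
    """Page through a doQuery result, guaranteeing the handle is freed."""
    handle, count = client.doQuery(sid, object_type, where)
    results = []
    try:
        for start in range(0, count, PAGE_SIZE):
            end = min(start + PAGE_SIZE, count) - 1
            results.extend(client.getListValues(sid, handle, start, end, attrs))
    finally:
        # Always free the handle, even on error: otherwise the domset stays
        # in server memory, even after the SOAP session is logged out.
        client.freeListHandles(sid, [handle])
    return results
```

The important part is the try/finally: if paging is interrupted partway through, the handle is still released, which is exactly the failure mode (abandoned lists) discussed in this thread.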
Depending on what data you need in a domset and, just as important, how often you need it, there are situations where neither option is ideal. This is especially true for custom integrations. If, for example, you have an employee-accessible website with multiple concurrent users, each object list generated with doQuery may need to sit open for a few minutes until the user completes their transaction. Users may also abandon the action by closing the browser, or experience a crash, which pushes those allocations into more of a memory-leak situation, and you may hit the 2 GB limit quickly. What we have found is that for certain frequent mass queries, pulling that data directly from SQL via a view, whether on the CA SQL instance or on another linked SQL application server, eliminates the potential for these issues.
Please take a look at
List/Query Methods - CA Service Management - 17.1 - CA Technologies Documentation
where it states:

"Important! The object list is stored on the CA SDM server and consumes system resources. The caller is responsible for freeing the list with freeListHandles(). Leaving a list in memory may increase memory for the process beyond the 2GB limit, resulting in memory leaks and can cause system failure."
That is, you need to call freeListHandles() once you have processed the list; otherwise the resources will not be released.
Add-on: a domset is held in memory in the domsrvr process(es).
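Since the domset lives in the domsrvr process(es), that is the process whose memory footprint is worth watching, answering the original question. A minimal sketch for a Linux SDM server follows; the process name domsrvr is assumed here (on Windows it runs as domsrvr.exe, which you can watch in Task Manager or via PowerShell's Get-Process).

```shell
# List PID and resident memory (RSS, in KB) of any domsrvr processes.
# The bracketed pattern keeps grep from matching its own command line.
ps -eo pid,rss,comm | grep '[d]omsrvr' || echo "no domsrvr process found"
```

If RSS keeps climbing while your integration runs, that is a strong hint that list handles are not being freed.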
Hmm, thanks for this opinion and these experiences. But unless you handle it all on your own, you have no role management, no data partitions, no contact-related access control to data, no dotted-attribute support... hmmm...
Sure, direct access to the database is the fastest option, and maybe the one with the least overhead, but as always, it does not reflect the application logic.
It might be OK for read-only access, with the disadvantages mentioned above, but it is a no-go for updates anyhow.
SOAP, in my understanding, is a machine-to-machine API, where exception handling is a must and is easily possible. If you are looking for a UI-related interface, take REST. As always, it depends on what you need to achieve.
Absolutely. The best solution always depends on individual circumstances. When it can be avoided, we generally do not write directly to the mdb. Our web applications read from views we've built of the most common CA objects we use (cnt, cr, loc, and position). We write to SD via web services, and any major movement of data in SD occurs through custom .NET assemblies we've developed in house, for example syncing our employee record data with contacts. We also have a dedicated app server for automation, as about 50% of our ticket volume is created via custom integrations. That is probably one of the more useful things people who need to run large queries can do to keep those queries from causing issues with the SD analyst app server.
Great conversation and input! Thanks.