Rally Software

  • 1.  Lookback API Compressed Results and Pagination

    Posted Jun 27, 2019 04:20 PM
    Hi,

    My intent is to query the Lookback API to pull back a large number of records for User Story - Schedule State snapshots. I am setting compress=true to limit the number of results, but the result set is still larger than the max page size, so I need to make multiple calls to retrieve all the data. I found the following in the Lookback documentation:

    The default behavior of the CA Agile Central Lookback API is to compute the number of items in the result of a query and return this as TotalResultCount. For large queries that involve traversing multiple pages, it may be more efficient for your application to simply issue subsequent requests for later pages based on the value of "HasMore" and exclude the computation of the total result count.


    My assumption is that if I set includeTotalResultCount=false and my result set has HasMore=true, then for my second query I set start equal to the prior CompressedResultCount, and for subsequent queries I set start equal to the running total of the prior CompressedResultCounts. Am I on the right track there? It seems to work up to a certain point, but then I get an error:

    {"_rallyAPIMajor":"2","_rallyAPIMinor":"0","Errors":["Server Error: A problem occurred while processing your query."],"Warnings":[]}

    If I reduce the page size, I can keep going, but eventually I get the error again. Any thoughts on what I am doing wrong?

    Any help would be much appreciated.
    Thanks,
    Michael


  • 2.  RE: Lookback API Compressed Results and Pagination
    Best Answer

    Broadcom Employee
    Posted Jun 27, 2019 11:16 PM
    Hi Mike,

    It's not quite as you describe. The 'start' argument for your subsequent pages should be based on the pagesize you specified. Let's take an example. This example uses small numbers to make the point; obviously the issue is more relevant with much higher numbers.

    So, to get subsequent pages you'll need to use multiples of your pagesize (sketched in code just below). For example, if your page size is 100, then:
    - "pagesize": 100 - gets you the first page of 100 results (results 1-100).
    - "start": 100 and "pagesize": 100 - returns the second page of 100 results (results 101-200).
    - "start": 400 and "pagesize": 100 - returns the fifth page of 100 results (results 401-500), and so on.
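
    In code terms the arithmetic is just (n - 1) * pagesize for page n. Here's a minimal Python sketch; the helper name is mine, purely for illustration:

        # Start index for page n (1-based) with a fixed pagesize.
        # Hypothetical helper, just to illustrate the arithmetic above.
        def start_for_page(n: int, pagesize: int = 100) -> int:
            return (n - 1) * pagesize

        print(start_for_page(1))  # 0   -> results 1-100
        print(start_for_page(2))  # 100 -> results 101-200
        print(start_for_page(5))  # 400 -> results 401-500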

    Now, here is how it is all supposed to work:

    When you elect not to compute the total result count (includeTotalResultCount=false), you're directing the server not to bother with that computation. The reason you may want to do that is better performance and server response time, since the computation never takes place. What happens then is that you need to use paging: decide on a pagesize that makes sense (isn't too large) and iterate over the returned pages, increasing your 'start' argument by the pagesize each time (as in the example above). To know when you're done, check the 'HasMore' field in the response. If it is 'true', there are additional pages and you should continue. The last page returned to you will have 'HasMore' set to 'false' to indicate you've reached the end.
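
    Here's a minimal sketch of that loop in Python, assuming the standard Lookback API snapshot endpoint and API-key auth; the workspace OID, find clause and field list are placeholders you'd swap for your own:

        import requests

        # Placeholders -- substitute your own workspace OID and API key.
        WORKSPACE_OID = 12345
        URL = ("https://rally1.rallydev.com/analytics/v2.0/service/rally/"
               "workspace/%d/artifact/snapshot/query.js" % WORKSPACE_OID)
        HEADERS = {"ZSESSIONID": "<your-api-key>"}

        PAGESIZE = 100
        start = 0
        results = []

        while True:
            params = {
                # Example find clause: all User Story snapshots.
                "find": '{"_TypeHierarchy": "HierarchicalRequirement"}',
                "fields": '["ObjectID", "ScheduleState", "_ValidFrom", "_ValidTo"]',
                "compress": "true",
                "includeTotalResultCount": "false",  # skip the count computation
                "start": start,
                "pagesize": PAGESIZE,
            }
            page = requests.get(URL, headers=HEADERS, params=params).json()
            results.extend(page.get("Results", []))
            if not page.get("HasMore", False):
                break               # last page reached
            start += PAGESIZE       # 'start' grows by the pagesize each time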

    Now, here is the larger point:
    When you ask to compress snapshots, it doesn't change the computation or the query. It only changes which snapshots are returned to you, while the others are 'suppressed'. They are still computed. For example: say you query for all snapshots of a certain object, and there are a total of 1000 snapshots. If you choose to compress and indicate that you're interested in the 'State' field, then only those snapshots where the 'State' field changed will be returned, but the query still covers all 1000 snapshots. Let's say that only 3 snapshots out of the 1000 are changes to 'State', and that these are snapshot numbers 237, 545 and 976.

    But: 

    If you elected to hide the TotalResultCount (to save on performance), then you wouldn't know there are only 3, nor would you know which ones they are. Therefore you use this paging mechanism to look into each page and see whether it contains any snapshots of interest to you.

    So, in this example, when you get the first page of 100 results, it will be empty. The second page will also be empty.
    The third page, which covers snapshots 201-300, will have one result (snapshot number 237). And so on: you iterate over all the pages, gathering/processing the results you get and checking whether there are more pages, until you reach the last page, where 'HasMore' is 'false' and you can stop.
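
    To make the arithmetic concrete, a quick sketch of which page each of the example snapshots lands on (positions taken from the example above):

        # Which page (pagesize = 100) holds each of the example snapshots?
        for position in (237, 545, 976):
            page = (position - 1) // 100 + 1
            print("snapshot at position %d -> page %d" % (position, page))
        # snapshot at position 237 -> page 3
        # snapshot at position 545 -> page 6
        # snapshot at position 976 -> page 10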

    I hope this clears things up. I realize I'm providing a more extensive reply than your immediate question called for, but I wanted to make sure the whole mechanism is clear.
    The answer to your particular question is that your 'start' argument should be a multiple of your selected 'pagesize', as explained above.

    Let me know if that helped.

    Thanks,
    Sagi


  • 3.  RE: Lookback API Compressed Results and Pagination

    Posted Jun 28, 2019 04:40 PM
    Hey Sagi,

    Thanks so much for the detailed response; that helps me understand the Total Result Count and compress much better, as well as the pagination. I still seem to be running into an error after a few pages. It doesn't give much information as to what is causing it, although if I reduce the pagesize further I can keep going, up to a certain point, and then I can't get past it. Is this a result of too large a page size? And if so, is it trial and error to find the right size?

    {"_rallyAPIMajor":"2","_rallyAPIMinor":"0","Errors":["Server Error: A problem occurred while processing your query."],"Warnings":[]}

    Thanks again for your help, it is much appreciated!
    -Michael


  • 4.  RE: Lookback API Compressed Results and Pagination

    Broadcom Employee
    Posted Jun 28, 2019 05:29 PM
    Hey Mike,

    The error message alone isn't giving us enough detail. I believe the problem here is that you may be trying to access a page that doesn't exist, which may result in this error. The fact that it works up to a certain point also suggests you're perhaps exceeding the page limit.

    Can you send us:
    a. a screenshot of the failing JSON request, including all the parameters/arguments you're passing.
    b. at least one (hopefully two) full JSON responses from the working requests.

    That will show me what the working responses contain; then I can look at what you're requesting in the failing request and see if that helps me figure out the problem.

    Thanks,
    Sagi