The page size is 20 and isn't configurable in this case. You can set the index of the first edge to return on a given page via the getEdges(connection, index) call. It appears that if you specify an index higher than the total number of edges, the list restarts at index 0. This means that if the total number of edges in your environment is a non-zero multiple of 20 (i.e., totalEdges % 20 == 0), the last page returned will still be full, so you can't rely on a short page to detect the end of the list.
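For reference, here's a minimal sketch of what the call itself looks like. conn is assumed to be the NSX connection object (e.g. a workflow input or configuration attribute), and the comments describe the behavior observed above rather than anything documented:
// Each call returns at most one page of 20 edges, starting at the given index
var firstPage = NSXEdgeManager.getEdges(conn, 0);   // edges at positions 0-19
var secondPage = NSXEdgeManager.getEdges(conn, 20); // edges at positions 20-39
System.log("First page contains " + firstPage.length + " edges");
// Passing an index beyond the total edge count appears to wrap back around to
// index 0 instead of returning an empty page - this is what the duplicate check
// in the code below guards against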
Here is some code that should cover this case. Basically, we're paging through the returned edges 20 at a time. As long as the number of edges returned equals 20, we grab another page by incrementing the index by 20 and going back to the server with the new index. If fewer than 20 edges are returned, the page wasn't full and we've reached the end of the list. To cover the case where the total edge count is a multiple of 20, which would otherwise cause an infinite loop, we keep a dictionary of all unique edge IDs and look up each new edge to make sure it isn't already in the dictionary. If a duplicate is found, we assume the list has started over at index 0 and break out of the loop.
Change maxIndex = 400 to a higher number if you think you will be pulling more than 400 edges. I don't know what the NSX config maximum is on edges... but I wanted to make sure this code doesn't execute for too long or get into an infinite loop scenario.
var keepLooping = true;
var index = 0;
var fullEdgeList = new Array();
// maxIndex limits the total number of edges to return and prevents an infinite loop
// in case a condition exists that would cause one
var maxIndex = 400;
var edgeIDDictionary = {};

while (keepLooping && index < maxIndex) {
    // get edges starting at the current index
    var edges = NSXEdgeManager.getEdges(conn, index);
    var duplicateFound = false;
    // add each edge to the full edge list for consumption later
    for (var e in edges) {
        // check for a duplicate by seeing if the edge ID is already in the dictionary
        if (edgeIDDictionary[edges[e].id.toString()] == edges[e].id.toString()) {
            duplicateFound = true;
            System.log("Found duplicate; exiting loop - " + edges[e].id);
            break;
        }
        // add the edge ID to the dictionary so we can find duplicates later
        edgeIDDictionary[edges[e].id.toString()] = edges[e].id.toString();
        fullEdgeList.push(edges[e]);
    }
    // only continue looping if the page was full and a duplicate wasn't found
    keepLooping = (edges.length == 20) && (!duplicateFound);
    // increase the index if we're continuing
    if (keepLooping) index += 20;
}

for (var e in fullEdgeList) {
    System.log(e + ". " + fullEdgeList[e].name + " - " + fullEdgeList[e].id);
}
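Once the loop completes, fullEdgeList can be consumed like any other array. As a hypothetical example (the edge name below is just a placeholder), looking up a single edge by name might look like this:
var targetName = "edge-web-01"; // placeholder - substitute the name you're after
var match = null;
for (var i = 0; i < fullEdgeList.length; i++) {
    if (fullEdgeList[i].name == targetName) {
        match = fullEdgeList[i];
        break;
    }
}
if (match != null) {
    System.log("Found edge " + match.name + " - " + match.id);
} else {
    System.log("No edge named " + targetName + " was found");
}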
I think it would be possible for an edge to be added while this code is executing (the more edges there are, the more round trips are required and the longer it takes to fully execute). In that case, I don't know exactly how it would respond. My assumption is that any new edges are appended to the list in the order they were added. This has held true during my testing, but I can't guarantee it's always the case.
In any case, we quit the loop as soon as the first duplicate is found, so if the list is modified while we're pulling pages, it shouldn't be an issue beyond the results not being fully up to date, especially for larger lists of edges and in environments where edges are created and destroyed often. That would be true of any solution, however - the data is stale the moment you grab it from the source.
The problem is probably academic, but it's something to keep in mind.