Hi,
So you want to spread the action of downloading these files over multiple servers? That would imply that the bottleneck you're trying to address is the I/O on the downloading server itself, and not the actual network link. While possible, this is a rather unusual scenario. Are you sure about this?
Not to impose, but unless you are, I'd start by identifying the bottleneck. Especially with huge numbers of small files, ftp, scp and to some extent also sftp can be orders of magnitude too slow (not even considering the RA agent, just by virtue of the protocols and their per-file handling). If you're on UNIX or otherwise able (e.g. with a Windows port), and have full SSH access on the remote side, I'd run a benchmark outside of UC4 with something like rsync, or even pipe your stuff through tar on the remote and local end. That might already solve much of your problem. That, or look into a potential I/O problem on the current server :)
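For illustration, a minimal benchmark sketch (hostnames and paths are made up, adjust to your environment):

    # time a plain rsync of the whole directory (hypothetical host/paths):
    time rsync -a user@remotehost:/data/incoming/ /local/dest/

    # or stream everything through tar over a single ssh connection,
    # which avoids most of the per-file protocol overhead:
    time ssh user@remotehost 'tar -C /data/incoming -cf - .' \
        | tar -C /local/dest -xf -

Compare those numbers against what the RA agent gives you and you'll know where you stand.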
If you still find that multiple downloading servers are faster even with the RA agent out of the picture and a well-performing transfer tool in use, are you sure that's not just because you're now using a greater number of TCP connections? If the number of TCP connections is your bottleneck, that could probably be rectified on a single server as well, without spreading out to multiple servers.
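If it does turn out to be connection count, you can open several connections from a single box. Again just a sketch, with filelist.txt standing in for the output of your "ls":

    # split the listing into 8 chunks and run 8 rsyncs in parallel,
    # all from the same server (GNU split syntax):
    split -n l/8 filelist.txt chunk.
    for c in chunk.*; do
        rsync -a --files-from="$c" user@remotehost:/data/incoming/ /local/dest/ &
    done
    wait    # block until all 8 transfers have finished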
Failing all of that: if you're on UNIX (hint: it would really help a lot to know which OS this is on ;) I could probably give you some pointers on how to separate the ls output from the ssh debug info (though: why is there debug info in the first place?) and split it into usable parts, if you'd post an example of the listing. But I doubt this alone will help much: you'd end up with a static split based on the number or names of files, which still won't guarantee an even load distribution through to the end. Also, not to bash on UC4, but even if you put that into a variable and have multiple RA agents parse it, I doubt that would be the racecar option.
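Just as a rough idea of what I mean, and assuming the debug chatter goes to stderr the way ssh normally does it:

    # the listing goes to stdout, ssh's debug noise goes to stderr:
    ssh user@remotehost 'ls /data/incoming' 2>/dev/null > listing.txt

    # static split into 4 parts, one per downloading server:
    split -n l/4 listing.txt part.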
I'd personally think instead about putting the file listing (obtained from the "ls") into an SQLite database and having multiple servers each lock one (or more) records, download the respective files, then remove the records from the table. These "worker" scripts, which process the database records and do the actual downloading (a couple of lines of shell script), could then easily be triggered from UC4. A poor man's message queue :) but very scalable.
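A rough sketch of what that could look like (untested; it assumes queue.db sits on storage every worker can reach and that SQLite's locking works there, the RETURNING clause needs SQLite 3.35+, and filenames are assumed to be shell- and SQL-safe):

    # one-time setup: create the queue and fill it from the listing
    sqlite3 queue.db "CREATE TABLE files (name TEXT PRIMARY KEY, state TEXT DEFAULT 'new');"
    while read -r f; do
        sqlite3 queue.db "INSERT INTO files(name) VALUES ('$f');"
    done < listing.txt

    # worker.sh - runs on each downloading server, triggered from UC4
    me=$(hostname)
    while :; do
        # atomically claim one unprocessed record
        # (SQLite serializes writes, so two workers can't grab the same row)
        f=$(sqlite3 queue.db "UPDATE files SET state='$me'
             WHERE name=(SELECT name FROM files WHERE state='new' LIMIT 1)
             RETURNING name;")
        [ -z "$f" ] && break    # queue drained, we're done
        scp "user@remotehost:/data/incoming/$f" /local/dest/ \
            && sqlite3 queue.db "DELETE FROM files WHERE name='$f';"
    done

Add as many workers as you like; the queue takes care of the load distribution all the way to the last file.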
Hope this helps.
edit: there's also
https://www.gnu.org/software/parallel, which could be used to parallelize downloads across multiple machines as well. I haven't used it yet, but it reportedly works like xargs, so it should be able to achieve a proper distribution of load over a list of filenames as well.
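From a quick look at the docs (so: untested, hostnames hypothetical), distributing over two worker machines would look roughly like:

    # run up to 4 download jobs each on worker1 and worker2,
    # feeding filenames from listing.txt:
    parallel --sshlogin 4/worker1,4/worker2 \
        scp "user@remotehost:/data/incoming/{}" /local/dest/ :::: listing.txt

Note that each job runs on the worker it was dispatched to, so the files land on that worker's /local/dest/, which is exactly the multi-server spread you asked about.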