I have never seen anything that would throttle NBD traffic differently from other traffic. vSphere does have the ability to set network reservations and limits when using vCenter and a VDS, but by default everything is wide open.
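If you want to confirm nothing is shaping the path, here is a minimal sketch using pyVmomi (the vCenter hostname and credentials below are placeholders) that lists the traffic shaping state on each distributed portgroup:

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Lab-only: skips certificate validation. Use a proper SSL context in production.
ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.example.com",          # placeholder vCenter
                  user="administrator@vsphere.local",  # placeholder account
                  pwd="***",
                  sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.dvs.DistributedVirtualPortgroup], True)
    for pg in view.view:
        cfg = pg.config.defaultPortConfig
        # enabled is a BoolPolicy; False (or None/inherited) means wide open
        print(pg.name,
              "in-shaping:", cfg.inShapingPolicy.enabled.value,
              "out-shaping:", cfg.outShapingPolicy.enabled.value)
    view.Destroy()
finally:
    Disconnect(si)
```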
There are many configuration issues that can cause this behavior. For example, if any point in the path uses the E1000 vNIC, the speed is limited to 1 Gbps for that driver; switch to vmxnet3 (a quick way to check for this is sketched below). There are also RSS and ring buffer settings that impact network throughput, as well as TCP offload settings and others. Then there is the storage on either end of the NBD transfer: whether the reads and writes are sequential or not will make a difference, as will the array's ability to handle the whole load it is responsible for, not just the backup job. Even with separate arrays, each needs to be fast enough. Finally, TCP/IP overhead reduces the actual data rate written to the array.
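To spot the E1000 case quickly, something like this (reusing the `content` handle from the pyVmomi sketch above) would flag every VM still on E1000 or E1000e:

```python
from pyVmomi import vim

def list_vnic_types(content):
    """Print which VMs use E1000/E1000e vNICs versus vmxnet3."""
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    for vm in view.view:
        if vm.config is None:   # skip VMs whose config is unavailable
            continue
        for dev in vm.config.hardware.device:
            if isinstance(dev, (vim.vm.device.VirtualE1000,
                                vim.vm.device.VirtualE1000e)):
                print(f"{vm.name}: {type(dev).__name__}  <- consider vmxnet3")
            elif isinstance(dev, vim.vm.device.VirtualVmxnet3):
                print(f"{vm.name}: vmxnet3 (good)")
    view.Destroy()
```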
Based on what you are describing, I suspect an E1000 vNIC somewhere in the data path, but when you want to go faster than 1 Gbps there is a lot more tuning to be done, from the guest OS all the way through to the storage array. A good place to start is reading the performance guides for the guest OS, ESXi, the Rubrik backup product, and the storage array. It is rare that default configurations are optimal when trying to sustain network speeds above 1 Gbps.
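As one concrete example of guest-side tuning, here is a rough sketch assuming a Linux guest with ethtool installed and a vmxnet3 interface named ens192 (adjust the name for your system); run it as root, and treat the 4096 value as illustrative, not a recommendation:

```python
import subprocess

NIC = "ens192"  # placeholder interface name; check with `ip link`

# Show the current and maximum ring buffer sizes for the NIC
subprocess.run(["ethtool", "-g", NIC], check=True)

# Raise the RX/TX rings toward the vmxnet3 maximum (commonly 4096) to reduce
# drops under sustained backup traffic; measure before and after changing this.
subprocess.run(["ethtool", "-G", NIC, "rx", "4096", "tx", "4096"], check=True)
```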
There are network tests out there that use a ramdisk to take storage out of the picture as a bottleneck. No, these are not very accurate for predicting backup speeds, because the kind of reads and writes matters a great deal and varies, but at least you can confirm the network path can sustain over 1 Gbps as a starting point.
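If you don't have a tool handy, even a crude memory-to-memory check will do; the sketch below is just a stand-in for something like iperf3, keeping the payload in RAM on both ends so only the network path is measured:

```python
import socket, sys, time

PORT, CHUNK, TOTAL = 5201, 1 << 20, 4 << 30   # 1 MiB sends, 4 GiB total

def server():
    """Accept one connection and discard everything received, in memory only."""
    with socket.create_server(("", PORT)) as srv:
        conn, _ = srv.accept()
        with conn:
            buf = bytearray(CHUNK)
            while conn.recv_into(buf):   # never touches disk
                pass

def client(host):
    """Push TOTAL bytes of in-memory data and report the achieved rate."""
    data = b"\x00" * CHUNK
    start, sent = time.time(), 0
    with socket.create_connection((host, PORT)) as s:
        while sent < TOTAL:
            s.sendall(data)
            sent += CHUNK
    gbps = sent * 8 / (time.time() - start) / 1e9
    print(f"{gbps:.2f} Gbps over {sent / (1 << 30):.1f} GiB")

if __name__ == "__main__":
    # Run "server" on one end, "client <host>" on the other.
    server() if sys.argv[1] == "server" else client(sys.argv[2])
```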