I would generally separate backup and vMotion traffic, even though both types of traffic are bursty, event-driven and often idle for long stretches (in smaller, underutilized DRS clusters, vMotions tend to be infrequent).
If you're not interested in multi-NIC vMotion and one physical NIC is enough for your backup load, you could set up something like this (sketched in code after the list):
vmk0 Management vmkernel interface - vmnic0/active, vmnic1/standby (doesn't really matter here though)
vmk1 vMotion vmkernel interface - vmnic1/active, vmnic0/standby
VM backup port group - vmnic0/active, vmnic1/standby
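Scripted with pyVmomi, that layout could look roughly like the sketch below. Treat it purely as a sketch under my own assumptions: the helper name, the 'vSwitch0'/'VM Backup' names and the already-done connection and host lookup (`host` being a `vim.HostSystem`) are all placeholders, not anything from your setup.

```python
from pyVmomi import vim

def add_portgroup(host, pg_name, vswitch, active, standby,
                  lb_policy='failover_explicit', vlan=0):
    """Create a standard-vSwitch port group with an explicit uplink order."""
    teaming = vim.host.NetworkPolicy.NicTeamingPolicy()
    teaming.policy = lb_policy  # default: "Use explicit failover order"
    teaming.nicOrder = vim.host.NetworkPolicy.NicOrderPolicy()
    teaming.nicOrder.activeNic = active
    teaming.nicOrder.standbyNic = standby

    spec = vim.host.PortGroup.Specification()
    spec.name = pg_name
    spec.vlanId = vlan
    spec.vswitchName = vswitch
    spec.policy = vim.host.NetworkPolicy(nicTeaming=teaming)
    host.configManager.networkSystem.AddPortGroup(portgrp=spec)

# The backup port group from the list above, pinned to vmnic0:
# add_portgroup(host, 'VM Backup', 'vSwitch0', ['vmnic0'], ['vmnic1'])
```

The vmkernel interfaces would be created against the same kind of pinned port groups; there's an example of that in the vMotion sketch further down.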
But with 8 NICs available, I'd probably go for an example configuration like this:
Team1, Management+vMotion:
vmk0 Management vmkernel interface - both links active
vmk1 vMotion vmkernel interface 1 - vmnic1/active, vmnic0/standby
vmk2 vMotion vmkernel interface 2 - vmnic0/active, vmnic1/standby
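For the multi-NIC vMotion part of Team1, each vmk gets its own port group with the active/standby order inverted. A hedged sketch, reusing the `add_portgroup` helper from the first code block (IP addresses, subnet and names are again placeholders):

```python
from pyVmomi import vim

def add_vmotion_vmk(host, pg_name, ip, mask, active, standby):
    """One vMotion vmkernel port, pinned to a single active uplink."""
    add_portgroup(host, pg_name, 'vSwitch0', active, standby)
    nic = vim.host.VirtualNic.Specification()
    nic.ip = vim.host.IpConfig(dhcp=False, ipAddress=ip, subnetMask=mask)
    device = host.configManager.networkSystem.AddVirtualNic(
        portgroup=pg_name, nic=nic)
    # Tag the new vmk for vMotion so vMotion will actually use it
    host.configManager.virtualNicManager.SelectVnicForNicType(
        nicType='vmotion', device=device)

# Inverted failover order per interface, matching the list above:
# add_vmotion_vmk(host, 'vMotion-1', '10.0.1.11', '255.255.255.0', ['vmnic1'], ['vmnic0'])
# add_vmotion_vmk(host, 'vMotion-2', '10.0.1.12', '255.255.255.0', ['vmnic0'], ['vmnic1'])
```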
Team2, NFS:
In detail this depends heavily on whether you have multiple NFS arrays/target IPs, whether you can run Etherchannel/LACP, etc. In any case, at least two dedicated uplinks (vmnic2 and vmnic3) will go here.
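If you can run a static Etherchannel, the matching policy on a standard vSwitch is "Route based on IP hash"; with multiple target IPs and no channel, separate vmk ports on separate subnets are the usual alternative. With the helper from the first sketch, the IP-hash variant is just a different policy string ('vSwitch1'/'NFS' are assumed names, and the physical switch side is not shown):

```python
# Both NFS uplinks active, teamed by IP hash; this requires a matching
# Etherchannel on the physical switch, otherwise stick with active/standby.
add_portgroup(host, 'NFS', 'vSwitch1', ['vmnic2', 'vmnic3'], [],
              lb_policy='loadbalance_ip')
```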
Team3, VM public traffic:
VM public port groups - vmnic4 and vmnic5, both links active with load-based teaming (see the sketch after Team4)
Team4, VM backup traffic:
VM backup port group - vmnic6 and vmnic7, both links active with load-based teaming
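One caveat: load-based teaming ("Route based on physical NIC load") only exists on the distributed switch (and thus needs Enterprise Plus licensing), so Teams 3 and 4 would have to be dvPortgroups. A pyVmomi sketch of switching an existing dvPortgroup over, with the `dvpg` lookup left out:

```python
from pyVmomi import vim

def enable_lbt(dvpg):
    """Set 'Route based on physical NIC load' on an existing dvPortgroup."""
    teaming = vim.dvs.VmwareDistributedVirtualSwitch.UplinkPortTeamingPolicy()
    teaming.policy = vim.StringPolicy(value='loadbalance_loadbased')

    port_cfg = vim.dvs.VmwareDistributedVirtualSwitch.VmwarePortConfigPolicy()
    port_cfg.uplinkTeamingPolicy = teaming

    spec = vim.dvs.DistributedVirtualPortgroup.ConfigSpec()
    spec.configVersion = dvpg.config.configVersion  # required for reconfig
    spec.defaultPortConfig = port_cfg
    dvpg.ReconfigureDVPortgroup_Task(spec=spec)

# Call enable_lbt() for both the VM public and VM backup dvPortgroups.
```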
Note that this is just an example. It also depends a lot on whether you think you need 4 NICs for public/NFS traffic.
Btw, is there a specific reason why you have a dedicated backup network directly attached to your VMs through a 2nd NIC (if I understood you correctly) instead of leveraging virtualization/array-side backup solutions? Are you doing that for all VMs or just some special cases like databases or mailbox servers?