Switch CPU/fabric utilization on the pair of 5400zl's peaks at 2%.
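In case anyone wants to pull the same number on their own pair, the 5400zl will report CPU utilization straight from the CLI (this is the management CPU figure, which is the one I'd normally watch, rather than a per-module fabric counter):

ProCurve# show cpu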
~ # vmkping -s 8972 10.0.0.247 -d
PING 10.0.0.247 (10.0.0.247): 8972 data bytes
8980 bytes from 10.0.0.247: icmp_seq=0 ttl=64 time=1.082 ms
8980 bytes from 10.0.0.247: icmp_seq=1 ttl=64 time=0.614 ms
8980 bytes from 10.0.0.247: icmp_seq=2 ttl=64 time=0.809 ms
Each of the P4500 nodes responded as the above did.
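If the ESXi build has the -I flag on vmkping (I believe it showed up around 5.1), it's also worth sourcing the jumbo-frame ping from each iSCSI vmkernel port separately so a bad MTU on one port can't hide behind the other. vmk1/vmk2 below are just placeholders for however the iSCSI vmkernel ports are named:

~ # vmkping -I vmk1 -s 8972 -d 10.0.0.247
~ # vmkping -I vmk2 -s 8972 -d 10.0.0.247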
Current cluster is running 2x1Gb ALB per node, with 4 nodes at site A and 4 at site B in a multi-site cluster.
Checking via ESXTOP shows:
ADAPTR   PATH  NPTH  CMDS/s  READS/s  WRITES/s  MBREAD/s  MBWRTN/s  DAVG/cmd  KAVG/cmd  GAVG/cmd  QAVG/cmd
vmhba34  -     22    407.43  172.60   202.33    1.83      1.07      1.35      0.01      1.36      0.00
CMDS/s peaked at 415 but usually stayed under 200.
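For anyone reading along, that row is from the esxtop disk adapter screen (run esxtop and press d; the software iSCSI adapter shows up as vmhba34 on my hosts, yours may differ). The latency split is the useful part: DAVG/cmd is time spent out at the array, KAVG/cmd is time spent inside the VMkernel, and GAVG/cmd is the total the guest sees, roughly DAVG + KAVG.

~ # esxtop
# 'd' = disk adapter view, 'u' = per-LUN view, 'f' = add/remove fields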
After digging further into the Active/Active clustering of the P4500 nodes, I needed to take the LUN I/O balancing beyond plain round robin and have it switch paths after every I/O (IOPS=1) via:
for i in `esxcli storage nmp device list | grep '^naa.600'` ; do esxcli storage nmp psp roundrobin deviceconfig set -t iops -I 1 -d $i; done
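Two follow-ups worth doing here. The get sub-command confirms the IOPS limit actually took on every LUN, and since the per-device setting only covers existing volumes, new P4500 volumes either need the loop re-run or a SATP claim rule so they pick up round robin with iops=1 automatically (the LEFTHAND vendor string below is my assumption; check what esxcli storage core device list reports for your nodes before using it):

for i in `esxcli storage nmp device list | grep '^naa.600'` ; do esxcli storage nmp psp roundrobin deviceconfig get -d $i; done

esxcli storage nmp satp rule add -s VMW_SATP_DEFAULT_AA -V LEFTHAND -P VMW_PSP_RR -O "iops=1"

The claim rule only applies to devices claimed after it is added, so it won't retroactively change the existing LUNs (the loop above already handled those).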
Now the two iSCSI vmkernel ports are balancing more evenly. Peak CMDS/s is up to around 900 and the normal average is hovering slightly above 200. There is still a delay, but it is a little better than before.
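To watch that balancing directly, the quickest checks I know of are the iSCSI port binding list and the esxtop network view (vmhba34 is the software iSCSI adapter from the output above; the vmk names will be whatever yours are bound as):

~ # esxcli iscsi networkportal list -A vmhba34
~ # esxtop
# 'n' = network view; compare MbTX/s and MbRX/s on the two bound vmk ports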