A little brainteaser
When doing a pure NBD backup from a physical Veeam server to an ESXi 4.1 host, backing up a VM located on an EqualLogic SAN (all equipped with 10 GbE interfaces: the Veeam 5.0 B&R server, the ESXi 4.1 host and also the EqualLogic array), sometimes THIS appears in the EqualLogic logs:
INFO 02.12.10 16:39:38 10eql2 iSCSI session to target '172.16.150.234:3260, iqn.2001-05.com.equallogic:0-8a0906-cd5e5a007-ed2000000524c8f7-10eql1esxsata1' from initiator '172.16.150.35:59312, iqn.1998-01.com.vmware:esx12-27bd5df6' was closed. iSCSI initiator connection failure. Connection was closed by peer.
Four to six seconds later it reconnects:
INFO 02.12.10 16:39:43 10eql2 iSCSI login to target '172.16.150.234:3260, iqn.2001-05.com.equallogic:0-8a0906-cd5e5a007-ed2000000524c8f7-10eql1esxsata1' from initiator '172.16.150.35:60326, iqn.1998-01.com.vmware:esx12-27bd5df6' successful using standard-sized frames. NOTE: More than one initiator is now logged in to the target.
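As a quick sanity check on the "four to six seconds" claim, the gap can be computed directly from the timestamps in the two log lines above. This is just a minimal sketch; the two event tuples are abridged copies of the quoted log entries, and the `"%d.%m.%y %H:%M:%S"` format is my reading of the EqualLogic timestamp layout:

```python
from datetime import datetime

# Abridged copies of the two EqualLogic events quoted above
events = [
    ("02.12.10 16:39:38", "session closed by peer"),
    ("02.12.10 16:39:43", "login successful"),
]

# Assumed timestamp layout: dd.mm.yy HH:MM:SS
fmt = "%d.%m.%y %H:%M:%S"
closed = datetime.strptime(events[0][0], fmt)
relogin = datetime.strptime(events[1][0], fmt)

gap = (relogin - closed).total_seconds()
print(f"reconnect gap: {gap:.0f}s")  # -> reconnect gap: 5s
```

So for this particular pair of events the initiator was gone for 5 seconds, squarely inside the four-to-six-second window seen on other occurrences.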
Now this only happens during extremely high-bandwidth operations, e.g. when about 40% of the 10 GbE link is in use. It seems the ESXi 4.1 iSCSI software initiator can't take any more and fails for a very short period of time.
I wondered (as I have Enterprise Plus) whether it would help to create a dvSwitch, map it to the VMkernel port and enable NIOC on it. I did - BUT the problem persisted. The LUN is disconnected during the NBD backup job for a very short period of time and then reconnects. The 10 GbE NIC in the ESXi 4.1 host is an Intel dual-port card using the standard out-of-the-box ixgbe ESXi driver.
Any thoughts? And please: this is research. Don't tell me to use SAN mode - I am curious why this is happening here. And NBD mode via the vStorage API (which Veeam uses) should easily handle this kind of NBD traffic.
Best regards,
Joerg