VMware vSphere


dvSwitch problem on reboot


  • 1.  dvSwitch problem on reboot

    Posted Dec 07, 2011 05:10 PM

    Not my first rodeo when it comes to dvSwitches, but this is a problem I've never seen before and it's a nasty one.  I'm installing 5 into my test environment before upgrading my production 4.1.  Initial installation, configuration and reboots while the hosts are on a standard vSwitch are A-OK.  The hosts/guests are all able to communicate and the reboots are quick.  However, the minute I migrate over to the dvSwitch everything goes to Hell.  On a reboot the host will take forever to come back up (~10 minutes), and it always seems to be stuck at the point where it's scanning for the iSCSI datastores (different places depending on the host and whether it's using hardware/software iSCSI).  When the system eventually comes up to the console I have zero network connectivity.  The correct IP for vmk0 is listed but I can't ping it from any other entity.  When I go into the shell I can't ping or vmkping anything external to the box (it can ping its own vmk0 IP but that's it).

    If I restart the management network via the console, or use esxcli to "modify" vmk0/vmk1 (something as simple as setting the MTU to the same value it already has), my networking comes back and the host connects to vCenter again.
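    For anyone who wants the exact workaround, something along these lines from the host shell is enough to kick things back to life (1500 is just the MTU vmk0 already has on my hosts; substitute your own value):

        # "modify" vmk0 by re-applying its existing MTU...this nudges the interface back up
        esxcli network ip interface set -i vmk0 -m 1500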

    Obvious assumption: there's a problem with the networking.  But what?  And why only after I move to the dvSwitch?  I'll admit I've been impatient and haven't simply sat and waited to see if it comes back by itself, but there's obviously a limit to how long I can wait.

    Any ideas/solutions?  Thanks for your time!

    PS - I've been running dvSwitches in my 4.1 environment for a long time, never any problems like this.  Same hardware in both environments.

    PPS - it's definitely not a storage problem.  Rebooted a host that had been migrated to dvSwitch but hadn't yet had its iSCSI configuration put in place.  The reboot was "normal" speed (very quick...not hung up scanning for the iSCSI storage) but the networking was still gone and required a management network restart to function again.  My slow boot would seem to be a downstream effect of networking not coming up.



  • 2.  RE: dvSwitch problem on reboot

    Posted Dec 07, 2011 05:59 PM

    Check out the patch section for ESXi 5.0.  I downloaded and applied the patch and the ESXi 5.0 host now boots in 3 minutes instead of 10-20 minutes.

    This is the bulletin: http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2008017

    Patch portal: http://www.vmware.com/patchmgr/download.portal

    Frank



  • 3.  RE: dvSwitch problem on reboot

    Posted Dec 07, 2011 06:24 PM

    I'll give it a shot, Frank...but the patch seems to be more of a storage-discovery-timeout fix than anything aimed at the networking completely crapping out.  I'll post back after I try it.

    Thanks,

    Matt



  • 4.  RE: dvSwitch problem on reboot

    Posted Dec 08, 2011 04:28 PM

    Unfortunately that patch didn't help.  I reverted all the hosts to standard vSwitches (everything began working again) and applied the patch.  I then created an entirely new dvSwitch and began migrating the hosts over.  I added the pNIC/vmk to the port group that is set up for my storage network and rebooted a host.  The host came back up relatively quickly but had lost connection to the storage network.  iSCSI datastores for that host had dropped offline and I couldn't ping that pNIC anymore.

    I went into the dvSwitch and blocked/unblocked the port that vmk was assigned to...lo and behold, I regained network connectivity on that interface (could ping again and datastores came back).  For laughs I then added the management pNIC/vmk to the dvSwitch and rebooted again.  Now I've completely lost connectivity to that host.

    There's something wrong here but for the life of me I can't figure out what it is.  This all worked fine with the 4.x dvSwitch.



  • 5.  RE: dvSwitch problem on reboot

    Posted Dec 08, 2011 04:42 PM

    Well this is just amusing as hell.  I went into the physical switch and disabled/re-enabled the physical port that one of the pNICs is plugged into.  And everything began working again.

    This must be related to my network design (probably a bit sketchy on my part).  I'm thinking a VLAN/trunking issue of some kind.  If anyone has advice I'd love to hear it.  Beyond that...Frank, thanks for the suggestion, it was very much appreciated.

    Matt



  • 6.  RE: dvSwitch problem on reboot

    Posted Dec 08, 2011 04:47 PM

    Matt-

    Let me say wow...  I am glad the patch at least helped in that your hosts will no longer take forever to boot.  With regards to your network config, if it's a Cisco infrastructure I could lend you some eyes and hands on the configuration.  I am running 5 ESXi 5.0 hosts with vDS 5 and a standard switch just for management.

    If the VLANs and trunking are not set up properly, yeah, this will be a headache.

    Frank



  • 7.  RE: dvSwitch problem on reboot

    Posted Dec 08, 2011 05:09 PM

    I've got Cisco in production but this is just a test environment so I'm rolling with castoffs...an old Netgear managed switch. What's kicking my tail is that whatever is happening wasn't an issue with the 4.x dvSwitch and configuration...which was straight vanilla out-of-the-box (just like this). Setting up and configuring VLANs is something I'm weak on...looks like I'm about to learn it though. Frank, thanks again for the advice. I'm sure I can get it from here with a bit of RTFM. At least for the time being I've got a way to get the hosts back up and live.




  • 8.  RE: dvSwitch problem on reboot

    Posted Dec 08, 2011 05:16 PM

    Sounds good, Matt.

    Enjoy and learn from the lab.

    Frank



  • 9.  RE: dvSwitch problem on reboot

    Posted Dec 08, 2011 06:33 PM

    Well, I've figured out what's happening (keep in mind this worked OK in 4.x).  As this is a test system and all castoff hardware, I've got a bunch of hosts with 2 pNICs.  My dvSwitch is configured for 2 uplinks per host and has 2 port groups (VM and Storage).  In 4.x I had assigned...in teaming and failover...a single pNIC/dvUplink to each port group as appropriate to my switch configuration (1 active and 1 unused).

    What does that mean?  Well, I've got the pSwitch set up so ports 1-23 are for VM traffic and the uplink to the router (kind of a pseudo-VLAN) and ports 24-48 are for the iSCSI storage network (another pseudo-VLAN...keep in mind there's no tagging that I'm aware of, just the switch itself keeping these separate).  The idea is for the VM port group to only use vmk0/pNIC0 and for the storage port group to only use vmk1/pNIC1 (each being piped into the appropriate port set on the pSwitch).

    So, this all worked previously.  What I'm seeing at the moment (if I look in the MAC table on the switch) is that on reboot the MAC for the "storage" vmk1 is being fed into the port on the pSwitch that's configured for the VM network and the MAC for the VM vmk0 is being sent into the pSwitch port configured for the "storage" network.  This is exactly the opposite of the way teaming and failover is set up in the dvSwitch port groups and it's killing my host's connectivity.  iSCSI can't find configured targets because that group of ports on the pSwitch has no connectivity to them...same goes for the VM (and management) network, as it ends up in a group of ports that's all iSCSI boxes and nothing else.

    This seems to only be a problem during reboot, as once the system is up I can reset the pSwitch port and the correct vmk MACs show up in the address table and everything begins working again.
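    In case anyone wants to reproduce the check: I'm pulling the vmk MACs on the host side and comparing them against the pSwitch address table, roughly like so:

        # lists each vmk with its MAC address...these are the MACs that should
        # show up on the matching pSwitch ports after boot
        esxcli network ip interface list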

    Any ideas?



  • 10.  RE: dvSwitch problem on reboot

    Posted Dec 09, 2011 01:21 PM

    I understand what you are saying.  I wish I really had the spare equipment to set up what you are doing.  I can only ask again: what is the configuration on the physical switchport for trunking?  If you are sending VLANs down to VMware, some type of VLAN trunking must be done.  But that still does not explain the flip-flopping you described on the vCenter side.

    I did notice that when I reboot my host I also have to go back in and manually add the storage vmk ports back to the hardware iSCSI initiators.  Once that happens, things seem fine.

    Frank



  • 11.  RE: dvSwitch problem on reboot

    Posted Dec 09, 2011 02:27 PM

    I think the issue was mostly my switching skills (lack thereof).  Coupled with the horrific documentation for an older Netgear switch, it resulted in a lot of confusion around how to set up VLANs in this particular device.  Some educated guessing and a lot of experimentation (more coffee for God's sake!) seems to have gotten things squared away, or at least mostly workable.

    Boot times are back to what they should be and my connectivity is back up while on a dvSwitch...so I'm happy for the moment.  I'm still puzzled at the change in behavior from 4.x to 5 in terms of how the dvPortGroups were firing up.  Given that I'd specifically assigned NICs as active or inactive for a given portgroup/vmk I would not expect to see the MAC of that portgroup/vmk being sent down the inactive NIC to the switch during startup...but that's what was happening (and boy did my switch not like that).

    Thanks again for the counsel Frank, it was most appreciated.



  • 12.  RE: dvSwitch problem on reboot

    Posted Dec 09, 2011 02:30 PM

    I don't use the active/standby model for my vDS.  I like to put my NICs in an EtherChannel.

    Not sure how much counseling I have done, but you are welcome.  I always enjoy chatting with other engineers about how they do things in their own environments.

    Frank



  • 13.  RE: dvSwitch problem on reboot

    Posted Dec 08, 2011 08:55 PM

    I'm having what appears to be the same symptoms but with slightly different conditions. I took a fresh install of ESXi 5 and configured a standard vSwitch and dependent hardware iSCSI adapters (Broadcom 5709s) connected to a new iSCSI target (Drobo B800i). After completing all that configuration according to the ESXi 5 Storage Guide, it appeared to be working. But a reboot caused a very long delay in booting, and more importantly, once it finally did boot I have zero connectivity to the host... cannot even ping it, yet the physical layer is connected with all lights showing link and even some activity. My host is connected to an HP ProCurve 2910al-48g-POE, but the iSCSI SAN ports are connected directly to the physical adapters on the server. The only solution I found was to reset to the default config, after which I have connectivity again. This is the start of a new setup, so the only thing above the factory config is the iSCSI config. So the problem has to be something related to those steps (creating the vSwitch/kernel adapters, binding the iSCSI adapters and the kernel adapters, setting dynamic discovery, setting CHAP, or creating the new datastore)... or some issue with those steps and the hardware.

    We do have a simple VLAN structure in place on the switch. Do I have to set the VLAN settings in the host or in the adapters when I create them? I believe I just left those at the default of none.

    Anyone have any more info or thoughts on the problem given this new info?

    Thanks,

    DJ



  • 14.  RE: dvSwitch problem on reboot

    Posted Dec 08, 2011 10:17 PM

    DJ, if you happen to try again and get into the loss of connectivity, try two things for me and let me know what happens (both of them post-reboot when you've lost connectivity)...

    1. In the host console, do a simple restart (not reset) of the management network.  Connectivity back?

    2. In the switch (the HP) management interface, do an up/down of the switch port the host management network is plugged into.

    Either of these would "cure" my problem until the next reboot.  Which I don't view as any kind of solution or long-term fix, but was at least educational in terms of a quick workaround.
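    If you'd rather do #1 from the shell than the console menu, bouncing the vmk should be equivalent (run it from the local console, not SSH, since it drops the interface out from under you):

        # disable and re-enable the management vmk...same effect, as far as I can
        # tell, as "Restart Management Network" in the console
        esxcli network ip interface set -e false -i vmk0
        esxcli network ip interface set -e true -i vmk0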



  • 15.  RE: dvSwitch problem on reboot

    Posted Dec 09, 2011 02:38 PM

    DJ-

    To solve your long reboot wait times, check out my second post in this topic.  When you patch, that will solve one problem.

    For your iSCSI connections, can you describe in more detail how many pNICs you have coming out of your ESXi host to your iSCSI storage?  I am assuming you are running dedicated iSCSI over these?

    For your other connections, i.e. VM traffic, Fault Tolerance, vMotion...if these are running through your HP switch then yes, you need to configure the HP ProCurve for VLAN trunking.  For example:

    vlan500 = VM Traffic 1

    vlan501 = VM Traffic 2

    vlan502 = vMotion

    vlan503 = Fault Tolerance

    All of these need to be trunked and sent down to the ESXi host, where you can configure an individual NIC for trunking or multiple pNICs for teaming and trunking...
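    On the ESXi side the tagging itself is simple; for a standard vSwitch port group it's a one-liner per VLAN, something like this (the IDs above are just examples, and on a vDS you set the VLAN ID in the port group settings through vCenter instead):

        # tag the port group so the host sends/expects 802.1Q frames for VLAN 500
        esxcli network vswitch standard portgroup set -p "VM Traffic 1" -v 500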

    Frank



  • 16.  RE: dvSwitch problem on reboot

    Posted Dec 09, 2011 02:40 PM

    DJ, did you apply the patch that Frank linked above?   I'd give that a shot first.  Besides the other items I mentioned in terms of trying to temporarily restore connectivity, check the MAC table on the switch during and immediately after boot.  Is the MAC address for the iSCSI vmk showing up in your switch MAC table?  (Check the port that the host is plugged into)



  • 17.  RE: dvSwitch problem on reboot

    Posted Dec 09, 2011 04:23 PM

    Matt and Frank,

    I have not applied the patch yet b/c I'm still trying to figure out how to do that! I have more than a few years experience in IT but am brand new to virtualization -- plus I'm going in many different directions right now, also trying to work with Drobo on these iSCSI issues. I didn't see anything in the vSphere Client or in the patch areas of the VMware website about how to patch. I've found old docs for versions 3 and 4, but there doesn't seem to be much for 5. I did find and have been researching their Update Manager, but their documentation makes it sound like you have to have vCenter running. I just have a single host with ESXi, so I'm wondering how that's going to work. I know that's not the point of this thread, but if you have a simple answer, that'd be great.

    More to the point. I did recreate the problem, and as I look at my ProCurve MAC table, there are 2 different MAC addresses listed for the port to which my host is connected (but I only have ever had one configured for that as far as I remember). Disabling and re-enabling the port does not fix the problem for me. Both MAC addresses still show in the table even after that disable/enable. Was that not the case with you? Also, I had previously tried restarting the management network on the host console, and I also just tried it again after the disable/enable, and that does not help.

    I'm also looking into the VLANs some more, but we just have the VMware Essentials pkg and will not be doing any of the advanced stuff Frank mentioned. Our VLANing is simple at this point and is mainly to be able to block, in the router, access from a wireless, internet-only VLAN to our main corporate VLAN. So we have a trunk port to the router and then all the rest of the ports are on our main, "corporate" VLAN.

    One thing I might try is setting the subnet for my iSCSI vSwitch to be totally different than my management network. Currently, they are all on the same subnet. Any idea what's common / best-practice for that?

    Thanks in advance for any additional help.

    DJ



  • 18.  RE: dvSwitch problem on reboot

    Posted Dec 10, 2011 03:50 PM

    For whatever reason the response I emailed in last night didn't show up, so this is just a replicated version:

    DJ,

      Patching is a snap, even without vCenter or Update Manager.  In a nutshell...

    1. Download the patch (it's typically a .zip file; don't extract it) to your PC, then use the VMware client to access the host.
    2. Pick a local datastore (i.e. one of the local drives on your host) and "browse" it...then simply upload the .zip file you downloaded to that datastore (you'll see some buttons in the pop-up window that will do the upload/download).
    3. Now you'll need to enable the ESXi shell (for local "console" access) or SSH on the host so you can PuTTY into it.  Either one works.
    4. Log into the host via shell/SSH...now you're going to do some esxcli trickery.
    5. Type "esxcli software vib install -d [name-of-datastore]name-of-package.zip" (keep the brackets around the datastore name, with no space between the closing bracket and the package name).

      The update will run and when complete will tell you to reboot.  Done.  Congrats, you just did fairly geeky VMware stuff.
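    If the bracketed datastore form gives you any trouble, the full filesystem path form of the same command should work too (the datastore and package names below being placeholders for whatever you downloaded):

        # same install, pointing at the patch via its /vmfs/volumes path
        esxcli software vib install -d /vmfs/volumes/name-of-datastore/name-of-package.zip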

      On the VLANs/subnets...it's been my experience that when you're using something like iSCSI you keep the "storage network" separate from the production network.  I do it by using VLANs.  Depending on your workload it might not be absolutely necessary but IMO it's good practice for performance and security reasons.  For some folks the VLAN isn't enough and they'll actually go to separate physical switches (obviously FC requires different switches period).

    DJ, are you using 1 switch or 2 in your VMware host (not physical switches...virtual switches)?  Set up 2 simple switches...vSwitch0 with vmk0 and nic0 (that'll be your management network) and vSwitch1 with vmk1 and nic1 and the iSCSI initiator (that'll plug into your iSCSI box).  If you're currently using just 1 vSwitch I'm betting that going to 2 will clear your problem.
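    If you'd rather build that second switch from the shell than the client, the sketch looks roughly like this (vSwitch1/vmnic1/vmk1 and the "iSCSI" port group name are just the names I'd use; adjust to your host):

        # create the storage vSwitch and hang the second pNIC off of it
        esxcli network vswitch standard add -v vSwitch1
        esxcli network vswitch standard uplink add -u vmnic1 -v vSwitch1
        # add a port group and a vmkernel interface for the iSCSI traffic
        esxcli network vswitch standard portgroup add -p iSCSI -v vSwitch1
        esxcli network ip interface add -i vmk1 -p iSCSI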

    I'm not sure what the "fix" will be for you...but between the 3 of us I'm not comfortable with the behavior I'm seeing in the dvSwitch.  This wasn't happening to me (well, I didn't have this problem at least) with 4.x dvSwitches.

    Good luck!



  • 19.  RE: dvSwitch problem on reboot

    Posted Dec 12, 2011 09:58 PM

    Hi, Matt,

    Thanks for the patch command -- worked great. I'm now on the latest build. I'm definitely a PuTTY fan.

    I did notice on this new build that during boot, the time spent (apparently) looking for iSCSI targets is significantly reduced... from 20 minutes to about 5 minutes or less, in my case. Unfortunately, though, this patch did not resolve my loss of connectivity problem upon reboot after configuring iSCSI.

    Maybe it has something to do with how I'm setting it up. But I really feel like I've done everything according to the VMware storage guide.

    As far as the vSwitches... I do have 2 different ones, one for my management network and one for the iSCSI adapters. My question was whether or not I should have 3... 1 for management and 2 for iSCSI, where each of the iSCSI ones has only one adapter on it. The VMware guide says it's optional either way. I've been putting both my adapters in the same switch. I will try creating a separate switch for each adapter. I want 2 adapters set up b/c my Drobo B800i has 2 iSCSI ports, so I want both for failover (it doesn't support link aggregation, unfortunately).

    So I gave those ports on the Drobo each a network address and my 2 VMware host adapters each an address, all on the same subnet, which is also the same subnet as my management network (but the Drobo is connected directly to the server and not through the pSwitch). The only physical connection to the physical switch is the management adapter. I thought it was best for troubleshooting to leave out any physical switch between the host and the Drobo... but does anyone see any problem with this arrangement, specifically anything that would make the host lose all connectivity after a reboot?

    Thanks,

    DJ



  • 20.  RE: dvSwitch problem on reboot

    Posted Dec 12, 2011 11:29 PM

    I'm glad the patch went in smoothly and reduced your boot time.  I think once the connectivity issue is resolved you'll see it drop even further, as my experience leads me to believe the delay is the host attempting to enumerate the iSCSI targets that it knows should be there...but that it can't find due to the connectivity issue.

    At this point I'd suggest you take the host to the simplest configuration that should work...a single vSwitch that has the VM network (for your guests) and the management network, operating off of vmk0 and vmnic0 (presumably).  This would be the vSwitch with connectivity to your pSwitch.

    I'd then take the remaining vSwitch (the one with 2 pNICs/vmnics and 2 vmks) and remove the second pNIC/vmnic and second vmk.  I'd even yank the second network cable (knowing myself, it'd probably be in a fit of rage).  Get that "storage" vSwitch into the most simple configuration possible...then reboot the host and see what you get.  My experiences from earlier in this thread would suggest that things will come up fine (my issue seemed to be centered around multiple vmk MACs being fed down the same pNIC/vmnic with no VLAN setup).
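    If it helps, the strip-down itself is just a couple of shell commands (vmk2/vmnic2 being whatever your second storage interfaces are actually called on your host):

        # pull the second vmkernel interface and the second uplink off the storage vSwitch
        esxcli network ip interface remove -i vmk2
        esxcli network vswitch standard uplink remove -u vmnic2 -v vSwitch1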

    I think ultimately the shortest path to what you're looking for may be that third switch...it's also entirely possible there's a way to do it with just two switches and some setting(s) in the vSwitch that I've not had to use personally (I'm thinking the routing or perhaps failover order).  I got mine to go with VLANs...but since you're plugging directly into the Drobo I don't think that's going to work for you.

    I'll continue to help as much as I can but you may need someone with more depth of experience with the redundant NICs.



  • 21.  RE: dvSwitch problem on reboot

    Posted Dec 13, 2011 10:07 PM

    Yes, I have cut it down to the bare minimum. Here's all I'm doing:

    1. Create a vSphere standard switch, select the NIC already wired to the Drobo, set switch IP addr and subnet (192.168.1.7, 255.255.255.0) (gw is pre-set, of course, with the GW of the management network, which is 192.168.1.254) (BTW, that mgmt adapter, which is on its own vSphere standard switch, is 192.168.1.2).

    2. Bind the dependent hardware storage adapter that corresponds to the Drobo NIC in step 1 to that NIC (the esxcli equivalent is sketched after this list).

    3. Set the Dynamic Discovery settings of that storage adapter, listing the IP of the Drobo port that's connected (192.168.1.4, same subnet and gw as above).

    4. I was trying it with CHAP, but for now I have that off everywhere.
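    For reference, the binding in step 2 is the equivalent of the following from the shell (vmhba33 here is a placeholder for whichever Broadcom vmhba it actually is on my box):

        # bind the vmkernel NIC to the dependent hardware iSCSI adapter
        esxcli iscsi networkportal add -A vmhba33 -n vmk1
        # confirm the binding took
        esxcli iscsi networkportal list -A vmhba33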

    What I find is that the host either does not find the Drobo at all, or I did have it find it and report a status of "mounted" only for it to go into this weird cycle problem a few seconds later, where it would go to standby during which time the host reports the status as "Dead or Error". It would cycle in and out of those two states on an approximately 10-14 second interval.

    That's the network/storage side of my problem. Then, if I reboot after doing the above, my host still becomes totally inaccessible via the management interface.

    What am I doing wrong? Something with the IP addresses? Again, I need to stress that the Drobo is directly connected to the server (with a single cable), not through a physical switch. The only connection to a physical switch is the management interface.

    Thanks,

    DJ



  • 22.  RE: dvSwitch problem on reboot

    Posted Dec 14, 2011 12:09 AM

    DJ,

    This may just be my opinion...but I'm not wild about having both vmks on the same subnet.  I'd leave the management switch/vmk/etc. as-is (presumably that's the actual subnet required for you to get around your network), then change the "storage" vSwitch/vmk/etc. over to 192.168.2.x, along with changing the Drobo IP as well.  The GW will be inconsequential (it'll probably still show the mgmt. GW IP) as the iSCSI is wired with a crossover cable and not going off-network.
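    Moving the storage vmk over is a one-liner if you do it from the shell (the address below is just an example in the new range):

        # re-IP the storage vmk onto its own subnet, away from management
        esxcli network ip interface ipv4 set -i vmk1 -t static -I 192.168.2.2 -N 255.255.255.0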



  • 23.  RE: dvSwitch problem on reboot

    Posted Dec 15, 2011 09:57 PM

    Hi, Matt,

    Yes, I had the same thought and had tried that. To be sure, I tried it again, but no better results. However, I did make some progress...

    I narrowed the problem down to the dependent-hardware iSCSI adapters (i.e., the chips on the Broadcom 5709s). I'm sure you know, but just for the benefit of others who might be reading... these are the vmhbaXX devices listed under "Broadcom iSCSI Adapter" in the "Storage Adapters" area. If instead I create a software iSCSI adapter, and bind that to the NIC, then the Drobo is happy and rebooting causes no issues. Everything else is the same as far as network addresses, my vSwitch, etc.
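    For anyone following along, flipping over to the software initiator amounts to something like this from the shell (the vmhba numbering will vary per host):

        # enable the software iSCSI initiator; it appears as a new vmhba
        esxcli iscsi software set --enabled=true
        # list the adapters to find the new software vmhba alongside the Broadcom ones
        esxcli iscsi adapter list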

    So, why in the world would those cause the iSCSI storage to not work properly as well as making the server totally unreachable after a reboot?! Is there a way inside vSphere to check for the latest drivers for those cards? I'll start with that tomorrow. Any other ideas?

    Thanks,

    DJ



  • 24.  RE: dvSwitch problem on reboot

    Posted Dec 16, 2011 01:00 PM

    DJ,

    First, let me just put out there that I've not used 5709s (so I have limited familiarity with them) and I'm not suggesting that they won't work...but my understanding is that they're actually fairly limited devices when it comes to iSCSI.  The first thing I'd do would be to check the VMware HCL regarding their compatibility/supportability in the iSCSI role.  There do appear to be some driver updates for the Broadcom devices...you can find these in the vSphere download section of the VMware site...so you might try updating the drivers to see if that gets you any further.

    I think something to keep in mind is that there's a difference between being able to do some iSCSI offload (which I think the 5709 can do) and being a full-fledged iSCSI HBA (I don't think the 5709 fits that description).  It may be that what's required of the 5709 during boot isn't something it's capable of due to these limitations (I'm just speculating here).  There are a lot of Google results around "VMware Broadcom 5709"...I'd start there.  FWIW, in my experience the software iSCSI initiator in VMware does an excellent job; it's probably time to start weighing just how much more time you want to invest in making the 5709 work vs. just going to the software initiator and being done with it, particularly if this system won't require mission-critical performance.
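    Before hunting for updates it's worth checking what driver bits you're actually running; something like this should show it (as far as I know the 5709 shows up under the bnx2/bnx2i modules, but don't hold me to the exact VIB names):

        # driver name in use per vmnic
        esxcli network nic list
        # installed Broadcom driver VIB versions
        esxcli software vib list | grep -i bnx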



  • 25.  RE: dvSwitch problem on reboot

    Posted Dec 16, 2011 07:22 PM

    I'm not that familiar with them either, but VMware has 2 entries for this card in their IO compatibility list, one for the network part and one for the iSCSI part ( http://www.vmware.com/resources/compatibility/detail.php?deviceCategory=io&productid=18683&deviceCategory=io&partner=12&releases=76&keyword=5709&page=1&display_interval=10&sortColumn=Partner&sortOrder=Asc ). So they are definitely certified.

    Also, from the following paragraph from the vSphere Storage Guide, it sure sounds like it is fully supported:

    "An example of a dependent iSCSI adapter is a Broadcom 5709 NIC. When installed on a host, it presents its two components, a standard network adapter and an iSCSI engine, to the same port. The iSCSI engine appears on the list of storage adapters as an iSCSI adapter (vmhba). Although the iSCSI adapter is enabled by default, to make it functional, you must first connect it, through a virtual VMkernel interface, to a physical network adapter (vmnic) associated with it. You can then configure the iSCSI adapter. After you configure the dependent hardware iSCSI adapter, the discovery and authentication data are passed through the network connection, while the iSCSI traffic goes through the iSCSI engine, bypassing the network."

    I did find out at least part of why my storage configuration is not working for me. I mentioned the info in my last post to the Drobo agent I was working with, and he said, "What, you're trying to use hardware adapters with the Drobo -- we don't support that at all!" After I got up off the floor, I started asking myself the same questions you mentioned. Is the loss of main CPU power significant enough for this to be a deal-breaker for me? Any CPU consumed by iSCSI processing is CPU I don't have later down the road as the number of my VMs grows... I feel sort of cheated, if you know what I mean... not to mention I paid extra for that iSCSI TOE.

    In my mind I'm asking, "Is it possible that, even if I use a software iSCSI adapter w/ my Drobo, the iSCSI TOE chip on my BCM5709C will somehow still offload some of the iSCSI processing?" I don't understand the hardware well enough to know. Seems like a "No", but I'm hoping the answer is that it will still offload some, just not as well as a true hardware HBA. I think I could live with that. But I read that truly software-only iSCSI can consume up to 500MHz of CPU ( http://www.sanstor.info/5iSCSI%20software%20initiators%20vs.pdf ), and that's a bit scary when I only have 3GHz to work with.

    Let me know what your thoughts are. I don't really know how to define "mission-critical performance." This will be our production server environment, not just backup. I'm not running a datacenter, but I plan to have 3-4 VMs and up to 80 people on it for various apps (no huge DBs or Exchange, but domain control, file serving, remote desktop hosting, a financial app, and various other smaller apps). We are a non-profit, so the Drobos fit our budget, but I don't want to have to replace them later due to poor performance.

    Thanks for all the help.

    DJ



  • 26.  RE: dvSwitch problem on reboot

    Posted Dec 16, 2011 07:44 PM

    Have no idea if this is valid at all... I just bound the BCM5709 iSCSI HBA to the same vSwitch the software adapter is bound to (and to which my Drobo is connected), and it seemed not to be a problem. Do you think the hardware is just being ignored in favor of the software initiator, or is it possible that this is how I can use that hardware to offload the iSCSI processing? Anyone know?



  • 27.  RE: dvSwitch problem on reboot

    Posted Dec 16, 2011 08:21 PM

    I'm actually a bit lost inside your configuration at this point, DJ...in terms of not really understanding how you've got everything put together.  Some screenshots might help.  I'm thinking:

    Storage adapters:

    1. Overall view.

    2. The properties pages of the vmhba you're trying to use (General and Network Configuration in particular).

    Networking:

    1. Overall view.

    2. Properties of the "Storage" vSwitch (including Ports and Network Adapters pages).

    Also, have you broken the switches into separate subnets yet?



  • 28.  RE: dvSwitch problem on reboot

    Posted Dec 16, 2011 09:32 PM

    Understandable. Although it might not seem like it, it's actually still very simple. One vSwitch for the management network. I created a second and associated a second network card with it. Then I created a software iSCSI adapter and bound it to that second vSwitch. At that point I could access my iSCSI storage (Drobo B800i), and all was happy... except that I'm using a software iSCSI adapter instead of the hardware ones that are installed and available. Just to try it, I then bound the associated hardware adapter to the same vSwitch alongside the software adapter, and vSphere did not complain, and I can still transfer files to and from the datastore on that target. What I don't know is if that last step actually did anything functionally or if it is just being ignored.

    I understand your question about having to use software adapters, b/c I was confused about the same thing. After reading and re-reading the VMware storage guide, I am just about convinced that what I was trying earlier, by NOT creating a software adapter but instead using one of the hardware adapters present in the list, is valid and should have worked. Drobo doesn't support it, so that's part of the problem. But even w/o the Drobo attached, the host still had the problem whereby it would lose all connectivity after a reboot with that config in place. Subnets were irrelevant. I believe that could be a bug, as Frank was possibly finding out, and as someone in this thread obviously believes: http://communities.vmware.com/message/1583579

    I just noticed another patch was released yesterday. Will definitely try that. But really, now my question is about adapters and performance.

    I have to run, but I'll try to add screenshots Monday if it's still unclear. I did use different subnets, but that did not make a difference... it was all about the software vs. hardware iSCSI adapters.

    Thanks,

    DJ



  • 29.  RE: dvSwitch problem on reboot

    Posted Dec 11, 2011 09:49 AM

    Thread has been moved to the vNetwork area.