
Fibre Channel (SAN)


4GB Ports Bottlenecking at 1GB

I have some older Solaris 10 (Sparc) servers (not super old - about 3 years) that are having some issues.

I've recently done 2 things.

Upgraded FOS on all my switches (all silkworms) to 6.3.1a.

Backups and production data used to run on separate HBAs. The most I ever saw any HBA reach was about 15-20% utilization, so I consolidated both onto the same HBAs to free up some ports.

I've enabled bottleneckmon on all my ports since upgrading to 6.3.1a
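For reference, enabling it per port looks roughly like this in the FOS 6.3.x CLI (a sketch from memory - check your release's command reference for the exact flags):

```shell
# Enable bottleneck detection with alerting on port 5
# (FOS 6.3.x-era syntax; thresholds left at defaults here)
bottleneckmon --enable -alert 5

# Show which ports are being monitored and their current config
bottleneckmon --status
```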

We ran backups on some systems at 4:00 Friday afternoon (a normal occurrence - no issues in the past).  Response times went WAY up.  I checked in DCFM and saw the ports only using about 25% of their available bandwidth (around 100MB/s).  Bottleneckmon was triggering alerts, and tim_txcrd_z counters on some ports were increasing very rapidly.

The main ports having issues were 5, 6, 7 -  We narrowed it down to port 5 as the one doing the most damage.  To test, we ran backups on ports 5, 6, and 7 separately.  When we ran backups on port 5, it affected the host on port 6, causing its response times to go up.

I logged an SR with Brocade about this and they seem to think it's HBA related.

All of the ports are logged in as F_Ports

All but one are 4 GB (port 7 - 2 GB)

We use Emulex HBAs and EMC CLARiiON Storage.

Our production data and backups reside on separate CLARiiONs.

I've checked the I/O on the disks and didn't see a lot of utilization. The data on each of these hosts is spread across 70 - 90 spindles.

The traffic isn't spanning any ISLs - These are limited to 2 Silkworm 4100s on separate fabrics.

The ports are configured in G_Port mode

I'm seeing some loop-based traffic on some of these ports too. (Port 5 Example)

open 3777274872 loop_open

opened 3634372157 FL_Port opened

starve_stop 172093031 tenancies stopped due to starvation
fl_tenancy 2312917158 number of times FL has the tenancy
nl_tenancy 2114979339 number of times NL has the tenancy
zero_tenancy 16271236 zero tenancy

I'm also seeing lots of buffer credit issues.

Should I maybe change the Topology in my Emulex HBAs from "Loop then Fabric" to "Fabric then Loop"?

Any ideas would be appreciated.


Re: 4GB Ports Bottlenecking at 1GB

1. Which FOS were you running earlier?

2. Consolidating traffic is a risky move. Although it works in most cases, you may see an increase in IO response times. This is because the IO is of a different nature: backup IO tends to come in large chunks, while random IO comes in small chunks. Interleaving works in most cases, but not in all.

Although a lack of BB credits on an F_Port rarely matters, problems caused by mixing the traffic cannot be ruled out. If possible, try separating the traffic again and observe whether it makes any difference. Based on your answer to Q1, I may have another suggestion to make.


Re: 4GB Ports Bottlenecking at 1GB

We were running 6.2.0f.  We updated Fabric A one week, then Fabric B two weeks later.

Even running everything on the same HBAs, I should still be getting more than 1 GB of bandwidth per HBA without it completely killing my fabric, especially when I'm running on 90 spindles that aren't shared by any other hosts.

I just spoke with our UNIX team as well, and there's one thing I'm suspecting more and more, though they don't seem to agree.

The servers that have been giving us issues since the change are all older.  Our M5000s don't seem to be affected.

The server that was really bogging everything down is a Sun Fire V890.  All of its HBAs are plugged into a single 66 MHz, 64-bit bus.  I wonder if running everything off the same HBAs was overloading the bus?  One 64-bit, 66 MHz bus is capable of about 533 MB (megabytes) per second.

If my TX was 120 MB/s and my RX was 120 MB/s per HBA, that would total 480 MB/s going through that same bus.  Let me know your thoughts.  The only reason I can think of why the separate HBAs worked better is that with the HBAs sharing the same bus, backup data has the same priority on the bus as production data.  With separate buses, the backup bus could still be overloaded but production data can still get through.
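That bus arithmetic can be sanity-checked quickly (a sketch; 528 MB/s is the theoretical peak of a 64-bit, 66 MHz bus, and real PCI throughput is lower due to protocol overhead):

```shell
# A 64-bit bus moves 8 bytes per clock, so at 66 MHz its ceiling is:
bus_bytes_per_sec=$(( 8 * 66000000 ))    # 528,000,000 B/s (~528 MB/s)

# Two HBAs each pushing ~120 MB/s TX + ~120 MB/s RX through the
# shared bus totals:
hba_load_mb=$(( 2 * (120 + 120) ))       # 480 MB/s - close to the ceiling

echo "bus ceiling : $(( bus_bytes_per_sec / 1000000 )) MB/s"
echo "HBA load    : ${hba_load_mb} MB/s"
```

At 480 MB/s of a ~528 MB/s ceiling, the bus is running at over 90% of theoretical peak, which is consistent with the host being unable to return R_RDYs fast enough.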


Re: 4GB Ports Bottlenecking at 1GB

I also went ahead and ran another backup using the different HBAs.  That didn't actually fix the issue.  Response times on all LUNs owned by SPA went up to around 100 ms (from 4-8 ms), and I was getting 100% bottleneck alerts from bottleneckmon.

The separate HBAs didn't seem to fix it.  They are on the same fabric now, whereas before they were on separate fabrics.  It may have been an issue before, but no one noticed because the other (Windows) fabric isn't as response-time sensitive.

I've also opened an SR with EMC now.


Re: 4GB Ports Bottlenecking at 1GB

>> I also went ahead and ran another backup using the different HBAs.

Do you mean that you reconfigured the disk and tape devices and bound each to its own HBA? To really isolate the difference made by sharing an HBA, you would have to zone the tape devices to one HBA and the disks to the others, then reconfigure the device files on the host to reflect the change in controllers. Otherwise, in my opinion, there will always be interleaving of frames, or at least a sense of confusion for the host.

One suggestion I'd make, without much reasoning behind it, is to turn off QoS on these ports (portcfgqosdisable) - this feature sometimes seems to create strange problems for UNIX hosts.
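For the ports in question, that would look something like this (a sketch; on older FOS releases the command takes one port per invocation):

```shell
# Disable QoS on the affected F_Ports
portcfgqosdisable 5
portcfgqosdisable 6
portcfgqosdisable 7

# Verify - the QOS row should now read OFF for those ports
portcfgshow
```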


Re: 4GB Ports Bottlenecking at 1GB

I figured saying "I moved them to different HBAs" was enough, and that everyone would know what that meant.  Sorry.

But yes, I changed my zoning and updated the host information (device files, PowerPath config, etc.).  The backups are going to separate HBAs.  It doesn't seem to fix anything, though.

These backups are going to another CLARiiON, not to tape.

When I run backups on one host (port 5 - data, port 28 - backups), everything on that fabric slows to a crawl.  I see response times on my production CLARiiON (only SPA) go way up.

I've logged an SR with EMC to find out why only SPA slows.

Even throughout the day I've been getting bottleneckmon alerts for this particular host when backups aren't running.

This is port 5.  I cleared the port stats, waited 10 seconds, then ran portstatsshow:

stat_wtx                111250728   4-byte words transmitted
stat_wrx                121300      4-byte words received
stat_ftx                217193      Frames transmitted
stat_frx                3852        Frames received
stat_c2_frx             0           Class 2 frames received
stat_c3_frx             3852        Class 3 frames received
stat_lc_rx              0           Link control frames received
stat_mc_rx              0           Multicast frames received
stat_mc_to              0           Multicast timeouts
stat_mc_tx              0           Multicast frames transmitted
tim_rdy_pri             0           Time R_RDY high priority
tim_txcrd_z             372307      Time BB credit zero (2.5Us ticks)
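Reading that 10-second sample back (a quick sketch; MB here means 10^6 bytes):

```shell
# stat_wtx counts 4-byte words, so bytes transmitted in the interval:
tx_bytes=$(( 111250728 * 4 ))                  # 445,002,912 bytes
tx_mb_per_sec=$(( tx_bytes / 10 / 1000000 ))   # ~44 MB/s sustained TX

# tim_txcrd_z is in 2.5 us ticks, so the time the port sat at zero
# BB credit during the 10 s window was:
awk 'BEGIN { printf "zero-credit time: %.2f s of 10 s\n",
             372307 * 2.5 / 1000000 }'
```

So the port spent roughly 0.93 s of every 10 s (about 9%) unable to transmit for lack of credits, while pushing only ~44 MB/s - well under the link rate, which fits the slow-drain picture.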


Re: 4GB Ports Bottlenecking at 1GB


tim_txcrd_z 372307 Time BB credit zero (2.5Us ticks) - do a portstatsclear and see whether this value keeps increasing. If so, it points to a slow-drain device. You may have to upgrade the HBA firmware; check the compatibility matrix.


Re: 4GB Ports Bottlenecking at 1GB

I did clear the port stats.  That was for 10 seconds.

I do have a slow-draining device: port 5.

I fixed it (sort of).  I reduced the number of paths to our backup CLARiiON from 4 to 2.  I'm not load-balancing between HBAs anymore.  I guess I could have just set PowerPath not to load-balance.

We ran a backup with 4 ports active.  Response times went way up immediately.

I removed 2 of the paths (removed them from active zoning config) while the backups were running and immediately response times got better.

We also tried disabling the other 2 paths (after enabling the 2 that were removed) to see if maybe it was an issue with a single path.  It didn't seem to matter.

4 paths on this one host seemed to kill my entire fabric.

I really think now that it's because it's a beefy server (Sun Fire V890) with lots of memory and CPUs, yet all 4 HBAs are attached to a single 66 MHz, 64-bit bus.

The max on a 66 MHz, 64-bit PCI-X bus is 533 megabytes per second.  TX/RX on those ports totaled about 500 MB/s.  I think the host was trying to send at 4 Gb (the ports are logged in at 4 Gb), but the bus was limiting it, causing bottlenecks and R_RDYs not to be sent.

Reducing the paths to backups effectively cut my bandwidth down so now I'm not maxing it out.

Has anyone else experienced something like this?

I still get lots of zero buffer credits for that port (even during the day when backups aren't running).  I'm thinking about either disabling load-balancing on my multipathing software, or hard setting my switch port to 1GB or 2GB.
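If you do hard-set the switch port speed, it would look something like this (a sketch; the port number and target speed here are just this thread's example):

```shell
# Lock port 5 to 2 Gb/s instead of the autonegotiated 4 Gb/s
# (portcfgspeed <port> <speed>, speed in Gb/s; 0 = auto-negotiate)
portcfgspeed 5 2

# Confirm the new negotiated speed
portshow 5
```

Capping the port speed throttles the host at the link rather than letting frames pile up waiting for credits, which is the same effect as removing paths, just at a different layer.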


Re: 4GB Ports Bottlenecking at 1GB

Hi driskoll,

Yours is an interesting case. Since you have PowerPath, try leveraging its capabilities: use "powermt set path_latency_monitor=on" to get information on each path's latency and performance. Check the other options PowerPath gives you; they may come in handy.
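A minimal sketch of that check (PowerPath 5.x-era syntax; verify against the powermt man page for your installed version):

```shell
# Turn on per-path latency monitoring
powermt set path_latency_monitor=on

# Inspect path state and load for all PowerPath-managed devices
powermt display dev=all
```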

I think you're headed in the right direction. Backup IO can really bring a server to its knees. Not every server can back up at full speed as if it were a normal file transfer.

Does the CLARiiON's port speed match that of the HBA? Try fixing the port speed for the ports involved.

Try the IOMeter tool and see what performance it can get on that server.

Anyway, the buffer starvation you are experiencing is a good clue. And now that you're on the latest FOS version, you can try tuning the number of buffers with the "portcfgfportbuffers" command. It may help in this case!
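For example (a sketch; the buffer count below is purely illustrative - size it for your environment, and check your FOS command reference for the exact syntax in your release):

```shell
# Allocate extra buffer credits to F_Port 5
# (portcfgfportbuffers --enable <port> <buffers>)
portcfgfportbuffers --enable 5 12

# Revert the port to the default buffer allocation
portcfgfportbuffers --disable 5
```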

Let us know of any update! We are here to help you.


Re: 4GB Ports Bottlenecking at 1GB

Hi Driskoll,

By the way, what model of CLARiiON are you using? And how many FE ports does it have?

If you have only two ports, one per SP, then you have a potential limitation there.

PowerPath can do active-active across the SPs; however, check your failover mode - is it set to 1 or 4?

Also check whether your LUNs are balanced across both SPs, so you get some processor balance on the disk array side.

Anyway, these load-balancing features are only a partial help; if your server is slow, then that's your main bottleneck.

