10-24-2011 12:00 PM
On sw 5 port 19 (FlexFabric uplink) the fill word is set to:
Fill Word: 0(Idle-Idle)
When I do a "portcfgshow", all ports look the same. Does that make any sense? (I'm not a SAN fabric man, but I think you guessed that already.)
Just checked the port stats:
On my sw 5 port 0 (LW), these are the stats:
tim_rdy_pri 0 Time R_RDY high priority
tim_txcrd_z 276391845 Time TX Credit Zero (2.5Us ticks)
tim_txcrd_z is 0 on port 19.
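In case it matters, these are from portstatsshow; I'll clear them with portstatsclear before the next test so I can see how fast tim_txcrd_z actually grows:

  portstatsclear 0    (reset the counters on port 0)
  portstatsshow 0     (check again after some traffic and watch tim_txcrd_z)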
10-24-2011 12:12 PM
OK, that fill word should probably be set to 3, but verify this with HP.
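If I have the Brocade syntax right (double-check it against the Fabric OS guide for your firmware level), mode 3 means "try ARB/ARB, fall back to IDLE/ARB", and you set it per port with something like:

  portcfgfillword 19 3

Be aware that changing the fill word bounces the port, so plan for a short link interruption.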
tim_txcrd_z increasing tells you that you don't have enough buffer credits configured to go the distance on the ISL at 8G.
You need to configure more buffer credits on your ISL, as sketched below.
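A sketch of one way to do that on a Brocade-based switch, assuming your firmware supports it (the exact argument form of portcfglongdistance differs between FOS releases, so verify it in the Fabric OS Administrator's Guide first): set the E_Port to LS (static long distance) mode with a desired distance, and the switch works out the credits:

  portdisable 0
  portcfglongdistance 0 LS 1 -distance 1    (LS mode, VC translation link init on, 1 km desired distance)
  portenable 0

Newer FOS levels also have a portcfgeportcredits command to assign E_Port credits directly.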
10-24-2011 12:24 PM
I will check with HP about the fill word tomorrow.
I also see that sw 5 port 0 has this setup:
Long Distance OFF
Isn't that wrong?
I will reboot the switch tomorrow and see if it changes. Just checked sw 1, 2, 5 and 6, and all the ISLs are set to the same thing.
10-24-2011 01:07 PM
Well, if the length of the ISL is 300 meters or so, a long distance setting of OFF (L0, the default for a port) is not wrong per se.
L0 (or off) reserves 20 buffer credits for a port. That should be enough for a 1 km link @ 8G, BUT that's only true if you send FC frames with the largest payload possible (2112 bytes).
If your average payload is half of the maximum, you'll need double the buffer credits to fully utilize the link.
If the counter is increasing, that tells me the default 20 buffers are not enough to fully utilize the 8G 300 m link.
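Back-of-the-envelope, using the usual rules of thumb (light travels roughly 5 µs/km in fiber, and a frame with the full 2112-byte payload serializes in about 2.5 µs at 8G):

  credits needed ≈ round-trip time / frame serialization time
                 = (2 × 0.3 km × 5 µs/km) / 2.5 µs
                 ≈ 1-2 credits for full-size frames over 300 m

But a small frame of ~150 bytes serializes in under 0.2 µs, so the same 3 µs round trip can tie up 15-20 credits. That's why the average payload size matters so much here.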
10-24-2011 02:10 PM
I will look into it tomorrow and give it a try. Should I just double the value, or do some calculation? I read a bit about some calculations that others use.
10-25-2011 06:53 AM
I have set all the bottlenecked ports from 8 to 24 BB credits, and for now no bottleneck has been detected... yet. I will let it run for the next 24 hours.
I have only changed it on one of our fabrics; if there are still no bottleneck errors from this tomorrow, I will continue with the next fabric.
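(I'm keeping an eye on it with the bottleneck monitor; on our firmware that's roughly bottleneckmon --status to confirm it's enabled and bottleneckmon --show on the port for the recent history, though the exact options depend on the FOS level.)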
The "default" value was Max/Resv: 8, buffer usage: 17, buffer needed 17.
I have seen a bit more traffic on some of the ports directly to the SAN, but not over LW ISL yet.. I hope that the backup tonight can load it a bit, it's only running max. 100Mbit...
The TX BB_credit, is still raising, but not as much as before.. only about 10% af is was before.. I read a bit about it sould be less than 15% of the total blocks transfered, but is that the tx/rx count, or frame count?
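(If I read the counter description right, tim_txcrd_z counts 2.5 µs ticks spent at zero TX credit, so I'd think the percentage is against elapsed time rather than frame counts, something like:

  time at zero credit = delta(tim_txcrd_z) × 2.5 µs
  percentage          = time at zero credit / elapsed time × 100

By that math, the 276,391,845 ticks from earlier would be about 691 seconds of credit starvation since the counters were last cleared.)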
Oh, also: after setting the buffers up, all paths came back in the current fabric. It may be because of fabric relogins, but we will see tomorrow.
10-31-2011 07:12 AM
The problem has been located.
It turns out that our ISL fiber between the two server rooms can't handle more than 4 Gbit.
After switching the ports from N8 (auto-negotiated 8G) to a fixed 4G, we got a stable connection and can use the entire fiber. Before, we could only get about 100-150 MB/s; now we are up at 450 MB/s.
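(For anyone finding this later: the speed is locked per port with portcfgspeed, e.g. portcfgspeed 0 4 to force 4G instead of auto-negotiation, assuming I have the syntax right for our FOS level.)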
Thanks for the help
11-01-2011 08:58 AM
Glad you resolved the issue.
I'm curious, though, why the fibers between your DCs can't support 8G.
Is it because they're multimode cables instead of single-mode?
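If they are multimode, that would fit: if I remember the FC-PI-4 tables right, 8G FC reaches only about 21 m on OM1, 50 m on OM2 and 150 m on OM3, while 4G manages roughly 70/150/380 m, so a ~300 m run between rooms could well work at 4G on OM3 but not at 8G.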