I set portcfglongdistance to LE and VC_Trans_Link_Init = 1 on an ISL to a 4100, and the Max/Resv Buffers is 46.
I did the same on another ISL on another 5100 and it only allocated Max/Resv Buffers of 26.
Both 5100s are on the same FOS version, 7.2.1a.
I set the port speeds to 4G and disabled QoS on all E_Ports.
Thanks for any assistance.
That difference in Max/Resv Buffers is caused by the speed being set to AUTO on that end, while it was forced to 4G on the other end of the ISL.
46 is the number of reserved BB credits for a speed of 8G in LE mode; since you didn't force the speed, the switch reserves for the maximum speed, just in case it may need it.
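For what it's worth, both figures line up with the common rule of thumb of roughly one full-size-frame BB credit per 2 km per Gbps of link speed, plus a small base reserve. A quick sketch (the base value of 6 is my assumption; it happens to match the observed numbers, but check the FOS admin guide for the exact formula on your release):

```python
def le_reserved_credits(distance_km: float, speed_gbps: float, base: int = 6) -> int:
    """Approximate BB credits reserved for an LE-mode ISL.

    Rule of thumb: one full-size-frame credit per 2 km per Gbps of link
    speed, plus a small base reserve. The base of 6 is an assumption,
    not a documented constant.
    """
    return int(distance_km * speed_gbps / 2) + base

# LE mode covers up to 10 km:
print(le_reserved_credits(10, 8))  # 46 -- matches ports 33/37 with speed at AUTO (8G max)
print(le_reserved_credits(10, 4))  # 26 -- matches the end forced to 4G
```

So AUTO makes the switch budget for 8G (46 credits), while forcing 4G halves the distance component (26 credits).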
I am just speculating, so please repeat your test forcing the speed to 4G on both ends of the ISL and let us know if the result fits.
You were correct, sir - putting the port back to AUTO increased the buffers to 46 on ports 33/37.
Another question - in the "Avg Buffer Usage & FrameSize Tx" column, what exactly are the values inside and outside the parentheses?
In the portbuffershow example below: column 5?
Do I need to increase further beyond the 46 because of the 120 average? Port 38 is connected to a 4100 and has latency errors.
Port  Type  Lx Mode  Max/Resv  Avg Buffer Usage & FrameSize  Buffer  Needed   Link
                     Buffers   Tx           Rx               Usage   Buffers  Distance
 33   E     LE       46        - ( 68)      - ( 64)          26      26       10km
 34   E     -         8        - (  -)      - (  -)          26      26       2km
 35   E     -         8        - (  -)      - (  -)          26      26       2km
 36   -               8        - (  -)      - (  -)           0      -        -
 37   E     LE       46        - ( 72)      - ( 48)          26      26       10km
 38   E     -         8        - (120)      - (460)          26      26       2km
 39   E     -         8        - ( 80)      - ( 60)          26      26       2km
                                                                     1544
The values inside the parentheses are supposed to be the average frame size on that link, whilst the values outside the parentheses are supposed to be the average buffer usage on that link. A good example is shown in the following link.
In your particular case, I am not sure what those figures really mean...
Regarding the latency errors, you may need to add additional ISLs between this switch and the 4100, but a deeper investigation of the fabric should be performed: latency events on a standard ISL are usually caused by some kind of slow-drain device in the fabric, not by the ISL itself.
Thanks for the help Felipe,
I will add more buffers to the ISL's and continue my fabric analysis.
During the latency errors, trunkshow -perf shows very low bandwidth usage - under 10% of max.
I also have two ISL's to same switch.
Now I need to google how to find the slow-drain device - unless you come to the rescue again :-)
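One common first pass for slow-drain hunting is to compare the tim_txcrd_z counter (time spent at zero Tx credit, reported by portstatsshow) across the F-ports: the ports where it climbs fastest are the usual suspects. A hypothetical sketch of that ranking step (the input dict is illustrative, not an actual FOS tool; in practice you would diff two portstatsshow snapshots taken a minute or so apart):

```python
def rank_zero_credit(counters, top=3):
    """Rank ports by their tim_txcrd_z delta (time at zero Tx credit).

    `counters` maps port name -> counter delta between two snapshots.
    The sample data below is hypothetical; only the counter name and
    the general approach come from Brocade's slow-drain guidance.
    """
    return sorted(counters.items(), key=lambda kv: kv[1], reverse=True)[:top]

# Hypothetical per-port deltas between two portstatsshow snapshots:
sample = {"2/5": 184000, "2/6": 120, "3/1": 0, "3/9": 9500}
print(rank_zero_credit(sample))  # [('2/5', 184000), ('3/9', 9500), ('2/6', 120)]
```

Here port 2/5 would be the first device to investigate.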