The DWDM vendor should always be contacted regarding proper settings. Typical port configurations may not apply to the specific DWDM interface card you chose.
Just as when connecting SAN switches directly, the ports on each end of the ISL must be configured the same way. Although many port configuration changes can be made non-disruptively, in most cases both ports at the ends of the ISL must be disabled and re-enabled to effect the change.
Comments herein are relevant to FOS 7.1 and above. Behavior of some features and settings may be different in prior FOS releases.
Forward Error Correction (FEC)
FEC is enabled by default on all ports. With Gen5 (16G), FEC uses bits outside the frame; most DWDM equipment does not pass these bits, so FEC must be disabled. It may work with DWDM in transparent mode. With Gen6 (32G), the FEC bits are carried in the standard protocol header, which does pass intact through DWDM, so this is a Gen5 consideration only.
In FOS 7.4 and above, FEC can also be disabled using the portcfglongdistance command. To disable FEC:
portcfgfec --disable -FEC s/p
Encryption & Compression
Compression and encryption are limited to no more than two ports per ASIC at the maximum data rate the switch is capable of (16G for Gen5 and 32G for Gen6). Compression and encryption can be enabled on more ports at lower speeds. Consult the FOS Administrator's Guide for details.
portcfgcompress --enable slot/port
portcfgencrypt --enable slot/port
Quality of Service (QoS)
QoS uses different virtual channels to establish high, medium, and low quality of service zones. Virtual channels require VC_RDY rather than R_RDY flow control. Few DWDM vendors support VC_RDY, so QoS is typically not used in conjunction with DWDM.
Buffer Credits & Fill Words (VC Link Init)
By default, ARB fill words are used; however, most DWDM interfaces need to see IDLE fill words.
An excessive number of buffer credits on a link can cause application problems during error recovery. In severe cases, it can result in frame drops.
Padding with a small number of extra buffer credits, typically 32, is recommended to accommodate bursts of traffic. This can help local frame flow when a port carries traffic for both local and remote ports by reducing the probability of an out-of-credit situation at the F-Port. Note that if an E-Port is out of credit, it creates back pressure on the F-Ports by not returning buffer credits (R_RDY).
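The back-pressure mechanism described above can be illustrated with a minimal sketch. This is illustrative only; the class and method names are invented for the example and do not correspond to anything in FOS:

```python
class Port:
    """Minimal sketch of R_RDY credit-based flow control (illustrative only)."""

    def __init__(self, credits):
        self.credits = credits  # buffer credits granted by the peer

    def can_send(self):
        # With zero credits the port must hold its frames: this is the
        # back pressure an out-of-credit E-Port puts on its F-Ports.
        return self.credits > 0

    def send_frame(self):
        if not self.can_send():
            raise RuntimeError("out of credit: frame must wait")
        self.credits -= 1  # one buffer credit consumed per frame sent

    def receive_r_rdy(self):
        self.credits += 1  # peer freed a buffer and returned R_RDY
```

With a few dozen credits of padding, short bursts drain the credit pool without ever reaching zero, so the F-Port rarely sees the stall.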
Most SAN applications use large frame sizes. A few applications, typically communications protocols such as FICON CTC, use small frames. Mixing applications with both large and very small frame sizes on the same ISL over distance is not recommended: the number of buffer credits required for the small frames will be excessive for large-frame traffic, while credits sized for large frames will be inadequate for the small frames.
Most DWDM interfaces do not supply buffer credits, and one buffer credit per frame is required. Most FC environments do not always use full frames; the average frame size is typically 1800 bytes. When using the distance parameter with the portcfglongdistance command to have FOS automatically calculate the appropriate number of buffer credits, the recommended padding can be achieved by adding 10 km to the distance.
Link extended (LE) mode adds enough buffer-to-buffer credits to keep a 10 km link of full frames full. For DWDM that supplies its own buffer credits, this mode provides adequate padding, is easy to implement, and does not require any additional licensing.
When compression is enabled, the payloads may be smaller so more frames are required to fill the link. A good rule of thumb is to double the number of required buffer-to-buffer credits. When using the commands in the example, this can be accomplished by doubling the distance or dividing the average frame size by 2.
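As a rough sketch of the arithmetic involved (not the exact FOS calculation; the function name, the ~5 µs/km propagation figure, and the payload-rate approximation are assumptions for illustration), the required credits can be estimated from round-trip time and frame transmit time:

```python
def estimate_bb_credits(distance_km, speed_gbps, avg_frame_bytes, padding_km=10):
    """Rough estimate of buffer-to-buffer credits for a long-distance FC link.

    Illustrative only, not the FOS algorithm. Assumes ~5 us/km propagation
    (10 us/km round trip) and treats an N-gig FC link as N*100 bytes/us of
    payload (e.g. 8G FC ~ 800 MB/s).
    """
    padded_km = distance_km + padding_km               # pad per best practice
    round_trip_us = padded_km * 10.0                   # 10 us per km round trip
    frame_time_us = avg_frame_bytes / (speed_gbps * 100.0)
    return int(round_trip_us / frame_time_us) + 1      # +1 keeps the link full

# 90 km link padded to 100 km at 8G with 1800-byte average frames:
print(estimate_bb_credits(90, 8, 1800))   # -> 445
# With compression enabled, halving the frame size roughly doubles the need:
print(estimate_bb_credits(90, 8, 900))    # -> 889
```

Note how halving the average frame size approximately doubles the credit count, matching the rule of thumb above.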
Example: Configure a port for 90 km (padded to 100 km) with an average frame size of 1800 bytes, IDLE fill words, and FEC disabled.
portcfglongdistance s/p LS 0 -distance 100 -framesize 1800 -fecdisable
Credit Recovery
Credit recovery is enabled by default.
The recommended best practice is to leave credit recovery enabled. Credit recovery is compatible with most DWDM except when using DWDM that uses time division multiplexing (TDM) technology.
ISL Trunking
Most DWDM provide time slots for each interface: bits are stored while waiting for their time slice and then sent to the remote end to be reassembled. Although the jitter introduced by this sampling is very small, the ISL trunking algorithm in FOS has a very tight skew specification. Depending on when frames arrive and where the sample clock is for a given interface, a link may sometimes be within the skew specification for the trunking algorithm and at other times outside it. As a result, trunks can repeatedly form and break, resulting in frequent routing table updates. ISL trunking, therefore, is typically disabled.
There are a limited number of DWDM vendors that support both VC_RDY mode and meet the ISL trunking skew requirements when identical length links are used.
There are two ways to disable trunking:
To disable trunking on an individual port:
portcfgtrunkport s/p, 0
To disable trunking on all ports in a switch:
switchcfgtrunk 0
ISL R_RDY and Fibre Channel Gateways
A gateway merges SANs into a single fabric by establishing point-to-point E-Port connectivity between two Fibre Channel switches that are separated by a network with a protocol such as IP or SONET.
By default, switch ports initialize links using the Exchange Link Parameters (ELP) mode 1. Gateways expect initialization with ELP mode 2, also referred to as ISL R_RDY mode.
A limited number of link parameters are exchanged when using ELP mode 2 (ISL R_RDY enabled). ISL R_RDY, therefore, should only be enabled when it is required. ELP mode 2 has the following restrictions:
Most Fibre Channel interfaces support ELP mode 1, but if the interface is referred to as a "gateway," ISL R_RDY must be enabled.
Example: Enable ISL R_RDY mode on port 3 of slot 1:
portcfgislmode 1/3, 1
To disable ISL R_RDY mode:
portcfgislmode 1/3, 0
To see how the port is configured:
portcfgshow 1/3
De-bouncing of Signal Loss & Optical Protection Switching (OPS)
Many DWDM vendors offer OPS. When redundant networks are available, this feature switches from a primary network to a backup network when there is a failure on the primary network.
With this feature, most DWDM vendors maintain light on the connection between the DWDM and the Fibre Channel switch while switching to the backup network. For equipment that drops light momentarily during the switchover, Brocade switches have a loss time-out value (Loss TOV) to de-bounce the link. When enabled, instead of taking the link out of service immediately, the switch takes the link out of service only if the loss of light exceeds 100 msec. This is valuable because it eliminates the fabric and routing table rebuilds associated with loss of light on E-Ports.
By default, de-bouncing is disabled. The Loss TOV is effectively 0.
Example: Enable link de-bounce on port 1/5:
portcfglosstov 1/5 1
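The de-bounce behavior described above can be sketched as follows. This is illustrative only, not Brocade's implementation; the class, method, and constant names are invented for the example:

```python
LOSS_TOV_MS = 100  # de-bounce window, as when portcfglosstov is enabled

class DebouncedLink:
    """Take the link down only if loss of light persists beyond LOSS_TOV_MS."""

    def __init__(self):
        self.loss_started_ms = None  # when light was first lost, or None
        self.up = True

    def on_signal(self, light_present, now_ms):
        if light_present:
            self.loss_started_ms = None  # light restored: cancel the timer
            self.up = True
        else:
            if self.loss_started_ms is None:
                self.loss_started_ms = now_ms  # start the de-bounce timer
            elif now_ms - self.loss_started_ms >= LOSS_TOV_MS:
                self.up = False  # loss exceeded Loss TOV: link out of service
```

A momentary drop during an OPS switchover (say, 60 msec) never reaches the 100 msec threshold, so the E-Port stays up and no fabric rebuild occurs.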
Interface Speed & Type
IP interfaces to DWDM are only supported on the 7800, 7840, and chassis with FCIP capable blades. Depending on the product, an additional license may be required when connecting at 10GE on Gen5 (7840 or FX8-24 blade).
10G Fibre Channel
Enabling a switch port for 10G Fibre Channel requires an additional license on all Gen5 (16G-capable) products. Gen6 products do not require an additional license. Whether using Gen5 or Gen6, a 10G SFP is required; a 32G or 16G SFP will not connect at 10G.
Creating a custom logical group for E-Ports through DWDM is recommended for two reasons:
We are trying to provide an 8G FC link for our government customer from Site A to Site B, which are about 50 km apart. The customer has a Brocade 6510 switch at both Site A and Site B.
On our recommendation, they bought 8 Gbit/s ELWL 25 km optics for their Brocade 6510 switches at Site A and Site B to connect to our DWDM equipment near Site A and Site B, at distances of 2 km and 4 km respectively.
On the DWDM side, we configured an end-to-end 8G FC service with 8G FC XFP 1310 nm 10 km optical modules at both sites, following the vendor configuration procedure.
However, the link is not working.
We have no knowledge of Brocade switches, and neither does the government customer, so we are posting this query here to ask for help and for suggestions on which error logs are needed to pinpoint the problem.
The alarm on the DWDM equipment is:
In the 8G configuration, the optical module on the DWDM side facing the Brocade switch shows an evenly fluctuating receive level; the optical fiber is OK.
Brocade 6510 Switch --2 km-- DWDM
Brocade 6510 Switch --4 km-- DWDM