We have a SAN environment with an IBM V7K and 2 Brocade 300 switches. There is no zoning configuration defined on either switch, yet the ESX hosts are online and can see the LUNs!
The environment has been running with this configuration for a long time (>2 years); we noticed it when we needed to build a few servers. Physical connectivity exists, as I can see that both server HBAs are logged in and the switch ports are online.
abcswfc1:admin> zoneshow
Defined configuration:
 no configuration defined
Effective configuration:
 no configuration in effect

abcswfc1:admin> cfgshow
Defined configuration:
 no configuration defined

abcswfc1:admin> switchshow
switchName:     abcswfc1
switchType:     71.2
switchState:    Online
switchMode:     Native
switchRole:     Principal
switchDomain:   97
switchId:       fffc61
switchWwn:      10:00:00:27:f8:3d:59:a0
zoning:         OFF
switchBeacon:   OFF
HIF Mode:       OFF

Index Port Address Media Speed State    Proto
==================================================
  0    0   610000   id    N8   Online   FC  F-Port  21:00:00:24:ff:59:05:6f
  1    1   610100   id    N8   Online   FC  F-Port  50:05:07:68:02:16:07:ab
  2    2   610200   id    N8   No_Light FC
  3    3   610300   id    N8   No_Light FC
  4    4   610400   id    N8   Online   FC  F-Port  21:00:00:24:ff:59:60:39
  5    5   610500   id    N8   Online   FC  F-Port  50:05:07:68:02:16:07:ac
  6    6   610600   id    N8   Online   FC  F-Port  21:00:00:24:ff:8b:d9:7c
  7    7   610700   id    N8   Online   FC  F-Port  21:00:00:24:ff:8b:d9:64
Check the state of the default zone ("defzone --show"), which will probably show All Access.
This is unusual outside of mainframe environments (where either one large zone or no zoning at all is used).
It means that your whole fabric / switches / hosts / storage are in a single error domain...
But as long as all devices are well behaved and follow the rules, it works (and it has for 2 years).
Still, I prefer to reduce the error domain with zoning - single-HBA or single-host zones...
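As a sketch, single-initiator zoning on these switches could look like the following (the alias and zone names here are made up for illustration; the WWPNs are taken from the switchshow output above - verify which HBA and which V7K ports they actually belong to before using them):

```
abcswfc1:admin> alicreate "esx1_hba1", "21:00:00:24:ff:59:05:6f"
abcswfc1:admin> alicreate "v7k_p1", "50:05:07:68:02:16:07:ab"
abcswfc1:admin> alicreate "v7k_p2", "50:05:07:68:02:16:07:ac"
abcswfc1:admin> zonecreate "z_esx1_hba1_v7k", "esx1_hba1; v7k_p1; v7k_p2"
```

Defining zones alone does not affect traffic; nothing changes until a configuration containing them is enabled with cfgenable.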
The setting is All Access, as you said.

abcswfc1:admin> defzone --show
Default Zone Access Mode
    committed - All Access
    transaction - No Transaction
The environment is set up so that all LUNs are shared to the ESX hosts; no individual hosts exist anymore. So I believe whenever we add a new server, it will come up automatically.
Also, I would like to know: if I create a configuration, will it affect production? Do I need to change any settings?
Notice that the defzone setting defines what kind of access is available when no zoning is in effect - which is your case.
Notice also that if you create and enable a zone with a server HBA and a storage port, all other devices not in this zone will have no access at all (except to their own port). So, if you are considering starting to use zoning, you will need to do zoning for all devices - or create one single large zone containing all devices that are not yet zoned, and take them out of it one by one.
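A sketch of such a temporary catch-all zone, using the WWPNs visible on this switch (the zone name is made up for illustration):

```
abcswfc1:admin> zonecreate "z_all_devices_tmp", "21:00:00:24:ff:59:05:6f; 50:05:07:68:02:16:07:ab; 21:00:00:24:ff:59:60:39; 50:05:07:68:02:16:07:ac; 21:00:00:24:ff:8b:d9:7c; 21:00:00:24:ff:8b:d9:64"
```

You would then remove members from this zone (zoneremove) as each device gets its own proper zone, and delete it once it is empty. Remember the same exercise is needed on the second switch, since zoning is per fabric.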
Any devices (initiators or servers) affected by a zoning change will need to re-login to their storage (PLOGI, PDISC, or ADISC), which all well-behaved HBAs (and drivers) do when they get the notification that a change has occurred...
As long as you only have a defined zoning configuration, and no effective one, it will not affect your production.
Once you enable a configuration with a zone, all devices not zoned will lose all access.
There are configurations - like mainframe/FICON - or, for example, when you only have a set of ESXi servers that all have access to a set of LUNs, where this works. Once you add a host to such an environment, it will see all other hosts (most hosts ignore other hosts) and all storage ports - and either all LUNs (if you do not do LUN masking on the storage) or only those LUNs configured for your host.
Excellent! Thanks Martin for the detailed note.
I have 4 servers connected to the switches; my plan is to create separate zones for each host, then cfgsave and cfgenable.
I believe with these steps I won't have any production impact, as I am defining zones for all devices here.
And by enabling a configuration, will the defzone setting automatically change to "No Access", or do I need to run a command manually?
First, to change the default zoning, you will need to run the command "defzone --noaccess" followed by "cfgsave".
Of course, if you do this when no zoning is in place, then nobody sees anything.
So I would first create the zoning for all hosts/storage, then create the configuration and enable it.
Once you have an effective configuration and zoning, then I would run "defzone --noaccess; cfgsave".
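The whole sequence could look like this sketch (the configuration and zone names are examples only; the zones must already be defined before cfgcreate, and cfgenable will prompt for confirmation):

```
abcswfc1:admin> cfgcreate "prod_cfg", "z_esx1_hba1_v7k; z_esx2_hba1_v7k"
abcswfc1:admin> cfgsave
abcswfc1:admin> cfgenable "prod_cfg"
abcswfc1:admin> defzone --noaccess
abcswfc1:admin> cfgsave
```

Doing the cfgenable first means every device is covered by an explicit zone before the All Access safety net is removed.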
Notice that when you activate the zoning, the servers will see an interruption and receive a state change notification (RSCN) at that time, which forces them to query the name server and re-login (PLOGI/PDISC/ADISC) to the storage.
Any outstanding SCSI exchanges will be terminated if the server does a PLOGI (after the RSCN), but if the server does a PDISC, any outstanding/unterminated SCSI exchanges will be continued...
Depending on the ESXi servers / drivers / HBAs, you might see timeouts and retries when you enable your configuration, since the switch needs to program its ASICs to filter on certain addresses. Most modern HBAs, operating systems, and MPIO software handle this by default.