SAN to the Future

Storage Area Networking (SAN) is something I'd guess most Network Engineers have heard of, or had some limited exposure to, but not much; maybe you've done some zoning for the Storage Guys on your Cisco N5K boxes, but otherwise it's a bit of a dark art. Well, same here - but recently I was posed an interesting problem that, in the IP/Ethernet world, is a fairly trivial undertaking:

Can we merge our IBM SAN with our Cisco/Hitachi SAN, so that Servers on one can access Storage on the other, and vice-versa?

Ever the idiot optimist, I immediately responded "Sure, that's like 10 minutes of work or something right?", and so dear reader, we begin.

Being prepared (FC Learnings)

Optimistic as I am, I've been burned before by playing with stuff I only dabble in. So a hasty £4 transaction was made on fleaBay to procure this fine tome of knowledge from the early 2000s:

[Image: the second-hand Fibre Channel tome in question]

I can highly recommend this book. A few bedtime reading sessions later, and I've already learned an awful lot more about Fibre Channel (FC) and undone some misconceptions I'd brought in from the IP/Ethernet world, like:

  • A Fibre Channel Domain (a collection of interconnected FC Switches) can only work if each Switch has a unique Domain ID (see the first sketch after this list)
    • By default, like VLANs, this is Domain ID 1
    • Two Domain IDs of 1 on the same FC Network ("Domain") mean you're gonna have a bad time (one of the FC Switches will be "segmented" from the rest of the world)
  • A SAN Fabric is the collection of Switches in an FC Domain
  • HBA is a Host Bus Adapter (for FC)
    • This is the NIC of the FC world
  • CNA is a Converged Network Adapter (for FCoE)
    • This is a NIC, but now it's also an HBA (the "C" refers to the fact that the same physical port converges an HBA and a NIC)
  • Normally, there are no more than two SAN Fabrics (A and B) per Deployment of a given set of Compute/Storage Arrays
    • But each SAN Fabric (i.e. the A Leg or B Leg) could have lots of FC Switches within it, and a Hub-and-Spoke setup, where the "Core Switch" is an FC Director-class Switch, and the "Access Switches" are Pizzabox-like FC Access Switches
    • "Ghostbusters Rule" applies here, the two streams (A Fabric and B Fabric) must never cross/talk to each other
  • Fibre Channel comes in 1, 2, 4, 8, 16 and 32 Gbps speeds, typically called "<x> GFC" (i.e. 8 GFC is 8 Gbps Fibre Channel)
    • Cisco N5Ks only go up to 8 GFC; I'm convinced 16 and 32 GFC are unicorns
    • Each speed is its own OSI Layer 1/2 protocol pairing, although my brain approximates them to equivalent tiers on the OSI Model, much like 1 Gbps Ethernet vs 10 Gbps Ethernet (i.e. an 8 GFC SFP will normally be backward-compatible with 2/4 GFC as well)
      • There's some optical magic where the OTU/OTN "encapsulating wavelength" is the same for, say, an 8 GFC SFP as for a 10 GbE SFP; it's just that the 8 GFC SFP "wastes" 2.5 Gbps of that bandwidth (the world of optical seems to be made up of 1.25 Gbps chunks)
  • FC uses an IS-IS/SPF-like link-state algorithm - FSPF (Fabric Shortest Path First) - to construct a Network Tree and handle redundant paths
    • A large Blue/Red-hatted company who trIed Bloody hard to iMplement this on one of our SANs had completely misunderstood this, and thought that 4x 8 GFC uplinks make 1x 32 GFC uplink
    • You can typically see which path is in use on, say, Brocade kit by looking for the "(upstream)" or "(downstream)" flag in "fabricshow" or "switchshow" output
  • FC Interswitch Links are called ISLs
  • FC has sets of features - such as the FC Name Service - and not all manufacturers/products support all of them
    • This is hard to swallow, as it's a bit like Cisco and Juniper still competing on commonly-done features, at the "Ah yeah, we do Ethernet, but not with STP as an option" level (i.e. you can't take FC features for granted between vendors/products like you can in the IP/Ethernet world)
  • FC has various terms for the types of port (much more than "Access" vs "Trunk")
    • E_Port (Expansion Port) is the Trunk-like port between FC Switches, at either end of an ISL
    • F_Port (Fabric Port) is the Access-like port on a Switch, towards a Server or Array
    • N_Port (Node Port, on the HBA) is the Server/Array port that connects to a Switch F_Port
  • All FC Switches in a Domain can see all others and know the topology
    • On Brocade FOS, you can quickly get this with the following CLI commands (which look like a reverse of Cisco IOS, with the space between keywords removed):
      • switchshow
        fabricshow
  • All Zoning/LUN/Fibre Channel login ("FLOGI") information is held in the Fibre Channel Name Service (FCNS), which each FC Switch automagically syncs with every other FC Switch as soon as it is updated on any one of them
    • I like to think of this as being to the FC Zoning Database what VTP is to VLANs in the IP/Ethernet world
  • World Wide Names (WWNs) are the equivalent of a MAC Address
    • Some are for the physical Port, others are for the Node (Switch/Server/Storage Array) itself
    • As well as the OUI-like "Vendor Identifier" concept on MAC Addresses, WWNs have a "Usage Identifier" to show if that WWN belongs to a Server or Storage Array
  • Logical Unit Numbers (LUNs) are the name for Virtual Disks, which the Storage Array abstracts away across multiple Physical Disks for redundancy
  • Everybody calls it a SAN Array although really it's a Storage Array
  • Fibre Channel over Ethernet (FCoE) is its own thing, and aside from using the same Ethernet Medium/Cabling, can be viewed as a complete foreigner hitching a lift on the last-mile bit (i.e. Server-to-Switch) of the IP/Ethernet Network
    • FCoE requires a host of other stuff, like DCBX (Adapters that can negotiate FCoE parameters/Switches that can do something useful with the Ethernet "PAUSE" frame, rather than ignoring it; QoS parameters that prioritise FCoE frames...)
    • There's a reason FCoE never really took off (it's a pain in the arse to do right, even more than FC)
  • Targets (i.e. where the Storage LUN lives, the Storage Array) can't live on the same N_Port as an Initiator (i.e. the Server wanting to put/pull from that Storage LUN)
  • VSANs are another level of abstraction (unnecessary for most) where a VSAN acts as a container for a SAN, which in turn has Zones, which in turn only allow certain FC Aliases (human-friendly names for WWNs) to speak to other certain FC Aliases/WWNs
  • Everything in FC Zoning configs is an Inception-style "mapping to something else, which maps to something else" that only ends when you swallow the blue pill (the second sketch below shows the full chain)
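
On that Domain ID point, here's a minimal sketch of checking (and, if needed, pinning) Domain IDs on each side - the VSAN and Domain numbers are hypothetical, not from my estate. On Cisco NX-OS it's a per-VSAN setting; on Brocade FOS, my understanding is the Domain ID is changed via the interactive "configure" dialog (Fabric parameters -> Domain) after a "switchdisable":

  ! Cisco NX-OS: check, then pin a static Domain ID for VSAN 10
  show fcdomain domain-list vsan 10
  configure terminal
   fcdomain domain 10 static vsan 10
   fcdomain restart vsan 10    ! restart fcdomain so the static ID takes effect

  (Brocade FOS: check the fabric-wide view, then change via the configure dialog)
  fabricshow
  switchdisable
  configure
  switchenable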
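
And on the "Inception" point, here's what the full chain looks like in Cisco NX-OS syntax - a made-up sketch with hypothetical WWNs, names and VSAN number. A pWWN maps to an FC Alias, Aliases map into a Zone, Zones map into a Zoneset, and the Zoneset is activated per VSAN:

  fcalias name SERVER01_HBA0 vsan 10
    member pwwn 10:00:00:00:c9:aa:bb:01
  fcalias name ARRAY01_CTRL_A vsan 10
    member pwwn 50:06:01:60:aa:bb:cc:01
  !
  zone name Z_SERVER01_ARRAY01 vsan 10
    member fcalias SERVER01_HBA0
    member fcalias ARRAY01_CTRL_A
  !
  zoneset name ZS_FABRIC_A vsan 10
    member Z_SERVER01_ARRAY01
  !
  zoneset activate name ZS_FABRIC_A vsan 10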

Applying the theory to reality

Now armed (and definitely dangerous), let's look at what it is we've got in terms of the two SAN Fabrics to merge today, focussing only on the "A Leg" (for visual simplicity, but the same exists again for a "B Leg"):

[Image: the two "A Leg" SAN Fabrics to be merged - IBM FlexSystem/Brocade on one side, Cisco N5K/Hitachi on the other]

If you're not familiar with an IBM FlexSystem/PureFlex Blade Server, think of a Cisco UCS but with much less functionality. For those of you unfamiliar with the world of the Blade Switch (you lucky, lucky people) - it's a module within the Blade Chassis that takes power/hosting from the Chassis. Some of its ports are invisible internal ports that map 1:1 to the Backplane NIC of each Server in the Blade Chassis Slots (i.e. maybe Eth1/1-8 map to Slots 1-8, so Eth1/3 on Blade Switch #1 is NIC0 on Blade Server #3), and other ports are physically-connected uplinks (i.e. maybe Eth1/9-12 are 4x 1 Gbps Uplinks to the Top of Rack Switch, via 1000BaseSX Multimode Fibre patch leads).

Relevant for NPIV/NPV (when we get onto it), the IBM FlexStor V7000 is an in-Chassis Storage Array which occupies some of the Blade Chassis Server Slots, but acts as an FC Target (Storage) rather than a typical Blade Server Compute Node (an FC Initiator, Compute Server).

As with many things in Large Enterprises, the cool kid unicorns don't exist here. Is it daft that we've got two distinct Data Centre Stacks (one IBM and one Cisco/Hitachi), siloed from each other? Absolutely. Would a cool kid hipster DevOps type tell me this is impossible in the real world? Probably. Is there a technical reason for it existing? Not at all. Why is it there? Big Company politics and Project silos.

On the IBM kit, it's all re-badged Brocade, running Brocade Fabric Operating System (FOS), namely:

  • IBM SAN24B = Brocade 300
  • IBM FC5022 = Brocade 6547

IBM make this hard to discover, for some reason; I can't think why their Customers have left them in droves since the early 2000s, everyone must be wrong.

Raising Vendor TACs

Looking at the above, you're probably thinking: "Not too hard then - cable up some OM3/OM4 8 GFC from the IBM SAN24B to the Cisco N5K, job done?". Sadly, no - there are a few prerequisites to get through first; so I'll leverage the expensive IBM-side and Cisco-side Technical Assistance Centre (TAC) Contracts I've got, and cover my back. The main caveat I'm aware of is the uniqueness of the Domain IDs, so I go around and do the following to glean these:

  • IBM/Brocade
    • Login to each Switch via SSH/Telnet, and issue the following to glean the FC Topology, Domain IDs and SFP Inventory/Status for each Switch (illustrative output is shown after this list)
      • fabricshow
        switchshow
        sfpshow
    • Record them all in a big ol' spreadsheet
      • Including the Hostname, which, handily for me, Big Blue have made completely different from the sticker on the front of the kit/documentation; thanks for that, IBM - again, it *really* hurts me that you're slowly going under in the Cloud Era, I can't think why your Cloud offering isn't even on the leaderboard...
    • Pull out the FC Alias (human-friendly name:WWN) and FC Zoning information
      • alishow
        zoneshow
    • Record this all in a big ol' notepad
      • Because I think I might have to transpose this into Cisco NX-OS/SANOS syntax
  • Cisco
    • Login to each Switch via SSH, and issue the following to glean the FC Topology (not much - there's 1x N5K per SAN Fabric), Domain IDs and SFP Inventory/Status for each Switch
      • show fcdomain
        show vsan membership
        show inventory
    • Record them all in the same big ol' spreadsheet
    • Pull out the FC Alias and FC Zoning information
      • show flogi database
        show zoneset active
        show zone
        show fcalias
    • Record this in a big ol' notepad
      • To get the syntax I need to translate into (Brocade FOS -> Cisco NX-OS)
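
For flavour, here's roughly what the topology/Domain ID gleaning looks like on each side. The output below is illustrative - the Domain IDs, WWNs, IPs and hostnames are all invented, not lifted from my kit - and the ">" marks the Brocade principal switch:

  IBM_SAN24B_A1:admin> fabricshow
  Switch ID   Worldwide Name           Enet IP Addr    FC IP Addr     Name
  -------------------------------------------------------------------------
    1: fffc01 10:00:00:05:1e:01:02:03 10.1.1.11       0.0.0.0         "IBM_FC5022_A1"
    2: fffc02 10:00:00:05:1e:04:05:06 10.1.1.12       0.0.0.0        >"IBM_SAN24B_A1"

  N5K-A# show fcdomain domain-list vsan 10
  Number of domains: 1
  Domain ID              WWN
  ---------    -----------------------
  0x0a(10)     20:0a:00:0d:ec:01:02:03 [Local] [Principal]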

With Vendor TACs in progress, I go around and complete the above, and am happy that the Domain IDs are unique on each FC Switch, so a SAN Merge isn't going to cause a problem. Having read this fantastic blog post on Merging Brocade SAN Fabrics, my understanding is that the SAN Fabric with the highest (in ASCII terms, so "Z" trumps "A", for instance) Effective Configuration name (Brocade speak; "Zoneset Name" in Cisco speak) wins/goes active. As I want to minimise the outage, and have the Cisco N5K "win" as the FCNS Master, my thinking is:

  1. Convert all the Brocade (IBM) FC Aliases/Zones from Brocade FOS into Cisco NX-OS
    1. Easily achieved file-by-file with Notepad++ and some Regular Expressions (RegEx) - see the sketch after this list
  2. Pre-apply this to the Active Zoneset on the Cisco N5Ks
    1. Won't do anything, but won't harm anything either - the zones won't go active until the applicable WWNs are seen on the Cisco N5K fabric
  3. Arrange an Outage Window "just in case", and plug in the IBM SAN24B to the Cisco N5K, and allow the ISL to form
  4. Ensure the Cisco Zoneset is active, and no FC Switches have Segmented
    1. Merge them with the applicable CLI command on the Brocade/Cisco if they have
  5. Party on down
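
To give a flavour of step 1, here's the kind of transformation the Notepad++ RegEx pass does - the names, WWNs, VSAN number and the pattern itself are all illustrative, not my production config. Brocade "zoneshow"/"alishow" output like this:

  zone:  Z_SERVER01_ARRAY01
                 SERVER01_HBA0; ARRAY01_CTRL_A
  alias: SERVER01_HBA0
                 10:00:00:00:c9:aa:bb:01

...gets mutated, with finds/replaces along the lines of alias:\s+(\S+)\r\n\s+([0-9a-f:]+) -> fcalias name \1 vsan 10\r\n  member pwwn \2, into the NX-OS equivalent:

  fcalias name SERVER01_HBA0 vsan 10
    member pwwn 10:00:00:00:c9:aa:bb:01
  zone name Z_SERVER01_ARRAY01 vsan 10
    member fcalias SERVER01_HBA0
    member fcalias ARRAY01_CTRL_A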

Response of the Vendor TACs

Cisco are the first to come back; they're not too sure the IBM (Brocade) side will ISL with their N5K kit. Initially, I'm confused - "Surely FC is FC, like Ethernet is Ethernet; if both bits of kit speak FC, even if you've not tested the interoperability, it'll work, right?". Sadly, as per the Brocade Community Forums post on "Can I connect a 300e to a Cisco Nexus 5548", the answer is no for me, because:

  1. I'm running Brocade FOS later than 7.0.0
    1. After this point, Brocade removed the ability to turn on so-called "interop mode", which means it can't ISL with anything other than a Brocade
    2. The lack of this means FCNS-type stuff, like the ability to use FC Aliases, will fail miserably on me (and both the Cisco and IBM Fabrics already make extensive use of FC Aliases)
  2. Neither Cisco nor Brocade guarantee it will work

So back to the drawing board then; but now running with the suggestion someone made in the Forums about Access Gateway (AG) mode.

Brocade Access Gateway (AG) Mode

Access Gateway is Brocade's renaming of what Cisco calls N_Port Virtualisation (NPV) mode, built on the NPIV (N_Port ID Virtualisation) standard - because, as I'm now finding, FC Vendors are arseholes and don't believe in notions like standardisation or consistent naming. Access Gateway (and NPV for that matter) basically turns the Brocade Switch (in this case, the FC5022 Blade Switch) into a "dumb FC Hub", which has no configuration/Zoning on it, and consolidates a given number of F_Ports into 1x shared N_Port, such that the upstream Cisco N5K Switch will see multiple WWNs (Servers) as connected to 1x F_Port (rather than the normal 1x F_Port per WWN). It's better described by The SAN Guy in his Configuring a Brocade Switch for Access Gateway (AG) Mode post, but visually it does this:

[Image: Brocade Access Gateway mode - the Blade Switch's F_Ports consolidated onto shared N_Ports towards the upstream Fabric]


Given that I've got enough spare FC ports on the GEMs on my Cisco N5Ks, this is a perfect opportunity to kill off the useless IBM SAN24B Top of Rack (ToR) Switches I've got, and just cable the 4x Uplinks from each IBM FC5022 (Brocade 6547) directly into the Cisco N5K, so I end up with this:

[Image: the target topology - each IBM FC5022 (Brocade 6547) uplinked directly into the Cisco N5K, with the IBM SAN24B removed]

Implementing Brocade AG to Cisco NPIV

I'll need an outage to achieve this on the Brocade (IBM) side, as after Access Gateway Mode is enabled, the Brocade forgets all its FCNS/Config, so I'll need to do the following. There is also a very important note in the Brocade Fabric OS Administrator Guide, which basically says FC Initiators and FC Targets can't live on the same N_Port; this has significance for me, as I have an IBM FlexStor V7000 Storage Array in the same Blade Chassis as IBM Flex Compute Nodes (Blade Servers) that want to access it via FC as a LUN. To overcome this, I'll need to ensure the N_Port Groupings ("AG Port Groupings") of the Blade Backplane Ports for any given Blade Compute Node end up on different N_Ports to those of any V7000 Arrays.

This all looks like:

  1. Cisco N5K preparation (non-disruptive)
    1. Copy-mutate-paste the Brocade (IBM) FC Aliases and FC Zoning into the Active Zoneset on the Cisco N5K, and activate it in advance, ready
    2. Enable "feature npiv" (non-disruptive; not to be confused with "feature npv", which turns the Cisco N5K into a "dumb FC Hub" and is disruptive - it does to the Cisco side what Access Gateway does to the IBM/Brocade side)
  2. Brocade cutover (disruptive/needs an Outage Window)
      1. Re-cable the 4x Uplinks from each IBM FC5022 -> IBM SAN24B to instead go IBM FC5022 -> Cisco N5K
      1. Use OM3/OM4 as it's 8 GFC over a short distance
      2. Cisco-side SFPs are DS-SFP-FC8G-SW
      3. IBM/Brocade-side SFPs are XBR-000147
    2. Take the FC Switch out of the FC Domain
      1. switchdisable
    3. Enable Access Gateway Mode on the Brocade (IBM FC5022)
      1. ag --modeenable
    4. Verify AG Mode is enabled/running on the Brocade (IBM FC5022)
      1. ag --modeshow
    5. Show the port mappings (F_Port -> N_Port), and verify that the V7000 Blade Chassis Ports/WWNs are in differing N_Port Groups to any Blade Compute Servers
      1. ag --mapshow
      2. If they aren't (i.e. WWN from a V7000 and a Blade Compute Node mapped to same N_Port), split them out:
        1. ag --mapdel 0 "13;14"
          ag --mapadd 13 "1;2;5;6"
  3. Cisco N5K post-cutover check
    1. Check copied-over FC Zones using Brocade/IBM WWNs/Hosts are now active (have a "*" against them)
      1. show zone active
        show zoneset active
    2. Check Brocade WWNs are logged into the FLOGI Database (illustrative output after this list)
      1. show flogi database
  4. Hit the old IBM SAN24B repeatedly with a large lump hammer and/or baseball bat for all the pain it has caused
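
As a sanity check on step 3, the N5K's FLOGI database should show several WWNs logged in against a single FC interface - the hallmark of NPIV working through the AG uplinks. The output below is illustrative; the interface, FCIDs and WWNs are invented:

  N5K-A# show flogi database
  --------------------------------------------------------------------------
  INTERFACE  VSAN    FCID         PORT NAME               NODE NAME
  --------------------------------------------------------------------------
  fc2/1      10      0x0a0001     10:00:00:00:c9:aa:bb:01 20:00:00:00:c9:aa:bb:01
  fc2/1      10      0x0a0002     10:00:00:00:c9:aa:bb:02 20:00:00:00:c9:aa:bb:02
  fc2/1      10      0x0a0003     50:05:07:68:01:02:03:04 50:05:07:68:01:02:03:05
  Total number of flogi = 3.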

I've not had a chance to navigate the "Politics of ITIL" (TM) yet to tell you if this is the correct way; I'll let you know.