Why Your Storage Isn't Always to Blame!

Summary: CRC errors, Class 3 discards, code violation errors and loss of sync – why storage isn't always to blame. Storage is often automatically pinpointed as the source of all problems.

CRC Errors, Class 3 Discards, Code Violation Errors & Loss of Sync – Why Storage Isn’t Always to Blame!

Storage is often automatically pinpointed as the source of all problems. System admins, DBAs, network engineers and application owners are all quick to point the finger at SAN storage at the slightest hint of performance degradation. Not really surprising, considering it's the common denominator amongst all the silos. On the receiving end of this barrage of accusation is the SAN storage team, who are then subjected to hours of troubleshooting only to prove that their storage wasn't responsible. So the circle goes, until the storage team is faced with a problem they can't absolve themselves of, even though they know the storage is working completely fine.

With array-based management tools still severely lacking in their ability to pinpoint and solve storage-network-related problems, and with server-based tools doing exactly that, i.e. looking only at the server, there is little if anything available to prove that the cause of latency is a slow-draining device such as a flapping HBA, a damaged cable or a failing SFP. Herein lies the great paradox: 99% of the time, when unidentifiable SAN performance problems occur, they are linked to trivial issues such as a failing SFP. In a 10,000-port environment, the million-dollar question is: where do you begin to look for such a minuscule needle in such a gargantuan haystack?

To solve this dilemma it's imperative to know what to look for and to have the right tools to find it, so that your SAN storage environment becomes proactive rather than a reactive fire-fighting / troubleshooting circus. So what are some of the metrics and signs to look for when the storage array, the application team and the servers all report everything as fine, yet you still find yourself embroiled in performance problems?
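
To make that concrete, here is a minimal sketch of the kind of counter polling that turns these signs into alerts. The get_port_counters() function and the metric names are hypothetical stand-ins; in practice you would populate them from your switch vendor's CLI or SNMP counters.

```python
import time

# Counters worth watching, per the metrics discussed in this article.
WATCHED = ("code_violations", "loss_of_sync", "crc_errors", "class3_discards")

def get_port_counters(port: str) -> dict:
    """Hypothetical stand-in: pull these values from your switch's CLI or SNMP."""
    raise NotImplementedError("wire this up to your switch vendor's interface")

def poll(ports, interval=300):
    # Take a baseline snapshot, then report any watched counter that moves.
    baseline = {p: get_port_counters(p) for p in ports}
    while True:
        time.sleep(interval)
        for p in ports:
            now = get_port_counters(p)
            for metric in WATCHED:
                delta = now[metric] - baseline[p][metric]
                if delta > 0:
                    print(f"{p}: {metric} +{delta} in the last {interval}s")
            baseline[p] = now
```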

To understand the context of these metrics and the make-up of FC transmissions, let's use the analogy of a conversation: frames are the words, sequences the sentences and an exchange the conversation they are all part of. With that premise, it is important to first address the most basic of physical-layer problems, namely code violation errors. Code violation errors are the consequence of bit errors caused by corruption within a sequence, i.e. any character corruption. A typical cause is a failing HBA whose optics degrade prior to complete failure. I also recently saw code violation errors at one site where several SAN ports had been left enabled after their servers had been decommissioned. Some might ask: what's the problem if nothing is connected to them? In fact this scenario was creating millions of code violation errors, causing a CPU overhead on the SAN switch and subsequent degradation. With mission-critical applications connected to the same SAN switch, performance problems became rife, and without the identification of the code violation errors this could have led to weeks of troubleshooting with no success.
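
As a sketch of that decommissioned-port scenario, the snippet below flags enabled ports with no logged-in device that are still accumulating code violations. The port records are invented examples; in reality you would export them from your fabric's port database.

```python
def orphaned_noisy_ports(ports):
    """Yield enabled ports with no logged-in device but nonzero code violations."""
    for p in ports:
        if p["enabled"] and not p["logged_in"] and p["code_violations"] > 0:
            yield p["name"], p["code_violations"]

# Hypothetical example data: port4's server was decommissioned but the port
# was left enabled, and it is still generating code violation errors.
for name, count in orphaned_noisy_ports([
    {"name": "slot1/port4", "enabled": True, "logged_in": False, "code_violations": 1_200_000},
    {"name": "slot1/port5", "enabled": True, "logged_in": True,  "code_violations": 0},
]):
    print(f"{name}: {count} code violations on an unused port; consider disabling it")
```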

The build-up of code violation errors becomes even more troublesome when it eventually leads to what is referred to as a loss of sync. A loss of sync is usually indicative of incompatible speeds between two points, and is again typical of optic degradation in the SAN infrastructure. If an SFP is failing, its optical signal will degrade and will no longer sustain, say, the 4Gbps it is set at. Case in point: a transmitting device such as an HBA is set at 4Gbps while the receiving end, i.e. the SFP (unbeknownst to the end user), has degraded down to 1Gbps. Severe performance problems occur as the two points constantly struggle with their incompatible speeds. Hence it's imperative to be alerted to any loss of sync, as ultimately it is also an indication of an imminent loss of signal, i.e. the HBA or SFP is flapping and about to fail. That leads to the nightmare scenario of an unplanned path failure in your SAN storage environment, and worse still a possible outage if failover cannot occur.
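
A simple check along these lines is sketched below: compare the speeds at each end of a link and flag repeated loss-of-sync events before they become a loss of signal. The link records and the threshold of five events are assumptions for illustration only.

```python
def check_links(links, sync_loss_threshold=5):
    for link in links:
        # Mismatched speeds at the two ends of a link suggest optic degradation.
        if link["hba_speed_gbps"] != link["sfp_speed_gbps"]:
            print(f'{link["name"]}: speed mismatch '
                  f'({link["hba_speed_gbps"]}G vs {link["sfp_speed_gbps"]}G)')
        # Repeated loss-of-sync events often precede a full loss of signal.
        if link["loss_of_sync"] >= sync_loss_threshold:
            print(f'{link["name"]}: {link["loss_of_sync"]} loss-of-sync events; '
                  f'possible failing SFP/HBA, check it before it flaps out')

# Hypothetical example: hostA's SFP has degraded from 4G down to 1G.
check_links([
    {"name": "hostA-fab1", "hba_speed_gbps": 4, "sfp_speed_gbps": 1, "loss_of_sync": 12},
    {"name": "hostB-fab1", "hba_speed_gbps": 4, "sfp_speed_gbps": 4, "loss_of_sync": 0},
])
```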

One of the biggest culprits, and a sure-fire hit when resolving performance problems, is what are termed CRC errors. CRC errors usually indicate some kind of physical problem within the FC link and point to code violation errors that have led to corruption inside the FC data frame. They are usually caused by a flapping SFP or a very old, bent or damaged cable. When the receiver detects a CRC mismatch it rejects the frame, which then has to be resent. As an analogy, imagine a newspaper delivery boy who, while cycling to his destination, loses some of the pages of the paper prior to delivery. Upon delivery, the recipient asks for the newspaper to be redelivered with the missing pages, so the delivery boy has to cycle back, find the missing pages and bring the newspaper back as a whole. In the context of a CRC error, a frame that should typically take only a few milliseconds to deliver can take up to 60 seconds to be rejected and resent. Such response times can be catastrophic to a mission-critical application and its underlying business. By gaining insight into CRC errors and their root cause, you can immediately pinpoint which bent cable or old SFP is responsible and proactively replace it, long before it starts to cause poor application response times or, even worse, a loss to your business.
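
For a toy illustration of the mechanism (using Python's zlib.crc32, not the actual Fibre Channel CRC parameters), the snippet below appends a 32-bit checksum to a payload and shows the receiver detecting a single flipped bit and discarding the frame, the detect-and-resend behaviour described above.

```python
import zlib
from typing import Optional

def send(payload: bytes) -> bytes:
    # Sender appends a 32-bit checksum computed over the payload.
    return payload + zlib.crc32(payload).to_bytes(4, "big")

def receive(frame: bytes) -> Optional[bytes]:
    # Receiver recomputes the checksum and rejects the frame on a mismatch.
    payload, crc = frame[:-4], int.from_bytes(frame[-4:], "big")
    if zlib.crc32(payload) != crc:
        return None  # CRC error: frame discarded, sender must retransmit
    return payload

frame = bytearray(send(b"SCSI write data"))
frame[3] ^= 0x01  # simulate a single bit flipped by a damaged cable or failing SFP
assert receive(bytes(frame)) is None                      # corrupted frame rejected
assert receive(send(b"SCSI write data")) == b"SCSI write data"  # clean frame accepted
```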

The other FC SAN gremlin is what is termed a Class 3 discard. Of the various classes of service defined by the Fibre Channel ANSI standard, the most commonly used is Class 3. Ideal for high throughput, Class 3 is essentially a connectionless datagram service based on frame switching. Its main advantage comes from not acknowledging that a frame has been rejected or busied by a destination device or fabric. This significantly reduces the overhead on the transmitting device and leaves more bandwidth available for transmission; the lack of acknowledgements also removes the delays caused by round-trips of information between devices. As for data integrity, since Fibre Channel itself does not check for corrupted or missing frames in Class 3, this is handled by higher-level protocols such as SCSI or TCP. Any discovery of a corrupted packet by the higher-level protocol on the receiving device instantly initiates a retransmission of the sequence. All of this sounds great until the non-acknowledgement of rejected frames starts to expose Class 3's disadvantage: inevitably a fabric becomes busy with traffic and consequently discards frames, hence the name Class 3 discards. The receiving device's higher-level protocol then requests retransmission of the affected sequences, degrading device and fabric throughput.
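
A back-of-the-envelope model shows why even a small discard rate hurts: if one discarded frame dooms its whole sequence, the expected traffic grows geometrically with retries. The numbers below (64 frames per sequence, independent discards) are illustrative assumptions, not measurements.

```python
def expected_transmissions(discard_rate: float, frames_per_sequence: int) -> float:
    # Probability that at least one frame in the sequence is discarded...
    p_seq = 1 - (1 - discard_rate) ** frames_per_sequence
    # ...and the expected number of full-sequence attempts until one clean pass.
    return 1 / (1 - p_seq)

for rate in (0.0001, 0.001, 0.01):
    print(f"per-frame discard rate {rate:.2%}: "
          f"~{expected_transmissions(rate, 64):.2f}x sequence traffic")
# A 1% per-frame discard rate nearly doubles the traffic for 64-frame sequences.
```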

Another cause of Class 3 discards is a zoning conflict, where a frame is transmitted but cannot reach its destination, so the SAN discards it. This stems from legacy configurations or zoning mistakes, for example a decommissioned storage system that was never unzoned from a server (or vice versa), leading to frames being continuously discarded and throughput degraded as sequences are retransmitted. The result is performance problems, potential application degradation and automatic finger-pointing at the storage system for a problem that can't readily be identified. By resolving the zoning conflict and spreading the SAN throughput across the right ports, the heavy traffic or zoning issues that cause the Class 3 discards can quickly be removed, bringing immediate performance and throughput improvements. Gaining insight into the occurrence and volume of Class 3 discards lets huge performance problems be remediated before they take hold, which is yet another reason why the storage shouldn't automatically be blamed.
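
One way to catch that stale-zoning case is to cross-check the active zoneset against the WWPNs actually logged in to the fabric. The zone data below is a hypothetical example; you would export yours from the switch's zoning configuration.

```python
def stale_zone_members(zones, logged_in):
    """Yield zones that reference WWPNs no longer logged in to the fabric."""
    for zone, members in zones.items():
        missing = members - logged_in
        if missing:
            yield zone, missing

# Hypothetical zoning export: 'legacy_app' still zones a decommissioned host.
zones = {
    "oracle_prod": {"10:00:00:00:c9:aa:bb:01", "50:06:01:60:bb:20:11:22"},
    "legacy_app":  {"10:00:00:00:c9:aa:bb:99", "50:06:01:60:bb:20:11:22"},
}
logged_in = {"10:00:00:00:c9:aa:bb:01", "50:06:01:60:bb:20:11:22"}

for zone, missing in stale_zone_members(zones, logged_in):
    print(f"zone '{zone}' references offline member(s): {', '.join(sorted(missing))}")
```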

These are just some of the metrics and signs to look for, which can ultimately save you from weeks of troubleshooting and guesswork. By acknowledging these metrics, identifying when they occur and proactively eliminating them, the SAN storage environment will quickly evolve into a healthy, proactive and optimized one. Eliminating each of these issues also eliminates their consequent problems: application slowdown, poor response times, unplanned outages and long drawn-out troubleshooting exercises that eventually lead to finger-pointing fights. Ideally a paradigm shift will occur where, instead of application owners complaining to the storage team, the storage team proactively identifies problems before they are felt. Herein lies the key to making the 'always blame the storage' syndrome a thing of the past.

Archie Hendryx

About Archie Hendryx

SAN, NAS, Backup / Recovery, Virtualisation & Cloud Specialist.
Please note that the thoughts, comments, views and opinions expressed in this blog are entirely my own and not those of the company I work for. Content published here is not read or approved in advance by my employer and does not necessarily reflect the views and opinions of the company I work for. Currently working as a Principal vArchitect for the company VCE.

Talkback

5 comments
  • This is a real eye opener, and the problems caused by CRC errors as detailed here are nothing short of jaw dropping. We're running with Cisco MDS SAN directors and I've often wondered what real impact they're causing. I'm also surprised at the level of problems you've detailed caused by bent fibre channel cables. We've got a migration coming up so this was good to know. Cheers!
    TNelson-6e456
  • Great and informative article. Thanks.
    David_D1971
  • Well, sorry, but there are some wrong parts in this article. A defective SFP and a slow-draining device are totally different issues. A slow-draining device comes from a configuration like a 2Gbit HBA zoned to more than one target port, so that a total bandwidth of more than 2Gbit can push data back to the HBA! Or having a 2Gbit HBA and an 8Gbit target port... or a heavily (memory) paging server, or the HBA in the wrong PCI bus... You can easily see this on the buffer-to-buffer credit zero counter of the switch port the HBA is connected to. Good idea with this write-up, but please refer to "Fibre Channel - A Comprehensive Introduction" by Robert W. Kembel if you would like to know more about Fibre Channel. There are different kinds of discarded C3 frames. Please make a clear distinction between them (due to timeout, zone miss... and so on). C3 frames discarded due to timeout (>150ms in buffer-to-buffer credit zero state) trigger SCSI error recovery, not FC Class 3 recovery (most environments use FC Class 3). On the SCSI layer the I/O times out after 60 seconds, then the SCSI command is aborted and retried. That's what happens for every C3 frame discarded due to timeout, because the whole SCSI exchange is broken!
    A good SAN design will avoid such problems, and the right monitoring will give you a good chance of not stepping into a situation where you don't detect CRCs for weeks :-) Indeed, I have seen a few environments where people act blind, with no monitoring of FC metrics and no performance data. That's the main problem, plus they don't have the know-how about Fibre Channel and the right SAN design. And that's the result of cost saving...
    mawelo
  • First off, thanks to all for the comments and feedback. @Mawelo, you have made some very valid points and certainly ones I will try to clarify and address. Firstly, you are correct that it would be wrong to classify a failing SFP as a slow-draining device in the traditional sense, i.e. a host holding up the SAN because it can't process the data sent to it, even though, as I've pointed out, it may contribute to latency issues due to retries. Ultimately a slow-draining device is one that is requesting more information than it can deal with, typically because it is at a slower link rate than the rest of its environment. As you've also correctly mentioned, the usual cause lies within the server, or in the device itself being overloaded in terms of CPU or memory and thus struggling to deal with the data it has requested. Lastly, I agree I have given a somewhat general overview of Class 3 discards; this is something that can certainly be discussed in more depth, and it has several causes, such as credit issues. I think the main point, as you've rightly mentioned and what I was trying to get across with this blog, is that what is needed is a soundly designed SAN that is comprehensively monitored. To emphasize your last point: it's simply too costly to be unaware of your SAN and FC metrics!
    Archie Hendryx
  • By the way... it's 500ms until a frame is discarded due to timeout...
    mawelo