Bedford and the Normalization of Deviance

Like many pilots, I read accident reports all the time. This may seem morbid to people outside “the biz”, but those of us on the inside know that learning what went wrong is an important step in avoiding the fate suffered by those aviators. And after fifteen years in the flying business, the NTSB’s recently released report on the 2014 Gulfstream IV crash in Bedford, Massachusetts is one of the most disturbing I’ve ever laid eyes on.

If you’re not familiar with the accident, it’s quite simple to explain: the highly experienced crew of a Gulfstream IV-SP attempted to take off with the gust lock (often referred to as a “control lock”) engaged. The aircraft exited the end of the runway and broke apart when it encountered a steep culvert. The ensuing fire killed all aboard.

Sounds pretty open-and-shut, doesn’t it? There have been dozens of accidents caused by the flight crew’s failure to remove the gust/control lock prior to flight. Professional test pilots have done it on multiple occasions, ranging from the prototype B-17 bomber in 1935 to the DHC-4 Caribou in 1992. But in this case, the NTSB report details a long series of actions and habitual behaviors which are so far beyond the pale that they defy the standard description of “pilot error”.

Just the Facts

Let me summarize the ten most pertinent errors and omissions in this accident for you:

  1. There are five checklists which must be run prior to flying. The pilots ran none of them. CVR data and pilot interviews revealed that checklists simply were not used. This was not an anomaly; it was standard operating procedure for them.
  2. Obviously the gust lock was not removed prior to flying. This is a very big, very visible, bright red handle which sticks up vertically right between the throttles and the flap handle. As the Simon & Chabris selective attention test demonstrates, even a gust lock handle protruding six inches above the rest of the center pedestal can go unnoticed. But that’s precisely why we have checklists and procedures in the first place.
  3. Flight control checks were not performed on this flight, nor were they ever performed. Hundreds of flights worth of data from the FDR and pilot interviews confirm it.
  4. The crew received a Rudder Limit message indicating that the rudder’s load limiter had activated. This is abnormal. The crew saw the alert. We know this because it was verbalized. Action taken? None.
  5. The Pilot Flying (PF) was unable to push the power levers far enough forward to achieve takeoff thrust. Worse, he actually verbalized that he wasn’t able to get full power, yet continued the takeoff anyway.
  6. The Pilot Not Flying (PNF) was supposed to monitor the engines and verbally call out when takeoff power was set. He failed to perform this task.
  7. Aerodynamics naturally move the elevator up (and therefore the control column aft) as the airplane accelerates. Gulfstream pilots are trained to look for this. It didn’t happen, and it wasn’t caught by either pilot.
  8. The Pilot Flying realized the gust lock was engaged, and said so verbally several times. At this point, the aircraft was traveling at 128 knots and had used 3,100 feet of runway; about 5,000 feet remained. In other words, they had plenty of time to abort the takeoff. They chose to continue anyway.
  9. One of the pilots pulled the flight power shutoff (FPSOV) handle to remove hydraulic pressure from the flight controls in an attempt to release the gust lock while accelerating down the runway. The FPSOV was not designed for this purpose, and you won’t find any G-IV manual advocating this procedure. Because it doesn’t work.
  10. By the time they realized it wouldn’t work and began the abort attempt, it was too late. The aircraft was traveling at 162 knots (186 mph!) and only about 2,700 feet of pavement remained. The hydraulically-actuated ground spoilers — which greatly aid in stopping the aircraft by placing most of its weight back on the wheels to increase rolling resistance and braking efficiency — were no longer available because the crew had removed hydraulic power to the flight controls. (A rough comparison of the energy involved at those two decision points follows below.)
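To put those two decision points in perspective, here is a back-of-the-envelope sketch in Python. The speeds and distances are the report figures quoted above; the aircraft mass is my own assumption for illustration, not a number from the NTSB.

    # Kinetic energy at the two decision points. The 33,000 kg mass is an
    # assumed, illustrative G-IV takeoff weight; the speeds are from the report.
    KT_TO_MS = 0.514444      # knots to meters per second
    MASS_KG = 33_000         # assumption, for illustration only

    def kinetic_energy_mj(speed_kt):
        """Kinetic energy in megajoules at a given ground speed in knots."""
        v = speed_kt * KT_TO_MS
        return 0.5 * MASS_KG * v ** 2 / 1e6

    e1 = kinetic_energy_mj(128)  # "lock is on" first verbalized; ~5,000 ft left
    e2 = kinetic_energy_mj(162)  # abort finally begun; ~2,700 ft left
    print(f"{e1:.0f} MJ at 128 kt vs {e2:.0f} MJ at 162 kt (ratio {e2 / e1:.1f}x)")

Because energy grows with the square of speed, waiting those extra seconds left the crew with roughly 60% more energy to dissipate, only about half the pavement to do it on, and no ground spoilers to help.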

Industry Responses

Gulfstream IV gust lock (the red handle, shown here in the engaged position)

Gulfstream has been sued by the victims’ families. Attorneys claim that the gust lock was defective, and that this is the primary reason for the crash. False. The gust lock is designed to prevent damage to the flight controls from wind gusts. It does that job admirably. It also prevents application of full takeoff power, but the fact that the pilot was able to physically push the power levers so far forward simply illustrates that anything can be broken if you put enough muscle into it.

The throttle portion of the gust lock may have failed to meet a technical certification requirement, but it was not the cause of the accident.  The responsibility for ensuring the gust lock is disengaged prior to takeoff lies with the pilots, not the manufacturer of the airplane.

Gulfstream pilot and Code7700 author James Albright calls the crash involuntary manslaughter. I agree. This wasn’t a normal accident chain. The pilots knew what was wrong while there was still plenty of time to stop it. They had all the facts you and I have today. They chose to continue anyway. It’s the most inexplicable thing I’ve yet seen a professional pilot do, and I’ve seen a lot of crazy things. If locked flight controls don’t prompt a takeoff abort, nothing will.

Albright’s analysis is outstanding: direct and factual. I predict there will be no shortage of articles and opinions on this accident. It will be pointed to and discussed for years as a bright, shining example of how not to operate an aircraft.

In response to the crash, former NTSB member John Goglia has called for video cameras in the cockpit, with footage to be regularly reviewed to ensure pilots are completing checklists. Despite the good intentions, this proposal would not achieve the desired end. Pilots already work in the presence of cockpit voice recorders, flight data recorders, ATC communication recording, radar data recording, and more. If a pilot needs to be videotaped too, I’d respectfully suggest that this person should be relieved of duty. No, the problem here is not going to be solved by hauling Big Brother further into the cockpit.

A better model would be that of the FOQA program, where information from flight data recorders is downloaded and analyzed periodically in a no-hazard environment. The pilots, the company, and the FAA each get something valuable. It’s less stick, more carrot. I would also add that this sort of program is in keeping with the FAA’s recent emphasis on compliance over enforcement action.
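To make the concept concrete, here is a toy sketch of the kind of screening a FOQA program might run against recorded flight data. The parameter names and the threshold are invented for illustration; real programs use formally defined event sets agreed between the pilots, the company, and the FAA.

    # Toy FOQA-style screen: flag flights whose recorded data shows no full
    # flight control sweep before takeoff. Parameter names and the 15-degree
    # threshold are invented for illustration.
    from dataclasses import dataclass

    @dataclass
    class FlightData:
        flight_id: str
        elevator_deg: list  # elevator positions sampled during taxi

    def control_check_performed(fd, sweep_deg=15.0):
        """A proper control check shows large elevator travel both ways."""
        return (max(fd.elevator_deg) - min(fd.elevator_deg)) >= sweep_deg

    flights = [
        FlightData("trip-0601", [-14.8, 0.2, 15.1, -0.3]),  # full sweep
        FlightData("trip-0602", [0.1, 0.0, -0.2, 0.1]),     # no sweep
    ]

    for fd in flights:
        if not control_check_performed(fd):
            print(f"FOQA event: no flight control check on {fd.flight_id}")

The point is not the code; it is the posture. Because the data is reviewed routinely and without jeopardy, a habit like skipping control checks surfaces after a handful of flights instead of after a few hundred.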

The Normalization of Deviance

What I, and probably you, most want to know is how well-respected, experienced, and accomplished pilots who’ve been through the best training the industry has to offer could reach the point where their performance was so bad that a CFI wouldn’t accept it from a primary student on their very first flight.

After reading through the litany of errors and malfeasance present in this accident report, it’s tempting to brush the whole thing off and say “this could never happen to me”.  I sincerely believe doing so would be a grave mistake. It absolutely can happen to any of us, just as it has to plenty of well-trained, experienced, intelligent pilots. Test pilots. People who are much better than you or I will ever be.

But how? Clearly the Bedford pilots were capable of following proper procedures, and did so at carefully selected times: at recurrent training events, during IS-BAO audits, on checkrides, and various other occasions.

Goglia, Albright, the NTSB, and others are focusing on “complacency” as a root cause, but I believe there might be a more detailed explanation. The true accident chain in this crash formed over a long, long period of time — decades, most likely — through a process known as the normalization of deviance.

Social normalization of deviance means that people within an organization become so accustomed to a deviant behavior that they no longer consider it deviant, despite the fact that it far exceeds their own rules for elementary safety. People grow more accustomed to the deviant behavior the more it occurs. To people outside the organization, the activities seem deviant; people within the organization, however, do not recognize the deviance because it is seen as a normal occurrence. Only in hindsight do people within the organization realize that their seemingly normal behavior was deviant.

This concept was developed by sociologist and Columbia University professor Diane Vaughan after the Challenger explosion. NASA fell victim to it in 1986, and then got hit again when the Columbia disaster occurred in 2003. If they couldn’t escape its clutches, you might wonder what hope we have. Well, for one thing, spaceflight in general and the shuttle program in particular are specialized, experimental types of flying. They demand acceptance of a far higher risk profile than corporate, charter, and private aviation.

I believe the first step in avoiding “normalization of deviance” is awareness, just as admitting you have a problem is the first step in recovery from substance addiction.  After all, if you can’t detect the presence of a problem, how can you possibly fix it?

There are several factors which tend to breed normalization of deviance:

  • First and foremost is the attitude that rules are stupid and/or inefficient. Pilots, who tend to be independent Type A personalities anyway, often develop shortcuts or workarounds when the checklist, regulation, training, or professional standard seems inefficient. Example: the boss is on board and we can’t sit here for several minutes running checklists; I did a cockpit flow, so let’s just get going!
  • Sometimes pilots learn a deviation without realizing it. Formalized training only covers part of what an aviator needs to know to fly in the real world. The rest comes from senior pilots, training captains, and tribal knowledge. What’s taught is not always correct.
  • Often, the internal justification for knowingly breaking a rule invokes the “good” of the company or customer, especially where the rule or standard is perceived as counterproductive. In the case of corporate or charter flying, it’s the argument that the passenger shouldn’t have to (or doesn’t want to) wait. I’ve seen examples of pilots starting engines while the passengers are still boarding, or while the copilot is still loading luggage. Are we at war? Under threat of physical attack? Is there some reason a two-minute delay is going to cause the world to stop turning?
  • The last step in the process is silence. Co-workers are afraid to speak up, and understandably so. The cockpit is already a small place. It gets a lot smaller when disagreements start to brew between crew members. In the case of contract pilots, it may result in the loss of a regular customer.  Unfortunately, the likelihood that rule violations will become normalized increases if those who see them refuse to intervene.

The normalization of deviance can be stopped, but doing so is neither easy nor comfortable. It requires a willingness to confront such deviance when it is seen, lest it metastasize to the point we read about in the Bedford NTSB report. It also requires buy-in from pilots on the procedures and training they receive. When those things are viewed as “checking a box” rather than bona fide safety elements, it becomes natural to downplay their importance.

Many of you know I am not exactly a fan of the Part 121 airline scene, but it’s hard to argue with the success airlines have had in this area.  When I flew for Dynamic Aviation’s California Medfly operation here in Southern California, procedures and checklists were followed with that level of precision and dedication.  As a result, the CMF program has logged several decades of safe operation despite the high-risk nature of the job.

Whether you’re flying friends & family, pallets of cargo, or the general public, we all have the same basic goal: to aviate without ending up in an embarrassing NTSB report whose facts leave no doubt about how badly we screwed up. The normalization of deviance is like corrosion: an insidious, ever-present, naturally-occurring enemy which will weaken and eventually destroy us. If we let it.

Comments

  1. January 13, 2016 at 2:15 pm

    Hi Ron,

    I’m not a pilot at all, so this is interesting to me in a different way, partly hearing about the industry but partly because the normalization of deviance applies outside of flying as well.

    Since normalization within any culture happens precisely because the culture becomes self-perpetuating, isn’t the only real way to combat that to have either a) automated systems which don’t allow the deviance or b) very regular checks by people outside the culture that procedure was followed, with significant penalty for not following? In this case, given that you have audio recordings of the cockpit, there could be regular audits that checklists were followed, and as some of our voice-to-text technology improves, that could be automated for a greater percentage of flights. I agree with the need for buy-in on rules, since rules-for-the-sake-of-rules delegitimize the rule book as a whole, but humans, let alone humans within a culture, are going to slip eventually anyway. Having outside audits with penalties for failure that the pilots (or software engineers, in my case) know will happen seems to be the only way to really get incentives strongly aligned… or am I missing something?
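    (The mechanical part of that idea is simple enough to sketch. Below is a toy Python version that scans a cockpit transcript for required challenge/response phrases; the phrase list and transcript are invented, and the hard problems, such as transcription accuracy and phraseology variation, are exactly the ones it ignores.)

        # Toy audit of a cockpit voice transcript for required checklist
        # callouts. The phrase list and transcript are invented examples.
        REQUIRED_CALLOUTS = [
            "control lock off",
            "flight controls checked",
            "takeoff power set",
        ]

        def missing_callouts(transcript):
            """Return the required callouts never heard in the transcript."""
            text = transcript.lower()
            return [p for p in REQUIRED_CALLOUTS if p not in text]

        sample = "cleared for takeoff ... airspeed alive ... v1 ... rotate"
        print(missing_callouts(sample))  # all three are missing here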

  2. David Waller
    January 15, 2016 at 6:36 am

    Hey Ron,

    I’m no pilot nor do I have any connection to aviation other than being an occasional passenger. Can I ask why the culvert is significant in the crash? Surely the ground was going to ruin their day in any case?

    Professionally, I’m an Internal Auditor and I see this sort of behavior (normalisation of deviance) all the time and in all sorts of systems and processes. I’d go as far as saying it’s part of the human condition. That said, I’m amazed at the suicidal negligence of the crew in this crash.

    • January 15, 2016 at 6:51 am

      The culvert is only significant in that it was the object which brought the airplane to a sudden stop and caused it to break apart. Until that point, the jet seems to have been rolling on the ground in one piece despite exiting the runway surface.

      Would the runway excursion have been survivable if the culvert wasn’t there? Who knows. Probably not, because the landing gear can only take so much abuse before it fails. But that’s pure conjecture on my part.

      Many commenters have noted that normalization of deviance is common in their fields. Physicians, IT security gurus, firearms safety professionals, sailors, etc. As you noted, it is a part of the human condition–a naturally-occurring phenomenon, much like the corrosion I mentioned in the final paragraph.

      • David Waller
        January 15, 2016 at 6:56 am

        Thanks for the reply. Great article. I think Internal Auditors are waking up to organisational culture playing a huge part in what causes risk to crystallise. The big issue is how you can audit it and get the results to make your point to the board.

      • RUSSELL W STYLES
        October 22, 2017 at 7:19 am

        Yes, it is a part of many fields. Your rifle’s bolt is on the table. Does it really matter whether or not you allow the barrel to point at someone? Yes it does. You get used to allowing it now; later, when it is loaded, you might not notice.
        Same thing, really.

        • Frank Davis
          November 4, 2017 at 3:34 pm

          Sound advice. As I tell my firearms classes, never point the gun at something you aren’t willing to destroy. All guns are loaded.

  3. Mike
    January 16, 2016 at 9:36 am

    One factor could be that they knew they’d be fired and have their licenses revoked if they aborted, so they apparently preferred to risk their lives instead of their livelihood.

    • DRG
      January 16, 2016 at 9:51 am

      I’d like to think that’s not the case … no one should be fired for safety … every airline I’ve worked for (or even the corporate departments) went out of their way to ensure crews knew that non-punitive safety-related actions were paramount.

      • January 16, 2016 at 10:04 am

        I’d like to think so as well.

        On the other hand, if they had aborted above V1 and managed to stop the plane — probably with fuse plug releases or other things which would dictate aborting the mission entirely — I wonder what the follow-up investigation would have revealed and how/if that might have impacted their employment.

        I’m not suggesting they would have been punished for taking a safety-related action by aborting the takeoff. Rather, I wonder: would the investigation have revealed how lax the safety environment was? Would it have revealed the skipped checklists, ignored safety warnings (the Rudder Limit CAS message, for example), lack of control checks, and so on?

        • Eric LeVeque
          January 16, 2016 at 10:23 am

          All that’s possible, but in the heat of the moment, I’m not sure they would be thinking that. They’d been operating that way for years as normal. I think it’s a combination of things, one of them being the magic V1 call. They assumed they were committed to go. After all, if you abort after V1 in the sim, it’s a re-do or a fail, so this thought process probably drove them to fix the problem rather than stop. And when they finally tried to abort, it was a half-hearted attempt. As I wrote before, the fact that abnormal cues such as a CAS msg and stiff throttles didn’t cause them to stop and have a look, or even cause them any concern whatsoever, is even more troubling. They no longer respected their machine, it seems.
          Just a thought
          Just a thought

        • Kenny Brooks
          December 11, 2017 at 5:52 am

          Non-pilot here, Ron, but I’ve read and reread your article on this tragedy more than several times. Even as a non-pilot, I’m totally aghast at the level of indifference to checklists and required procedures in operating this very sophisticated aircraft. You mention “fuse plug releases” as a means of aborting the takeoff. Would you explain, please? Incidentally, I’ve followed Gulfstreams for over 40 years after seeing my first back in the 70s. Always a pleasure to read the well written articles here.

          • December 11, 2017 at 8:57 am

            Ah yes. Fuse plugs are mounted in each inner wheel half. Their purpose is to prevent tire explosion caused by hot brakes. The plugs are designed to melt at a specific temperature, thereby releasing the pressure from a tubeless tire when brake-generated heat causes the tire or wheel to exceed a safe temperature limit.

            In the Bedford case, a high speed abort would have transferred tremendous energy into the brakes. You can see YouTube videos of brake tests where jumbo jets abort at high speed. The brakes glow red and then white hot. They are designed to handle such an event, but if the abort occurs at a high enough speed and the aircraft is heavy enough, the brakes can heat up to the point where they would heat the tires and thereby increase the pressure of the nitrogen therein, causing a violent explosion. It’s much safer to gently release the pressure in the tire to prevent this. That’s what a fuse plug does.
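            If you want a feel for the numbers, Gay-Lussac’s law (pressure proportional to absolute temperature at constant volume) tells the story. The figures below are illustrative assumptions, not Gulfstream specifications:

                # Constant-volume pressure rise as brake heat soaks into the
                # wheel and tire. All numbers are illustrative, not G-IV specs.
                def pressure_at_temp(p1_psi, t1_c, t2_c):
                    return p1_psi * (t2_c + 273.15) / (t1_c + 273.15)

                p_cold = 180.0  # assumed cold inflation pressure, psi
                print(round(pressure_at_temp(p_cold, 15, 150)))  # ~264 psi
                print(round(pressure_at_temp(p_cold, 15, 300)))  # ~358 psi

            Rather than let the pressure climb toward the tire’s burst limit, the plug melts first and lets the nitrogen out in a controlled way.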

    • January 18, 2016 at 7:50 am

      If they were concerned about being fired (or in general), they would have followed procedure. Ron called it a “phenomenon”. I call it becoming comfortably lax and unconcerned.

  4. January 17, 2016 at 3:52 pm

    YAWN! “Social normalization of deviance” is just a fancy way to say COMPLACENCY — quit trying to make yourself sound smart by making simple concepts complicated. The pilots here did what most professional pilots do — they cut corners — and they kept cutting so many corners for so long that there simply weren’t any corners left.

    • Eric Jaderborg
      January 17, 2016 at 5:16 pm

      Oh, but there is such a difference between normalization and complacency. These are distinctly different phenomena. Complacency is a form of slowly going to sleep: “nothing has happened before, so nothing will happen today.” Normalization of deviance is the acceptance of deviant behavior, which then becomes the norm. It is the precession of a gyroscopic compass that gives false information that is accepted as truth, until finally the “truth” is so far afield from reality that one no longer knows the difference. They arise from different roots. With respect, Mr. Currie, it’s not about “sounding smart” — and I think that is an unkind characterization — it’s about identifying departures from established SOP, resetting the compass, and driving a straight, true line.

      • January 17, 2016 at 5:41 pm

        Couldn’t have said it better myself!

      • Chuck Taylor
        January 21, 2016 at 9:15 am

        Regarding your reply, “Oh, but there is such a difference between normalization and complacency.” – that was very well articulated. Though this article specifically deals with aviation – the hazards and lessons are applicable to so many areas of our lives. It reminds me of a definition of integrity I heard in more than one sermon, “Integrity is who you are when no one is watching”. It’s nice to know there are a large number of pilots out there resisting the normalization of deviance.

  5. Capt Scott Johnson
    January 18, 2016 at 5:32 am

    This was a great article on culture, and a great subsequent discussion. I’m an Event Review Committee rep for my company’s voluntary ASAP reporting program. I will forward it to my airline’s training staff. Words to live by.

    • January 19, 2016 at 5:09 pm

      Thanks Scott! I’m an ERC rep for my company as well. I hope the committee finds it worthwhile. 🙂

  6. January 18, 2016 at 7:33 am

    How come there is no flashing alarm with a verbal prompt when the gust lock is engaged? Simple technology.

    • January 19, 2016 at 12:14 pm

      A good question. If I had to hazard a guess, it’s probably because the list of procedures, checklists, and flows both crew members would have to ignore to attempt a takeoff with the gust lock engaged was so long that the FAA and GAC assumed it would not happen. And for 30+ years they were right.

      Could a micro switch in the pedestal be wired into the engine start switch to lock out the starter when the gust lock was engaged? Probably. Of course, that adds weight, complexity, and new failure modes which could keep the engines from being started a lot more often than it would prevent a gust lock accident.

      Keep in mind this is an early-1980s design. It’s like asking why that 1925 Ford doesn’t have a shoulder harness or why that ’68 Mustang fastback has no airbag.
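      For what it’s worth, the interlock logic itself would be trivial; here is a purely hypothetical sketch of the micro-switch idea (it models no actual Gulfstream system):

          # Hypothetical gust lock / starter interlock. Purely illustrative;
          # this reflects no real G-IV circuitry.
          def starter_enabled(gust_lock_engaged, start_switch_on):
              # Inhibit engine start while the gust lock handle is up.
              return start_switch_on and not gust_lock_engaged

          assert starter_enabled(gust_lock_engaged=False, start_switch_on=True)
          assert not starter_enabled(gust_lock_engaged=True, start_switch_on=True)

      The logic is the easy part; the certification question is the failure mode, since a stuck or miswired switch now keeps good engines from starting.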

  7. Joe Miller
    January 18, 2016 at 8:38 pm

    Ron, I have enjoyed all the replies you have received about the GIV at KBED. As I stated in previous replies, I flew for one of the largest corporate aviation fleets (a conglomerate oil company) before the first of my three retirements. Many years ago we aborted at KBED in a GII at Vr. Why would we abort above V1? REASON: 300 overcast, 1 mile in mist and fog, and BIRDS (not reported or seen by the tower, because we asked).

    I need to explain the company’s policies. FIRST: the senior pilot was ALWAYS in the left seat. SECOND: the Captain (senior pilot) was the only pilot who could execute an abort, though both pilots could command one. THIRD: the first officer would only fly from the right seat, but he would set his power for takeoff and then the Captain would take control of the power levers (so the Captain was in position to abort). The crew always knew their positions and no one was ever out of sync.

    ON THIS ABORT: it was the first officer’s leg, and he briefed the crew (we had an engineer on board who also served the passengers) with our usual briefing, plus, due to the weather, the note that we had an additional 2,700 feet available over and beyond the accelerate-stop distance if needed, plus overrun. I called V1, then Vr, and then the first officer screamed BIRDS (seagulls were sitting on the runway and flying, but not out of the way, just down the runway due to the low ceiling) and we were running into thousands of them. I called ABORTING, with the crew doing what we had trained for, except that due to the wet runway I commanded NO braking until 100 kts (fear of hydroplaning), only MAX reverse. At 100 kts both crew went to max braking and we stayed in MAX reverse until we had completely stopped the aircraft (why save the engines if you’re going to go off the runway?), and we still had 1,500 feet of runway ahead of us plus overrun. No birds went into the engines, but there were hundreds lying all over the runway. As we sat there calming down, the first officer said: Thank you for aborting, because I knew we were never going to make it if we lifted off. The engineer stated: I’ve been through a lot, but this is the first time I felt (if we lifted off) we were not going to make it.

    Note: due to the size of our flight department, we had training personnel who were authorized to run the training simulators, so we were able to do things in training that the training companies cannot do. We practiced all types of aborts, plus landings on wet or icy runways with NO BRAKES, just reversers. At home, on a long runway, I would demonstrate to a new first officer that you could make a normal landing at 54,000# and stop in less than 5,000′ with only the use of the reversers. Why would I do this, you ask? In the 46 years I flew jets, weather was not always available or very reliable (no FMS or cell phones, etc.), especially in the beginning of my career, with all the out-of-the-way places oil companies were going.

    On ending: you can never overtrain, nor can you reinvent the wheel when all is going wrong (such as what the flight crew attempted to do in the GIV).

    • January 19, 2016 at 5:20 pm

      Amazing story, Joe. You’re right: we never train for, simulate, or even talk about aborting above V1. I understand the logic behind that, because statistically speaking, a high-speed abort is incredibly hazardous, and more than one crew has perished trying to accomplish it.

      But it does beg the question, what if you’re above V1 and… the ground spoilers deploy, a few hundred birds end up in the engines, the elevator or yoke jams, etc? It’s like losing both engines in a twin-engine jet. Most training programs and flight manuals don’t even touch on the subject, as though it could never happen. But it has happened in the past, and will again in the future. Misfueling, pilot error, sabotage, conversion mistakes, hijacking, volcanic ash, maintenance issues, bird strikes, and many other causes have taken out two, three, and four engines all at once.

  8. Saabchick
    January 18, 2016 at 11:27 pm

    Good to see the media bringing a wider look at this. Unless you’re psychopathic, no one wakes up in the morning and decides they’re going to crash today. The normalisation thing is a standards issue: if these pilots weren’t running SOPs, you can bet others weren’t either. Normalisation happens in every workplace, but in most airlines it gets picked up and brought back on track. The FO who went along with it all: cockpit gradient or reporting culture? And it’s the FAA’s job to pick up the company for not catching this stuff during audits. Lots of deep cultural issues to pick through here.

  9. January 19, 2016 at 11:12 am

    See this in IT/engineering/security culture all the time; also known as “that’s just how we do it here” or “it’s OK, don’t worry about that”.

    You can tell people stuff all day – but having independent auditors outside the normal chain of command brought in to break processes (red team) and hold people accountable is absolutely crucial. Continental Express 2574 exemplifies this failure mode well. In the IT world we do have inspections, but often there is a huge scramble right before to “get things right”, so while it is arguably a useful forcing function to maintain some average minimum level – it means that there are many times when your actual compliance is quite low because it basically oscillates up and down.

    Unfortunately the assumption that the culture wants to embrace mistakes and learn from them is erroneous in our world as well. Too often decisions are political and exercises are gamed to make folks ‘look good’ vice to actually learn anything/test things.

    Bottom line.. anything involving humans is hard – especially at scale. This is why I prefer small teams over large. It’s much easier to get 100 units of work out of 4 spectacular folks than 200 units of work out of 20 average folks.

    • January 19, 2016 at 11:31 am

      Thanks for the perspective from the IT side. We’re definitely dealing with human nature here. The biggest difference between flying and IT that I can think of is the stakes. IT errors and security lapses can be expensive, embarrassing, and career-limiting… but they don’t usually kill the one who circumvents procedure. In aviation, those things can and do kill you… and yet people still continue to do them. The slide is slow and takes a long time. It reminds me of the boiling frog analogy: when the temperature (aka deviance) ramps up slowly, nobody notices–not even the frog. But drop that same frog into a vat of scalding water and he’ll jump right out. That’s what the auditors are: frogs that are occasionally dropped into our operation so they can test the water.

      • Brian
        January 19, 2016 at 11:40 am

        In my sector (Defense/Intel Systems), certain mistakes could arguably get people killed, but it’s certainly not nearly as common, direct, or spectacular.

        I like the frog analogy; and it certainly demonstrates why the frogs have to live somewhere else. If they are too close to the work they ‘warm up’. Speaks nicely to the intrinsic problems of having to have folks ‘in the trenches’ to see how work is ‘actually’ done – i.e. not just on an inspection/sim/etc. Similar problems with meat inspectors.

        I will say – I really like the idea you mention of taking FDR-type data and looking at how things are ‘actually done’. You can get interesting pattern of life/trends that can be overlain with how the rule-makers believe things are being done. This will illustrate assumptions/biases neatly. One great example is how the weight of the average person has shot up dramatically since the rules were made. I believe this was revised recently but I’d argue it’s still optimistic.

        Verifiable Data > Rhetoric & Assumptions

        • January 19, 2016 at 12:21 pm

          The FDR data analysis is becoming more common. It’s called FOQA: Flight Operations Quality Assurance. Many airlines and charter companies — mine included — have such a program. It’s quite valuable for analyzing trends, improving training, and focusing on areas where standardization may be lacking. Aviation is somewhat unique in having built-in data recorders on board. They were designed to allow accident investigators to reconstruct a crash… but since they are there and recording data all the time anyway, why not put it to good use?

          • Steve Thorpe
            February 20, 2016 at 8:31 am

            The corporate, Part 91 operator I work for has had a FOQA program in place since 2006. Participating business aviation operators have a very active users group that meets on a regular basis to share best practices and brainstorm for ways to make our programs relevant and safety-enhancing for our operations. In fact, this users group has been tasked by NBAA to provide data that the NTSB asks for in one of the recommendations arising from this accident.

            Reason’s Swiss cheese model may need a new paradigm… it is not enough for the holes in the several layers of cheese to “line up” in order for an accident to happen. In this case, where any one layer of cheese would seem to have been enough to prevent the accident, the flight crew burrowed through the cheese until they made their own hole!

            These self-made holes:

            1) Checklist that says, “Control lock…….Off.”
            2) Flight control check in the blocks, as per the checklist.
            3) “Rudder Limit” annunciation as they were pulling onto the runway, indicating the rudder was at a restrictive limit when it shouldn’t have been. This was due to the control lock doing its job of restricting rudder movement.
            4) The PF recognizing they were only able to get about half of the calculated takeoff power manually before engaging the autothrottle.
            5) BOTH pilots recognizing they were not achieving full rated takeoff thrust, yet continuing the takeoff.
            6) Another checklist item, checking that the yoke “floats” up off of the forward stop by 80 KIAS, was not called out and likely not checked.
            7) And the last “burrow” through a layer of cheese: recognition of “lock’s on” by the PF, yet not initiating the abort for 10 more seconds.

            There but for the grace of God go I…NOD is a sneaky bastard. Combating it requires vigilance, a commitment to SOP development and execution by ALL involved, and an occasional fresh set of eyes to see “how we’re doing”.

      • Joe Thompson
        January 19, 2016 at 3:27 pm

        The boiling frog comment reminds me of something I read years ago in a book about repairing your own car — every now and then it’s good to let somebody else drive your car somewhere, or drive someone else’s car yourself for a couple of days before coming back to your own. You get used to things that rather than breaking outright, wear further and further out of spec over time — a little more play in the steering, a little more mush in the brakes — and compensate without realizing. Then your friend drives it and says “How can you go out in that thing?! The wheel is looser than a two-bit streetwalker and the brakes take six blocks to stop the thing!” Now you have an independent perspective that (hopefully) prompts you to actually go back and measure what you took for granted as “working fine” before.

        Likewise, it’s easy to get in the habit of not doing one thing on a checklist, then another… another…

        • January 19, 2016 at 5:07 pm

          That independent perspective can be quite revealing! It’s often recommended that the same thing be done with airplanes. For privately-owned, light GA aircraft, occasionally have a different mechanic work on it so a “new” set of eyes can give it a look. I think that helps explain why a new owner of an aircraft — even one which has been well cared for — will often find the first year or two of ownership unusually expensive.

          On larger aircraft, some companies have a policy where a mechanic who works on one engine cannot work on the other. That way in a worst-case scenario, if an error is made on one side, hopefully it will not be replicated on the other.

        • Eric Jaderborg
          January 19, 2016 at 5:29 pm

          Hey, I totally get this! I drove my (now) ex-‘s car once after months away from it, and heard this funny “whisking” sound from somewhere “down under.” I said, “Hey, don’t you hear that?” And she said, “hear what?” So I took it to the dealer, and it turned out to be a known fault in the transmission. They replaced the whole thing as a manufacturer’s “we’re sorry” warranty, with a $50 deductible! It pays to have someone else “look and listen” from time to time. “Precession” is something we don’t notice in ourselves; it’s other people who hold up the mirror. And come to think of it, that’s a major problem with my current employer: no recurrent checking of those who DO the checking. Nobody is checking the checkers. Nobody is listening for that “whisking” sound that will eventually lead to failure some dark and lonely night. Part of “having each other’s back” is being honest with our peers when we see deviation becoming the norm in their professional practices. Excellent observations!

  10. DR
    May 14, 2016 at 10:13 am

    In my early days of flying I worked with guys who had no clue what a before-start checklist is, and if you brought up a checklist, you were called a pussy and asked where your parachute was. One fine individual I know used to trick the autofeather system on the G1 because he did not trust it. It caught up with him.

    • May 14, 2016 at 10:36 am

      There are still people like that out there, but thankfully encounters with them are fewer and further apart than in the old days. Change is never easy, especially when dealing with Type A personalities. It’s taken a long time, but accident statistics make an undeniable case for checklist discipline and CRM.

      • Eric LeVeque
        May 14, 2016 at 11:15 am

        Concerning normalization of deviance: recently I was getting ready for engine start when I noticed a fast-moving front coming in. Dark clouds, strong gusty winds, rain, but still good visibility and higher ceilings (1,000 feet). Planes were landing and departing. Wind gusts were approaching 35 kts. I began to question whether we should delay our departure until things calmed down. I then asked ground control for pireps from the last few approaches and departures. Reports came back of “severe turbulence” and a loss of 40 kts on final. Rough takeoffs as well. Looking at the radar apps, I could see that this would pass by in about 20-30 minutes. I informed ground that due to the weather, I would delay the takeoff until conditions improved, but would like to taxi out. It was approved. As I taxied out and parked in the holding area, I started noticing other crews asking for delays as well. This wasn’t happening until I asked. I know because I was sitting in the plane watching and listening to the tower clear aircraft for departure. Planes were landing and taking off like normal. Everyone was pressing on. Until (and I can’t prove it) I deviated from the norm, which in my opinion was less safe. 5 or 6 planes waited with me until the weather moved out, and we all took off only 35 minutes late.
        Sometimes it just takes one person to say no to wake the others up.

        • May 14, 2016 at 3:18 pm

          I’ve seen that happen as well. There’s a herd mentality which sometimes plays a part. Everyone else is flowing that way, so we do it as well. It takes someone with better-than-average situational awareness to say something about it.

          Another example: the wind shifts and pilots keep using the same runway. It’s not until a guy with a tailwheel (or just better wind awareness) decides to takeoff or land into the wind that everyone else starts doing it, too.

          • Lynn OJala
            September 20, 2016 at 9:39 pm

            This made me smile…I fly a short coupled high performance tail dragger and will often request a different runway if winds favor the change….when I do, it almost always starts a chain reaction from other pilots.

  11. Robert Rosen
    July 18, 2016 at 2:53 pm

    As someone researching the subject of “normalization of deviance” for a presentation to my workgroup, I would have liked to have read more about the sequence of just how normalization of deviance got the pilots (or could have gotten them) from point A (full professionalism) to point B (“performance is so bad that a CFI wouldn’t accept it from a primary student on their very first flight”). Maybe it’s because only the pilots themselves ever knew for sure? So I will try to speculate on how:

    First, I suppose these pilots decided that one or two rules/checks were “stupid and/or inefficient” and so could be skipped. Their plane not falling out of the sky as a result, eventually this shortcut path became their “new normal”. Then this repeated: they identified one or two more rules that could be deviated from, then one or two more, and so on and on. After a while, their total deviation became huge but they didn’t realize it because they kept redefining acceptable procedure in their minds, and each successive deviation was only a small step from their current “new normal” (like with the “boiling frog” analogy mentioned in earlier comments.) Does this sound about right?

    • July 18, 2016 at 3:32 pm

      You hit the nail on the head. That is exactly how it happened. And that’s also why it looks so egregious to us, but didn’t seem odd to them. It’s a massive deviation from accepted procedures and standards. But to them, it was just one minor omission here or there that turned into a normal thing. One of the things that makes normalization of deviance so insidious is that the true magnitude of the danger is only apparent in retrospect.

      • AirFrank
        July 21, 2016 at 2:52 pm

        You also see it in spades with respect to the shuttle Columbia disaster. After they realized that a piece of foam had struck the orbiter, they asked the engineers who built the shuttle how big a piece of foam hitting the shuttle was acceptable. They responded: NONE. In hindsight, the orbiter(s) had been hit on almost every flight since they changed the formula of the foam. But nothing bad happened. Until it did.

  12. Jukka Talvio
    August 16, 2016 at 8:02 am

    As someone who has been building safety critical products, I disagree with this: “The throttle portion of the gust lock may have failed to meet a technical certification requirement, but it was not the cause of the accident. The responsibility for ensuring the gust lock is disengaged prior to takeoff lies with the pilots, not the manufacturer of the airplane.” The malfunctioning gust lock feature was exactly the root cause which could be eliminated in the future Gulfstream aircraft.

    A good designer knows that people are sometimes idiots, including pilots. There was therefore a safety feature included for those moments of idiocy: “It also prevents application of full takeoff power, but the fact that the pilot was able to physically push the power levers so far forward simply illustrates that anything can be broken if you put enough muscle into it.” It failed to take into account full muscle power. It is possible to build a safety lock which cannot be overcome with muscle power. Clear design flaw or as already said, it “failed to meet a technical certification requirement”.

    If we always accepted human error as the root cause and never tried to eliminate it, we would not have the current safety features in car automatic transmissions, for instance. (You have to press the brake to move the car into Drive, and move the transmission into Park before you can turn off the ignition, for example.) They have already saved countless lives from human error.

    People are sometimes idiots, even consistently, as the article points out. The world and the design of safety-critical equipment are full of examples. Assuming that people are never incredibly stupid, or that we can completely train them out of it, is just stupid. We can reduce it, I agree. We can and should fight against the normalization of deviance, but doing so only through training is not wise. We can change the product but not the users, not completely and not permanently.

    The solution suggested in the article (“information from flight data recorders is downloaded and analyzed periodically in a no-hazard environment”) sounds like a good idea as well, but it does not eliminate the root cause of the accident. A better interlock between throttle and gust lock would eliminate the root cause of this accident and would have prevented it.

    • AirFrank
      August 22, 2016 at 3:22 pm

      Trying to idiot-proof anything ultimately leads to failure. You can’t possibly think of every scenario where somebody must do something that your “idiot-proof” device denies them. I prefer to rely on myself and not some engineer who knows nothing about what I’m doing.

  13. January 3, 2017 at 7:17 am

    “…It is invisible and insidious, common and pernicious. People tend to ignore or misinterpret the deviations as an innocuous part of the daily job. If the deviations also save time and resources and reduce costs, they can even be encouraged by managers and supervisors. However, the more times deviations occur without apparent consequences, the more complacent the system becomes.” Great article, Ron. I published an article with some literature excerpts about this issue, regarding the LaMia accident in Colombia last November (those guys that ran out of fuel). I was sadly surprised by the fact that although the term was coined so long ago, it is practically unknown to the aeronautical community in general.

    • January 4, 2017 at 12:27 pm

      What a small world — I just read that article on your site! Yes, I too wish that the term “normalization of deviance” was better known. After two decades in the business, I’ve come to believe that this is a part of many human factors aviation accidents. The good news is that this is slowly starting to change. I’m finding more and more people who are aware of this phenomenon. Hopefully in future years it will be taught to aviators at all levels.

  14. October 26, 2021 at 7:28 pm

    “The pilots knew what was wrong while there was still plenty of time to stop it.” So why didn’t they prevent the crash?

    • January 5, 2022 at 10:47 pm

      That’s a very good question. Since they’re no longer alive, we will never know for sure. But the answer may boil down to two things:

      1. They had conditioned themselves for expediency above all else. So when they discovered that the controls were locked, they assumed that pulling the Flight Power Shutoff handle would remove hydraulic pressure and allow the gust lock to be removed as they rolled down the runway.
      2. They did not want to be forced to explain to The Boss why they aborted a takeoff.

      Ridiculous reasons, in hindsight. Of course, hindsight is always 20/20, isn’t it?

      Of all the incredible facets of this crash, this will always be the one that astounds me the most: they realized the controls were locked when they still had plenty of time to abort the takeoff. And yet they didn’t.
