High reliability organization

A high reliability organization (HRO) is an organization that has succeeded in avoiding catastrophes in an environment where normal accidents can be expected due to risk factors and complexity.

Important case studies in HRO research include both studies of disasters (e.g., Three Mile Island nuclear incident, the Challenger explosion and Columbia explosion, the Bhopal chemical leak, the Tenerife air crash, the Mann Gulch forest fire, the Black Hawk friendly fire incident in Iraq) and HROs like the air traffic control system, naval aircraft carriers, and nuclear power operations.

History

The roots of the HRO paradigm were developed by a group of researchers at the University of California, Berkeley (Todd LaPorte, Gene Rochlin, and Karlene Roberts) who examined aircraft carriers (in partnership with Rear Admiral (ret.) Tom Mercer on the USS Carl Vinson), the Federal Aviation Administration’s Air Traffic Control system (and commercial aviation more generally), and nuclear power operations (Pacific Gas and Electric’s Diablo Canyon reactor). An initial conference at the University of Texas in April 1987 brought researchers together to focus attention on HROs. Further research on each of these three sites was contributed by Karl Weick[1] and Paul Schulman.[2] Subsequent research has examined the fire incident command system,[3] Loma Linda Hospital’s Pediatric Intensive Care Unit,[4] and the California Independent System Operator[5] as HROs.

Although they may seem diverse, these organizations share a number of similarities. First, they operate in unforgiving social and political environments. Second, their technologies are risky and present the potential for error. Third, the scale of possible consequences from errors or mistakes precludes learning through experimentation. Finally, to avoid failures these organizations use complex processes to manage complex technologies and complex work. HROs share many properties with other high-performing organizations, including highly trained personnel, continuous training, effective reward systems, frequent process audits, and continuous improvement efforts. Other properties are more distinctive: an organization-wide sense of vulnerability; a widely distributed sense of responsibility and accountability for reliability; widespread concern about misperception, misconception, and misunderstanding that is generalized across a wide set of tasks, operations, and assumptions; pessimism about possible failures; and redundancy and a variety of checks and counterchecks as a precaution against potential mistakes.[6]

Defining high reliability and specifying what constitutes a high reliability organization has presented some challenges. Roberts[7] initially proposed that high reliability organizations are a subset of hazardous organizations that have enjoyed a record of high safety over long periods of time. Specifically, she argued: “One can identify this subset by answering the question, ‘How many times could this organization have failed, resulting in catastrophic consequences, that it did not?’ If the answer is on the order of tens of thousands of times, the organization is ‘high reliability’”[8](p. 160). More recent definitions have built on this starting point but emphasized the dynamic nature of producing reliability (i.e., constantly seeking to improve reliability and intervening both to prevent errors and failures and to cope and recover quickly should errors become manifest). In other words, there has been increased focus on thinking of HROs as reliability-seeking rather than reliability-achieving. Reliability-seeking organizations are not distinguished by their absolute error or accident rates, but rather by their “effective management of innately risky technologies through organizational control of both hazard and probability”[9](p. 14). Consequently, the phrase high reliability has more generally come to mean that high risk and high effectiveness can co-exist, that some organizations must perform well under very trying conditions, and that it takes intensive effort to do so.

A key turning point that reinvigorated HRO research was Karl Weick, Kathleen M. Sutcliffe, and David Obstfeld’s[10] reconceptualization of the literature on high reliability. These researchers systematically reviewed the case study literature on HROs and illustrated how the infrastructure of high reliability was grounded in processes of collective mindfulness, which are indicated by a preoccupation with failure, reluctance to simplify interpretations, sensitivity to operations, commitment to resilience, and deference to expertise. In other words, HROs are distinctive because of their efforts to organize in ways that increase the quality of attention across the organization, thereby enhancing people’s alertness and awareness to details so that they can detect subtle ways in which contexts vary and call for contingent responding (i.e., collective mindfulness). This construct was elaborated and refined as mindful organizing in Weick and Sutcliffe’s 2001 and 2007 editions of their book Managing the Unexpected.[11][12] Mindful organizing forms a basis for individuals to interact continuously as they develop, refine, and update a shared understanding of the situation they face and their capabilities to act on that understanding. Mindful organizing proactively triggers actions that forestall and contain errors and crises. It requires that leaders and organizational members pay close attention to shaping the social and relational infrastructure of the organization, and to establishing a set of interrelated organizing processes and practices, which jointly contribute to the system’s (e.g., team, unit, organization) overall culture of safety.

High reliability organization theory and HROs are often contrasted against Charles Perrow’s Normal Accident Theory (NAT) [13](see Sagan [14] for a comparison of HRO and NAT). NAT represents Perrow's attempt to translate his understanding of the disaster at Three Mile Island nuclear facility into a more general formulation of accidents and disasters. Perrow's 1984 book also included chapters on petrochemical plants, aviation accidents, naval accidents, "earth-based system" accidents (dam breaks, earthquakes), and "exotic" accidents (genetic engineering, military operations, and space flight).[15] At Three Mile Island the technology was tightly coupled due to time-dependent processes, invariant sequences, and limited slack. The events that spread through this technology were invisible concatenations that were impossible to anticipate and cascaded in an interactively complex manner. Perrow hypothesized that regardless of the effectiveness of management and operations, accidents in systems that are characterized by tight coupling and interactive complexity will be normal or inevitable as they often cannot be foreseen or prevented. This pessimistic view, described by some theorists as unashamedly technologically deterministic, contrasts with the more optimistic view of HRO proponents, who argued that high-risk, high-hazard organizations can function safely despite the hazards of complex systems. Despite their differences, NAT and high reliability organization theory share a focus on the social and organizational underpinnings of system safety and accident causation/prevention.

Characteristics

Researchers have found that successful organizations in high-risk industries continually reinvent themselves. For example, when an incident command team realizes what they thought was a garage fire has now changed into a hazardous material incident, they completely restructure their response organization.

Five characteristics of HROs have been identified[16] as responsible for the "mindfulness" that keeps them working well when facing unexpected situations: preoccupation with failure, reluctance to simplify interpretations, sensitivity to operations, commitment to resilience, and deference to expertise.

Practitioners in HROs work in recognized high-risk occupations and environments. Wildfires create complex and very dynamic mega-crisis situations across the globe every year. U.S. wildland firefighters, often organized using the Incident Command System into flexible interagency incident management teams, are not only called upon to "bring order to chaos" in today's huge mega-fires, but are also requested for "all-hazard events" such as hurricanes, floods, and earthquakes. The U.S. Wildland Fire Lessons Learned Center has been providing education and training to the wildland fire community on high reliability since 2002. HRO behaviors can be recognized and further developed into high-functioning skills of anticipation and resilience. Learning organizations that strive for high performance in things they can plan for can become HROs that are better able to manage unexpected events that, by definition, cannot be planned for.

Notes

  1. Weick, K. E., & Roberts, K. H. (1993). Collective Mind in Organizations: Heedful Interrelating on Flight Decks. Administrative Science Quarterly, 38, 357-381.
  2. Schulman, P. R. (1993). The Negotiated Order of Organizational Reliability. Administration & Society, 25(3), 353-372.
  3. Bigley, G. A., & Roberts, K. H. (2001). The Incident Command System: High-Reliability Organizing for Complex and Volatile Task Environments. Academy of Management Journal, 44(6), 1281-1300.
  4. Madsen, P. M., Desai, V. M., Roberts, K. H., & Wong, D. (2006). Mitigating Hazards Through Continuing Design: The Birth and Evolution of a Pediatric Intensive Care Unit. Organization Science, 17(2), 239-248.
  5. Roe, E., & Schulman, P. R. (2008). High Reliability Management: Operating on the Edge. Palo Alto, CA: Stanford University Press.
  6. Schulman, P. R. (2004). General attributes of safe organizations. Quality and Safety in Health Care. 13, Supplement II, ii39-ii44.
  7. Roberts, K. H. (1990). Some Characteristics of High-Reliability Organizations. Organization Science, 1, 160-177.
  8. Roberts, K. H. (1990). Some Characteristics of High-Reliability Organizations. Organization Science, 1, 160-177.
  9. Rochlin, G. I. (1993). Defining high reliability organizations in practice: A taxonomic prologue. In K. H. Roberts (Ed.), New challenges to understanding organizations (pp. 11-32). New York: Macmillan.
  10. Weick, K. E., Sutcliffe, K. M., & Obstfeld, D. (1999). Organizing for High Reliability: Processes of Collective Mindfulness. In B. M. Staw & L. L. Cummings (Eds.), Research in Organizational Behavior (Vol. 21, pp. 81-123). Greenwich, CT: JAI Press, Inc.
  11. Weick, K. E., & Sutcliffe, K. M. (2001). Managing the Unexpected: Assuring High Performance in an Age of Complexity (1st ed.). San Francisco: Jossey-Bass.
  12. Weick, K. E., & Sutcliffe, K. M. (2007). Managing the Unexpected: Resilient Performance in an Age of Uncertainty (2nd ed.). San Francisco, CA: Jossey-Bass.
  13. Perrow, C. (1984). Normal Accidents: Living with High-Risk Technologies. New York: Basic Books.
  14. Sagan, S. D. (1993). The Limits of Safety: Organizations, Accidents, and Nuclear Weapons. Princeton, N.J.: Princeton University Press.
  15. HRO research shares an interest in complexity and errors with other work, including Michael Cohen, James March, and Johan Olsen's study of garbage-can decision-making processes, Barry Turner's work on man-made disasters, and Barry Staw, Lance Sandelands, and Jane Dutton's research on "threat-rigidity cycles."
  16. Weick, K. E., & Sutcliffe, K. M. (2001). Managing the Unexpected: Assuring High Performance in an Age of Complexity (pp. 10-17). San Francisco, CA: Jossey-Bass. ISBN 0-7879-5627-9.
This article is issued from Wikipedia (version of 11/9/2015). The text is available under the Creative Commons Attribution/Share-Alike license, but additional terms may apply for the media files.