Victoria University of Wellington Law Review

Roberts, Peter --- "Policy to Protection: The Role of Human Nature and System Nature in Preventing Patient Injury" [2004] VUWLawRw 38; (2004) 35(4) Victoria University of Wellington Law Review 829


POLICY TO PROTECTION: THE ROLE OF HUMAN NATURE AND SYSTEM NATURE IN PREVENTING PATIENT INJURY

Peter Roberts[*]

This article examines theories and research relating to medical error. It looks at the human tendencies to err, to blame, and to trust. In light of the professionalism of health practitioners and the uniquely complex nature of the health system, the author encourages a focus on professionalism rather than consumerism. ACC's abandonment of the concept of medical error is consistent with this approach to patient safety.

I INTRODUCTION

Memo: To all clinical staff -

The Anatomical Pathology Laboratory has experienced an 18.6 per cent increase in normal biopsy specimens this past quarter. This is very inefficient and unnecessary.

In future, please biopsy only abnormal tissue.

-------- ---------, MBA

Clinical Support Services Manager

The memo above was sent to clinical staff in the first blush of managerialist control over clinical behaviour in a newly formed Crown Health Enterprise. Although it represents one of the more obvious fallacies of the pursuit of efficiency over other values in an organisation's function, it also says something about the "non-clinical" manager's perception of the task that health professionals face every day. The non-clinical manager thinks that there are simple rules to follow, that everything health professionals do is just part of a linear production process, and that outcomes, what is normal or abnormal, are obvious and invariant. Few of these managers seem to understand the stochastic nature of health care or the fact that reliability is a greater value in health care than efficiency.

These ideas, and one other significant misconception, have influenced much of the policy about medical misadventure and medical error over the last 25 years, to the detriment of the health system and the relationship between professionals and patients. The other misconception is that it is only the threat of sanction that causes professionals to strive to do no harm. In fact, it has been proposed that doctors who decry New Zealand's blame culture are being "precious" because ACC and other resolution processes protect them from significant sanction and punishment.

One needs to consider whether present policies promote an environment in which healing relationships flourish, or whether they simply try to control costs, apportion and allocate blame, and focus on controlling error rather than enhancing human performance. One should also assess whether they reflect an understanding of the nature of the people doing the work and the nature of the complex system of health care. I suggest that the ideologies of the times are so profoundly skewed toward oversimplification of human nature and the nature of the health system that they interfere with the provision of safe, effective health care. This conflict reflects fundamental differences in worldviews, or what Elliot Freidson calls the conflict between a professional logic and a market logic or a bureaucratic logic.[1] Others have called it a "Clash of Cultures".[2] In essence, this conflict is predicated on theories about why health professionals do what they do, what it is that they do, and how they do it in the most complex system known to man.

II REASON'S PARADOX: PATIENT SAFETY IS NOT A CONCRETE BLOCK

Psychology Professor James Reason has noted that many people outside of health care think that patient safety is maintained by slavish, invariant behaviour, requiring professionals to do exactly the same thing every time.[3] Among other things, this reflects a failure to understand that medical treatment is a stochastic art, not a production line. However, "human factor scholars" such as Reason and Karl Weick have shown that maintaining safety is a "dynamic non-event":[4] that is, a dynamic state maintained by complex processes of behaviour utilising the human capacity to note subtle differences in situations and to adjust behaviour in response to these changes so that harm does not occur (the non-event). Reason calls this the "benign aspect of human variability".[5] He notes a paradox in recent changes in health care:[6]

For a variety of practical, political and legal reasons, the medical profession is moving away from a culture in which individual practitioners were granted – and trained to exercise – a great deal of personal autonomy... to one in which the practice of medicine is becoming increasingly governed by prescriptive protocols. Ironically, this is happening at a time when many tightly regulated and rule-driven industries (for example, railways, aviation, etc) are moving in the opposite direction. ... Clearly, there is likely to be some optimal middle ground in which an appropriate balance is struck between procedures/protocols and individual discretion. In the case of medicine, however, it is important that the architects of the new protocols do not press their efforts to the point where the benign aspects of individual variability are overly restricted.

What are those "practical, political and legal reasons" that are leading to increasing prescription of practice? Could they make the system less safe as they promote more regimented behaviour? What fundamental assumptions about human nature are driving such changes in health care?

The practical, political and legal reasons for the recent changes are inexorably tied up with an attempt to deal with human fallibility in performing complex health care, the very human tendency to blame people for events even when outcomes are beyond human control, and the ever-changing nature of trust at the interpersonal (doctor-patient) and inter-societal (government-professional) levels. The practical bases particularly relate to attempts to predict the future in financial terms. For instance, the unit cost of various surgical interventions is relatively straightforward, but medical care for patients with much co-morbidity presenting with life-threatening pneumonia costs what it costs. In many ways doctors' functions of making a diagnosis, prescribing a treatment and comforting the sick have come to conflict with the demands of being "gatekeepers" to precious resources by limiting access to professional time and human caring. To deal with these conflicts politically and legally, formal statements of what can and cannot be dispensed and what should and should not be done are thought necessary to prescribe professional behaviour so that when harm occurs, managers (medical or lay) who might be held accountable are able to say, "I wrote down the rules, but they were not followed by practitioners who should have followed them". At the same time, expectations of ever cleverer and more effective treatment processes raise "consumerist" views that people have been denied treatment because of arbitrary decisions made by the people who should be their advocates. Along with unrealistic hopes, the media focus on injury at the hands of doctors can create an environment of distrust and fear of those who would be trustworthy.[7]

III TO ERR IS HUMAN, TO BLAME IS ALSO HUMAN[8]

Functional definitions of error arise from different paradigms, or views of human behaviour.[9] These paradigms are often so fundamental that people are unaware of the underlying assumptions that accompany them. Possible views include a moralistic view, which projects moral interpretations onto all human behaviour; a mechanistic or Newtonian view, which assumes immutable laws that determine behaviour in a linear fashion; and a complexity view, which seeks to understand complexity objectively, with a theoretical base derived from chaos theory applied to the complex interactions of humans.

Derived from these paradigms, there are at least five functional definitions of error:

(1) Failure to reach a planned goal (without unforeseeable event) (from psychology);
(2) Moral failure or character flaw (moralism);
(3) "Should have known the act was substandard at the time it was committed" (engineering);
(4) A symptom of system complexity and failure to control it (normal accident theory[10]); and
(5) Judgement made with benefit of hindsight (complexity theory).

What is the source of safety, and does the proliferation of protocols and process control improve system function? This question hinges on the essence of human performance and competence.[11] Even though the recent Health Practitioners Competence Assurance Act 2003 contains no definition of competence, competence is what one is able to do in the best possible circumstances. Performance, on the other hand, is what one does in a situation with distractions, confounding stimuli, and so forth. James Reason has built on the work of Jens Rasmussen to draw three distinctions about performance that elucidate many aspects of the sources of human error and blame.

A First Distinction: Slips, Lapses and Mistakes

The first distinction Reason makes concerns the levels at which humans perform and across which they develop expertise. Performance proceeds from the accomplishment of various skills, in which slips and lapses can occur because skills depend on aspects of human mental function, to rule-based and knowledge- or deliberation-based decision-making, in which mistakes can occur because rules are misapplied or because no rule adequately fits the situation and one must create a process to deal with it. At these levels of performance, errors are involuntary and no amount of threat or admonition succeeds in gaining better performance.

Figure 1: First distinction: performance levels and relationship of types of errors.


Even experienced practitioners who have mastered skills suffer slips and lapses. Distractions, failing eyesight and the like can compromise performance at this level throughout a career. Prevention of error in these cases depends on providing adequate information in the environment (alarms and so forth) that warns practitioners of situational change, or information in the mind (education and experience) that informs them that errors can occur and provides means to catch errors or mitigate their effects.

B Second Distinction: Errors versus Violations

The second distinction, between errors and violations, comes at the level of rule-based performance and involves voluntary decisions to violate the rules. Rules are broken when the practitioner cuts corners in pursuit of efficiency, lacks discipline due to boredom or criminal intent, or when the situation demands an adjustment to the rules. The context here is a regulated society, such as a professional organisation. Preventing violations depends on the motivation of individuals and groups. This again is not a function of threats and sanctions, but of understanding the emotional and intellectual stimuli that motivate people to follow rules. Karl Weick has recognised that this requires a significant degree of engagement, involving people in the development of rules and in their ongoing review.[12]

Figure 2: Second distinction: Performance levels and relation of errors to violations.


C Blame as Explained by the Second Distinction

Blame often arises in two areas of the relationships described by the second distinction. The first is the common confusion of involuntary errors and voluntary violations.[13] An erring practitioner would not purposely harm a patient, but some environmental or situational element has triggered a misconception which, with the benefit of hindsight, is seen as wrong. This should not evoke blame, but often does. On the other hand, an individual who violates an accepted practice rule may be held culpable even though there are degrees of intention, from innocent technical error to purposeful criminal behaviour. Organisations enter the "Cycle of Blame" (Figure 3). The assumption that human error involves a degree of free will leads to sanctions and demands "to be more careful in the future". These measures are ineffective, and when further errors invariably occur they are seen as even more blameworthy.

Figure 3: Blame Cycle (Reason, 1997).[14]

(1) Human actions are viewed as the least constrained causes of accidents and hence the most avoidable. Why? People are seen as free agents, able to choose between correct and erroneous actions.

(2) Since errors are regarded as partly deliberate, they attract blame.

(3) Actions thought to be blameworthy are dealt with by warnings, sanctions and demands to "be more careful in future".

(4) Such measures are ineffective and so errors continue to be implicated in bad events.

(5) Errors are now regarded as being even more blameworthy, since they seem to ignore warnings, sanctions and exhortations, and the cycle repeats.

In the context of regulated social behaviour in an organisation, there is potential for significant blame and recrimination at the junction where less-experienced team members (for instance, nurses or junior doctors) observe an experienced individual make a knowledge- or deliberation-based decision which the less experienced judge as violating a rule they think should be followed. They may, of course, be correct in their view. However, they may be unaware of particular contextual subtleties of the situation that take it beyond the parameters of the rules. Should the patient be injured and an accountability process ensue, the less-experienced person's opinion can be seen to fit better with the outcome than those factors that caused the more-experienced person to make the "mistake". In fact, due to the stochastic nature of medical care, there may be no "right" decision or action in the face of uncertainty. This is a central factor in the application of hindsight bias, the refuge of the "expert witness", to the analysis of adverse medical events.[15] In terms of ACC policy, this has been one of the reasons that so much of the medical misadventure/medical error debate has been confused and so many cases are subject to endless debate. On a personal note, the more I deal with this aspect of medical/legal behaviour, the more I recognise the expert witness or case reviewer as exhibiting a particular type of professional bullying.

D Third Distinction: Latent (System) Failures versus Active (Individual or Team) Failures

Figure 4: Active versus latent failures in relationship to organisational function

Active failures (individual or team) occur at the "sharp end", in direct patient contact; they include slips and lapses (attentional dynamics) and mistakes and violations (knowledge factors, planning and problem solving, strategic factors).

Latent failures (organisational) occur at the "blunt end", in higher echelons removed in time and space from the patient; they lie dormant for some time until local trigger factors initiate an event.

The third distinction is between failures of individual humans at the sharp end, proximal to the event of patient injury, and failures of the organisation as a whole (although these too are made by people), relating to strategic decisions made in everyday activities that may lie dormant in the system for some time. Latent failures often arise from pressures to enhance production processes, which are readily measured and understood by management, at the expense of protection processes, which are evident only in the absence of negative outcomes, so that information about them is discontinuous and indirect.

For instance, in the Bottrill Inquiry the apparent individual failure of Dr Bottrill to read a number of abnormal cervical smears correctly was initially ascribed to his incompetence, but the inquiry subsequently revealed that the clinical standards for routine review of his interpretations had been lowered by the Regional Health Authority in order to "get new players into the marketplace".[16] This also demonstrates that individual errors usually affect only a few people, but latent failures can amplify the effect to many people, depending on the integrity of the system and its ability to monitor function.

Figure 5: Reason's Swiss Cheese Model: Failures at various levels of the organisation line up to allow injury from latent danger to lead to harm.[17]


Reason's model illustrates the defences-in-depth that are arrayed in safety-critical organisations to prevent harm. It is only when lapses occur and processes fail at many levels that system failure occurs. Often it is only at the last step that someone recognises that something is not right – whether it is the surgeon who notices a tear in their glove or gown, or the patient who asks why their pills are a different colour – and so prevents a serious operative infection or drug reaction.
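The arithmetic behind defences-in-depth can be sketched very simply. The following illustration is not part of Reason's model; the failure probabilities and the assumption that the layers fail independently are purely hypothetical, chosen only to show why harm requires the holes in every slice to line up at once.

```python
# Hypothetical sketch of defences-in-depth: harm reaches the patient only
# if every layer of defence fails at the same moment. The probabilities
# below are illustrative assumptions, and the layers are treated as
# independent purely for simplicity.

layers = {
    "organisational policy and resourcing": 0.10,   # latent, blunt-end conditions
    "protocols and supervision": 0.05,
    "team cross-checking": 0.05,
    "individual vigilance at the sharp end": 0.02,   # the surgeon noticing the torn glove
}

p_harm = 1.0
for name, p_fail in layers.items():
    p_harm *= p_fail

print(f"Chance that all layers fail together: {p_harm:.6f}")
print(f"Roughly one event per {1 / p_harm:,.0f} opportunities")
```

On these assumed numbers the combined chance of harm is about one in 200,000 opportunities; remove or weaken any single layer, for example by suppressing the "benign variability" of the final human check, and that figure deteriorates sharply. That is the sense in which latent, blunt-end decisions amplify risk at the sharp end.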

It is often assumed that all that is necessary to reduce dependence on human factors and prevent human error is to institute mechanical, technology-based processes to force humans to "do the right thing". Indeed, human limitations are the reason for most technological advances. However, the difficulty in recognising when the technology creates demands or distractions of its own and when it fails to "guarantee safety" in particular situations has led researchers of instruments from aviation autopilots to personal computers to see them as an "electronic team-mate" in complex socio-technological relationships.[18] These devices exist in the midst of a generic hierarchical system of design and analysis (Figure 6). Nonetheless, training and technological interventions are the main techniques available to organisations to address incipient injury embedded in imperfect human performance. Most of the time injury can be ascribed to human failure, not technical failure, hence the mantra of "Name, blame and retrain."

Figure 6: Generic hierarchical systems-oriented approach to design and analysis of technical device.[19]

The nested levels of analysis are: the physical device; physical behaviour; individual behaviour; team and group behaviour; organisational and management behaviour; legal and regulatory rules; and societal and cultural pressures. The example factors shown at these levels include work station layout, displays, controls, lighting and sound; motor skills and anthropometrics; perception, attention, thought and memory; decision making and educational level; communication, coordination and cooperation; bias and reinforcement; safety culture, hierarchy of authority and goal setting; shift work patterns and fault reporting practices; legal liability and constraints on system design and work practices; and economic pressures, perceptions of responsibility and demands by members of society outside the system.

Indeed, Figure 7 shows the complex sociological, industrial, legal and political environment that bears on the culture[20] of those who deliver health care in hospitals. The situation is even more complex in private practice due to the demands of business management. Many of the external organisational relationships have come into existence to accomplish various societal goals, and many external institutions, such as the Health and Disability Commissioner or the media, perceive their role as protecting consumers from the dangers of erring or uncaring practitioners. At the core of these relationships are sociological processes to control human behaviour. In fact, this figure represents those three logics[21] devised to control people: markets (consumerism), bureaucracies (managerialism) and workers' self-control (professionalism). As the bold arrows in Figure 7 demonstrate, the dominant, and often clashing, logics are managerialism and professionalism.

Figure 7: Multiple social/legal and organisational aspects of health system culture.[22]


Indeed, as Richard Cook represents these relationships (Figure 8), the weight of the "blunt end" – the organisational/institutional and policy/regulatory environmental factors, mediated through resource control and constraint – bears down on the ability of individuals and teams to function at the sharp end of health care performance. This raises fundamental problems about the ability of the system to "guarantee" competence – another non-concrete characteristic of human performance related to behaviour taking place in the best possible circumstances. Cook's diagram also demonstrates the significant problem with "blaming the system" for injury when there are so many factors involved. At the end of the day, blame is usually sheeted home to individuals within the system anyway. Much of the perception associated with recent high profile cases has been that many of the people in the system simply did not care.

Figure 8: The weight of organisation and policy bears down on "sharp end" performance.[23]


Worldwide, surveys have shown public concern that health professionals have failed to guarantee competence, otherwise the unacceptably high rates of injury would not occur; and also that health professionals are self-interested, care only about their own wealth, and lack altruism and integrity. Because of these alleged failings of the professional groups, the public perceives a need for greater external control processes to enforce improved performance and attitudes. It is important to note that this is also a dominant theme in medical literature, with many authors berating the profession for allowing matters to deteriorate to this extent.[24]

IV ROLE OF PROFESSIONALISM: SOCIALISATION FOR VALUES AND SELF ORGANISATION AND ECONOMICS' ATTEMPTS TO RECOGNISE THIS

Economic rationalist views of the role of professionalism in the health system are limited to a perception of a monopolist labour group that is able to "conspire against the laity" by controlling access to the market for its skills. These views can lead to significant confusion about both the nature of the people who deliver health care and the nature of the system used to deliver it. Assumptions about self-interested behaviour have led to a perception of less trustworthy practitioners and of a system that needs a greater level of external control.

The way that people at the sharp end behave is not, however, a function of external control and morality, no matter how much consumers or policy makers want those techniques to work, but of the internal morality and value system driving those people who spend their lives trying to help the sick. Rasmussen's information (knowledge in the head and in the environment) and motivation (the value system and personal virtues) are products of a socialisation process. Professionals are socialised: they are compelled by the entire prolonged process of their educational and clinical apprenticeship to follow codes of ethical and practical conduct. However, this socialisation does not take place in isolation from the society in which it occurs.

The search for a comprehensive definition of professionalism is an ongoing quest. This is primarily because it involves determining not only what a professional is, but also what professionals do and why they do it. Freidson points out that, at its core, professionalism is a set of values and the way practitioners organise their work.[25] The process that accomplishes professionalism is not simply education, even though specialised knowledge is central to the work. It is a complex form of socialisation, direct and usually successful, which aims to create a group of people who manifest the culture: a sense of community and bonds of common identity, norms and values, particularly an ethical code and traditions of behaviour exemplified by senior members; dedication to knowledge and the benefit of others; and self-regulation.[26] Although many of these terms, like self-regulation, are anathema to people who define the world by markets, they are the elements that have been developed over centuries to deliver health care by preparing people to be independent but responsive in reliable ways in a complex system. Because of the value system, the environment promotes competence, altruism and integrity. While no system does this perfectly, this one is quite successful most of the time. Helmreich and Merritt state that one of the reasons for success is that those who become professionals self-select for many of the appropriate characteristics, particularly altruism.[27] What is less uniform in many professionals is an ability to tolerate systemic complexity, if not chaos, to live with uncertainty, and to pursue responsibility rather than guard against accountability.

Over the last 25 years, the change in the professional environment that has caused trust in professionals' competence, altruism and integrity to decline is a widespread shift in, and confusion about, the perception of self-interest. This change is primarily the result of theories whose "times have come" and which dominate the thinking of decision makers, the media, consumers and their organisations, and many professionals themselves. At the same time as the altruism of professionals is a trait required for trust, a denial of the existence of altruism is at the base of most social, organisational and political policy. As the behaviour of Nazi or South African doctors illustrates, professionals do not stand apart from their society, but reflect dominant ideas that come to influence relationships for better or worse.

Patients come to expect the professional to behave only in self-interest, and yet they desperately need an advocate. For the professional, recent trends suggest that there is nothing wrong with self-interested behaviour because the system is set up to balance those interests. What is becoming clear is that a system that was set up to assure trust in interdependent sufferer-helper relationships is now predicated on the assumption that the parties cannot trust or be trusted.

Although there is not room here for a full assessment of the effect of these theories on the present environment, no student of public policy can ignore the dominant theories that have arisen over the past 40 years. In their book Public Management: The New Zealand Model, Boston and others outline the influence of public choice theory, agency theory and managerialism (known as New Public Management).[28] At the core of these theories is the assumption that only self-interest determines human behaviour. The theories are supported by economic assessment in which all non-self-interested behaviour is excluded from consideration because it conflicts with the basic assumption. As is often the case, economists have had to simplify their view of human nature in order to fit their arguments. As Boston and others point out, the self-interest of principals and agents is bound to conflict and hence must be controlled by a series of contracts that limit agents' freedom ("provider capture") in going about their business – a state of distrust at the core of a relationship based on trust.[29] While these theories have been very successful in convincing people that selfishness and survival are the fundamental drivers of human behaviour, and allow for simple graphical representation of market behaviour as well as stringent contractualist definitions of labour market relationships, risk shifting, and so forth, there is growing evidence that the theories fall down when put to the test of measurement and monitoring.

In fact, attempts to measure self-interest and altruism in human behaviour, and particularly in relationships of trust, have been the subject of sophisticated experiments. Princeton psychologist Daniel Kahneman was awarded his Nobel Prize for "having integrated insights from psychological research into economic science, especially concerning human judgement and decision-making under uncertainty."[30] George Mason University economist Vernon Smith was made laureate "for having established laboratory experiments as a tool in empirical economic analysis, especially in the study of alternative mechanisms."[31] Hence, solid economic theory underpinned by experimentation is developing – something notably missing from theories based on assumed self-interest – to support values such as altruism and trustworthiness in human nature, including professional sufferer-helper relationships, where emotions drive most decisions made under risk or uncertainty. There are good arguments that trusting and being trustworthy are evolutionary determinants for species survival because we have evolved in a social context, not been struck by a social contract.

As economist Keith Hudson says of this research:[32]

In fact, cooperation within a social species is so important that we have evolved a genetic predisposition to trust one another in the first instance, even between those who may never have met before. This doesn't make us cooperative to the exclusion of competition in all circumstances, but it means that we are a great deal more complex in our social behaviours than the rather simplified sociological or political specimens that some would have us to be. Inside us, we have a quiverful of entirely different types of social behaviour according to specific situations. These have no doubt accumulated within our DNA due to the entirely different circumstances in which man and his predecessors have found themselves during distinctly different climatic environments of the last few million years.

Neuro-economist Paul Zak has used game theory to test a theoretical model of trust and has documented neurohumoral changes, specifically a rise in oxytocin, when people are trustworthy.[33] In the "trust game" two strangers have the opportunity to trust (player one trusts by giving player two some money) or be trusted (player two is trusted to give some money back) through a computerised laboratory mechanism. The tenet of self-interest (the Nash[34] Equilibrium) would suggest that neither should give any. However, a large series of studies has reliably shown that about half of first players trust the second player enough to send money, and about three quarters of second players are trustworthy enough to send some back. Investigation of the humoral response demonstrated a rise in oxytocin in those who were trustworthy. The stronger the signal of trust, the higher the oxytocin level and the level of trust reciprocation (trustworthiness).
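To make the structure of the experiment concrete, the sketch below simulates a simplified trust game. The endowment, the multiplier and the behavioural rates are assumptions loosely drawn from the description above (roughly half of first players send money and roughly three quarters of second players return some); they are not Zak's actual protocol or data.

```python
import random

random.seed(1)

ENDOWMENT = 10    # assumed stake given to player one
MULTIPLIER = 3    # assumed: money sent is multiplied before reaching player two
ROUNDS = 10_000

def play_round():
    # Self-interested (Nash) prediction: player one sends nothing, because a
    # purely self-interested player two would never return anything.
    # Here players instead behave at the roughly reported rates: about half of
    # first players send money and about three quarters of second players who
    # receive money send some back (illustrative assumptions).
    sent = ENDOWMENT // 2 if random.random() < 0.5 else 0
    received = sent * MULTIPLIER
    returned = received // 2 if sent and random.random() < 0.75 else 0
    return ENDOWMENT - sent + returned, received - returned

totals = [0, 0]
for _ in range(ROUNDS):
    p1, p2 = play_round()
    totals[0] += p1
    totals[1] += p2

print("Average payoffs with trusting behaviour:",
      totals[0] / ROUNDS, totals[1] / ROUNDS)
print("Nash prediction (no trust, no reciprocation):", ENDOWMENT, 0)
```

On these assumed rates both players end up better off on average than under the no-trust equilibrium, which is the point the experimental economists make: trusting and being trustworthy create a surplus that pure self-interest models predict away.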

Zak has been quoted in Science Daily:[35]

Interestingly, participants in this experiment were unable to articulate why they behaved the way they did, but nonetheless their brains guided them to behave in "socially desirable ways", that is, "to be trustworthy", says Zak. "This tells us that human beings are exquisitely attuned to interpreting and responding to social signals." Based on the animal studies, the scientists hypothesized that what is happening in the trust experiment is that people are forming temporary social bonds with the other person in their pair. "This is just what we found. The stronger the signal of trust, the more oxytocin increases, and the more trustworthy people are. This is surprising given the sterile laboratory environment of the interaction, so the effect of oxytocin on face-to-face interactions must be quite strong."

The social value of cooperation was recognised by ancient philosophers[36] and is also strongly supported by recent research. Adam Smith wrote in 1759 that:[37]

Kindness is the parent of kindness; and if to be beloved by our brethren be the great object of our ambition, the surest way of obtaining it is by our conduct to shew that we really love them.

Ernst Fehr[38] and his group have found that it is more socially effective to trust than to threaten. While trusting engenders trustworthiness, threatening does the opposite:[39]

Threats introduce hostility and distrust into a relationship and that initial distrust may be self-fulfilling because it seems to generate untrustworthy behaviour. [Fehr] thinks that trust has an emotional component and a cognitive, conscious component. It is important to understand both.

Their work also demonstrates a clear acceptance by socialised people of fair sanctions against unacceptable behaviour. In New Zealand's context, then, an environment that enhances professional virtues and internal morality holds more promise than one built on complaint-based external morality.[40]

Nobel Laureate Vernon Smith and fellow researcher Kevin McCabe[41] have also found that they are able to reliably demonstrate, using functional Magnetic Resonance Imaging (fMRI), activation of cortical areas in people trusting others, but not in those interacting with computers. They argue that decisions about whether or not to trust someone in the first place are made both emotionally/subconsciously and more consciously, because we need to take into consideration our beliefs about another person's intentions. James Rilling and his group have used MRI to study cooperative social behaviour.[42] They found:[43]

Mutual cooperation was associated with consistent activation in brain areas that have been linked with reward processing. ... We propose that activation of this neural network positively reinforces reciprocal altruism, thereby motivating subjects to resist the temptation to selfishly accept but not reciprocate favors.

Hence, in the fraught environment of health care this biological need to be trusted and reliable has even more credibility and supports arguments for various societies' efforts to socialise members, such as doctors, to be trustworthy. This suggests that policy decisions should take into account that being trustworthy is human, just as being fallible and having the tendency to blame are human, and that policies should be developed in such a way that trust and trustworthiness are fostered, not denigrated, dismissed or negated by external control modes.

All the tenets of biological science recognise that these neuromodulators and pathways exist due to natural selection. Humans have evolved in societal relationships in which trusting and being trustworthy favour species survival. Another way to see this research is that humans are "wired and plumbed" to be trustworthy and altruistic, some more than others, and that properly designed socialisation processes which reinforce the individual habits and neural pathways favouring reciprocal altruism can accomplish, and historically have accomplished, society's goals. More to the point, it suggests that threat and external moral control modes may well hinder progress and jeopardise desirable trusting relationships based on an understanding of altruism and mutual reciprocity.

V THE SYSTEM – A WARY CULTURE OF SAFETY CONSCIOUS PROFESSIONALS

The crux of the matter is by what means a society controls human behaviour in a complex system, such as the health system, and how it motivates or compels professionals to set the patient's interests above their own and to practise in a safe manner. This is squarely in the realm of public policy, and the recent focus on competence, altruism and integrity has raised fundamental questions about the nature of the humans who become health professionals and the nature of the system in which they work. Virtually the entire ambient policy environment is based on the assumption that professionals are self-interested and must be controlled by various carrots and sticks. In fact, this cynical assumption that only self-interest can be trusted is the received truth that has driven New Zealand's "contractualist" government reforms for the past 20 years. Likewise, the system has been assumed to require a managerial hierarchy or market structure to accomplish social goals because of distrust of "provider capture".

These different ideologies – markets and bureaucracy versus professional self-control – have different effects on individual and team performance and the potential for safe practice in a complex health system served by what should be reliable organisations. Arising from new insights into the nature of the system from human factors/ergonomics, organisational psychology and systems analysis, it has become clear that the health system is the most complex known to man.

A Complex Systems

Researchers like James Reason, Richard Cook, David Woods and Karl Weick have elucidated the necessity for autonomous, self-directed actors who actually understand the complexity of their system and respond in adaptive and creative ways to reach their goals and maintain safety.

Systems analyst Harold Van Cott has described the system:[44]

Systems thinking – and systems analysis and design tools – have been around for a long time, but health-care delivery is usually not thought of as a system. Yet, of all sociotechnical systems, it surely is the largest, most complex, most costly and, in at least one respect, the most unique.

What is the health-care system like? It is an enormous number of diverse and semiautonomous elements: ambulance services, emergency care, diagnostic and treatment systems, outpatient clinics, medical devices, home care instruments, patient-monitoring equipment, testing laboratories, and many others. All of these elements are loosely coupled in an intricate network of individuals and teams of people, procedures, regulations, communications, equipment and devices that function in a variable and uncertain environment with diffused, decentralised management control.

If there is one characteristic of the health-care system that distinguishes it from others, it is its uniqueness as a sociotechnical system. Each of its many component subsystems – hospitals, emergency care, pharmacies, clinics, laboratories, and others – represent a distinct culture with its own unique goals, values, beliefs, and norms of behaviour. Each is managed separately from the others. Coordination among the subsystems is accomplished by informal networking, custom and regulation.

In contrast with the health-care delivery system, change is effected in centralised systems through the authority of a hierarchical, vertical management structure. The process is relatively quick and reasonably efficient. Change in the health-care system is accomplished laterally across several subsystems in which responsibility and decision-making are distributed across many people and units. In such a diffuse system, change is a slow, often difficult process with more opportunities for error and more unpredictable outcomes than in a single, hierarchical system. As a further drawback, the health-care system must cope with very rapid advances in medical technology and practice. It must also cope with legal and economic constraints. Any program aimed at the reduction of human error in the health-care system must be designed with an understanding of characteristics such as these if desired cultural learning and change are to be fostered.

Unfortunately, the demands of accountability processes have imposed a sense of defensiveness on the actors in such a system, when safe practice would be better served by a sense of responsibility. Accountability and responsibility are words often used interchangeably, but they actually lead to quite different behaviour and outcomes. This is illustrated in the answer of a naval captain to a question about the difference between accountability and responsibility in the military: "[a]ccountability is explaining to the tribunal why you ran the ship aground. Responsibility is not letting it happen on your watch."[45]

Most decision makers, professionally trained managers and non-clinical managers in the system face demands from the public, and in their job descriptions, for ever-increasing levels of accountability, which is really just another word for blame. Many of their processes are actually structured to protect them from blame.

B Complex Adaptive Systems

Few situations in modern health care have a high degree of certainty and agreement, and rigid protocols are often rightly abandoned. ... To cope with escalating complexity in health care we must abandon linear models, accept unpredictability, respect (and utilise) autonomy and creativity, and respond flexibly to emerging patterns and opportunities.[46]

In the last 10 years the very definition of human error has been seen as a limiting theoretical factor. For the most advanced "human factor scholars" there has been a shift in focus from managing human error to enhancing human performance. This view counterbalances mechanistic world views. Its theories strike harmonics with Schön's Reflective Practitioner[47] and Lindblom's "self-guiding society".[48] Plsek and others have recently reviewed the theory applied to health care in a four-part British Medical Journal series, entitled Complexity Science.[49]

Paul Plsek and Tim Wilson observe:[50]

Current management thinking largely assumes that a well functioning organisation is akin to a well-oiled machine. This leads to the notion that performance is optimised when work is specified in detail and shared out to distinct operational units. Clinicians often object to these detailed specifications, while managers bemoan a lack of cooperation.

This mechanistic management thinking considers parts in isolation, specifies changes in detail, battles resistance to change and tries to reduce variation in pursuit of better performance. However, complexity thinking suggests that relationships between parts are more important than the parts themselves, that minimum specifications yield more creativity than detailed plans, and that treating organisations as complex adaptive systems allows more productive, innovative management styles to emerge. From a management viewpoint, this is fostered by using pooled budgets and whole-system targets to encourage generative relationships; replacing complicated plans with minimum specifications; understanding that attraction to change is more effective than battling resistance; and recognising variation as an expression of both creativity and safety.

Their conclusion recognises a fundamental impediment to a safety culture:[51]

Perhaps the biggest barrier to these approaches prompted by complexity thinking is the incumbent leaders of health systems who have risen within the hierarchy based on command and control methods.

C Organisational Culture as a Source of High Reliability

Karl Weick has examined the vulnerability of increasingly complex organisations and determined the characteristics of Highly Reliable Organisations (HROs).[52]

His insights are stunningly perceptive. The most important points include:[53]

(1) Organisational reliability is a more important goal than efficiency for institutions such as hospitals;
(2) However, trial and error is not necessarily available to them for learning because errors can propagate beyond control; hence errors are the very events they know least well;
(3) Reliable performance needs substitutes for trial and error, such as telling stories, simulations, imagination and symbolic representations that are valued by practitioners;
(4) The variety that exists in the system to be managed exceeds the variety in the people who must regulate it – a lack of "requisite variety". Lacking variety, they miss important information, diagnoses are incomplete, and remedies are shortsighted and can magnify rather than reduce problems;
(5) Posed this way, fewer accidents would occur if there were a better match between system complexity and human complexity, achieved by either:
(a) the system becoming less complex; or
(b) the human becoming more complex, Weick's preferred option;
(6) Collective requisite variety is higher when people trust each other and is at the core of teamwork. Divergent individuals, that is, people from different backgrounds and training such as doctors and nurses, have more variety than homogenous individuals. Trust and face-to-face contact transmit more information, which detects potential errors earlier. Trust depends on confidence in colleagues' competence such that "trust takes care of itself";[54]
(7) Reliability is a dynamic non-event – dynamic inputs create stable outcomes. This is not "if it ain't broke don't fix it", but "if it isn't breaking, don't fix it";
(8) People do not do what the system says they do. What people do is the technology and what people think is the system. This concept is known as "reliability as enactment";
(9) The function of a Highly Reliable Organisation is often an exercise in faith. Even when people know what the other should be doing, at times they must act on what they believe the other would have done or said;

(10) Either culture or standard operating procedures can impose order and substitute for centralisation, but only culture adds the latitude for interpretation, improvisation and unique action. Culture preserves coordination and centralisation through homogenous assumptions and decision premises. Most importantly, when centralisation occurs via premises and assumptions, compliance occurs without surveillance. Culture socialises people to use similar decision premises and assumptions so that on their own, their decentralised operations are equivalent and coordinated;[55] and

(11) In essence, collective mindfulness – shared goals and mutual understanding of processes – leads to greater safety than trying to enforce uniform behaviour. Weick and his group have distinguished two aspects of organisational function: cognition (what it thinks and how it views hazards) and action (what it does).[56] Traditional "efficient" organisations try to achieve invariant performance, but have variable mindsets. HROs, on the other hand, aim for a consistent "collective mindfulness" of ever-present dangers while encouraging some variability of action. Weick describes this as "there is variation in activity, but stability of cognitive processes that make sense of the activity."[57]

No system can completely avoid errors. ... Actors frequently underestimate the number of errors that can occur. But if these same actors are dedicated people who work hard, live by their wits, take risks, and improvise, then their intensive efforts to make things work can prevent some errors. Because they are able to make do and improvise, they essentially create the error-free situation they expected to find.

While this mechanism is sometimes interpreted as macho bravado, it is important to remember that confidence is just as important in the production of reliability as is doubt. ... I have chosen to emphasise that qualities such as discretion, latitude, looseness, enactment, slackness, improvisation and faith work through human beings to increase reliability.[58]

D Eliminating Human Error versus Enhancing Human Performance[59]

The attribution of "human error" is a prejudicial and unspecific judgement about human performance using the benefit of hindsight applied after an accident or near miss has occurred. The label of "error" is a symptom that should call forth a more in-depth investigation of how a system of people, organisations and technologies functions and malfunctions. This insight helps clarify the underlying source of safety and the means of addressing the other side of the error management coin – the enhancement of human performance.

Cook and Woods comment:[60]

Clearly, strategies that derive from a desire to minimise human error are different from those that seek to aid human performance. Rules, regulations, sanctions, policies and procedures are largely predicated on the belief that human error is at the heart of large system failures and that a combination of restrictions and punishments will transform human behaviour to an error-free state. The same basis exists for some training and technology programs, for example "blame and train" and automated decision systems, whereas others (notably CRM[61]) regard human performance as the primary means for dealing with system transients and look for ways to produce more effective human performance. The distinction is an important one and not simply a matter of degree; the choice of path depends critically on the validity of the whole notion of human error.

Based on many of Weick's arguments, Reason has recognised that in addition to information and motivation, reliable people and organisations have two other characteristics:[62]

(1) Preparedness: The knowledge that things can and will go wrong is the most important aspect of resilience for both organisations and individuals. They expect and prepare for nasty surprises, and act according to "every day will be a bad day".[63] Reliable organisations train staff to recognise and recover from errors; review and generalise (rather than localise) lessons from past failures; tell stories; brainstorm new scenarios of system breakdown; and have crisis management plans for real and imagined events. "Excellent individuals" mentally rehearse possible events, including their own errors and how to cope with them. Both HROs and excellent individuals see that mental skills are as important as technical skills and that they need to be continuously practised.
(2) Flexibility: HROs and resilient individuals are able to reconfigure plans, structures and actions to suit local circumstances. Reason notes that HROs adopt different command structures for high-demand and routine periods.[64] Average performers usually cope while things go as planned, but cannot deal with surprises. Wise use of technology reflects knowing when not to trust the "electronic team-mate". Successful compensation requires the confidence to break out of routine modes of thinking and use some pre-packaging (mental preparedness).

These protective behavioural elements depend on the paradox of human variability. While many managers try to achieve prescribed consistency of action by procedures, protocols and automation, they fail to realise that it is human variability, in the form of moment-to-moment adjustments, that preserves system safety in uncertain and changing conditions. Thus, "by striving to limit the variability of human action, they are also undermining the system's last and perhaps most important line of defence."[65]

Only collective mindfulness allows for this variability of activity combined with stability of cognitive processes.

VI IDEOLOGY TO RELIABILITY: IN WHICH WORLD IS THE PATIENT SAFE?

Considering the ideal worlds of consumerism, managerialism, and professionalism, one can assess them for characteristics needed for a safety culture and reliable organisations in a complex adaptive system. There are many requisite characteristics:

(1) Collective mindfulness utilises human variability for safety rather than seeking invariant performance;
(2) Environmental information reduces errors and group motivation lessens individual violations;
(3) Preparedness and flexibility favour "safety taking care of itself" – we are what we repeatedly do;
(4) Requisite variety provides complex people who understand and cope with complex systems problems;
(5) Reliability is seen as a higher priority than efficiency;
(6) Relationships are recognised as more important than, and not separate from, the individual parties;
(7) Trust, cooperation, autonomy, confidence and doubt are all highly respected system values; and
(8) A sense of responsibility drives action more than a fear of being held accountable.

The core tenets of each ideology say it all:

(1) Consumerism: Market: "Let the buyer beware";
(2) Managerialism: Bureaucracy: "Let the managers manage"; and
(3) Professionalism: Helping relationships: "First, do no harm".

Only a professional culture begins with a focus on relationships and sees the interests of the professional and the sufferer as the same. If a society separates those interests, it drives a wedge between the parties to the helping relationship and threatens the constant pursuit of knowledge and competence as well as the altruism and integrity that society asks of the people that it encourages to be health care professionals.

Reason's paradox highlights ideas that actually threaten safe care in a complex and dangerous world. ACC is to be congratulated for abandoning the concept of medical error – one of the bastions of the dangerously outmoded Royal College of Hindsight.

Society wants competence, integrity and altruism from its professionals. In order to achieve that, society also has the responsibility to assure that the policy environment optimises the achievement of those goals. The professions cannot do that alone. In order to create that environment, policy makers must better understand the nature of the people who become professionals, the nature of the socialisation process that makes them professionals, and the nature of the complex system that we all depend on to deliver society's health care. In the interest of reliable care in an unreliable world, for safety's sake, a professional's job is to approach every day as though the worst might happen and to be prepared and flexible enough to deal with it.


[*] Clinical Leader Internal Medicine, Capital Coast Health, Fellow of the Royal Australasian College of Physicians, American College of Physicians, American College of Critical Care Medicine and Joint Faculty of Intensive Care Medicine and Director, Public Health Policy, Medical Research Institute of New Zealand. He is a former President of the Association of Salaried Medical Specialists (1997-2002) and won the Prime Minister's and Holmes' prizes when he completed his VUW Master of Public Policy degree in 2002.

[1] Elliot Freidson Professionalism: The Third Logic (University of Chicago Press, Chicago, 2001) 3.

[2] See, for instance, J A Raelin Clash of Cultures: Managers Managing Professionals (Harvard University Press, Cambridge (Mass), 1991); A Hornblow "New Zealand's Health Reforms: A Clash of Cultures" (1997) 314 Brit Med J 1892.

[3] James Reason "Understanding Adverse Events: The Human Factor" in Charles Vincent (ed) Clinical Risk Management: Enhancing Patient Safety (2 ed, BMJ Books, London, 2001) 9, 27.

[4] Karl Weick "Organizational Culture as a Source of High Reliability" (1987) Calif Mgmt Rev 112, 118.

[5] Reason, above n 3, 28.

[6] Reason, above n 3, 27.

[7] A Daniels "Of Miracle Cures and Murderous Doctors" (2003) 179 Med J of Aust 637.

[8] To trust is also human, to interact in complex ways and depend on selected and socialised people to deal with complex situations and understand how others will behave in that system. See Part IV Role of Professionalism: Socialisation for Values and Self-Organisation and Economics' Attempts to Recognise this.

[9] Peter Roberts Snakes and Ladders: The Pursuit of a Safety Culture in New Zealand Public Hospitals (Victoria University Press, Institute of Policy Studies, Wellington, 2003) 14.

[10] C Perrow Normal Accidents (Basic Books, New York, 1984).

[11] See J Tracey, J Simpson and I St George "The Competence and Performance of Medical Practitioners" (2001) 114 NZMJ 167.

[12] Weick, above n 4, 120.

[13] See A Merry and A M Smith Errors, Medicine and the Law (Oxford University Press, Oxford, 2001).

[14] James Reason Managing the Risks of Organisational Accidents (Ashgate Publishing Ltd, Aldershot, 1997) 128.

[15] See Richard Cook, David Woods and D Miller "A Tale of Two Stories: Contrasting Views of Patient Safety" (National Patient Safety Foundation at AMA, 1998) <http://www.npsf.org> (last accessed 29 September 2004); B Fischhoff "Hindsight ≠ Foresight: The Effect of Outcome Knowledge on Judgement under Uncertainty" (1975) 1 J of Experimental Psychology: Human Perception and Performance 288; R Caplan, K Posner and F Chaney "Effect of Outcome on Physician Judgments of Appropriateness of Care" (1991) 265 JAMA 1977.

[16] A Duffy, D Barrett and M Duggan Report of the Ministerial Inquiry into the Under-Reporting of Cervical Smear Abnormalities in the Gisborne Region (prepared for the Minister of Health, 2001) <http://www.csi.org.nz> (last accessed 29 September 2004).

[17] Reason, above n 14, 12.

[18] N Moray "Error Reduction as a Systems Problem" in M Bogner (ed) Human Error in Medicine (Lawrence Erlbaum Associates, Hillsdale, New Jersey, 1994) 67, 70.

[19] Moray, above n 18, 70.

[20] Culture here is defined as the attitudes, beliefs, values, goals and behavioural norms of a group of people.

[21] Freidson, above n 1.

[22] Roberts, above n 9, 59.

[23] See Cook, Woods and Miller, above n 15.

[24] See, for instance, S R Cruess, S Johnston and R L Cruess "Professionalism for Medicine: Opportunities and Obligations" (2002) 177 Med J Aust 208.

[25] Freidson, above n 1, 197-222.

[26] See R Helmreich and A Merritt Culture at Work in Aviation and Medicine (Ashgate Press, Aldershot, 1998).

[27] Helmreich and Merritt, above n 26, 47-51.

[28] J Boston and others Public Management: The New Zealand Model (Oxford University Press, Oxford, 1996).

[29] Boston, above n 28, 87-88.

[30] Daniel Kahneman "Maps of Bounded Rationality: A Perspective on Intuitive Judgement and Choice" (2002 Prize Lecture, Stockholm, 8 December 2002) <www.nobel.se> (last accessed 29 September 2004).

[31] Vernon Smith "Constructivist and Ecological Rationality in Economics" (2002 Prize Lecture, Stockholm, 8 December 2002) <www.nobel.se> (last accessed 29 September 2004).

[32] See the Keith Hudson-initiated website: <www.evolutionary-economics.org/> (last accessed 29 September 2004).

[33] See for example, K Grimes "To Trust is Human" (10 May 2003) New Scientist 32; Paul Zak's webpage <http://fac.cgu.edu/~zakp/> (last accessed 29 September 2004); Corante Tech News <http://www.corante.com/> (last accessed 29 September 2004); <www.sciencedaily.co> (last accessed 24 September 2004).

[34] Nobel Laureate John Nash of "A Beautiful Mind" fame, who recognised that self-interested behaviour defeated the optimal outcome for social systems and proved it mathematically.

[35] Available at <http://www.innovations-report.com> (last accessed 26 December 2004).

[36] See, for example, F Fukuyama Trust: The Social Virtues and the Creation of Prosperity (Penguin Books, London, 1995) 285; T Clemmer "Cooperation: The Foundation of Improvement" (1998) 128 Annals of Internal Medicine 1004.

[37] Adam Smith The Wealth of Nations (edited by Edwin Cannan, 5 ed, Bantam Dell, New York, 2003) 1023.

[38] Ernst Fehr and B Rochenbach "Detrimental Effects of Sanctions on Human Altruism" (2003) 422 Nature 137; Ernst Fehr and S Gachter "Altruistic Punishment in Humans" (2002) 415 Nature 137; R Axelrod and W Hamilton "The Evolution of Cooperation" (1981) 211 Science 1390.

[39] K Grimes "To Trust is Human" (10 May 2003) New Scientist 36.

[40] C Paul "Internal and External Morality of Medicine: Lessons from New Zealand" (2000) 320 Brit Med J 499.

[41] K McCabe and others "A Functional Imaging Study of Cooperation in Two-person Reciprocal Exchange" (2001) 98 Proceedings of the National Academy of Science 11,832.

[42] James Rilling and others "A Neural Basis for Social Cooperation" (2002) 35 Neuron 395.

[43] Rilling, above n 42, 395.

[44] Harold van Cott "Human Errors: Their Causes and Reduction" in M Bogner (ed) Human Error in Medicine (Lawrence Erlbaum Associates, Hillsdale, New Jersey, 1994) 53, 55 (references omitted).

[45] J Law “Class Discussion” (MAPP 524 – Law, Institutions and the Policy Process, Master of Public Policy Victoria University of Wellington, 5 May 1999).

[46] P Plsek and T Greenhalgh "Complexity Science Series" (2001) 323 Brit Med J 625, 628.

[47] D Schön The Reflective Practitioner: How Professionals Think in Action (Basic Books, New York, 1983).

[48] C Lindblom Inquiry and Change (Yale University Press, New Haven, 1990).

[49] P Plsek and others "Complexity Science Series" (2001) 323 Brit Med J 625-628; 685-688; 746-749; 799-803 ["Complexity Science Series"].

[50] P Plsek and T Wilson "Complexity Science Series" (2001) 323 Brit Med J 746, 746.

[51] Plsek and Wilson, above n 50, 749. These leaders may be lay managers, doctors or nurses.

[52] Weick, above n 4, 112.

[53] Weick, above n 4, 114-126.

[54] See quote "[s]afety takes care of itself": Moray, above n 18, 89.

[55] Weick, above n 4, 112-126. The great shame is that control-minded management ignores this system attribute when most needed for efficiency and safety.

[56] Karl Weick, K Sutcliffe and D Obstfeld "Organising for High Reliability: Processes of Collective Mindfulness" (1999) 21 Research in Organizational Behavior 23.

[57] Weick, above n 4, 122. I have long felt that if a person clearly understands why they are trying to do something, they are more likely to reach the goal by their own means. See also, Weick, Sutcliffe and Obstfeld, above n 56, 23-81.

[58] Weick, above n 4, 122.

[59] Richard Cook and David Woods "Operating at the Sharp End: the Complexity of Human Error" in M Bogner (ed) Human Error in Medicine (Lawrence Erlbaum Associates, Hillsdale, New Jersey, 1994) 255.

[60] Cook and Woods, above n 59, 303.

[61] Crew Resource Management – developed in aviation by Robert Helmreich and his team, University of Texas, adapted for anaesthesia as Crisis Resource Management by David Gaba, Stanford Medical School.

[62] Reason, above n 3, 26.

[63] James Reason "Building a Safer Health System" (Asia Pacific Forum On Quality in Health Care, Sydney, 18 September 2001).

[64] Reason, above n 3, 26.

[65] Reason, above n 3, 27.

