By H. William Dettmer
“When you’ve got them by the balls, their hearts and minds will follow.”
—Framed needlepoint reputedly on the wall of the office of
Charles Colson, President Nixon’s special counsel, in 1973
Here’s a typical scenario; it may even sound familiar to many of you. You develop a comprehensive analysis of a problem and a sound solution for it. You present it to your decision-maker: what the problem is, what the solution is, and how it should be deployed. And despite a generally favorable (or neutral) reaction, ultimately nothing happens—no decision, or an outright rejection.
There’s no shortage of opinions on why organizations—or, more precisely, people—resist change. Many of these opinions can be found in books, most of which offer some prescription for overcoming such resistance. Very few of these prescriptions work, and those that do don’t work for very long. The reason is that…
- Logic is not enough to persuade people.
- Human emotion, motivation, and behavior enter into the equation, and these factors are likely to be even more decisive than logic.
As Winston Churchill once observed, “Man will occasionally stumble over the truth, but usually he just picks himself up and continues on.”
Change and Risk
Figure 1 shows a typical emotional dilemma. It’s known as Efrat’s cloud. [1:118-119] It characterizes the internal conflict most people experience when they contemplate changing their lives.
Their objective is to be happy. In order to do that, they must feel both satisfied and secure. Now, in order to feel satisfied, they must initiate change of some kind. But in order to feel secure, they must resist change—that is, cling to the status quo.
Why does satisfaction require change? Essentially, it’s because the pleasure associated with satiation eventually wears off. One must find new horizons or challenges to conquer. Or, as Efrat Goldratt put it, “Satisfaction comes from achieving a difficult objective, when there was substantial doubt about the probability of success.”
Why is there security in resisting change? Security is defined as a sense of well-being that comes from predictability of the events in one’s life. [1:118] We feel secure because we can predict what’s going to happen in our lives…today, tomorrow, next week, or next month. Beyond that, who knows? But we’re confident it won’t be too different, as long as we stay in our nice, secure little cocoon and don’t change much.
But change poses a risk for its initiator. In the security-versus-satisfaction dilemma, changing clearly leans toward the “satisfaction” side. Not changing, naturally, leans toward the “security” side. And the degree to which a particular person—a decision maker, if you will—embraces or resists change depends on their predisposition to search out or avoid risk. This is a personality trait, and some people get a kind of “high” from taking risks. Risk-takers are those for whom their need for satisfaction outweighs their need for security. The risk-averse are exactly the opposite. And a significant number of people lie somewhere in the middle. In other words, for most people, some degree of calculated risk is acceptable.
The Technology Adoption Life-Cycle
In 1995, Geoffrey Moore, a consultant at Regis McKenna Associates in Silicon Valley, offered the technology adoption life cycle as a way of explaining why and how some new product introductions succeed while others fail miserably. Moore suggested that the public’s acceptance of new technology approximates a bell curve: relatively little of the population lies at the two extreme tails, while most lies in the large mainstream in the middle. I suggest that a similar phenomenon applies to methods or practices. Figure 2 illustrates Moore’s concept applied to management methods.
On the far right are the completely risk-averse. Most of the population is composed of reluctant risk-takers and conservative risk-takers. On the left extreme are the ambitious risk-takers and those with a “death wish.” Notice that there’s a gap, or chasm, between the ambitious risk-takers and the conservative risk-takers. This indicates a psychological barrier between those who are comfortable embracing risk and those who would prefer to avoid it. I suggest that adoption of revolutionary new methods is akin to the embrace of new technology—it’s related to the decision maker’s degree of risk aversion.
Most readers are familiar with Peter Senge. His book, The Fifth Discipline, is still a best-seller today. Among Senge’s five disciplines is the concept of mental models, which he defined as:
“…deeply ingrained assumptions, generalizations, or even pictures or images that influence how we understand the world and how we take action. Very often, we are not consciously aware of our mental models or the effects they have on our behavior.”
But the concept didn’t originate with Senge. In 1974, Argyris and Schön coined the term “mental maps” to explain people’s behavior. They maintained that the mental maps we create in our minds help us plan, implement, and review the actions we take. Argyris further contended that how humans behave is often at odds with what they claim they do; he called this incongruent behavior. In other words, people’s mental maps direct their behavior in ways that may be diametrically opposed to what they say they do.
How important are mental models in the human make-up? In 2003, Laurence Gonzales wrote a magnificent book entitled Deep Survival: Who Lives, Who Dies, and Why.  It’s a fascinating look into the world of extreme survival situations. Call it “field research,” if you will. Using actual case studies, he describes extreme situations in which some people lived, other situations where people died, and even some situations where some lived and some died in the exact same situation. Gonzales investigated these cases to find out why this happened.
Not surprisingly, he found that in extreme survival situations, people often let their long-held mental models overrule their common sense—even at the risk of death. This is particularly true in stressful situations, when the tendency is to revert to instinct. In one such case, a guy who was lost in the Rocky Mountains deliberately left his compass behind along with some of his gear to lighten his load. He didn’t have a map, because he didn’t think he’d need one. And he kept walking deeper into the mountains instead of out of them, because his mental model told him that civilization was just over the next ridge—but it wasn’t. But the important conclusion Gonzales drew from his research was that mental models are actually “hard-wired” into the human brain.
Gonzales cited the research of Antonio Damasio, a medical doctor with a Ph.D. in neuroscience as well. In 1994, Damasio wrote a book called Descartes’ Error, in which he biologically refuted the notion of René Descartes, the 17th-century French philosopher, who proposed that logic and emotion occupied completely separate parts of the human brain. This view had prevailed in the centuries since Descartes’ time.
Not only do mental models exist, according to Damasio, but they are actually “hard-wired” into the brain in the form of neural networks—a specific arrangement of connections and firing sequence of neurons.
These networks continually “rewire” themselves as new learning occurs, but the transition is slow. One might say that it happens in an evolutionary way, not a revolutionary one. In other words, time is a factor—sometimes a lot of it.
This brings us to the work of Robert Wright, who wrote a book called The Moral Animal: Why We Are the Way We Are. The book explores the emerging field of evolutionary psychology, but Wright translated its fundamentals and principles into words and concepts that laymen can understand. Wright demonstrated that human behavior does evolve in response to a changing environment, but over extended periods of time. And once human behavior is established according to some norm, it doesn’t change easily.
Finally, there’s the subject of paradigms. The term, in its modern sense, was introduced in 1962 by Thomas Kuhn. In the decades since, it’s been used, abused, and redefined to suit different purposes by a host of management experts. In fact, it’s been so overused that it’s lost its impact, if not its meaning.
According to Kuhn, a paradigm is a pattern or model that describes how things must, or do, happen within the confines of some domain. This model might be considered “the rules of the game.” Some examples of paradigms include organized sports, economics, societies, industries and business, geopolitics, and scientific bodies of knowledge.
The accepted state of our knowledge about systems and their interactions might constitute a paradigm. This knowledge shapes behavior within systems—say, the banking or home mortgage system, for example. As with mental models and evolutionary psychology, paradigms may be modified or refined, but they are rarely replaced. But why is this so?
Let’s put everything that we’ve discussed so far together: our discussion of security versus satisfaction; theories of action and incongruent behavior; risk-taking versus risk-aversion; mental models; evolutionary psychology, and paradigms. Figure 3 is a cause-and-effect logic tree that explains how these various theories combine to produce executive behavior we often see concerning decisions to make major changes. Read the tree from bottom to top, preceding the text in each box with the words “If…”, “…and…”, or “…then…”, as appropriate.
What this tree is saying is that revolutionary change has a low probability of success because people’s mental models cause them to resist it, even though they may profess to support it. And even if a revolutionary change is adopted, it isn’t likely to have much “staying power.” In other words, after some period of time, the behavior of executives (from whom all others in a system take their cues) will revert to greater congruency with their mental models—what their neural networks are telling them is required for “survival.” In some cases, revolutionary changes may “stick,” but these instances are relatively few, and they may occur only when the system’s survival is perceived to be at risk without such change.
Mental Models: The Role of Security and Satisfaction
Let’s not forget that in the scheme of evolution, the underlying goal is survival and propagation of the species. In the business world, this might be interpreted as “job security.” This can be a powerful force for retaining and reinforcing the status quo.
Now, if we consider Efrat’s cloud (Figure 1), which is focused on individual happiness, we can better understand the powerful psychological forces underlying resistance to change. At the same time, as individuals, we long to realize satisfaction in our lives, which provides some degree of stimulus for change. Efrat’s cloud is a mental model, but most people are completely oblivious to it. Yet it subliminally guides their behavior to balance security and satisfaction—in other words, a compromise. That balance, for most of the population, is tilted in favor of security—resisting change. (See Figure 4.)
Whose security and satisfaction are we talking about when we’re trying to introduce a revolutionary new business paradigm—which the Theory of Constraints, Lean, and Six Sigma most certainly are? It’s the security and satisfaction of the decision-makers—the executives! Subordinates in the organization generally follow the lead of their executives. Let’s look briefly at another logic tree (Figure 5).
What this tree is telling us is that once people reach executive levels, they become more conservative in their decision making, and thus less likely to initiate or approve radical change. They’ve reached a position of status, authority, prestige, and possibly highly desirable perquisites, and they’re reluctant to risk those. Even if they were ambitious risk-takers in their drive to achieve power, once they get it, they want to keep it. They can’t go anywhere from there but down, and their behavior may adjust from risk-taking to risk-aversion. This naturally leads them to avoid or delay risky decisions, while embracing and approving low-risk decisions that aren’t likely to have much, if any, effect on their position and power.
A paradigm-changing decision, such as the embrace of new management methods or products, is more likely when a company and its executives find themselves in a “survival” situation, where they must do something different or die. If things are going “okay”—not great, but not that bad, either—the odds are very low that an executive will embrace a revolutionary change like TOC…unless he or she is an ambitious risk-taker. And we all know there aren’t very many of those. Another way of saying this is that “the good is the enemy of the better.” This is particularly true of larger, established, more bureaucratic businesses, and less so of smaller, “hungrier” businesses on the way up. The latter are often run by executives who are responsible for day-to-day operations as well as future direction and results—small to medium-sized businesses that may be family-owned and operated.
So, what can we conclude from this line of reasoning?
- First and most important, people’s behavior is “hard-wired” and largely emotional.
- Second, people behave according to their mental models, which provide a comfort zone.
- Third, logical persuasion is likely to have little impact, in spite of what it might seem like at the time it’s attempted. (Remember incongruent behavior?) Emotion is a much more powerful actor.
- Fourth, relatively few people—those whose need for satisfaction outweighs their need for security—will risk changing on their own. And of those that do, most will do so only cautiously or incrementally.
- Fifth, people’s actions often don’t reflect their words.
- And, finally, true behavioral change occurs naturally at a more evolutionary rate, not a revolutionary one.
Now, what are the implications of these conclusions? First, the odds of success for revolutionary change by persuasion alone are very long. A more likely outcome is rejection, or reversion to past behavior. Incremental, evolutionary changes have a better chance of long-term success. But they require patience!
Second, if you’re determined to attempt revolutionary change in the face of these long odds, imposed change has a better chance of success—at least for the short term.
How New Ideas “Get In”
This brings us to the question of how new ideas get into an organization in the first place. Deming once observed that truly profound, game-changing knowledge must come from outside of a system; it doesn’t naturally reside within it. Moreover, he said, it has to be invited in. In other words, someone has to actively seek it out and bring it in.
The problem with introducing something game-changing is that in most cases the people with the decision-making power—the executives—are too busy running the company on a day-to-day basis to worry much about searching out new ideas, or new ways of doing things. They don’t have the time or the inclination to go to conferences like this. They may not even have much time to do professional reading in journals that might provide them exposure to paradigm-shifting methods (even if they were inclined to consider them).
Who actually does this kind of external research to find out “what’s out there”? It’s people in the middle levels of large organizations—the working professionals who aspire to upper management. Maybe even department heads. But not usually executives.
The exception to this tendency, as mentioned above, is small- to medium-sized, privately held companies, whose owners are often the executives in charge. These executives have to wear two hats: besides running the company, they have to originate the breakthroughs that will grow it. So these are the companies where radical new methodologies are embraced by the decision-makers themselves, and they actually have a better track record of success. When a new idea or method penetrates an organization at the middle level, it requires persuasion up the chain of command. When it enters at the top, it can be imposed down the chain.
Change initiated by an executive is no guarantee of long-term success, and maybe not even of short-term success. But executive initiation is definitely required for initial success, and the best example of this is Jack Welch and Six Sigma at General Electric.
But even executive-imposed change is no guarantee of sustainability. Because the change was imposed in a revolutionary way, behavior may revert when the executive leaves. And with rare exceptions, executives don’t stay in their positions for very long. Not “evolutionarily long,” at any rate. Again, the prime example is Jack Welch at General Electric. After he retired, Jeffrey Immelt took GE in a completely different direction—into the ditch, as it happened.
There are no easy solutions to this problem. The best that can be offered is a possible model for change, and it’s not particularly new or original.
A Change Implementation Model
Figure 6 comes from The Logical Thinking Process.  It doesn’t specify how to win the heart and mind of the top leader, but that particular step has to happen first, and logic alone won’t do it. Once it does happen, this model provides a potentially effective roadmap for implementation and sustainment.
- Briefly, the leader’s commitment precedes everything else.
- Then the leaders must define the new, modified behavior required of both themselves, and of subordinates.
- Simultaneously, leaders must communicate the new mission, task, or charter to all subordinates, and…
- Visibly demonstrate—by their own behavior—their commitment to the new way of doing things. This requires a cognitive acceptance on their part of the need to change and the value of doing so.
- Only when these first four steps are taken is there a chance of engendering subordinates’ understanding of their charter and their commitment to it.
- This brings us to the ubiquitous feedback loop: managing performance through behavior modification techniques. Whole books have been written about this topic, so I won’t go into it here. Suffice it to say that the Theory of Constraints tells us nothing about this vitally important contributor to long-term success. But if you don’t measure and correct behavior to the desired standard over a long period of time, the evolutionary change will not occur—behavior will revert. This is where most newly embraced philosophies and methods ultimately fail.
Finally, if all of this is done effectively, the desired outcomes should be achieved—if you’re lucky. And then leaders must grasp that success and hammer it home as reinforcement from the executive level, as proof that the changes were the right things to do.
All this paper addresses is the cognitive side of changing the status quo…emotionally internalizing the need to change. There’s a powerful behavioral side that must also be considered. If you’re fortunate enough to overcome the cognitive challenge with the organization’s leader, the nature of evolutionary change still means that extensive reinforcement will be required for an indefinite transition period if a change is ultimately going to succeed. It takes a long time to modify mental models.
The easier it is to do, the harder it is to change.
—Jacob M. Braude
Nothing is as temporary as that which is called permanent.
Nothing is as permanent as that which is called temporary.
Any change looks terrible.
—The Principle of Design Inertia
The only difference between a rut and a grave is their dimensions.
1. Dettmer, H. William. Strategic Navigation: A Systems Approach to Business Strategy. Milwaukee, WI: ASQ Quality Press, 2003.
2. Moore, Geoffrey. Inside the Tornado. New York: HarperBusiness, 1995.
3. Senge, Peter M. The Fifth Discipline: The Art and Practice of the Learning Organization. New York: Doubleday, 1990.
4. Argyris, Chris, and Donald Schön. Theory in Practice: Increasing Professional Effectiveness. San Francisco: Jossey-Bass, 1974.
5. Gonzales, Laurence. Deep Survival: Who Lives, Who Dies, and Why. New York: W. W. Norton & Co., 2003.
6. Damasio, Antonio R. Descartes’ Error: Emotion, Reason, and the Human Brain. New York: Putnam, 1994.
7. Wright, Robert. The Moral Animal: Why We Are the Way We Are: The New Science of Evolutionary Psychology. New York: Vintage, 1994.
8. Kuhn, Thomas S. The Structure of Scientific Revolutions. Chicago: University of Chicago Press, 1962.
9. Dettmer, H. William. The Logical Thinking Process: A Systems Approach to Complex Problem Solving. Milwaukee, WI: ASQ Quality Press, 2007.