When Goals Take Over

“Just give me the numbers!”

Falling firmly into the “I just can’t make this stuff up” category, the preceding statement was made by the head of a certain engineering department. He wanted the performance figures on a series of database lookups so that he could determine whether the database code was performing up to specifications. This would be a perfectly reasonable request except for one minor problem: the database code was not producing the correct results in the first place. Performance was sort of irrelevant given that getting the wrong answers quickly is not necessarily all that helpful, although it may be less irritating than having to wait for the wrong answers. It’s rather like driving at 75 mph when lost: you may not know where you are or where you are going, but at least you’ll get there quickly. Or something.

In another example, the engineers developing a bioinformatics data analysis package spent all their time arguing about the correct way to set up the GUI elements on each page. The problem was that when they actually ran one of the calculations, the program appeared to hang. In fact, I was assured by everyone, it just “took a long time to run.” How long? The answer was, “maybe a few weeks.”

This may come as a shock to those few people who have never used a PC, but a few weeks is generally longer than most computers will run before crashing (or installing an update without warning). Besides, the complete lack of response from the program regularly convinced users that it had crashed. The engineers did not want to put in a visual indicator of progress because they felt it wouldn’t look good. They refused to remove the calculation from the product because “someone might want to try it.” Eventually, they grudgingly agreed to warn the user that it “might take a very long time to run.”

In both of these cases, the team was solving the wrong problem. Although there were definitely complaints about the speed of the database, speed was very much a secondary issue so long as the database wasn’t producing correct results. And while the user interface decisions were certainly important, designing an elegant interface for a feature that will convince the user that the product is not working is not particularly useful. At least rearranging the deck chairs on the Titanic was only a waste of time. It didn’t contribute to the ship sinking.

So why were these teams so insistent upon solving the wrong problems? Give someone a choice between a problem they can solve comfortably and one they have no idea how to approach, and they will work on the former. Once goals are set, they become the focus of everyone’s attention and a great deal of work goes into accomplishing them. That is, after all, the best thing about goals; unfortunately, it can also be the worst thing about goals.

While clear, specific goals are certainly good things, goals also have to make sense. You need to have the right goals. It can be a very valuable exercise to look at the goals assigned to each person and each team in the company. Do those goals make sense? What problems or challenges are they addressing? Are the goals complementary, or are there significant gaps? If the engineering team is being evaluated on how many bugs they can fix and the QA team on how many new bugs they can find, what happens to the step where fixed bugs get verified? If no one is responsible for that happening, it won’t get done (and didn’t, in several software companies!). If the team focuses on the wrong problems, they’ll spend their time fighting symptoms or revisiting solved problems, and never deal with the real issues.

Therefore, even before you can set goals, you have to know what the problem is that you are trying to solve. That means first separating the symptoms of the problem from the problem itself. The symptoms are only symptoms; frequently, they can point to many possible problems. It’s important to look at the symptoms and brainstorm which problems they could be indicating. When you start developing possible solutions, you then need to ask what the final product will look like if you go ahead with your solution and you need to know what success looks like. Make sure that your proposed solution will actually solve at least some of the potential problems you’ve identified, and develop some way of testing to make sure you are solving the correct problem. In other words, have some checkpoints along the way so you can make sure that you’re actually improving things. Only then can you start to set goals that will effectively guide you to producing the results you actually need.

Once goals are set, they have a way of taking over. What are you doing to make sure you don’t set goals before you know where you’re going?

The Case of the Blind Airplane Pilot

Recently, on The Best of Car Talk, Tom told an interesting tale.

Apparently, a plane was delayed taking off. This isn’t the interesting part; in fact, that’s hardly even news. The plane subsequently made a stop and, big surprise, got delayed again. At this point, the pilot announced that since they were going to be sitting at the gate for some time, passengers might wish to disembark and stretch their legs. Everyone left except for Mr. Jones, a blind man. He had apparently flown this flight before as the pilot knew him by name.

“Mr Jones,” the pilot said, approaching the man, “we’ll be at the gate for at least an hour. Would you like to leave the plane?”

“No thank you,” said Jones, “but perhaps my dog would like a walk.”

A few moments later, passengers at the gate were treated to the sight of the pilot, in full uniform and wearing sunglasses, walking past seemingly led by a Seeing Eye dog.

Sometimes things are not what they appear to be. Of course a blind man with a service dog cannot be an airline pilot. The dogs can’t read the instruments. When it comes to choosing leaders, though, sometimes we’re not much different from a blind airline pilot, with potentially similar results if we get it wrong.

The question of whether someone looks like a leader is a concept that has been in the news a bit lately. I was asked on a radio show once what a leader looks like. I created a stretch of dead air when I responded, “Whatever we think a leader looks like.”

This is the problem with leadership: we can’t necessarily agree on what a leader looks like or even what it means to look like a leader.

Where do we learn what a leader looks like? Fundamentally, from our culture via a variety of sources: growing up, it may be through stories, books, TV, and movies. It may be through activities we take part in, such as sports or playing Dungeons and Dragons. It may be through acting in plays or participating in live roleplaying scenarios. In the workforce, people are seen as leaders sometimes just because they physically resemble other leaders or the company founder. Sometimes, merely acting like a known leader or imitating some key characteristic of theirs or being associated with them psychologically is enough to become recognizable as a leader.

The thing is, those cultural lessons are usually superficial and, at best, tell us only what past leaders looked like. Even worse, when someone matches up to the superficial characteristics of leadership, it is a common human response to assume they have other characteristics as well. Which other characteristics? Whichever characteristics the viewer thinks a leader should have. Conversely, those who do not fit the superficial image of “leader” are then assumed to not have the abilities a leader needs to be successful. Thus, an organization that focuses only on what worked in the past will often blind itself to the vast pool of talented people whom it is not promoting, and who are the right people for the problems the organization has today or will have tomorrow.

Ironically, a common reflex when things don’t work is to become frustrated and metaphorically hit the system with a monkey wrench: while percussive maintenance might sometimes work with a mechanical device, even then it works mainly in fiction. In reality, kicking your computer will rarely yield good results for either the computer or your foot. Hiring someone unskilled for the job just to shake things up may feel very satisfying, but the results are similar to hiring a pastry chef to perform open-heart surgery. He may shake things up, but it’s your funeral.

Thus, it is critical to look seriously at what a leader will be expected to do. What role will they play? What skills will they need? Failing to do this makes it easy to fall into the trap of appointing someone with the wrong skillset, or no relevant skills at all. For example, in the early 2000s, Pfizer had two potential CEO candidates: Hank McKinnell and Karen Katen. McKinnell was an aggressive, abrasive man; Katen, a woman praised for her ability to build teams. Pfizer chose McKinnell. As Harvard Business Review later observed, he was forced out five years later amid declining share prices, his abrasive manner proving less than effective despite the fact that it initially appealed to board members’ mental image of what a leader “looked” like.

The image of an airline pilot with a service dog is comical. Choosing the wrong person to lead an organization is not. Leadership is about more than superficial characteristics: leaders require knowledge, skill, and temperament in order to be successful. Actually taking the time to understand the issues at a more than superficial level is critical to making a successful choice. There’s no reason to fly blind.

Kubler-Ross Meets the Saucer People

A psychologist and flying saucers? No, it’s not some bizarre new cooking show from the Food Network. Back in the 1950s, Leon Festinger, of cognitive dissonance fame, and two other psychologists were investigating a flying saucer cult. The cultists believed that the saucers would come and take them from the Earth before all life was destroyed by a great flood. The psychologists wanted to find out what would happen when the world didn’t end on schedule. Although some people might have thought them biased, they did not consider the case where the world did end on schedule; after all, no one would have been around to read their results.

Interestingly enough, when doomsday came and went with neither flood nor flying saucers, the cultists did not abandon their faith. They concluded that their actions had somehow saved the world, they became even more convinced of their beliefs, and they immediately launched into a massive recruitment drive. It was only after that failed, months later, that the saucer cult collapsed. But, as Festinger went on to observe, that didn’t always happen: sometimes the recruitment drive was successful, and the cult would survive for years after its belief system had been ostensibly proven false.

This phenomenon is hardly unknown in business and non-business realms alike. Sometimes an idea simply won’t die even after reality has stuck the metaphorical fork in it and declared it done. Whether this is a small group fighting to preserve a product idea that’s been abandoned or people stubbornly supporting a political candidate who has lost the primary, the faithful are undeterred by the fact that the flying saucers did not arrive on schedule. Denial is a powerful force, particularly when other people reinforce the belief.

Denial, of course, is one of the stages of grief in Elisabeth Kubler-Ross’s famous model: denial, anger, bargaining, depression, and acceptance. Although grieving does not require that any particular person experience the stages in any particular order, or even that each person will experience every stage, nonetheless the model is a powerful tool for understanding how people are likely to react when something they are deeply committed to comes to an end or does not turn out as expected.

It is at that point of experiencing loss, be that the loss of a person or of an idea, that Kubler-Ross meets the saucer people. It is at that point that denial can take charge and then take a flying saucer ride around reality: the loss of an idea is the loss of an abstraction. There is no body; rather, the physical world is unchanged. This can create a profound sense of cognitive dissonance, in which everything can appear as it did before even as everything is also very different.

Denial can be difficult, although hardly impossible, when one is alone. The more people who join in the denial process, however, the easier it gets. When a tightly knit group collectively denies the facts in front of it, members of the group are often forced to choose between joining in the denial or abandoning the group. The greater their commitment to the group, and the more of their lives they’ve invested in it, the harder it is to leave. And the denial is so tempting… surely if everyone else is saying the same thing that you are feeling, well, you can’t all be wrong. In fact, you can all be wrong, but somehow that possibility never seems to come up. Get enough people all in denial together, and suddenly what you have is a group engaging in something that looks suspiciously like groupthink.

Even for a diffuse social group, the same phenomenon can still take place. Social media dramatically and drastically simplifies the process of denial as it becomes ever easier for group members to collectively reinforce each other’s belief system. As for reality, it’s lucky if it only gets kicked to the curb.

Once again, the group enters the realm of groupthink. It allows in only information that supports its views and denies the validity or existence of anything that does not. In either case, the members of the group are never able to process their loss and come to terms with reality. Whether this becomes merely a footnote in a history book or a significant cause of financial loss to a business depends on the situation. Of course, the second situation can often lead to the group becoming one of those footnotes.

So how can you tell if your group is about to take a ride on a flying saucer? There are a few clues.

Are you closing yourself off to input from people or sources who disagree with you? If you only listen to the people who tell you what you want to hear and what you already “know,” that’s a danger signal.

Are you turning against the very people whom you used to trust because now they’re telling you something you don’t want to hear? For example, many Bernie Sanders supporters turned against economist Paul Krugman when he criticized Bernie’s economic plan as a fantasy. Suddenly, Krugman was a sellout and the enemy. In fact, it was the Sanders supporters who were embarking on a flying saucer ride.

Are you refusing to allow in anyone who might tell you that you’re wrong? When groups get stuck, they will often use all manner of techniques to avoid considering alternatives. For example, being “too busy” to stop and think is one tried and true approach to simultaneously feeling like you are doing something without actually changing anything. This makes it easier to keep out anyone who might tell you something you don’t want to hear.

The problem with flying saucer rides is that no matter how comforting they might be, eventually they always crash and burn. If you want to make progress, though, you need to find a way to get off the saucer.


What’s a Vote? (It’s Not the Guy on Second)

“Lord Nelson has a vote.”

“No Baldrick, Lord Nelson has a boat.”

— Blackadder

In Blackadder’s London, some people may have a boat, but it seems that virtually no one has the vote. Today, of course, voting is a considerably more common occurrence than it was in Britain in the late 1700s, even if the results are not always quite as comic as they are when Rowan Atkinson gets his hands on the process. What, though, is a vote? We’ve determined that it’s not something in which one can sail, even if the process may sometimes leave people feeling a little seasick.

At root, voting is merely one of the six methods that a group can use to make a decision and move forward. Voting, or majority rule, is popular in large part because voting to make decisions is an obvious and central part of the larger culture of the United States and other democracies. In other words, it’s a culturally normative behavior.

Voting systems rely on several tacit assumptions: members of the group understand the issues; members are able to argue with one another effectively and resolve questions around the issues; members have developed a solid communications and social structure; members of the group will support the final decision reached by the group.

In small groups, these assumptions are often, though not always, valid provided that the group membership has developed fairly strong, trusting relationships with one another. As groups get larger, member connections become thinner and even the boundaries of group membership may become somewhat diffuse: it’s easy to see the boundaries of a specific department in a company, while it’s much harder to define the exact boundaries of a group such as “Red Sox fans.”

When the assumptions that underlie voting are violated, the voting system starts to break down in various ways. The most common, and obvious, breakdown is that the debate moves from a battle over ideas to a battle over votes: I don’t have to come up with good ideas so long as I can sell my ideas better than you can sell your ideas. Alternately, perhaps I can call the vote by surprise so your side won’t have enough people there, lock your allies in the restroom while the vote is being held, or otherwise take away your ability to influence the outcome of the vote. There’s a reason why many organizations have explicit rules requiring quorums and prior announcements of when a vote is going to be held, as well as rules specifying who gets to vote.

Claiming that the vote was rigged in some way is often a variant on the voter suppression approach: it’s a way of not facing the unpleasant reality that maybe most of the people didn’t like my ideas. In a large group, it’s particularly easy to perceive a vote as rigged if you happen to be surrounded by people who are voting as you are. This creates a false sense of unanimity as the local echo chamber reinforces the idea that “everyone” supports your view. This makes the actual result all the more shocking. The fact that sometimes a vote can be rigged does complicate this issue; fortunately, the larger the scale of the voting process, the harder that is to do.

Losers of a vote may also try to protect their ideas by consciously or unconsciously sabotaging the majority result: if the decision turns out to be “wrong,” even if because some members of the group kept it from working, then the losing party in the vote can claim that the group should have chosen their option instead. This behavior manifests in small groups fairly often, and can sometimes force the group to reconsider its decisions. Sometimes, though, the behavior is purely a means of saying, “see I was right all along!” even as the entire group fails. I worked for a startup or two many years ago that failed in part because of this type of behavior. For some people, being right was more important than being successful.

Depending on how the voting rules are set up, a majority rules system can degenerate into a minority rules system. Minority rule is another group decision-making method, although frequently a dysfunctional one. In minority rule, the group adopts a decision supported by, as the name would imply, a minority of the group. Sometimes this is due to railroading the vote and not giving anyone a chance to object; sometimes minority rule is the result of each person assuming that they are the only one who has doubts about a course of action, and so not speaking up. Sometimes minority rule can result from a plurality voting system in which only a single round of voting is held and multiple choices leave one option with more votes than any of the others, although less than half of the total. Some systems avoid this problem by allowing subsequent rounds of voting with only the top finishers, or by using some form of preferential balloting. Minority rule can also result from voter suppression or simple indifference.
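The difference between a one-round plurality count and a runoff can be made concrete. The sketch below (plain Python, with made-up ballots and option names, not anything from an actual election) shows a three-option field in which the plurality “winner” holds only four of nine votes, while an instant-runoff count, which eliminates the weakest option and transfers its ballots, produces a genuine majority winner.

```python
from collections import Counter

# Hypothetical ballots: each voter lists the options they support, best first.
ballots = (
    [["A"]] * 4 +        # 4 voters back A and no one else
    [["B", "A"]] * 3 +   # 3 voters back B, then A
    [["C", "B"]] * 2     # 2 voters back C, then B
)

def plurality_winner(ballots):
    """Single round: the option with the most first-choice votes wins,
    even if that is less than half the total."""
    return Counter(b[0] for b in ballots).most_common(1)[0][0]

def runoff_winner(ballots):
    """Instant runoff: repeatedly drop the last-place option and transfer
    its ballots to each voter's next surviving choice, until some option
    holds an outright majority of the live ballots."""
    remaining = {choice for b in ballots for choice in b}
    while True:
        live = [b for b in ballots if any(c in remaining for c in b)]
        counts = Counter(next(c for c in b if c in remaining) for b in live)
        top, top_votes = counts.most_common(1)[0]
        if top_votes * 2 > len(live):   # strict majority of live ballots
            return top
        remaining.discard(min(counts, key=counts.get))

print(plurality_winner(ballots))  # A, with only 4 of 9 votes -- minority rule
print(runoff_winner(ballots))     # B, after C's ballots transfer
```

Real preferential systems add rules for ties and exhausted ballots; the point here is only that the same ballots can crown different winners depending on the counting method.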

Voting systems can also break down as individual people try to deal with the choices in front of them. Groups may move through a series of votes in order to reduce a large set of options down to a smaller number: in a sense, the group is sorting out its priorities and feelings about the different choices, making a series of decisions on potentially superficial criteria in order to reduce the decision space to something more manageable. At any point in this process, not all members of the group will always like the set of options that the group is considering. Sometimes this is because the group has already eliminated their favorite option; sometimes, it’s because members may not want to accept that other options are infeasible, impractical, or otherwise unavailable: members of a jury get to vote on each individual charge, but not on anything that wasn’t part of the court case, regardless of their feelings on the matter. Sometimes the group as a whole simply didn’t know about or care to investigate particular options that some members feel strongly about. In all of these cases, and others that you can probably imagine, individuals are left with a menu of choices that they might not like.

Group members may drop out of the process as their favorite options are eliminated, particularly if their only interest in the vote is a particular decision or outcome; depending on circumstances, this can represent a form of tunnel vision, as those members forget about the larger goals of the group and become stuck on one specific outcome. It can also be a form of trying to prove the majority wrong, as discussed above. In some cases, other group members may become more invested later in the process, either because they didn’t care much which option was selected so long as they had a voice near the end, or because they realize that the vote isn’t going the way they expected.

The problem at this point is that, all too often, everyone involved in the voting process is totally focused on the choices and the process, not on the point of voting: to make a decision that lets the group select a course of action that will, at least in the opinions of enough members, advance its goals. Which goals get prioritized is, in a very real sense, a consequence of the voting process: each decision, that is, each vote, that the group makes implicitly or explicitly prioritizes some goals over others. That’s it. A vote is nothing more than a decision-making tool. That decision will have consequences, of course, but so does not making any decision. Some voting systems allow for a non-decision, or “none of the above,” choice, which can force the group to go back and reevaluate the options. That can work well in situations where the decision is low urgency and the cost of redoing the process is low. Other systems, such as US presidential elections, are designed to force a decision within a specific time frame. The implicit assumption is that it’s better to make some decision than no decision: no matter what the outcome, someone will become president.

In a small group, members might refuse to support any of the available options. If enough members make clear their unwillingness to support any option, this can force the group to reevaluate its decision space. However, this really does depend on how many group members feel this way: if it’s a small enough minority, the group will go ahead anyway. Holdouts who then refuse to support the outcome will often leave the group if they disagree deeply enough, or may be forced out by the rest of the group.

In a large group, it’s much easier to avoid supporting any of the available choices. This is particularly true with a secret ballot voting system: secret ballots make it easier for people to vote as they wish, but also make it easier to disengage from the moral consequences of a bad group decision. The larger the group, the less any individual feels responsible for the overall outcome. Thus, a group member can vote for an unlikely outcome, write in an outcome not on the presented list, or not vote at all, and simultaneously feel like their action is disconnected from the final result. This disconnect makes it easier to not feel guilt over a group decision that hurts other people and also not feel guilt over profiting from a group decision that they might have refused to support. This is particularly true in the plurality/minority rule systems discussed earlier. Arguably, though, all members of the group share in the responsibility for the decision and subsequent actions that result from it, particularly if they are in a position to benefit from those decisions.

Ultimately, voting is a tool that enables a group to make a decision, sometimes whether or not members of the group want to make a decision at that time or whether or not they like the (available) options. Sometimes what counts is that the decision be made and the group move on. Voting is thus a very powerful tool. As with all power tools, improper use may result in injury to the social structure of the group.

What’s a Vote?

“Lord Nelson has a vote.”

“No Baldrick, Lord Nelson has a boat.”

                                               — Blackadder

 

In Blackadder’s London, some people may have a boat, but it seems that virtually no one has the vote. Today, of course, voting is a considerably more common occurrence than it was in Britain in the late 1700s, even if the results are not always quite as comic as they are when Rowan Atkinson gets his hands on the process. What, though, is a vote? We’ve determined that it’s not something in which one can sail, even if the process may sometimes leave people feeling a little seasick.

At root, voting is merely one of the six methods that a group can use to make a decision and move forward. Voting, or majority rule, is popular in large part because voting to make decisions is an obvious and central part of the larger culture of United States and other democracies.  In other words, it’s a culturally normative behavior.

Voting systems rely on several tacit assumptions: members of the group understand the issues; members are able to argue with one another effectively and resolve questions around the issues; members have developed a solid communications and social structure; members of the group will support the final decision reached by the group.

In small groups, these assumptions are often, though not always, valid provided that the group membership has developed fairly strong, trusting relationships with one another. As groups get larger, member connections become thinner and even the boundaries of group membership may become somewhat diffuse: it’s easy to see the boundaries of a specific department in a company, while it’s much harder to define the exact boundaries of a group such as “Red Sox fans.”

When the assumptions that underlie voting are violated, the voting system starts to break down in various ways. The most common, and obvious, breakdown is that the debate moves from a battle over ideas to a battle over votes: I don’t have to come up with good ideas so long as I can sell my ideas better than you can sell your ideas. Alternately, perhaps I can call the vote by surprise so your side won’t have enough people there, lock your allies in the restroom while the vote is being held, or otherwise take away your ability to influence the outcome of the vote. There’s a reason why many organizations have explicit rules requiring quorums and prior announcements of when a vote is going to be held, as well as rules specifying who gets to vote.

Claiming that the vote was rigged in some way is often a variant on the voter suppression approach: it’s a way of not facing the unpleasant reality that maybe most of the people didn’t like my ideas. In a large group, it’s particularly easy to perceive a vote as rigged if you happen to be surrounded by people who are voting as you are. This creates a false sense of unanimity as the local echo chamber reinforces the idea that “everyone” supports your view. This makes the actual result all the more shocking. The fact that sometimes a vote can be rigged does complicate this issue; fortunately, the larger the scale of the voting process, the harder that is to do.

Losers of a vote may also try to protect their ideas by consciously or unconsciously sabotaging the majority result: if the decision turns out to be “wrong,” even if because some members of the group kept it from working, then the losing party in the vote can claim that the group should have chosen their option instead. This behavior manifests in small groups fairly often, and can sometimes force the group to reconsider its decisions. Sometimes, though, the behavior is purely a means of saying, “see I was right all along!” even as the entire group fails. I worked for a startup or two many years ago that failed in part because of this type of behavior. For some people, being right was more important than being successful.

Depending on how the voting rules are set up, a majority rules system can degenerate into a minority rules system. Minority rule is another group decision making method, although frequently a dysfunctional one. In minority rule, the group adopts a decision supported by, as the name would imply, a minority of the group. Sometimes this is due to railroading the vote and not giving anyone a chance to object, sometimes minority rule is the result of each person assuming that they are the only ones who have doubts about a course of action, and so not speaking up. Sometimes, minority rule can result from a plurality voting system in which only a single vote will be held and multiple choices leave one option with more votes than any single one of the others, although less than half of the total. Some systems allow for subsequent rounds of voting with only the top finishers or have some form of preferential balloting in order to avoid this problem. Minority rule can also result from voter suppression or indifference.

Voting systems can also break down as individual people try to deal with the choices in front of them. Groups may move through a series of votes in order to reduce a large set of options down to a smaller number: in a sense, the group is sorting out its priorities and feelings about the different choices, making a series of decisions on potentially superficial criteria in order to reduce the decision space to something more manageable. At any point in this process, not all members of the group will always like the set of options that the group is considering. Sometimes this is because the group has already eliminated their favorite option; sometimes, it’s because members may not want to accept that other options are infeasible, impractical, or otherwise unavailable: members of a jury get to vote on each individual charge, but not on anything that wasn’t part of the court case, regardless of their feelings on the matter. Sometimes the group as a whole simply didn’t know about or care to investigate particular options that some members feel strongly about. In all of these cases, and others that you can probably imagine, individuals are left with a menu of choices that they might not like.

Group members may drop out of the process as their favorite options are eliminated, particularly if their only interest in the vote is a particular decision or outcome; depending on circumstances, this could represent a form of tunnel vision, as those members forget about the larger goals of the group and become stuck on one specific outcome. This can also be a form of trying to prove the majority wrong, as discussed above.  In some cases, other group members may become more invested later in the process, either because they didn’t care much which option was selected so long as they have a voice near the end, or because they realize that the vote isn’t going the way they expected.

The problem at this point is that, all too often, everyone involved in the voting process is totally focused on the choices and the process, not on the point of voting: making a decision that lets the group select a course of action that will, at least in the opinion of enough members, advance its goals. Which goals get prioritized is, in a very real sense, a consequence of the voting process: each vote the group takes implicitly or explicitly prioritizes some goals over others. That's it. A vote is nothing more than a decision-making tool. That decision will have consequences, of course, but so does not making any decision. Some voting systems allow for a non-decision, or "none of the above," choice, which can force the group to go back and reevaluate the options. That works well when the decision is not urgent and the cost of redoing the process is low. Other systems, such as US Presidential elections, are designed to force a decision within a specific time frame. The implicit assumption is that it's better to make some decision than no decision: no matter what the outcome, someone will become president.

In a small group, members might refuse to support any of the available options. If enough members make clear their unwillingness to support any option, this can force the group to reevaluate its decision space. However, this really does depend on how many group members feel this way: if it’s a small enough minority, the group will go ahead anyway. Holdouts who then refuse to support the outcome will often leave the group if they disagree deeply enough, or may be forced out by the rest of the group.

In a large group, it’s much easier to avoid supporting any of the available choices. This is particularly true with a secret ballot voting system: secret ballots make it easier for people to vote as they wish, but also make it easier to disengage from the moral consequences of a bad group decision. The larger the group, the less any individual feels responsible for the overall outcome. Thus, a group member can vote for an unlikely outcome, write in an outcome not on the presented list, or not vote at all, and simultaneously feel like their action is disconnected from the final result. This disconnect makes it easier to not feel guilt over a group decision that hurts other people and also not feel guilt over profiting from a group decision that they might have refused to support. This is particularly true in the plurality/minority rule systems discussed earlier. Arguably, though, all members of the group share in the responsibility for the decision and subsequent actions that result from it, particularly if they are in a position to benefit from those decisions.

Ultimately, voting is a tool that enables a group to make a decision, sometimes whether or not members of the group want to make a decision at that time or whether or not they like the (available) options. Sometimes what counts is that the decision be made and the group move on. Voting is thus a very powerful tool. As with all power tools, improper use may result in injury to the social structure of the group or potentially some members thereof.

 

Phoning in Culture Change

What is a phone? That seems like a pretty simple question. After all, doesn’t everyone know what a phone is?

Well, yes, in a sense. Pretty much everyone knows what a phone is, but not everyone knows the same thing. For older people, the default image of a phone is a rather bulky object with a handset connected by a cord to a base unit. How far you could walk from the base unit depended on how long your cord was. One of the most striking features of these old phones was that if you positioned the handset correctly, you could make it look like a pair of Mickey Mouse ears.

To many people, however, a phone is a small object that you can put in your pocket and carry with you. You can make calls from anywhere. You don’t need to be in, or even near, your home. These people may not even recognize an old-fashioned phone. Now, you might well be thinking, “Well of course. Young people are used to cell phones and don’t use landlines.” True enough; what’s particularly interesting is that when you ask them why mobile phones are often called “cell phones,” their answers are usually unconnected to anything having to do with reality. One person told me that mobile phones are called cell phones because “they’re small,” like a human cell.

What do we do with a phone? Again, the answer depends. For many people, phones are used to make calls to other people. For my teenage daughter, that's crazy talk. Phones are used to text friends, read email, listen to music, check the weather, and play games. Talking? Why do that?

What is particularly interesting here is that when we talk about phones and using a phone, we might think we’re all talking the same language, but we’re not. In fact, we may be speaking very different languages, even though we’re all using the exact same words. As should be obvious, and ironic though it may be, this effect can make communications just a bit tricky: after all, it’s not just phones that experience this little multi-definitional condition. However, since the point about communications is obvious, we won’t discuss it further. Instead, we’ll look at the more interesting question of why this sort of thing happens.

Fundamentally, what we’re looking at is a cultural shift in process. Over time, the meaning of a “phone” is changing, and that new meaning is moving through the population at different rates. Just because culture is shifting, that doesn’t mean that it’s going to change for everyone at the same time! Cultural propagation takes time. Now, to be completely fair, in a very real sense the exact meaning of a phone probably isn’t going to make that much difference to anyone. However, when the cultural shift is around how work should get done or around the strategic direction a business is taking, this cultural propagation effect can make a very big difference.

One of the problems with any significant organizational change is that major changes typically involve altering the underlying ways in which people work. In fact, we may even be changing the basic principles or reasons behind why the work is being done in the first place! In other words, what we're changing is the culture. As we've just seen, that's a lot easier to say than it is to do. One of the big reasons why cultural change is so difficult is that it takes time to propagate; even worse, though, is the fact that those areas of the company where the culture hasn't changed constantly pull back on the areas where the change is occurring, further slowing down the change. In other words, doing things the way we've always done them remains very attractive for a very long time. The old ways are like a comfortable old jacket: no matter how threadbare it may look, we don't want to get rid of it. Let's face it, there are people who not only resist smart phones, but don't even carry mobile phones at all.

Avoiding the cultural propagation problem isn’t easy. It requires doing something that many people seem to find incredibly difficult or at least sort of silly: telling a good story and then living up to it.

That’s right, we start with a good story. Businesses create stories all the time. It’s human nature: we tend to organize information sequentially and we instinctively use a narrative structure to make sense of events. The culture of a business is expressed in the stories the business tells about itself and about key figures in the organization. If you want to change the culture, first you have to change the story. Once you’ve got the story, then you have to live up to it. Senior people need to make the story real: they need to demonstrate the values and message that they are promoting. Then, even as they travel around their business telling the story, they also have to be patient while it propagates. If you can’t live up to your story, few people will believe it and your cultural change will fade out as it propagates. Sure, you may see temporary successes, but the pull of the old, comfortable, believable story will stop your change process. At best, you might have a few small areas temporarily speaking the new language.

It’s only when you tell a believable story and make it real through your actions that everyone ends up speaking the same language. That’s a successful change.

Silly Goose Choices

The graylag goose has an interesting behavioral trait: when it sees its egg sitting outside its nest, it will quickly run to the egg and attempt to roll it back into the nest. This is an automatic process for the goose, and it can be quite persistent about it. Give it a soccer ball and it becomes even more persistent: to the goose, that soccer ball looks like nothing so much as a very big egg, and that egg belongs in the nest. This is known as a fixed-action pattern: when the stimulus is provided, the behavior unrolls automatically. The stronger the stimulus, as with the soccer ball, the stronger the resulting behavior. Of course, that’s a silly goose. What about people?

The other day, my wife and I went to lunch at a local restaurant. On the wall near the "Please wait to be seated" sign, this particular restaurant had a wall of really quite excellent photographs. Like most people who come to this restaurant, we stopped to admire them while waiting for someone to seat us. Since it was a quiet day, we had an unobstructed view. After a couple of minutes, a woman came and seated us. An hour later, on our way out, we paused again to look at the pictures. After about two minutes, the same woman came and offered to seat us. Even though she'd walked past us several times while we were eating, and had indeed seated us an hour before, seeing people standing and admiring the photographs was apparently the only stimulus necessary to trigger the seating behavior. Arguably, since we had just eaten, the stimulus was now ever so slightly bigger.

Okay, so this is a mildly amusing story, but does it have any further significance? In fact, yes, it does. Fixed-action patterns like this one play out in businesses all the time. You can identify them in your company with a little effort: they’re the behaviors that come out automatically in response to some predictable trigger. For example, a customer complains; what happens? Or you find a bug in the software; what happens? Sales are not going as well as planned, or perhaps they’re running better; what happens? Every organizational culture develops its fixed-action patterns, although the details will vary from business to business. The key thing about them is that they become so automatic that no one really thinks much about them any more; when the appropriate trigger occurs, people just react.

For example, at one software company, shipping a product triggered a very unfortunate fixed-action pattern. As soon as the product was out the door, everyone would gather together and look at everything they had not accomplished: the features that did not make it in, the bugs that did not get fixed. As each person tried to show how seriously they were taking their product post-mortem, the focus on the negatives only grew. Ironically, it didn’t matter how much customers liked the product: like the woman at the restaurant, the stimulus triggered the behavior. While the restaurant was just funny, and caused no harm, the pattern at the software company led to a steady decline in motivation: it’s hard to be excited about your work when you “just know” that every release will be a disaster.

Fortunately, these fixed-action patterns don’t have to be bad. A conscious effort to build a pattern of celebrating successes and focusing on the positives of a release can build excitement and momentum that will launch a team into their next product. The trick is to pay attention to the patterns you want to have, and then create the new patterns. Don’t worry about getting rid of old ones; if you focus on the new patterns long enough, the old ones will fade away. Unlike with geese, where the fixed-action patterns are genetic, for people the patterns are built into our organizational culture. It may not always be easy, but, unlike the goose, people and organizations can change.

In other words, the patterns you have are the patterns you build. You get a choice, so don’t be a silly goose.

A Disunity of Crisis

“We have an army.”

“We have a Hulk.”

— Loki and Tony Stark, “The Avengers”

 

I promise, no Captain America: Civil War spoilers here. I can make that promise because, as of the time of this writing, I have not yet seen the movie. The basic story line, though, has been rather hard to avoid.

When it comes to power, the Marvel Superheroes have it in spades. They fly, withstand impacts that would turn a normal human body into jelly, shrink, climb walls, turn into an indestructible green creature with serious anger issues, and on and on. Given an alien invasion or an assault by a mad AI, the Avengers have everything it takes to defend the world. They do really well, except when they have a disagreement. Granted, it makes for a much more exciting movie when the Avengers are all pounding on one another, as they do in The Avengers, Avengers: Age of Ultron, and, of course, in Civil War (I hope that wasn't a spoiler for anyone).

The problem here is that the Avengers, as a group, really have no effective methods for making decisions. Sure, when the crisis actually hits, they fall into their specific roles and do their things really, really well. And, since they are all insanely powerful, they are successful. But that same lack of structure for decision making is also what leads them into trouble: they simply have no agreed-upon mechanism, or social structure, for resolving differences and coming to a decision without getting into a fight.

Imagine for a moment what this might look like in a business or, even worse, a government. At least in a business, when people refuse to cooperate it may be possible to fire them. Sometimes, that's even the right thing to do. But when you can't remove a recalcitrant person or group, and you have no agreed-upon methodology for making and implementing decisions, eventually your options become pretty limited. The Avengers get to this point very quickly and, to be fair, that's what we're all paying to see.

So how do groups make decisions? There are really only a few ways of doing it.

Some organizations work on a purely hierarchical basis: someone is in charge, and that person makes the decision. The organization has rules that clarify who is in charge when, and who the bigger boss is. Military rank is an example of this, as is many a corporate hierarchy. Sometimes the person in charge might request input from team members; sometimes they might make the decisions without involving others. This approach can work very well, but it does suffer from some drawbacks. Most notably, team members may resent not being part of the decision-making process, particularly if they bring their own expertise to the table; that knowledge and expertise might also be worth considering in its own right.

Decisions are also often made through voting. Voting feels good and nicely democratic. It has the potential to get people involved. Of course, for a voting system to be effective, people have to be able to argue productively and debate the issues honestly. The Avengers, as a rule, are still struggling with the productive argument concept, preferring to rely upon trial by combat. Being fictional, the consequences to them tend to be minor. More broadly, though, when a voting system lacks an effective means of agreeing upon facts and applying logical, reasoned analysis to a problem, then that system is effectively saying that ignorance is equivalent to knowledge and expertise. If you find yourself having trouble telling the difference between expertise and ignorance, just ask your doctor for help next time your car is making weird noises, and your auto mechanic next time you are. Sure, you might get lucky…

For voting systems to work effectively, participants need to do the hard work of building consensus. This doesn't mean that everyone agrees with the ultimate decision, but it does mean that everyone agrees to the process and agrees to support the outcome. Consensus is difficult exactly because it is hard for people to accept a decision they personally don't like, particularly if they expected that the result would be different. And, of course, sometimes the result of a decision making process really is so awful as to be unacceptable. Having some sort of final method of checking or validating a decision before implementation can be very helpful for preventing such situations! Otherwise, the system can break down over fighting about whether the decision is worthy of being fought over (this is a separate topic all by itself). Granted, the process can also be revised for future decisions, provided the social structure is strong enough to handle that. Changing the process while it's running, on the other hand, tends to be seen as invalidating the decision that results from that change; yes, there are counter-examples, which is part of why this process is difficult. Overall, though, one might imagine that if the loser of a voting-based decision process kept trying to change the rules in order to find some way to claim victory, then that victory, assuming it even happened, might well be seen as illegitimate. It would be like deciding that the winner of a baseball game should be determined by the number of hits rather than the number of runs.

Decision making can also become non-functional, as when one or two people simply make the decision and present it as a fait accompli, or try to rush everyone else into agreeing. Sometimes, people will agree to a decision and then go off and do their own thing anyway; Tony Stark has a habit of this behavior, and it caused a bit of trouble in Age of Ultron. In the real world, as in the fictional one of the Avengers, this sort of behavior is symptomatic of a group that is really more a collection of people wandering in the same direction than a cohesive team.

Ultimately, part of what makes the Avengers fun to watch is that their efforts to work out their problems will quickly degenerate into a dramatic battle from which they will recover just in time to save the world. When you’re a fictional character, there’s no real reason to do the hard work to avoid a bad outcome. In the real world, doing the hard work to develop effective methods of decision making, and avoid the dysfunctional ones, is generally a better way to go.