Using Independent Research to Avoid the Bullseye of Bias

[Image: a bullseye]

Over the past ten years the Justkul research team has worked on hundreds of customer surveys for investment deals and strategic initiatives. Many of our clients could execute research themselves, but they have come to value the quality of our work, our expertise in knowing what questions to ask, our skill in avoiding common mistakes, and the way we free up their team’s time to concentrate on other aspects of a deal or strategic initiative. There is, however, one factor perhaps more important than all of these: the independence of our research. Unlike most other parties in a particular deal or initiative, Justkul never has a significant vested interest in any particular outcome or conclusion: Justkul is paid the same amount whether a deal or strategic initiative goes forward or not. We believe this type of independence is essential for minimizing bias.

Bias, or an inherent prejudice for or against a given outcome, can significantly distort the conclusions of a research study. How easily bias can enter a research project is often underestimated. Bias can creep into a survey at almost any stage of the process:

  • In the decisions of which questions to include,
  • In who is invited to take part in the survey (digital redlining, etc.),
  • In the phrasing of particular questions and the order and phrasing of answer choices,
  • In adopting a narrative flow that favors certain answers over others,
  • In decisions as to which data points are included or removed from the final data set,
  • In how the data is interpreted or summarized.

A lack of independence at any of these stages can produce misleading results, which in turn lead to poorer, riskier decisions.

Hence, to make sound decisions, it is important to understand the potential causes of bias so that conscious choices can be made about how to minimize their impact.

The Bullseye of Bias

The causes of bias are various, but one way to organize these causes is as the bullseye at the top of this article with intentionally malicious causes of bias in the center, intentionally non-malicious causes of bias in the next concentric circle, and unintentional causes of bias in the outer circle.

In this schema the most nefarious type of bias sits at the center, but as its relative area suggests, in our experience it is fortunately also the rarest type. Far more commonly, biases arise from the causes in the outer circles. Because avoiding these biases matters so much, each type of cause is worth examining in more detail.

Intentional and Malicious Causes of Bias

Intentionally malicious causes of bias occur when decision-makers consciously decide to lie or to omit important details that would likely change the overall outcome of a project or decision. “Intentionality” is a complicated concept (see Anscombe, Intention), but here we mean only that the bias results from explicit choices.

Justkul team members have encountered this situation on more than one project. For instance, on one project we worked on the buy-side of a private equity deal and were asked to gather customer data on a high-end consumer product company. The potential investment looked great in the target’s own presentations: its products were attractive, prior survey work showed that clients liked the products when they initially bought them, the management team seemed experienced, and the short-term financial data looked solid. At the time, however, the company had a dark secret it was actively seeking to suppress: its products were prone to breaking down. In an apparently deliberate effort to conceal this fact, none of the prior survey work or reports mentioned repair and maintenance issues, and the company went out of its way to delay sharing customer contacts with the Justkul research team. The malice of this strategy became explicit when a leader at the company asked if they could wait just a few more days to share customer contacts, which would have meant we received the contacts only after the deal closed. This raised red flags, and after we communicated these concerns to our private equity client, the deal was eventually called off.

In this case, the explicit request from the executive made it fairly easy to identify the bias in the information the company shared. However, in our experience such obviously malicious cases are fortunately rare. More commonly, biases are introduced for non-malicious reasons, either intentionally or unintentionally.

Intentional and Non-Malicious Causes of Bias

In intentional and non-malicious causes of bias, a conscious decision is made to provide incomplete or potentially misleading data, but unlike the malicious case, decision-makers have grounds to justify their decision. Decision-makers might even convince themselves that they are making the right decision under the circumstances, as when a company gives in to the temptation to ignore survey results that do not fit its overall narrative. For instance, a company might nearly always receive a very high customer satisfaction score in its internal survey results, while one independently conducted survey shows a far lower score. The company could mount a fairly good argument that because 95% of surveys showed a different result, this result is merely an outlier and can be ignored.

However, it might also be the case that there is something wrong with the other 95% of surveys. This suggestion may seem radical, but it is actually not unusual: perhaps the company’s survey tool was inherently biased to begin with by focusing respondents only on positive experiences, perhaps there were factors in the recruiting process that encouraged only respondents who were more enthusiastic about the company to complete the survey, or perhaps survey respondents were not assured the anonymity of their answers. The importance of anonymity has been underscored for us many times in telephone interviews, especially in B2B industries where customer-sales rep relationships are key. In such interviews, a respondent will often ask for reassurance that the interview is truly anonymous before proceeding to give critical feedback. This is the reason Justkul generally insists that our survey respondents be assured anonymity in addition to confidentiality.
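Whether a divergent independent result really is an outlier is, in part, a statistical question. As a rough illustration (this is our sketch, not a method from any particular engagement, and all figures below are hypothetical), a simple two-proportion z-test can indicate whether an internal satisfaction rate and an independently measured one differ by more than sampling noise would explain:

```python
import math

def two_proportion_z(p1, n1, p2, n2):
    """Z-statistic for the difference between two sample proportions,
    e.g. an internal satisfaction rate vs. an independently measured one."""
    # pooled proportion under the null hypothesis that both samples
    # come from the same underlying population
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# hypothetical figures: 92% satisfaction across 500 internal responses,
# 70% across 200 independent responses
z = two_proportion_z(0.92, 500, 0.70, 200)
```

If |z| exceeds roughly 1.96, the gap is unlikely to be chance at the conventional 95% level, and the burden shifts to explaining why the two instruments disagree rather than simply discarding the lower score.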

Unintentional and Non-Malicious Causes of Bias

The most common case of all is non-malicious and unintentional causes of bias, and examples are numerous. An investor with an interest in having a deal go forward might overlook key hypotheses that would suggest it should not. A consultant who thinks a company should focus on countering the strategies of one specific competitor may overlook the challenges posed by other competitors. A clothing company that believes the quality of its fabrics is obviously better than competitors’ might be less inclined to put that hypothesis to the test, even though customers may actually perceive the fabric as lower in quality. All of these assumptions can significantly affect the outcome of research, producing data that is inferior to research conducted independently.

Finally, even if the research itself is conducted well—using surveys designed not to favor any particular outcome and recruiting participants in an unbiased manner so that the resulting data is truly representative of the underlying population—there is still considerable room for bias in the interpretation of the results. Negative data points can be ignored, positive data points can be artificially promoted, and almost any survey result can be manipulated through spin or by ignoring important qualifications. The solution is to take steps to ensure that the interpretation of the research is also independent.

Conclusion

Companies often have to rely on self-collected data to make routine business decisions, and this is entirely appropriate, provided the company takes steps to limit bias in the data set. However, when an unusually important or difficult strategic decision needs to be made, for the reasons outlined here, we believe there is no substitute for data from an independent third-party research partner.

How to Launch a Successful Online Survey

[Image: drawing of a person taking an online survey]

There are a number of tools researchers use that get a bad rap. But as is often the case, the fault lies not in the tool but in how it’s used. The online survey is one such tool. Used correctly, an online survey can be a powerful way of quickly and inexpensively gathering information from precisely targeted markets across wide geographic areas. Used incorrectly, it can produce a gobbledegook of misleading and inconclusive data.

Some common practices in survey design regularly produce poor results. For example: a survey asks respondents to rank 15 different factors in order of importance when only a few of those factors are actually relevant. Other surveys err in the opposite direction and ask a respondent to rate a complicated product on a simple scale of 1-10 without exploring the factors that underlie that rating. And perhaps most annoying of all, some surveys lead respondents into narrative dead-ends: you’re forced to choose among a limited number of options that don’t reflect your views at all just to continue with the survey.

The main problem in most of these cases is that surveys are designed from the point of view of the people developing the survey and analyzing the results rather than that of the respondent taking it. Yet if the goal is to gain genuine insight about one’s customers and clients, taking the user experience into account is essential. UX is just as important in an online survey as it is in any other product or service.

So how do you make online surveys work? If you follow these ten rules you should be off to a great start. An online survey should be:

  1. Hypothesis-driven: questions are tailored to individual hypotheses and structured so that they can confirm or refute those hypotheses. This is the most effective way to make the results of a survey actionable.
  2. Compelling and engaging: a survey should make respondents feel they are involved in a meaningful process. This leads to better, more thoughtful answers. Note that this is different from merely making a survey entertaining: when respondents say they enjoyed taking one of our surveys, it’s usually not because of flashy videos or brightly colored buttons, but because they felt the survey gave them adequate opportunity to express their views, and that they might even have learned something from the process.
  3. MECE: whenever possible answer choices should be mutually exclusive and collectively exhaustive, and when not possible, adequate opportunities should be given for respondents to introduce new categories or options.
  4. Analytically designed: important questions and issues should be broken down into their constituent parts, and each part evaluated independently of the others. This leads to a deeper understanding of what motivates each respondent and makes survey results more actionable. This is especially important in the innovation space where respondents are asked to evaluate products they’ve never encountered before.
  5. Narratively coherent: a survey is a journey through time, and each question should follow naturally from the previous one. Narrative structure is essential to how human beings understand meaning and context, so changes in the order of questions can have dramatic effects on how those questions are answered.
  6. Rigidly structured: although respondents should be offered opportunities to describe things in their own words, such questions often lead to inconsistent or incomplete responses in an online survey. Unlike other research tools where an open-ended approach is essential, an online survey is a very unforgiving environment, and answer choices should be designed and formalized as much as possible beforehand.
  7. Carefully screened: almost any online survey will include a number of people who try to game the system and give false answers for the sake of an extra incentive. Strategies need to be employed to prevent them from succeeding.
  8. Sent to an appropriately stratified sample: a sound survey relies on more than having the right number of responses. One also has to ensure that the sample accurately reflects all the relevant subsections of the population. In some cases, such as B2B surveys in which a limited number of companies dominate a market, this may be far more important than statistical sample size.
  9. Aimed at the right degree of statistical significance: not all surveys can be sent to a sample size that meets the highest level of statistical certainty of the social sciences. Not only is this financially unfeasible in some B2B markets, but it may not even be possible in others. In these cases, an iterative research approach using smaller sample sizes can often lead to more reliable conclusions.
  10. Open to follow-up: one of the greatest outcomes of an online survey is discovering an unexpected result that challenges a previously held assumption. If time is not set aside to reconnect with respondents, this result, and the opportunities it may represent, may never be properly understood.
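Rules 8 and 9 above lend themselves to a quick sketch. The function names and the market strata below are hypothetical illustrations, but the arithmetic is standard: proportional quota allocation for a stratified sample, and the worst-case margin of error for an estimated proportion at roughly 95% confidence:

```python
import math

def proportional_quotas(strata_shares, total_n):
    """Allocate a total sample size across strata in proportion to each
    stratum's share of the underlying population (largest-remainder rounding)."""
    raw = {s: share * total_n for s, share in strata_shares.items()}
    quotas = {s: int(v) for s, v in raw.items()}
    # hand any leftover slots to the strata with the largest fractional parts
    leftover = total_n - sum(quotas.values())
    for s in sorted(raw, key=lambda k: raw[k] - quotas[k], reverse=True)[:leftover]:
        quotas[s] += 1
    return quotas

def margin_of_error(n, p=0.5, z=1.96):
    """Worst-case margin of error for an estimated proportion (p = 0.5)
    at ~95% confidence with a simple random sample of size n."""
    return z * math.sqrt(p * (1 - p) / n)

# hypothetical B2B market in which a few large segments dominate
quotas = proportional_quotas(
    {"enterprise": 0.55, "mid-market": 0.30, "smb": 0.15}, 200
)
```

With 200 respondents the worst-case margin of error is about ±7 points (1.96 × √(0.25/200) ≈ 0.069); as rule 8 notes, in a concentrated B2B market getting the strata right can matter far more than shrinking that number.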

A survey is a very unforgiving medium, and except for the simplest cases, online surveys need to be designed with adequate care and attention. When we say, “We don’t do boring research projects”™ at Justkul Inc., we mean that we take the time and effort to follow these principles and thereby ensure every one of our online surveys is both successful and informative. What do you think? Do you have stories of badly designed surveys? Have we left something important out? We look forward to your comments! By @jfhannon, CEO at Justkul Inc., a research firm focused on the needs of strategy and private equity.

Thoughts on Reading Contagious

Now that advertising is increasingly moving into the domain of social media, a key goal of marketing strategy has been to develop a framework for making things go viral. Campaigns such as Dos Equis’ The Most Interesting Man in the World or Old Spice’s The Man Your Man Could Smell Like have significantly enhanced the reputation and visibility of those brands. Then there is the world of viral memes, where websites like I Can Has Cheezburger? can draw traffic comparable to The New York Times. The problem is that many of the ideas that go viral are ill-suited to traditional marketing strategies (evidence: cats are rarely mentioned in marketing strategy books). Can classical marketing theory explain why Susan Boyle’s audition video or PSY’s Gangnam Style were such great hits while other, similar videos were not? Viral media is clearly a space that is getting a lot of attention in marketing, and any book that attempts to explain something significant about it is certainly worth reading.

One such book is Jonah Berger’s Contagious: Why Things Catch On. What distinguishes it from other explorations of what makes things cool and viral is its basis in over ten years of empirical research the author has conducted at the Wharton Business School. The book’s basic theory is that six factors contribute to something going viral, conveniently captured in the acronym STEPPS. Each chapter of the book deals with one of the factors. My short summary of each:

  • Social Currency: People share what makes them look good and “in the know” to others.
  • Triggers: People share things that they frequently encounter, and marketers need to ensure that ideas are triggered at the moment when they’ll actually have impact on decisions.
  • Emotion: People find emotions compelling, and Berger brings up the case of Susan Boyle’s famous performance: when something is emotionally engaging it has a tendency to stick around.
  • Public: Things need to be generally accessible to go viral. Berger brings up the example of the Apple laptop logo, which was deliberately designed to be seen properly by other people rather than the user.
  • Practical Value: People like sharing things that their friends find useful.
  • Stories: People like stories and will be more likely to share things that have an engaging narrative structure.

None of these six factors should be particularly surprising, and in fact one can find far more advanced and seminal works on each of these topics elsewhere (The New York Times review mentions the work of Malcolm Gladwell and of Chip and Dan Heath). However, Berger’s book adds an interesting and entertaining perspective. It is particularly engaging when he cites empirical evidence that debunks a commonly held assumption (although he rarely provides enough detail to evaluate that evidence), or when he shows examples of misalignment between a viral strategy and a marketing goal, as in the case of goldenpalace.com’s 2004 Olympic stunt. There are very few places you can go for such an interesting and entertaining discussion of examples.

Another highlight of the discussion is Berger’s classification of emotions into high- and low-arousal varieties, as in this two-by-two matrix:

             High Arousal                          Low Arousal
Positive     Awe, Excitement, Amusement (Humor)    Contentment
Negative     Anger, Anxiety                        Sadness

The thought behind this matrix is that if you want an idea to catch on emotionally, it is better to focus on the high arousal emotions, whether they be positive or negative, rather than the low arousal emotions. Berger notes that although we tend to want to produce the positive high arousal emotions in viewers, the negative ones can be at least as potent.

What makes something go viral? By the end of the read, Contagious won’t have given you a detailed strategic blueprint or in-depth philosophical, psychological, or anthropological discussion, but it does provide a rich set of examples and shows a broad range of issues to take into account when attempting to make an idea go viral.

By @jfhannon, CEO at Justkul Inc., a research firm focused on the needs of strategy and private equity.