Using Independent Research to Avoid the Bullseye of Bias

Over the past ten years the Justkul research team has worked on hundreds of customer surveys for investment deals and strategic initiatives. Many of our clients could execute research themselves, but they have come to value the quality of our work, our expertise in knowing what questions to ask, our skill in avoiding common mistakes, and the way we free up their team’s time to concentrate on other aspects of a deal or strategic initiative. However, there is perhaps one factor more important than all of these: the independence of our research. Unlike most other parties in a particular deal or initiative, Justkul never has a significant vested interest in any particular outcome or conclusion: Justkul is paid the same amount whether a deal or strategic initiative goes forward or not. We believe this type of independence is essential for minimizing bias.

Bias, an inherent prejudice for or against a given outcome, can have a significant impact on the conclusions of a research study, and it is easy to underestimate how readily it enters a project. Bias can creep into a survey at almost any stage of the process:

  • In the decisions of which questions to include,
  • In who is invited to take part in the survey (digital redlining, etc.),
  • In the phrasing of particular questions and in the order and phrasing of answer choices (one simple mitigation for ordering bias is sketched after this list),
  • In adopting a narrative flow that favors certain answers over others,
  • In decisions as to which data points are included or removed from the final data set,
  • In how the data is interpreted or summarized.
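
One of the easiest of these problems to address mechanically is ordering bias in answer choices. As a minimal sketch, assuming a survey platform that lets you control choice order per respondent, the Python snippet below randomizes the order of answer options for each respondent while pinning catch-all options such as “None of the above” to the end. The function and option names are illustrative, not any particular tool’s API, and randomization addresses only ordering effects; the phrasing of questions and choices still has to be vetted by hand.

```python
import random

def randomized_choices(choices, pinned=("None of the above", "Other"), seed=None):
    """Return answer choices in a random order for one respondent,
    keeping catch-all options pinned to the end of the list."""
    rng = random.Random(seed)  # a per-respondent seed keeps each order reproducible
    movable = [c for c in choices if c not in pinned]
    fixed = [c for c in choices if c in pinned]
    rng.shuffle(movable)
    return movable + fixed

# Each respondent sees the brand list in a different order, so no single
# brand benefits from always appearing first.
choices = ["Brand A", "Brand B", "Brand C", "None of the above"]
for respondent_id in range(3):
    print(respondent_id, randomized_choices(choices, seed=respondent_id))
```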

A lack of independence at any of these stages can produce misleading results, which in turn lead to poorer, riskier decisions.

To make sound decisions, then, it is important to understand the potential causes of bias and to decide consciously how to minimize their impact.

The Bullseye of Bias

The causes of bias are various, but one way to organize them is as the bullseye at the top of this article: intentional and malicious causes of bias in the center, intentional but non-malicious causes in the next concentric circle, and unintentional causes in the outer circle.

In this schema the most nefarious type of bias sits in the center, but as its relative area suggests, in our experience it is fortunately also the rarest type. Far more commonly, biases arise from the causes in the outer circles. Because avoiding bias is so important, each of these types of causes is worth examining in more detail.

Intentional and Malicious Causes of Bias

Intentional and malicious causes of bias occur when decision-makers consciously choose either to lie or to omit important details that would be likely to change the overall outcome of a project or decision. “Intentionality” is a complicated concept (see Anscombe, Intention), but here we mean only that the bias is the result of explicit choices.

Justkul team members have encountered this situation on more than one project. For instance, on one project we worked on the buy-side of a private equity deal and were asked to gather customer data on a high-end consumer product company. The potential investment looked great from its own presentations: its products were attractive, prior survey work showed that clients liked the products when they initially bought them, the management team seemed experienced, and the short-term financial data looked solid. Nevertheless, at the time the company had a dark secret it was actively seeking to suppress: its products were prone to breaking down. In an apparently deliberate effort to conceal this fact, none of the prior survey work or reports mentioned repair and maintenance issues, and the company went out of its way to delay sharing customer contacts with the Justkul research team. The malice of this strategy became explicit when a leader at the company asked whether they could wait just a few more days to share customer contacts, which would have meant we received them only after the deal closed. This raised red flags, and after we communicated them to our private equity client, the deal was eventually called off.

In this case, the explicit request from the executive made it fairly easy to identify the bias in the information the company shared. However, in our experience such obviously malicious cases are fortunately rare. More commonly, biases are introduced for non-malicious reasons, either intentionally or unintentionally.

Intentional and Non-Malicious Causes of Bias

In intentional and non-malicious causes of bias, a conscious decision is made to provide incomplete or potentially misleading data, but unlike the malicious case, decision-makers have grounds to justify their decision. They might even convince themselves that they are making the right decision under the circumstances, such as when a company gives in to the temptation to ignore survey results that do not fit its overall narrative. For instance, a company might nearly always receive a very high customer satisfaction score in its internal survey results, while a single independently conducted survey shows a far lower score. The company could make a fairly good argument that because 95% of surveys showed a different result, this one is merely an outlier and can be ignored.

However, it might also be the case that there is something wrong with the other 95% of surveys. This suggestion may seem radical, but it’s actually not unusual: perhaps the company’s survey tool was inherently biased to begin with, steering respondents to focus only on positive experiences; perhaps the recruiting process encouraged only respondents who were enthusiastic about the company to complete the survey; or perhaps respondents were not assured of the anonymity of their responses. The importance of anonymity has been underscored for us many times in telephone interviews, especially in B2B industries where relationships between customers and sales reps are key. In such interviews, a respondent will often ask for reassurance that the interview is truly anonymous before proceeding to give critical feedback. This is why Justkul generally insists that our survey respondents be assured anonymity in addition to confidentiality.

Unintentional and Non-Malicious Causes of Bias

Finally, the most common case is non-malicious and unintentional causes of bias. Examples are numerous. An investor with an interest in having a deal go forward might overlook key hypotheses that suggest it should not. A consultant who thinks a company should focus on countering the strategies of a specific competitor may overlook the challenges posed by other competitors. A clothing company that believes the quality of its fabrics is obviously better than its competitors’ might be less inclined to put that hypothesis to the test, even though customers may actually perceive the fabric as lower in quality. All of these assumptions can have a significant impact on the outcome of research and produce data inferior to that of independently conducted research.

Moreover, even if the research itself is conducted well, with surveys designed not to favor any particular outcome and participants recruited in an unbiased manner so that the resulting data is truly representative of the underlying population, there is still considerable room for bias in the interpretation of the results. Negative data points can be ignored, positive ones artificially promoted, and almost any survey result can be manipulated through spin or by omitting important qualifications. The solution is to take steps to ensure the interpretation of the research is also independent.

Conclusion

Companies often have to rely on self-collected data to make routine business decisions, and this is entirely appropriate, provided the company takes steps to limit bias in the data set. However, when an unusually important or difficult strategic decision needs to be made, for the reasons outlined here, we believe there is no substitute for data from an independent third-party research partner.

Why Would A Private Equity Firm Hire An Outside Customer Research Company?

As the CEO of a customer research firm that works primarily with private equity, I am often asked what value customer research can bring to a private equity deal. People outside the private equity industry sometimes find it surprising that customer research has a role in transactions so often portrayed as driven solely by deal-making and hard numbers. Even some people within the industry undervalue customer research, thinking that a few telephone calls or a general market analysis is all that is required.

However, the private equity industry has changed significantly over the last few decades. Whereas a simple LBO model with minimal customer research might have been sufficient for good returns two decades ago, high valuations and increased competition among firms are leading the industry toward more sophisticated and strategic thinking. This is reflected in who is involved in a deal: whereas in the early days an investment might have been driven largely by bankers or financial officers, strategic consultants, whether in-house or hired from external strategy houses, are now playing an increasingly important role in investment decisions. This rise in strategic thinking coincides with an increased focus on the “softer” sides of a deal, such as customer research. Despite often being classified as a “softer” component, one could argue that no component is more important for an investment’s long-term success than understanding the relationship between a company and its current and potential customers. In the end, the success of every business depends on this crucial relationship: it is where growth comes from, and it is what gives a company an advantage over its competition.

Why would a private equity firm want to conduct customer research? In my experience there are at least five reasons why private equity firms conduct detailed customer research before a deal closes:

  • Good customer research can help investors identify unexpected strengths or weaknesses of an investment. This is perhaps the most traditional reason for conducting due diligence research. Knowing what risks investors may incur and learning how the investment’s products and services compare with those of its competitors can be important not only for adjusting valuations, but also for deciding whether the deal is worth pursuing at all.
  • Good customer research can help investors develop and validate future growth strategies. Often an investment hinges on a number of very specific strategic growth hypotheses. By testing each of these hypotheses in detail with the current and potential client base, a private equity firm can get a sense of whether their assumptions are correct and what strategic changes will have the biggest impact on a deal’s success.
  • Good customer research can reduce uncertainty. Good research can also help answer a multitude of smaller questions and thereby reduce the uncertainty inherent in a deal. What effect will relocating a company headquarters have on sales? Will a potential acquisition be an asset to the brand? Will customers be open to purchasing an adjacent product or service? All these questions can be answered through good customer research.
  • Providing an acquisition with the results of good research can help set future expectations. Companies often initially succeed with a great idea, a superior product, and funding from angel and VC investors. At these early stages, a company may perform A/B testing and benchmark performance against its competition, but detailed market research is seen as an expensive distraction. However, when a firm grows enough to enter the middle market and become a potential private equity investment, the role of research can change dramatically. Not only is research important for setting wider brand strategies, but it is also necessary for fulfilling board reporting requirements and ensuring the firm is sufficiently transparent with investors. Conducting good customer research before a deal closes can help set expectations for what these future reports should look like.
  • Good research can create a useful dialogue between a company’s management and investors. Good dialogue between investors and a company’s leadership is essential for the success of an investment. Research helps a private equity firm gauge its ability to work with a company’s leaders. A management team that engages with unexpected insights rather than merely dismissing them is a good first sign that a successful partnership based on mutual trust will develop. I know of deals that were called off because such dialogue failed to be productive or management was unusually defensive. More than this, by talking through a research study’s findings and developing strategies together, both investors and the company begin to see a deal not just as a transaction, but as the first step in a long-term partnership.

As these five reasons suggest, customer research can be an important tool in the due diligence process. It not only provides investors with better information at the time of an investment decision, but can also be useful for setting future expectations and strategy.

I emphasize above that “good” customer research does all these things. Research performed without sufficient care and attention can lead investors seriously astray. Examples of such errors include careless phrasing or ordering of questions, insufficient attention to the UX of survey participants, failure to stratify the sample appropriately, insufficient pilot-testing of tools to catch technical errors, and failure to explore unexpected findings. These mistakes can produce misleading data that makes a bad investment look good or, conversely, leads an investment team to pass on a very good one. Despite the ubiquity of fast and inexpensive survey tools, getting research right often requires considerable thought, time, and expertise.

Why would a private equity firm want to work with a third-party research firm? Some private equity firms have research expertise in house, and others obtain the required expertise through strategy consulting firms. Even in these cases, though, a specialty customer research firm can provide four additional advantages:

  • A third-party research firm can provide experience and expertise. Good business research requires skills that can be very different from those that make for a good investor or management consultant. A company with a culture that values and promotes employees who have these skills will often produce more solid results. In addition, third-party research firms often conduct research in a variety of industries and geographies, some of which may be outside of a private equity firm’s or consultant’s expertise. A third-party firm may therefore better know how to properly adjust research methodologies for these different markets and recognize unexpected questions that have cross-market application. For instance, a question that produces fruitful results in consumer packaged goods may also have applications in B2B contexts, and a research approach that makes sense in the U.S. may not make sense in Singapore.
  • A third-party research firm can help provide an additional level of discretion. Deals are complicated transactions that can have significant impact on different stakeholders. It is often useful for a board and potential investors to investigate a potential deal or partnership without causing unnecessary disruption if the parties involved in the deal decide to go in different directions. A third-party research firm can serve as a firewall between the research and the larger public, thereby preventing a potential deal from garnering unnecessary attention until it reaches a stage at which it makes sense to do so.
  • A third-party research firm can help reduce bias. There sometimes can be intentional biases in a deal: a company seeking to be acquired may only report positive data or fail to report potential risks adequately. In this case, independent research can ensure important questions are asked before a deal closes. However, more often biases enter unintentionally: a company that has a strong culture may be blind to its own weaknesses, and a company that only relies on A/B testing of current customers may have unrealistic expectations of the potential in the larger market. In addition, even research conducted internally by a private equity firm may carry some bias: team members who are personally invested in the success of a deal may be unable to step back sufficiently to recognize risks. Research from a third party that has no stake in whether a deal closes or not can significantly reduce this bias.
  • A third-party research firm can be a valuable resource for an acquisition after a deal closes. As I mentioned earlier, many companies that PE firms invest in are in the process of making a transition between a startup/VC-fueled business model and a business model that is focused on expansion and growth. An independent research firm can serve as a bridge, providing research guidance until those capabilities can be built up in-house. Furthermore, even after a company develops extensive internal research capabilities, the company’s executives may nevertheless find it useful to bring in the external company that conducted the due diligence research in order to verify assumptions before making important strategic decisions.

As company multiples steadily increase, and as it consequently becomes less likely that a private equity firm will find an amazing company at a great price, good customer research is becoming an increasingly important component of successful private equity deals. Given that good customer research adds little to the overall cost of a deal, and given the significant benefits and insight independent research can provide, we think good customer research should be an important part of any private equity deal.

By @jfhannon, CEO at Justkul Inc., a research firm focused on the needs of strategy and private equity.

Steve Jobs and Market Research in Contexts of Uncertainty

According to some, market research is pretty useless in contexts of uncertainty. These contexts include disrupted markets, the introduction of radical new innovations, and the launch of new products and services. Steve Jobs explained this view in an interview:
“The problem is that market research can tell you what your customers think of something you show them, or it can tell you what your customers want as an incremental improvement on what you have, but very rarely can your customers predict something that they don’t even quite know they want yet. As an example, no market research could have led to the development of the Macintosh or the personal computer in the first place. So there are these sorts of non-incremental jumps that need to take place where it is very difficult for market research to really contribute much in the early phases of the thinking about, you know, what those should be. However, once you have made that jump, possibly before the product is on the market or even after, it’s a great time to go check your instincts with the marketplace and verify that you’re on the right track.” (PBS/Nova Interview, 1990)
Steve Jobs recognized the usefulness of market research, but he suggests it is of little use when the market is going through these non-incremental jumps. A similar mindset exists in an even stronger form in parts of the present-day startup community, where it is often thought better to just bring a product to market and see what happens than to rely on misleading market research that is incapable of envisioning the future.

There is no doubt that many companies succeed with approaches that do not involve much market research. However, this by itself does not prove market research useless, because we have no idea how many failed startups would have succeeded if they had paid more attention to the findings of market research. Rather than resting on a sound empirical argument, the plausibility of this view rests, in part, on a number of assumptions about what market research is and how it works.
Standard Market Research and the Principle of Continuity
When Steve Jobs spoke about market research, he was likely speaking about a particular kind of research: the surveys asking you to rate services on a scale of 1 to 10, asking whether you prefer red cars or blue cars, or walking you through complicated conjoint trees to determine whether more product or a lower price matters more to you. All of these studies depend implicitly on a principle of continuity: that your present opinions can be used to reliably predict your future actions.
As Jobs appears to recognize, this principle can break down in disrupted markets. If we look at this principle in more detail, it typically implies at least four sub-principles, all of which can fail when one is conducting research:
1.   The overall market will be the same in the future as it is now. This principle is reasonable in many situations, but Jobs proved it false when his teams introduced the Macintosh, the iPod, the iPhone and the iPad, each of which changed its market in fundamental ways. The principle also fails all the time in ordinary business practice: financial bubbles burst, companies on top of the world go bankrupt a few years later, entire industries disappear.
2.   A respondent’s beliefs will be the same in the future. Closely aligned with the preceding principle, a respondent’s views can change even if the other components of the market do not. Someone might be against sharing credit card numbers or personal information online when they take a survey, but come to accept the practice a few months later.
3.   Respondents are accurately conveying their present beliefs when they take a survey. It’s common knowledge that people sometimes lie on surveys, or under-report activities they are ashamed of. However, even when people are trying to accurately convey their present beliefs, they may fail to do so. In particular, we tend to think of our personalities as uniform over time and place, but many of us adopt very different personas when we spend time with our family, or with our friends, or with our colleagues, or when we are on our own. A view we reveal on a survey may not reflect the view we act on later while partying with friends.
4.   What someone says accurately reflects what they will do. This belief is questioned frequently today as big data enables us to actually track the difference between saying and doing. In certain cases, it may turn out that basing predictions on past actions rather than opinions produces more accurate results.
In markets that remain constant over a long period of time, it may be unnecessary to question any of these principles. For instance, a mere correlation between what people say and an increase in sales might be all that is needed for an incremental gain, and no one may care to think any more deeply about why it’s the case.
Market Research in Uncertain Times
However, as soon as contexts arise when the principle of continuity does not hold, companies and whole markets can be led badly astray if they merely follow past correlations. In these situations, as Jobs notes, respondents may very well not know what they will want.
Yet even in the most disrupted markets, where there are many discontinuities at the surface, there may be underlying factors that do not change. At these deeper levels the principle of continuity can still hold, and research still has something to say.
In these cases, instead of mechanically asking respondents, “How likely is it that you would buy this new product?”, one should begin by thinking hard about the context of purchasing and using the product and ask, “What things have to be true if people are going to buy this product?” Moving from the general question to a hypothesis about its components in this way does at least four things:

  • First, it forces us to articulate and address the most important assumptions we are making. Often, we are implicitly assuming things that will prove to be false. Identifying the assumptions before conducting research allows us to explicitly test them.
  • Second, once we have identified a broader range of assumptions, we may be led to reconceive the range of possibilities we need to take into account in our questions and in the subsequent analysis. For instance, we may determine that a particular assumption does not hold in general and thereby identify new segments. These different segments may have different needs and decision-criteria.
  • Third, it can lead us to identify important outliers that do not necessarily conform to our expectations. If we think that a factor is essential, and it turns out to be false for a small number of respondents, this may be a cue to talk further with those individuals. Perhaps they are anticipating the next big trend in the market.
  • Fourth, it makes us think more deeply about the enterprise we are undertaking. Part of this comes from the fact that good research leads us to ask “Why?” questions that we may not have even thought about before.

Analyzing decisions into components makes it possible to find continuities in conditions of uncertainty. Moreover, when we have the whole framework of assumptions and possibilities available, it then becomes possible to track underlying factors that do change through proxies and analogical trend analyses.
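As a rough illustration of what analyzing a decision into components can look like in practice, the sketch below (Python, standard library only) breaks the question “Will people buy this product?” into a handful of component hypotheses and reports how widely each one holds in a set of responses. The hypotheses, thresholds, and field names are invented for the example; a real study would derive them from the strategic questions at stake, and respondents who fail a hypothesis thought to be essential become natural candidates for follow-up conversations.

```python
from statistics import mean

# Hypothetical component hypotheses behind "people will buy this product":
# each maps a survey response to a condition that must hold for the
# hypothesis to count as supported. Field names are invented for the example.
HYPOTHESES = {
    "price_acceptable":   lambda r: r["max_price"] >= 40,      # willing to pay the target price
    "problem_recognized": lambda r: r["has_problem"] is True,  # feels the underlying need
    "channel_reachable":  lambda r: r["shops_online"] is True, # buys in our channel
}

def hypothesis_support(responses):
    """Share of respondents for whom each component hypothesis holds."""
    return {name: mean(1.0 if test(r) else 0.0 for r in responses)
            for name, test in HYPOTHESES.items()}

# Simulated responses; a real data set would come from the fielded survey.
responses = [
    {"max_price": 55, "has_problem": True,  "shops_online": True},
    {"max_price": 30, "has_problem": True,  "shops_online": False},
    {"max_price": 45, "has_problem": False, "shops_online": True},
]

for name, share in hypothesis_support(responses).items():
    print(f"{name}: supported by {share:.0%} of respondents")
```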

Implementing this approach well requires a different mindset and a broader range of skills than most standard market research companies offer. Identifying assumptions requires being able to break things down into their components like an engineer or analytical philosopher. Exploring possibilities and outliers requires a certain openness to possibility characteristic of design-thinking approaches. Thinking more deeply may require additional skills such as big-data analysis, statistics, expert interviews, or ethnographic studies. Yet, all of these approaches can be incorporated into a research strategy.
Many skills may be required, but great consultants and entrepreneurs often bring several of them together. I’m thinking of Orit Gadiesh talking to customers and metallurgists when she introduced a Bain client to continuous casting techniques in steel manufacturing, or of the survey data that led Howard Moskowitz to discover that there was no single perfect recipe for Prego tomato sauce. Steve Jobs, especially, shows the passion of a good researcher. This is quite apparent in interviews in which he enthusiastically discusses quantifying data, visiting 80 automated manufacturers in Japan to understand automation, or revising products after observing people using them. People can bring about the future through research, albeit of a non-standard type.
This much may be obvious. However, what might not be obvious is that even the most humble research tools can benefit from this approach. An online survey constructed to test hypotheses about decision factors can tell you not only why your customers make the decisions they do, but also can help you make better predictions in uncertain contexts.
Follow Jobs’ Advice: Ask the Right Questions
In another interview five years later, Steve Jobs was asked about his transition from being a hobbyist to being the executive of a multimillion dollar company. “How do you learn to run a company?,” he was asked. He observed, “You know, throughout the years in business I found something, which is, I always asked why you do things. And the answer you invariably get is that it’s just the way it’s done. Nobody knows why they do what they do. Nobody thinks about things very deeply in business, that’s what I found.” (Steve Jobs, The Lost Interview, 1995)
It turns out Steve Jobs didn’t like market research when introducing non-incremental changes, because he was a great researcher himself. According to his own account, the secret to his success in business was asking the right questions and not settling for status quo responses. Unless you have the luxury of being in an industry without innovation or disruption, you and your research company shouldn’t settle for the status quo either.
By @jfhannon, CEO at Justkul Inc., a research firm focused on the needs of strategy and private equity.

How to Launch a Successful Online Survey

There are a number of tools that researchers use that get a bad rap. But as is often the case, the fault is not in the tool but in how it is used. An online survey is one such tool. When used correctly, an online survey can be a powerful way of quickly and inexpensively gathering information from precisely targeted markets across wide geographic areas. When used incorrectly, an online survey can produce a gobbledegook of misleading and inconclusive data.

Some common practices in survey design often result in poorly designed surveys. Some examples: a survey asks respondents to rank 15 different factors in order of importance when only a handful of those factors are actually relevant. Other surveys err in the opposite direction and ask a respondent to rate a complicated product on a simple scale of 1-10 without exploring the factors that underlie that rating. And perhaps most annoying of all, there are surveys that take respondents to narrative dead-ends: you are forced to choose among a limited number of options that don’t reflect your views at all in order to continue with the survey. The main problem in most of these cases is that surveys are designed from the point of view of the people developing the survey and analyzing the results rather than that of the respondent taking it. Yet, if the goal is to gain genuine insight about one’s customers and clients, taking the user experience into account is essential. UX is just as important in an online survey as it is in any other product or service.

So how do you make online surveys work? If you follow these ten rules you should be off to a great start. An online survey should be:

  1. Hypothesis-driven: questions are tailored to individual hypotheses and structured in such a way that they can confirm or deny those hypotheses. This is the most effective way to make the results of a survey actionable.
  2. Compelling and engaging: a survey should make respondents feel they are involved in a meaningful process. This leads to better, more thoughtful answers. Note that this is different from merely making a survey entertaining: when respondents say they enjoyed taking one of our surveys, it is usually not because of fancy flash videos or brightly-colored buttons, but because they felt the survey gave them adequate opportunity to express their views, and that they might even have learned something from the process.
  3. MECE: whenever possible answer choices should be mutually exclusive and collectively exhaustive, and when not possible, adequate opportunities should be given for respondents to introduce new categories or options.
  4. Analytically-designed: important questions and issues should be broken down into their constituent parts, and each part evaluated independently of the others. This leads to a deeper understanding of what motivates each respondent and makes survey results more actionable. This is especially important in the innovation space where respondents are asked to evaluate products they’ve never encountered before.
  5. Structured with good narrative flow: a survey is a journey through time, and each question should follow naturally from the previous one. Narrative structure is essential to how human beings understand meaning and context, so changes in the order of questions can have dramatic effects on how those questions are answered.
  6. Rigidly-structured: although respondents should be given opportunities to describe things in their own words, such questions often lead to inconsistent or incomplete responses in an online survey. Unlike other research tools where an open-ended approach is essential, an online survey is a very unforgiving environment, and answer choices should be designed and formalized as much as possible beforehand.
  7. Carefully screened: almost any online survey will include a number of people who try to game the system and give false answers for the sake of an extra incentive. Strategies need to be employed to prevent them from succeeding (a minimal screening sketch follows this list).
  8. Sent to an appropriately stratified sample: a sound survey relies on more than having the right number of responses. One also has to ensure that the sample accurately reflects all the relevant subsections of the population. In some cases, such as B2B surveys in which a limited number of companies dominate a market, this may be far more important than raw sample size (see the allocation sketch after this list).
  9. Aimed at the right degree of statistical significance: not all surveys can be sent to a sample large enough to meet the highest levels of statistical certainty in the social sciences. Not only is this financially unfeasible in some B2B markets, it may not even be possible in others. In these cases, an iterative research approach using smaller sample sizes can often lead to more reliable conclusions (the margin-of-error sketch after this list shows how precision scales with sample size).
  10. Given adequate opportunities for follow-up: one of the greatest outcomes of an online survey is discovering an unexpected result that challenges a previously held assumption. If time is not given to reconnect with respondents, this result, and the opportunities it may represent, may never be properly understood.
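
To make the screening point (rule 7) concrete, here is a minimal sketch in Python, standard library only, of three common quality checks: implausibly fast completion, a failed attention-check question, and straight-lining across a rating grid. The field names and thresholds are illustrative assumptions, not a fixed standard; in practice they are tuned to the length and content of the particular survey.

```python
# Minimal respondent-quality screen: flag speeders, straight-liners, and
# failed attention checks. The thresholds below are illustrative only.
MIN_SECONDS = 180            # completions faster than this are suspect
ATTENTION_ANSWER = "Agree"   # the answer the attention-check item instructs respondents to pick

def quality_flags(resp):
    flags = []
    if resp["duration_seconds"] < MIN_SECONDS:
        flags.append("speeder")
    if resp["attention_check"] != ATTENTION_ANSWER:
        flags.append("failed_attention_check")
    ratings = resp["rating_grid"]
    if len(ratings) >= 5 and len(set(ratings)) == 1:
        flags.append("straight_liner")   # identical answer to every grid item
    return flags

respondents = [
    {"id": 1, "duration_seconds": 95,  "attention_check": "Agree",
     "rating_grid": [4, 4, 4, 4, 4, 4]},
    {"id": 2, "duration_seconds": 610, "attention_check": "Agree",
     "rating_grid": [5, 3, 4, 2, 4, 5]},
]

for r in respondents:
    flags = quality_flags(r)
    print(r["id"], "exclude" if flags else "keep", flags)
```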
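
For rule 8, the sketch below shows one simple way to operationalize stratification: allocating a fixed number of completes across customer segments in proportion to a weight that matters for the decision, here a hypothetical revenue share, which in concentrated B2B markets is often more informative than headcount. The segment names and shares are invented for the example.

```python
# Proportional allocation of survey completes across strata, weighted by an
# (invented) revenue share rather than headcount.
REVENUE_SHARE = {"enterprise": 0.55, "mid_market": 0.30, "smb": 0.15}
TOTAL_COMPLETES = 200

def allocate(shares, total):
    alloc = {segment: round(share * total) for segment, share in shares.items()}
    # Fix any rounding drift so the allocation sums exactly to the target.
    drift = total - sum(alloc.values())
    alloc[max(alloc, key=alloc.get)] += drift
    return alloc

print(allocate(REVENUE_SHARE, TOTAL_COMPLETES))
# -> {'enterprise': 110, 'mid_market': 60, 'smb': 30}
```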
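
And for rule 9, this last sketch computes the familiar margin of error for a proportion at a 95% confidence level, which makes explicit how quickly precision gains flatten as sample size grows. It uses the standard normal approximation with the worst-case proportion; whether a given margin is acceptable for the decision at hand remains a judgment call of the kind described above.

```python
from math import sqrt

def margin_of_error(n, p=0.5, z=1.96):
    """95% margin of error for an observed proportion p with n responses
    (normal approximation; p = 0.5 is the worst case)."""
    return z * sqrt(p * (1 - p) / n)

for n in (50, 100, 200, 400, 1000):
    print(f"n={n:>5}: +/- {margin_of_error(n):.1%}")
```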

A survey is a very unforgiving medium, and except for the simplest surveys, online surveys need to be designed with adequate care and attention. When we say, “We don’t do boring research projects”™ at Justkul Inc., we mean that we take the time and effort to follow these principles and thereby ensure every one of our online surveys is both successful and informative. What do you think? Do you have stories of badly-designed surveys? Have we left something important out? We look forward to your comments!

By @jfhannon, CEO at Justkul Inc., a research firm focused on the needs of strategy and private equity.