
The False Sense of Significance: Minimizing Sampling Error and Addressing Bias

The best business decisions are a triangulated recipe of research insights, market knowledge, industry knowledge, and intuition. The first three are all purchasable; the last is on loan from the Almighty Creator. However, having the right ingredients alone is not enough. As with any recipe, success depends on knowing how to combine and use them effectively. That's why it's crucial to have clear instructions on how best to put everything together to achieve the desired outcome.

If I had a photographic memory and went back to build a word cloud from every question I have heard during a market research readout, I would bet the largest phrases in the center would be “Is that statistically significant?” followed by “Thirty. We need at least thirty completes,” the latter most often spoken by the art major in the back of the conference room who, having crammed to pass the prerequisite math course, still has one multiple-choice answer on the central limit theorem emblazoned in his memory.
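For what it's worth, that “thirty” threshold comes from the central limit theorem's normality approximation, not from precision. As a minimal back-of-the-envelope sketch (my own illustrative numbers, not any particular study's), the 95% margin of error on a simple yes/no proportion is still enormous at thirty completes:

```python
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """Approximate 95% margin of error for an observed proportion p at sample size n."""
    return z * math.sqrt(p * (1 - p) / n)

# p = 0.5 is the worst case (widest interval), a common planning assumption
for n in (30, 100, 400, 1000):
    print(f"n = {n:>4}: +/- {margin_of_error(0.5, n) * 100:.1f} points")

# n =   30: +/- 17.9 points
# n =  100: +/- 9.8 points
# n =  400: +/- 4.9 points
# n = 1000: +/- 3.1 points
```

Thirty completes may justify treating a sample average as roughly normal, but it says nothing about whether the estimate is precise enough, or unbiased enough, to bet a business decision on.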

While it goes without saying that significance testing is an important factor in transforming market research insights into business decisions, I have always been fascinated by the often-singular reliance on it for interpretation and confidence in decision-making. Sampling error and fraud are the elephant in the room, while response bias and interpretation bias are ignored altogether.

Sampling error can be devastating to a study and often arises from poor research practices. However, its impact can be minimized or eliminated with proper research design. Sample fraud, unfortunately, has been a growing problem ever since we transitioned to the anonymous world of Internet research. Globalization and advancements in bot technology have only made it worse, and the world of AI may prove even more devastating.

At the recent SampleCon conference, The Insights Association, MRS, and ESOMAR announced their collaboration to combat fraud and address persistent risks to data quality.

MRS CEO Jane Frost comments: “Fraudulent activity is becoming increasingly sophisticated, particularly in online research. It poses a significant risk to our sector's future and this international partnership is another important means to confront it. Poor research can have a catastrophic impact on decision making and, as businesses and organisations across the world face economic, political, and social challenges, now more than ever our sector needs to prioritise the delivery of quality data and insight.”

That brings us to the issue of bias, which can manifest in both response and interpretation. Bias has always been part of our human nature, as we seek to make sense of the world based on our own experiences and beliefs.

While response bias can be minimized with good research design, it has always been difficult to eliminate. Consider historical television polling, which would lead us to believe that PBS should be the most in-demand and highest-priced streaming service, or political polling, which in recent years has been about as accurate as a coin toss.

Like response bias, its evil twin, interpretation bias, can lead you down the primrose path. When interpreting results, especially quantitative ones, assumptions about why a respondent answered the way they did conjure, in my mind, that chalkboard image of why one should never assume. A great example comes from recent political polling:

A research firm asks, “How satisfied are you with the job Congress is doing?” and reports that 74.5% of respondents are dissatisfied, with a 2% margin of error. Depending on which news network or website you visit, you'll hear conflicting opinions on whether this spells doom and gloom for the Republicans or the Democrats. People will argue over what actions must be taken to regain confidence, and although the margin of error has not changed, the decisions and actions to be taken vary as far as the east is from the west.
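The arithmetic behind a headline like that is simple. Here's a rough sketch (my own illustrative calculation, assuming a 95% confidence level the pollster may or may not have used): the margin of error only brackets the sampling noise around the 74.5% figure; it says nothing about why respondents are dissatisfied or which party they blame.

```python
import math

Z_95 = 1.96  # assumed 95% confidence level (an assumption, not stated by the poll)

def implied_sample_size(p: float, moe: float, z: float = Z_95) -> float:
    """Completes needed for an observed proportion p to reach margin of error moe."""
    return (z ** 2) * p * (1 - p) / moe ** 2

p, moe = 0.745, 0.02
print(f"Implied completes: ~{implied_sample_size(p, moe):.0f}")  # ~1825
print(f"95% interval: {p - moe:.1%} to {p + moe:.1%}")           # 72.5% to 76.5%
```

The interval is narrow, so the estimate is precise; what it means for either party, and what should be done about it, is still entirely a matter of interpretation.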

In the end, to make informed business decisions we need to understand “the whats” with solid quantitative research and a high-quality sample, “the whys” and “what are we missing” with solid qualitative research, and “the what's next” by merging the two and processing them with solid business acumen and market expertise.

At Sympler, we offer a unique approach to high-quality, fast-turn qualitative research, with the sample size to deliver quantitative insights and the experience of seasoned anthropologists to help you answer all your business questions.