When market research clients design and script questionnaires, various problems can occur.
Market research (MR) comes to the rescue, trying to explain why we all act the way we do, at least as consumers. For those of us professionally involved in research, it is clear that most of the answers lie in the consumer subconscious and, for whatever reason, shift over time. Quite a lot of words to simply say that in order to know what is in the consumer’s mind, we need to get on their wavelength.
A crucial step in capturing the essence and nuance of consumer opinion is to ensure the actual survey is prepared and run correctly, avoiding as many hiccups along the way as possible. One may say this goes without saying and that an experienced MR agency should have no problem putting together and running a successful study. Unfortunately, our experience in servicing MR clients of all types and sizes tells a different story. The findings presented in this article are based on a thorough analysis of 5,067 online MR projects that ran into at least one commonly seen problem, run by JTN Research for various clients over the last three years: 2015, 2016 and 2017. All the market research projects in focus here qualify as “sample-only” for JTN Research, meaning the client is responsible for questionnaire design and scripting and only requires online panel sample from JTN Research. The results were compiled by analyzing the projects’ final reports, which are produced by the respective project manager upon closing a study, right after invoicing, and outline reasons for delays, price updates and other unexpected problems with the given job. Below are the most common problems observed.
The first problem area, questionnaire design, may sound a bit general, but it usually shows itself in three very concrete ways: (a) question formulation, order and organization are not ideal; (b) the survey is scripted in an odd way; or (c) the look and feel and overall visual presentation of the questionnaire make completing it more difficult than necessary. When it comes to question order, it is often the case that key demographic questions used for screening out or for quota counts are placed very late. This inevitably hurts the respondent experience, as people drop out after spending considerable time providing answers. It may also have feasibility implications and will probably add cost to your budget. The most obvious and quickest solution is to rearrange the order of the questions, but the tricky bit is to spot the potential problem early enough, while such amendments are still possible. Unfortunately, questionnaire designs also suffer from poor question and answer formulation. “Have you bought a EUR 4,000 watch in the past three months? Yes/No.” Obvious answers and screening questions that suggest the “right” answer always lead to poor data quality. Surprisingly, our analysis shows that this problem is just as common among large and experienced MR agencies (twelve percent of questionnaire-related problems) as it is among smaller and newer MR clients (eleven percent).
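One practical safeguard is to lint the draft questionnaire for screeners placed too deep. Below is a minimal Python sketch of such a check, assuming the questionnaire is represented as an ordered list of question dictionaries; the field names (`id`, `type`) and the position threshold are illustrative, not any real platform’s API.

```python
# Minimal sketch: flag screening/quota questions placed late in a draft
# questionnaire. Assumes each question is a dict with an "id" and a "type"
# ("screener" or "body"), with position given by list order; these field
# names are illustrative assumptions, not a real platform's schema.

def late_screeners(questions, max_screener_position=5):
    """Return screeners that appear after the first few questions."""
    flagged = []
    for position, question in enumerate(questions, start=1):
        if question["type"] == "screener" and position > max_screener_position:
            flagged.append((position, question["id"]))
    return flagged

draft = [
    {"id": "Q1", "type": "body"},
    {"id": "Q2", "type": "body"},
    {"id": "Q3", "type": "body"},
    {"id": "Q4", "type": "body"},
    {"id": "Q5", "type": "body"},
    {"id": "Q6_age", "type": "screener"},  # demographic screener placed late
]

for position, qid in late_screeners(draft):
    print(f"Screener {qid} at position {position}: consider moving it to the front")
```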
Focusing on scripting, complications with structure most often appear in multi-cell surveys. Some studies require two, three or more concepts to be evaluated separately by the same target group, meaning each cell has to be completed by the same number of respondents, balanced in exactly the same way. The problem occurs when a quicker and more obvious course of action is chosen and the different cells are simply split into separate survey links. This adds unnecessary complexity to the fieldwork: splitting the sample audience into several portions increases work hours for sampling and creates extra hassle at the end of fieldwork, when the databases have to be merged. The most useful solution in such a case is the so-called least-fill quota setting: a script in the questionnaire routes each incoming respondent to whichever cell currently has the fewest completes, keeping the demographic balance of the cells aligned. This helpful setting adds a bit more work for the programmer, but saves a lot of effort in fieldwork management and keeps the overall budget in check.
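To make the mechanism concrete, here is a minimal Python sketch of least-fill assignment, under the simplifying assumption that only overall cell counts are balanced; a real implementation would apply the same rule within each demographic quota group as well. The cell names are made up for illustration.

```python
import random

# Minimal sketch of least-fill quota assignment: each incoming respondent
# is routed to the cell with the fewest completes so far. Simplified to
# overall counts only; a production script would also balance demographics
# within each cell.

cells = {"concept_A": 0, "concept_B": 0, "concept_C": 0}

def assign_cell(counts):
    """Route the incoming respondent to the least-filled cell,
    breaking ties at random so no concept is systematically first."""
    least = min(counts.values())
    candidates = [cell for cell, n in counts.items() if n == least]
    chosen = random.choice(candidates)
    counts[chosen] += 1
    return chosen

# Simulate 9 incoming completes: each concept ends up with exactly 3.
for _ in range(9):
    assign_cell(cells)
print(cells)  # {'concept_A': 3, 'concept_B': 3, 'concept_C': 3}
```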
It goes without saying that what respondents see in the questionnaire affects their answers and overall survey satisfaction. A very common example is radio buttons so small or tightly spaced that respondents mis-click and select the wrong option. Another big “no-no” is huge, exhausting grids: typically designs with more than five columns and more than six statements on the same screen, or several consecutive grid questions. Luckily, in recent years professional survey programming platforms have developed a wide variety of solutions that guarantee a pleasant respondent experience and provide alternatives to repetitive items.
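As a simple illustration of one such alternative, the sketch below paginates an oversized grid into consecutive screens that respect the rough limits mentioned above; the statement texts and the six-row cap are illustrative assumptions.

```python
# Minimal sketch: split a long grid into screens of at most `max_rows`
# statements each, instead of showing one exhausting grid.

def paginate_grid(statements, max_rows=6):
    """Chunk a statement list into consecutive screens."""
    return [statements[i:i + max_rows]
            for i in range(0, len(statements), max_rows)]

statements = [f"Statement {n}" for n in range(1, 15)]  # 14 rows: too many at once
for screen_no, screen in enumerate(paginate_grid(statements), start=1):
    print(f"Screen {screen_no}: {len(screen)} rows")
# Screen 1: 6 rows / Screen 2: 6 rows / Screen 3: 2 rows
```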
Length of interview (LOI) is probably the most discussed topic whenever a research project is planned. The end client is always keen to maximize the information obtained from a single study by asking as many questions as possible, getting all the knowledge at once. What all MR agencies and panel providers know too well is that the longer the study, the more problems there are with data quality. It is quite understandable: spending 35-40 minutes on a single study is tiring, and a respondent who loses interest inevitably pays less attention to the questions and answer options. There have been several attempts to solve the LOI problem technologically, for example by applying data weighting techniques, but so far the low-tech solution of getting the project priorities straight and removing the non-vital questions remains the most effective.
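A rough pre-field LOI estimate can catch an overlong draft before it ever reaches respondents. The Python sketch below sums assumed average answer times per question type; the per-type seconds and the 20-minute threshold are illustrative placeholders, not measured benchmarks.

```python
# Minimal sketch of a pre-field LOI estimate using assumed average answer
# times per question type. All timings below are illustrative assumptions.

SECONDS_PER_TYPE = {
    "single_choice": 10,
    "multi_choice": 15,
    "grid_row": 6,
    "open_end": 45,
}

def estimate_loi_minutes(question_counts):
    """Sum assumed per-question timings and return minutes."""
    total = sum(SECONDS_PER_TYPE[qtype] * n
                for qtype, n in question_counts.items())
    return total / 60

draft = {"single_choice": 20, "multi_choice": 5, "grid_row": 60, "open_end": 3}
loi = estimate_loi_minutes(draft)
print(f"Estimated LOI: {loi:.0f} minutes")  # roughly 13 minutes
if loi > 20:
    print("Consider cutting non-vital questions before fieldwork")
```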
Living in the age of automation, there is increasing interest in DIY survey programming. It has certain benefits: speed, not having to liaise with programmers, and possibly lower overall costs. Unfortunately, the functionality these DIY platforms offer is still nowhere near that of professional authoring tools, and not relying on an experienced survey scripter will very often backfire. The usual complications include limited look-and-feel and programming options, difficulty adding sample providers to the study, and the near impossibility of implementing data quality checks (such as GeoIP verification and deduplication). Having said all this, there is no need to avoid DIY survey programming altogether, but an objective evaluation of the questionnaire’s complexity is required before going for the DIY solution.
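For a sense of what such checks involve, here is a minimal Python sketch of two of them: duplicate detection on a simple respondent fingerprint and a GeoIP country check. The record fields and the expected country are illustrative assumptions, not any platform’s actual data model.

```python
# Minimal sketch of two basic data quality checks that DIY platforms often
# lack: duplicate detection on a (IP, user agent) fingerprint and a GeoIP
# country check. Field names and the target country are illustrative.

def quality_flags(records, expected_country="DE"):
    """Flag duplicate fingerprints and out-of-geography completes."""
    seen, flags = set(), []
    for r in records:
        fingerprint = (r["ip"], r["user_agent"])
        if fingerprint in seen:
            flags.append((r["respondent_id"], "duplicate"))
        seen.add(fingerprint)
        if r["geoip_country"] != expected_country:
            flags.append((r["respondent_id"], "outside target geography"))
    return flags

completes = [
    {"respondent_id": "r1", "ip": "10.0.0.1", "user_agent": "UA-1", "geoip_country": "DE"},
    {"respondent_id": "r2", "ip": "10.0.0.1", "user_agent": "UA-1", "geoip_country": "DE"},
    {"respondent_id": "r3", "ip": "10.0.0.2", "user_agent": "UA-2", "geoip_country": "FR"},
]
print(quality_flags(completes))
# [('r2', 'duplicate'), ('r3', 'outside target geography')]
```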
There is abundant evidence supporting device-agnostic (DA) survey design. In the mobile era, a study that cannot be completed on a smartphone or tablet loses at least 50 percent of its eligible sample before data collection even starts. Still, based on our data, 42 percent of all surveys in this analysis are non-DA. About 67 percent of the non-DA projects are commissioned by the larger online MR agencies, whereas small and medium MR clients tend to be more up to date when it comes to mobile survey access. The main reason behind these results lies in multi-year tracker jobs: on many occasions end clients are unwilling to change a long-standing design for fear of losing data continuity and comparability between years. However, deciding against DA is tantamount to discarding the opinion of the majority of mobile device users, which one may argue introduces significant inaccuracy.
Practice and experience indicate that the best way to avoid data collection problems and unexpected delays is to have a sample plan before starting fieldwork. That involves several important decisions: where is the sample coming from; if multiple providers are on board, how will they balance delivery; which are the toughest demographic or consumer groups targeted, so that these can be given enough time in field; and what are the key metrics to monitor as early warning indicators? Having these variables sorted out often makes the difference between finishing your study on time and on budget, and suffering delays and unexpected extra expenses. Being on the same page with your panel provider is probably the easiest way to prevent unpleasant surprises. In practice, this means having them start with the toughest groups rather than rushing to collect as many interviews as possible in the first few days. After all, nobody wants to end up looking for young, unemployed males who own an expensive sports car one day before fieldwork is supposed to finish. Planning ahead is the key to avoiding such traps.
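One way to operationalize the early-warning metric is to compare, for each quota group, the daily completion rate still needed against the rate achieved so far. The Python sketch below does exactly that; the group names and figures are made up for illustration.

```python
# Minimal sketch of an early-warning check on fieldwork pacing: flag quota
# groups whose required daily completion rate now exceeds the rate they
# have actually achieved. All numbers are illustrative.

def at_risk_groups(quotas, days_left):
    """Return groups likely to miss the fieldwork deadline."""
    risky = []
    for group, q in quotas.items():
        remaining = q["target"] - q["completed"]
        achieved_daily = q["completed"] / q["days_in_field"]
        needed_daily = remaining / days_left
        if needed_daily > achieved_daily:
            risky.append((group, needed_daily, achieved_daily))
    return risky

quotas = {
    "males 18-24, sports car owners": {"target": 100, "completed": 20, "days_in_field": 7},
    "females 35-44": {"target": 100, "completed": 80, "days_in_field": 7},
}
for group, needed, achieved in at_risk_groups(quotas, days_left=3):
    print(f"{group}: needs {needed:.1f}/day, achieving {achieved:.1f}/day")
# Only the sports car owners are flagged: they need ~26.7 completes/day
# but have been achieving fewer than 3/day.
```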
Everybody running an MR study wants to be able to project their findings onto the general population; this brings credibility and weight to any MR results. The established way of achieving this is by applying nationally representative distribution quotas, also referred to as nat rep quotas. Nat rep has become such a holy grail in our industry that it features in 93 percent of all requests for quotes and is applied as a quota stop in more than 90 percent of all projects. However, getting the nat rep validity stamp on your survey data sometimes works against truthful findings and often introduces unnecessary bias. Imagine you are looking to interview only consumers of a specific male aftershave brand. Applying nat rep quotas on a completed-interview basis would imply that this brand is used proportionately by men of all ages and in all regions of the given market. That is almost certainly untrue; if such a brand exists, its marketing people are probably the prophets of modern-era advertising, successfully reaching all ages and regions in the market. A much more accurate approach is to apply the nat rep spread to completed plus screened-out respondents. This reaches the audience in the right proportions, while brand usage and awareness are not forced at the completion stage but accurately measured. Unfortunately, only a very small portion (about three percent) of the projects analyzed here apply this approach. Of course, if one only needs the opinion of brand users, then the quota limits have to be reviewed accordingly to avoid bias.
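A small worked example may help contrast the two quota bases. In the Python sketch below, the regional shares, targets and field counts are illustrative; the point is that the quota stop applies to everyone entering the survey, so incidence per region is measured rather than imposed.

```python
# Minimal sketch contrasting the two quota bases discussed above. With
# quotas on completes only, brand usage is forced to look nat rep; with
# quotas on completes + screen-outs, the nat rep spread applies to all
# entrants and incidence falls out of the data. Numbers are illustrative.

nat_rep_share = {"North": 0.30, "South": 0.70}
total_entrants_target = 1000  # completes + screen-outs allowed in

# Quota stop applied to everyone who enters the survey, per region:
entrant_quota = {region: int(share * total_entrants_target)
                 for region, share in nat_rep_share.items()}
print(entrant_quota)  # {'North': 300, 'South': 700}

# After fieldwork, incidence is measured rather than designed in:
field_results = {"North": {"entered": 300, "qualified": 45},
                 "South": {"entered": 700, "qualified": 70}}
for region, r in field_results.items():
    print(f"{region}: measured incidence {r['qualified'] / r['entered']:.0%}")
# North: 15%, South: 10% -- a regional skew in brand usage that
# complete-based nat rep quotas would have hidden.
```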
Another very modern characteristic of our automation era is getting sample from platforms that aggregate multiple sample providers, allowing clients to purchase and launch sample quickly, at good cost levels and without having to know the providers in the specific market where fieldwork will run. Obviously, there are quite a few benefits for clients, attracting them more and more to these solutions; after all, who does not like quick and affordable access to sample in almost every country in the world? Still, a number of considerations should be taken into account. Having no contact with the actual provider means one needs to be very confident in one’s own country-specific knowledge of the market; in other words, one works under the assumption that the questionnaire is perfectly adapted to the market it is running in, whereas receiving that country-specific input from the local provider is on many occasions crucial. Another important matter is sample quality. Keeping interview costs at such levels often indicates that the source is not a panel but river sampling or randomized databases, where the respondent incentive is very small or practically non-existent. Also, router usage within these platforms means a single respondent can take more than one study in one go, bringing that person closer to the definition of a “professional panelist”. As the old saying goes, no product can be high quality, cheap and delivered quickly at the same time; something has to give.

Our statistics clearly indicate that, on average, about two of the listed problems are observed per research job analyzed. It is therefore safe to conclude that a combination of complications is much more likely than a standalone issue. The reasons behind these results are hard to pin down, but the steady growth and intensive innovation in the online MR industry over the past few years are definitely major factors. Businesses require many more insights, in more limited time frames, across many more markets and environments than before. This puts significant strain on the research workforce, causing us to take shortcuts as we struggle to keep up with innovation in the MR industry. Still, one thing is clear: we definitely live in times of change and growing opportunity.