Conducting B2B research via online panels is an increasingly attractive option. Its more efficient cost model means a project can run at roughly 30% of the price of the same project by telephone. Incentives are lower online, and you can cast a much wider net to accomplish goals far more quickly.
While this may sound great, serious questions can arise over how respondents are recruited and how to ensure those answering your surveys are in fact qualified to do so (and are who they say they are).
Online respondents are recruited on the premise that the survey-taking experience is completely anonymous. As such, they aren't recruited with a phone number and can't be validated through telephone verification. This means you need a certain level of faith in your panel company of choice, a trust cemented by a history of successful projects together and the strength of their name within the industry.
While trust is important, so is a little common sense. Regardless of past performance, there are additional steps to take across each survey as safeguards to help ensure the content of the report comes from qualified B2B online respondents.
Picking the right partner
Client relationships are carefully constructed; they need care, attention, and acknowledgement that years of hard work have taken place prior. It’s important to pick a partner that not only respects this philosophy, but also has the experience and courage to tell you the possible pitfalls, preparing you for the reality of the project at hand. This enables you to more reasonably predict the end result, using experience and asking the right clarifying questions to give everyone confidence and a platform from which to build.
With data collection, an account manager will often work on the viability/feasibility and costs for a project, but then passes it along to a project team for execution. Effective B2B research is accomplished when the account manager is tethered to the project from start to finish, and can frame expectations, ensure the team is on target, and work with the client on the fly if necessary to adjust and implement backup plans. B2B research can be nuanced and fraught with challenges that require foresight, experience, and the ability to jump in, correct, and sometimes change tack.
Pre-screening and ensuring B2B panelists are who they say they are
“You’re only as good as your last book” is a smart adage to adopt when working with panel sources.
Panels are expected to adhere to the ESOMAR/GRBN Guideline on Online Sample Quality, which sets out best practices in:
- Research participant validation, to ensure the respondent falls within the description of the research sample;
- Survey fraud prevention, to ensure the same person doesn’t try to receive more incentives by completing a survey more than once;
- Survey engagement, to ensure that the respondent is paying sufficient attention;
- Category and other types of exclusions, to ensure the sample does not include respondents who might bias the results; and
- Sampling (including sample selection, sample blending, weighting, survey routers, profiling, and screening) to provide transparency.
While these are the cornerstones of panel sampling businesses, it's important to verify that panels actually follow them, and to acknowledge that respondent profiling isn't yet as advanced as it needs to be in B2B sampling.
B2B profilers are sent out, of course, but completion rates are low, and panel companies will often steer consumer respondents they know to be employed within a general business sector into B2B studies instead.
Most proprietary panel companies have partner sources they introduce. Although vetted appropriately, new sources in the mix can increase the probability of errors, since each source differs in its ability to control the fraudulent behaviour that appears from time to time. Some of these partner sources can also skew results, with answers falling well outside expected norms or outside what the other sources show in aggregate.
To mitigate this, pre-screening becomes very important even among panel sources that have sufficient profiling for B2B respondents in place. Screening questions for the targeted respondent to go through before entering your survey are ideal for ensuring a respondent is truly qualified to participate.
About half of the incoming panel traffic fails for one reason or another, but this is still an important safeguard to put in place to ensure that those entering the survey are in fact the people you need to answer it.
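A pre-screener gate like the one described above can be sketched in a few lines. This is a minimal illustration only; the target definition (a full-time IT decision-maker) and the field names are hypothetical assumptions, not criteria from the article.

```python
# Minimal sketch of a pre-screener gate a respondent must pass before
# entering the main survey. The qualifying criteria and field names
# below are hypothetical assumptions for illustration.

def qualifies(answers):
    """Return True only if every screening criterion is met."""
    return (
        answers.get("employed_full_time") is True
        and answers.get("industry") == "information_technology"
        and answers.get("decision_role") in {"sole", "shared"}
    )

# A consumer respondent steered into B2B traffic fails the gate.
print(qualifies({"employed_full_time": True, "industry": "retail",
                 "decision_role": "none"}))   # False
print(qualifies({"employed_full_time": True,
                 "industry": "information_technology",
                 "decision_role": "shared"}))  # True
```

In practice the gate sits between the panel redirect and the first survey page, so unqualified traffic is terminated before it can contaminate the data set.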
Trust, but verify
“Trust, but verify,” is a useful way to describe how best to manage and monitor a B2B market research panel project and ensure a high-quality data set.
Given the absence of exact profiling, many panels sources need to be tethered together to accomplish ambitious goals or to look for a subsection of respondents within a certain industry.
Whether or not there has been that additional layer of pre-screening, it is critical to embed security conditions (e.g. time to complete, straight-lining) and to pepper red herring questions into the survey. (These can be monitored in your daily field disposition, with fails tied back to the panel source.) Reviewing verbatims for gibberish is another measure for discarding cases that don't meet quality criteria.
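The security conditions above can be sketched as a per-interview check. This is an illustrative sketch under assumed field names and thresholds (e.g. a speeder being anyone under a third of the median interview length); actual cut-offs would be set per project.

```python
# Hypothetical sketch of the in-survey quality checks described above:
# speeding, straight-lining, red-herring fails, and gibberish verbatims.
# Field names and thresholds are illustrative assumptions.

def quality_flags(resp, median_seconds, red_herring_answer="none_of_these"):
    """Return a list of quality-fail flags for one completed interview."""
    flags = []

    # Speeding: completing in under a third of the median interview length.
    if resp["seconds"] < median_seconds / 3:
        flags.append("speeder")

    # Straight-lining: identical answers across every row of a grid question.
    grid = resp["grid_answers"]
    if len(grid) > 1 and len(set(grid)) == 1:
        flags.append("straight_liner")

    # Red herring: a fictitious item only an inattentive respondent misses.
    if resp["red_herring"] != red_herring_answer:
        flags.append("red_herring_fail")

    # Gibberish verbatim: too short, or no vowels at all (keyboard mashing).
    text = resp["verbatim"].strip()
    if len(text) < 5 or not any(c in "aeiouAEIOU" for c in text):
        flags.append("gibberish_verbatim")

    return flags

# Example: a respondent who raced through and straight-lined a grid.
suspect = {
    "seconds": 90,
    "grid_answers": [3, 3, 3, 3, 3],
    "red_herring": "none_of_these",
    "verbatim": "sdfghj",  # no vowels, flagged as gibberish
}
print(quality_flags(suspect, median_seconds=600))
```

Running each day's completes through a check like this, and tallying the flags by panel source in the field disposition, is what makes the pass-back conversation with the panel concrete.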
When blending multiple panel sources, it is important to measure the sources against each other and focus on the "quality fails" arising from the security conditions set, the red herrings, and verbatim review to arrive at a pass-back rate percentage by panel. Additionally, you should compare responses across panels to identify blips and skews in the data. If any are present, they should be isolated, removed from the data set, and passed back to the panel for replacement at no charge. Further, after a pre-test of 10% of the quota is completed, any panel source showing a pass-back rate higher than 30 to 40% should be investigated for legitimacy. If necessary, it should be removed from the sample going forward and its completes removed from the data set.
While all these quality review metrics are important, they must be reasonable: typical pass-back rates on security fails in the industry range between 10 and 20%. (With a pre-screener employed, they tend to be much lower.) When the rate is above 20%, there is either a quality issue with the source, or the criteria are overly stringent and the project at hand may not be suited to the online methodology. It is important to investigate both possibilities.
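The per-panel review described above amounts to a small calculation over the field disposition. The sketch below is illustrative: the panel names are made up, and the 0.30 investigation threshold is an assumption drawn from the 30 to 40% range in the text, not a fixed industry standard.

```python
# Illustrative sketch of the per-panel pass-back review described above.
# Panel names and the 30% threshold are assumptions for illustration.
from collections import Counter

def pass_back_rates(completes):
    """completes: list of (panel_source, failed_quality_check) tuples."""
    totals, fails = Counter(), Counter()
    for panel, failed in completes:
        totals[panel] += 1
        if failed:
            fails[panel] += 1
    return {panel: fails[panel] / totals[panel] for panel in totals}

def sources_to_investigate(rates, threshold=0.30):
    # Sources above the threshold warrant a legitimacy review; a high
    # rate across *all* sources may instead mean the checks are too
    # stringent for an online methodology.
    return sorted(panel for panel, rate in rates.items() if rate > threshold)

# Pre-test disposition: 20 completes per panel, with 2 and 8 fails.
completes = ([("PanelA", False)] * 18 + [("PanelA", True)] * 2
             + [("PanelB", False)] * 12 + [("PanelB", True)] * 8)

rates = pass_back_rates(completes)
print(rates)                          # {'PanelA': 0.1, 'PanelB': 0.4}
print(sources_to_investigate(rates))  # ['PanelB']
```

Here PanelA sits inside the typical 10 to 20% band, while PanelB's 40% pass-back rate puts it above the investigation threshold, matching the 10%-pre-test review the article recommends.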
Human beings are creatures of comfort, and we prefer to put a lot of faith and trust in proven panel providers. While I think trust is key, it is also important to be vigilant and to employ your own reasonable security metrics each time. You also need to understand that with panel sources, issues with respondent quality can arise and fraudulent sources (e.g. bots) can break through. With these extra steps and an experienced partner, you're able to avoid issues and ensure that your report is based purely on respondents who belong.
John Wulff started at Logit Group as its first salesperson in 2008 and has a 30-year career focused on B2B/B2C online, telephone, and onsite data collection. He has held senior positions representing some of the largest and best quantitative phone and panel companies with operations based in North/Central America, Europe, and Asia.
John’s areas of expertise within B2B/B2C data collection are focused on financial, automotive, health-care, entertainment, and information technology segments. In addition to data collection business development efforts for Logit, he leads business development for Logit Group’s technology company—QFI Solutions, a survey software programming/reporting platform.