Read time: 12 mins
The recent $10 million market research fraud case sent shockwaves through our industry as companies look to data to drive decisions. This wasn't some offshore fly-by-night operation; this was a North American company with a pedigree based on transparency. At IIEX North America, four industry leaders took the stage to tackle this pressing issue head-on in a panel titled "Rebuilding Trust: Rethinking Recruitment and Data Validation in Market Research." The candid conversation featured the following thought leaders:
Steve Male, EVP, Innovation, The Logit Group
Tia Maurer, Group Scientist, Procter & Gamble
Chuck Miller, President, DM2
Sam Pisani, Managing Partner, The Logit Group

Survey Fraud & Data Quality: Defense or Offense?
Question: Are stronger data quality tools the solution, or is better respondent recruitment the real answer to the $10M survey fraud scandal that has shaken the industry?
This defense-versus-offense dichotomy set the stage for a thought-provoking exchange. Steve Male opened the discussion by emphasizing the need to play both offense and defense when combating fraud. "It's great that we're actually having these conversations now, we're being more transparent than ever," he noted. While acknowledging the industry's progress in developing tools to validate and cleanse survey data, he highlighted a critical gap: "One of the areas that we haven't really talked a lot about is how we do a better job of recruiting and validating respondents at the source."
While Tia Maurer agreed, she took the discussion a step further, stating bluntly, "The ecosystem is broken." Traditional verification methods have deteriorated, from physical mail verification to simple email confirmations, and this has allowed fraudsters to easily create multiple personas. Her stark reality check resonated when she asked the audience if they knew anyone who sits around taking surveys all day; no hands were raised. Yet multiple sources within the industry confirm that professional survey takers are doing just that. What was once a line of defense needs to be reconsidered, and that requires some offensive plays.
It was agreed that multiple tools are needed to examine different aspects of data quality, from duplications to bot detection and everything in between. Chuck Miller emphasized that demanding transparency from your providers about what they are doing to combat fraud is not enough. Instead, he insisted that we all have to "do our own homework" and collectively raise the bar on the standards of both offensive and defensive approaches.
Random Digital Recruitment: Opportunity or Risk?
Question: Some are considering forms of random digital recruitment (river sampling) as an option. What are the benefits and drawbacks compared to traditional panel assets?
As the industry wrestles with panel quality concerns, many researchers have turned their attention to random digital recruitment, commonly known as river sampling, as a potential solution. This approach, which sources respondents from across the internet rather than from established panels, initially promised fresher perspectives from people less conditioned by the research process. The question posed to our panelists: Could this method offer benefits that outweigh its potential drawbacks when compared to traditional panel assets?
Drawing on his extensive experience with river sampling during his time at AOL, Chuck offered a balanced perspective. "When we started sourcing from all over the Internet, we drew a correlation that it's just like our RDD (Random Digit Dialing)," he explained. "You're going out, you're touching people, you're bringing them in." This approach can yield respondents who aren't as invested in the research process, potentially providing fresher, less rehearsed opinions.
However, Chuck cautioned about practical challenges that come with river sampling. "The economics can be challenging," he noted, and researchers need to be mindful of how they implement quality checks. "You need to look out for things in terms of your red herrings and your trap questions," he warned, explaining that inexperienced respondents may be confused by validation questions that seem out of context, leading to higher drop-off rates.
Tia challenged the notion that river sampling inherently produces better quality more directly, backing her argument with compelling data from her own research. She described a simple experiment comparing river samples to validated panels: "I sent one question out to a river sample source and to verified panelists." The question asked respondents which fast food chains they had heard of, including major brands and a fictional one. The results were striking: while the validated panel showed 95-97% awareness of major chains like McDonald's and Burger King, the river sample reported only 45% awareness. Even more telling, 33% of river sample respondents claimed they had heard of none of the brands, compared to less than 1% in the validated panel. The fictional brand was recognized by 34% of river respondents versus just 11% in the validated panel.
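A fictional-brand trap of the kind Tia describes is straightforward to score. The sketch below is purely illustrative and not the panel's actual analysis; the brand names, the fictional "Burger Barn" trap, and the flag rules are all assumptions for demonstration.

```python
# Illustrative sketch of a fictional-brand trap check. Each respondent's
# answer is modeled as the set of brand names they claim to recognize.
REAL_BRANDS = {"McDonald's", "Burger King", "Wendy's"}  # assumed brand list
FAKE_BRAND = "Burger Barn"  # hypothetical fictional brand used as the trap

def flag_respondent(selected: set[str]) -> list[str]:
    """Return quality flags for one respondent's brand-awareness answer."""
    flags = []
    if FAKE_BRAND in selected:
        flags.append("claimed fictional brand")   # likely inattentive or bot
    if not selected & REAL_BRANDS:
        flags.append("no major-brand awareness")  # implausible for most samples
    return flags

def awareness_rate(responses: list[set[str]], brand: str) -> float:
    """Share of respondents who claim to recognize a given brand."""
    return sum(brand in r for r in responses) / len(responses)
```

Comparing `awareness_rate` for real versus fictional brands across two sources is essentially the experiment Tia ran: a large gap in real-brand awareness, or a high rate of fictional-brand recognition, signals a low-quality source.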

Tia Maurer
Group Scientist, Procter & Gamble
"I'm not sure river is the answer," Tia concluded. "All rivers are not created equal. It really depends on your sourcing."
The discussion highlighted an important nuance in the debate: the quality of any sample, whether panel or river, ultimately depends on how it's sourced, validated, and maintained. River sampling might offer theoretical benefits in reaching fresh respondents, but without proper verification processes, it can introduce as many quality concerns as it aims to solve.
Beyond Double Opt-In: Modern Verification Approaches
Question: What role should respondent verification play in recruitment? What should a modern, effective verification process look like?
The panel challenged the notion that traditional double opt-in methods are sufficient for verification in today's environment. Steve highlighted how technology offers new verification possibilities: "We live in an era today where we have access to more information than ever before...almost every single data site you can imagine is available now via API." He advocated for leveraging these resources to validate respondents against LinkedIn profiles, verified mobile numbers, and identification data, rather than relying solely on self-verification. While noting some panel companies are adopting this approach, he emphasized "it's still too few that are doing that right now."
Despite the higher cost, Tia went further, revealing she now works with agencies that physically validate respondents in person for her most critical research. The panel agreed that with today's technology, we need to raise the bar on what "verified" actually means.
Data Quality Tools: What Actually Works?
Question: Of the many available tools, which ones actually move the needle? From your experience, what approaches have been most successful?
When discussing effective fraud detection tools, Tia referenced research from Case for Quality that tested four major fraud detection systems. The study included SurveyEngine, Imperium, Clean ID from OpinionRoute, and Research Defender. After hiring security firms to simulate fraud, they found these tools successfully caught the most egregious cases (like people hiding on the dark web or using VPNs) but still missed significant issues. Even with these tools, she reported removing 24% of respondents during data cleaning. Tia acknowledged that vendors are constantly improving their tools but cautioned: "Any technology that you employ, the bad actors have that same technology and potentially can be one step ahead of us."
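The layered approach the panel keeps returning to can be sketched as a pipeline that applies several independent checks and removes any respondent who trips one. This is a minimal illustration under assumed checks and thresholds, not how any of the named vendors' tools actually work.

```python
# Hedged sketch of layered data cleaning: duplicate detection, speeder
# detection, and trap-question failures combined into one pass.
from dataclasses import dataclass

@dataclass
class Respondent:
    id: str
    duration_sec: int   # time taken to complete the survey
    ip: str
    failed_trap: bool   # flagged by a trap / red-herring question

def quality_flags(r: Respondent, seen_ips: set[str], median_sec: int) -> list[str]:
    """Apply each layer independently and collect any failures."""
    flags = []
    if r.ip in seen_ips:
        flags.append("duplicate IP")      # possible multiple personas
    if r.duration_sec < median_sec / 3:   # assumed speeder threshold
        flags.append("speeder")
    if r.failed_trap:
        flags.append("failed trap question")
    return flags

def clean(sample: list[Respondent], median_sec: int):
    """Split a sample into kept and removed respondents, with reasons."""
    seen: set[str] = set()
    kept, removed = [], []
    for r in sample:
        flags = quality_flags(r, seen, median_sec)
        seen.add(r.ip)
        (removed if flags else kept).append((r, flags))
    return kept, removed
```

The point of the layering is that each check catches fraud the others miss; the 24% removal rate Tia cites is what falls out after all layers run, not any single one.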
Some tools look at data quality from only one technical point of view, but according to Sam Pisani, "no one thing will solve the issue. Clients need a holistic approach to fighting fraud and incorporating sound methodology into every project." Case in point: Calibr8 was developed because The Logit Group understood that many factors contribute to data quality. The team created an eight-layer approach to detect problems and proactively ensure that every insight gathered is authentic, reliable, and actionable.

Sam Pisani
Managing Partner, The Logit Group
"This is how we're redefining what it means to be a true research execution partner."
Breaking the Cycle: Paying for Quality
Question: Buyers say they're willing to pay for better respondents, but how do we break the cycle of demanding quality without investing in it?
Perhaps the most fundamental challenge facing the industry is the disconnect between expectations and investment. While buyers consistently express a desire for high-quality data, their purchasing behaviors often prioritize cost savings above all else. The panel tackled this thorny issue head-on with ideas about how to break the cycle of demanding quality without being willing to pay for it.
Steve addressed the industry's pricing dilemma with the age-old adage, "When it comes to price, speed, and quality, you can have two of the three, but not all three at the same time." This trade-off is inescapable, yet the industry continues to act as though all three are simultaneously achievable. "I think we're at a point now where we're not only asking for all three, but demanding all three, and something's got to give and break," Steve explained. He highlighted the fundamental disconnect in the market: when clients expect quality responses from a 25-minute survey that pays respondents just $0.50, they're devaluing their time and expertise. "What is their time worth, and then what's the expectation for the quality of data you're getting at that price?"
The panelists agreed that responsibility for this race to the bottom is shared across the ecosystem. Tia offered a powerful analogy that resonated with everyone: "If you go to a dealer and they're selling a Honda or Toyota, you assume you're getting a good car and you're not buying a lemon. And if you buy a lemon, you don't go back to that dealership." The problem, she explained, is that market research doesn't operate with the same transparency about quality. "You're sending out your consumer research proposal to two different places. It has the same exact thing on it. One place tells you it's going to be $6 per complete, and the other one says $3. You think it's apples to apples because they both got the same proposal or statement of work, and you go with the cheaper price."
This dynamic has created a market where cutting corners becomes necessary for survival. "All of these companies have chased the cheaper price, and we've done a race to the bottom, unfortunately," Tia continued. "So the people who are providing the better car are going to run out of business, or they're having to adapt and go to the cheaper avenue of supplying people, and then we get lower quality sample." The panel noted that recent high-profile fraud cases might finally shift this dynamic.

Chuck Miller
President, DM2
"Sometimes it takes something like what happened a few weeks ago to move the needle, to make us have these conversations and push forward."
The path forward, according to the panelists, requires education and transparency about the true cost of quality. Clients need to understand that expecting top-tier results from bottom-tier investments is unrealistic. At the same time, providers must be more transparent about their methodologies and the specific measures they take to ensure quality. The panelists agreed that breaking this cycle requires a holistic approach: education about the real costs of quality research, greater transparency across the supply chain, and a willingness from clients to invest appropriately in methodologies that deliver reliable results.

Steve Male
EVP, Innovation, The Logit Group
"Having these difficult conversations is the first step toward meaningful change, and recent events have created an opportunity to reset expectations across the industry."
The Future of Recruitment and Data Quality
Question: What trends do you foresee in recruitment and quality control? How is your organization preparing for the next 3-5 years?
Looking ahead 3-5 years, Tia predicted a move toward agencies that validate their participants, stating she's already shifted to this approach for her research. She acknowledged higher costs but noted the trade-off in confidence and usable data.
Chuck, who has worked with online samples since their inception, acknowledged we're in "a particularly bad period right now" but cautioned against abandoning primary research. He noted that alternatives like synthetic data aren't ready to replace traditional research and advocated for continued diligence in applying layered defense approaches.
Steve concluded with an optimistic note, suggesting the industry needs to "go on the offense and play less defense" with more third-party validation tools, while also diversifying recruitment sources beyond traditional panels.
Moving Forward Together
The panel's frank discussion highlighted a critical inflection point for our industry. The recent fraud case isn't just a scandal; it's a wake-up call to rethink how we approach data quality fundamentally. What emerged is that no single solution will fix the problem. We need better recruitment strategies, more sophisticated verification methods, layered fraud detection tools, and perhaps most importantly, a willingness to invest in quality rather than simply demanding it.

Download Frauds, Bots and Mixed-Up Thoughts
Did you know that Logit's Sam Pisani and Steve Male wrote a market research children's book? Download your free copy, a fun lesson in data quality for all ages!
The path forward requires collaboration across all stakeholders. By challenging outdated assumptions about verification, expanding our toolkit for quality assurance, and breaking the cycle of prioritizing cost over reliability, we can rebuild the trust that forms the foundation of meaningful research. The overarching advice was clear: there are no shortcuts to quality. Investing now pays off on both horizons. In the short term, it recovers the time Tia described spending to clean 24% of the data out of a study; in the long term, it avoids the much steeper price of decisions based on compromised data.