Case Study
Conversational AI Survey
By Steve Male
Logit fielded two parallel "research on research" surveys in January 2023, one using a standard online survey platform and the other using inca, a conversational survey platform.
The Challenge
Both surveys drew samples of Canadians and Americans that were nationally representative in terms of age, gender, province/state, and ethnicity.
The Details
- Duration: 30-40 mins
- Incidence rate: 20%-30%
- N = 100,500 (67 markets, N = 1,500 each)
- LOI: 10 mins
The Methodology
The standard survey screened participants with an open-ended question, judging them on the quality of their responses. The inca survey used its built-in quality check, which draws on a combination of open-ended (OE) and closed-ended (CE) responses.
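To make the quality-check idea concrete, here is a toy heuristic in Python. It is not inca's built-in check or Logit's screener; it is an assumed, illustrative sketch that scores an open-ended response on length and lexical variety, loosely mirroring the 1-5 rating scale referenced in the results below. The thresholds are arbitrary choices for demonstration.

```python
import re

def rate_open_end(response: str) -> int:
    """Return a rough 1-5 quality score for an open-ended response (illustrative only)."""
    # Tokenize into simple word-like units
    words = re.findall(r"[A-Za-z']+", response.lower())

    if not words:
        return 1  # empty or non-verbal input

    word_count = len(words)
    unique_ratio = len(set(words)) / word_count  # lexical variety

    if word_count < 3 or unique_ratio < 0.3:
        return 2  # very short or highly repetitive answer
    if word_count < 10:
        return 3  # minimal detail
    if word_count < 25:
        return 4  # reasonably detailed
    return 5      # long, varied response (rough proxy for rich detail)

# Example: screen out respondents who score below 3 on the screener question
print(rate_open_end("idk"))  # -> 2
print(rate_open_end("I switched brands because the old one kept breaking "
                    "and customer support never answered my emails."))  # -> 4
```

In practice a screen like this would sit alongside other signals (speeding, straight-lining, duplicate text), but even a simple length-and-variety rule shows why open-ended answers are a useful quality filter.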
The Results
- inca survey participants spent more time completing the survey (an LOI of 11.8 minutes vs. 9.3) and were more likely to feel the time was shorter than the actual duration, suggesting better engagement.
- inca survey participants were more likely to say the experience was a lot better or better than other surveys they had done (78% vs. 60%).
- Those who preferred the inca survey most commonly said it felt more interactive and engaging.
- inca survey participants gave longer responses (16 vs. 11).
- 65% of inca survey responses were rated a 5 out of 5 (excellent, with rich stories, emotions, or examples) compared to 13% of standard survey responses.
- inca survey participants expressed a more complex and varied set of emotions in their open-ended verbatims than standard survey participants.
The study ultimately found that the conversational AI survey is more engaging and leads to more considered responses and richer insights.

Steve Male
VP, Innovation & Strategic Partnerships
Steve serves as Executive VP of Innovation & Strategic Partnerships at The Logit Group. He has over 12 years of market research experience and currently focuses on AI development, cutting-edge technology, and API-driven tools to enhance sample feasibility and data quality.