
Getting the Best Out of Your Customer Satisfaction Program

In the last decade, there have been significant changes to how researchers define “customer satisfaction,” as well as how they use this metric.

Also known as CSAT, customer satisfaction measurement has evolved over time, largely spurred on by technology. It has moved from point-in-time to real-time, from anonymous to linked, and from brick-and-mortar to multi-channel. Throughout these changes, the basics behind a customer satisfaction program have remained essential—gather data to help a client turn opinions into actionable learnings and insight.

At Logit, we collect data in different ways, depending on the client’s customer database or research requirements. We offer the capabilities to execute different methodologies to reach different customer audiences, including phone interviews, online surveys, onsite interviews, and mail surveys.

When considering customer satisfaction surveys, you first have to think about the customer journey and put yourself in their shoes. For example, how would you like to receive a survey? When would you like to complete a survey?


The telephone interview has always been an in-demand service for clients who have contact lists. Nevertheless, we see declining participation rates: fewer people want to take part than a number of years ago.

On the other hand, online surveys are rapidly gaining momentum. Depending on the survey length, this methodology can be relatively quick and error-free for the client and the participants. It’s similar when thinking about onsite interviews—we always recommend the survey length be no longer than five minutes. These types of interviews are great for clients who may not have a customer list, or for clients who want to understand the opinions of consumers who may not actually purchase a product from the store.

Depending on your data collection instrument (e.g. phone or onsite), it is always important to think carefully about the identity of the client and the values of the brand. At Logit Group, we ensure all interviewers are trained to represent the brand well.

Making it work
Once you have decided on your methodology, you still need to ensure that it is actionable for your customer satisfaction program.

Connect the dots
Consumer responses and their data must be connected to the specific transaction, if one was made. This means each function of the business can receive specific feedback.
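As a minimal sketch of what "connecting the dots" can mean in practice (all field names here are illustrative, not from any particular system), each survey response carrying a transaction ID can be joined to the transaction record, so feedback routes to the right part of the business:

```python
# Sketch: join survey responses to transactions by a shared ID, so each
# business function can receive feedback tied to a real purchase.
# All field names (transaction_id, store, rating) are illustrative.

transactions = {
    "T1001": {"store": "Downtown", "channel": "in-store"},
    "T1002": {"store": "Online",   "channel": "web"},
}

responses = [
    {"transaction_id": "T1001", "rating": 4, "comment": "Fast checkout"},
    {"transaction_id": "T1002", "rating": 2, "comment": "Late delivery"},
]

# Attach transaction context to each response where a match exists.
linked = [
    {**r, **transactions[r["transaction_id"]]}
    for r in responses
    if r["transaction_id"] in transactions
]

for row in linked:
    print(row["store"], row["rating"], row["comment"])
```

With the join in place, low ratings can be filtered by store or channel and forwarded to the team responsible.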

Ask yourself: Are you being clear?
When I look at reports, I always think: “Is this data actionable, and is it written in language that is easily understood?”

Data and reporting should be clear and simple to understand. Many clients actually provide real-time shared customer experience information to their internal staff because experiences can change from day to day, month to month, or season to season.

Pause for reflection
A customer satisfaction program should not be left alone for years but reviewed every six to 12 months to ensure it is generating ROI and actionability across the entire organization. You need to ask your internal stakeholders what they think of the tools and the dashboards offered. Their feedback allows you to make effective changes to your approach, making certain it is always relevant to the current state of business.

Things to think about…
Almost all organizations have a customer satisfaction program. From my experience, no two are the same and the ideal approach will be unique to each company and its stakeholders, both internal and external.

Once the CSAT program is in place and the data is being used to help evolve your products or services, you still need to ensure your customers understand what is being changed and why. Again: never forget the customer journey. Customers are taking time out of their day to help you, so if you have altered something because of their feedback, you need to not only tell them what actions have been taken because of their opinions and close the loop, but also thank them for their participation and feedback.

 


About Oscar

Oscar Fernandes serves as the VP of Sales & Client Services at Logit. For over 25 years he has helped his clients execute successful CSAT programs, both online and over the phone.

Top Four Tips for Boosting Sampling Response Rates

It might sound obvious, but your sample is the most important part of your market research project.

Too often, it seems like the survey participants’ experiences and opinions of market research are somewhat overlooked. However, our industry relies heavily on individuals giving up their own time and effort to respond to long questionnaires. If they don’t enjoy the experience or gain any benefit, then why should they bother participating?

Businesses rely on customer data to guide their decision making and provide a sense of direction when making a change in terms of a product enhancement, service overview, or even a new product range. Therefore, reduced response rates ultimately mean less insight or fewer data-driven outcomes.

How can you help your participants enjoy the experience of giving you feedback?

 

1. Treat people the way you would want to be treated

It is important to ensure your research invitations and reminders clearly outline what you are asking. This may include information on why you are conducting the research, incentives on offer (e.g. gift cards), and an explanation why their feedback will be so valuable.

You should try to personalize communication to each individual as far as possible with the resources you have available. For example, most email marketing tools allow you to customize how you address emails to individuals rather than sending impersonal form letters.

Far too often, researchers leave participant communication to the bottom of their list of priorities. I think this is totally wrong. Ask yourself whether you would complete a particular survey if you yourself received the email you’re about to send.


2. Go mobile

So many people in the industry mention the use of mobile surveys that it must get boring to always read about it! Still, the reason we all say it so much is because we still continually find surveys that have not been mobile-optimized and are not responsive to being answered on a phone or tablet. It can be challenging to get participants to complete a survey while they are watching TV, and an even bigger task to convince them to answer your questions when they are hard to read on a cellphone screen.

We know a high proportion of individuals are “second-screen watchers,” which means they may be watching TV while also texting on their phone. By making a survey mobile-optimized, you increase the likelihood of someone completing it as a second-screen experience instead of never bothering to take part.


3. Never be boring

Can you remember the last time you wanted to complete a survey that consisted of 40 questions? I can’t… and I am sure your participants feel the same way.

Neither researcher nor participant benefits from excessively lengthy and tedious questioning in either qual or quant research. When survey participants are bored, they are more likely to flip through the survey questions, rush and give false answers just to complete it. Having a seemingly endless list of questions also increases the likelihood of dropouts throughout the survey, negatively affecting your representative sample.

You should be developing short and lean surveys that take participants less than five minutes to complete. This can give you the essential information you require while also increasing the likelihood of a large sample size because of the short length.


4. Don’t sit on your feedback

After completing a quantitative survey that has a sample size of 1,000+, the worst thing you could do is just ignore all that feedback and not act on any of the new intelligence.

Participants want to feel valued—not just from a gift or reward point of view, but also emotionally. They want to know whether or not their feedback has truly helped, and they really want to see what you, as a brand, will do with the insight and opinions they shared. Offering participants feedback allows them to see the true value of completing a survey or a piece of research for you. It means they will be far more likely to take five or 10 minutes of their own time to complete something for you again.


Conclusion

By making surveys short, sharp and to the point, you give participants less work to do and your business still gains valuable data and information. The four tips outlined in this article are only a handful of ways to boost response rates. However, implementing even one of these suggestions will help improve the research experience for your participants. Happy and rewarded participants mean quality data outcomes for you that can lead to data-driven decision making.


About Jake

Jake Pryszlak, commonly known as the Research Geek, is a 3-time award-winning market researcher, blogger and speaker. He’s a current Forbes columnist who is active across a plethora of social media channels. His aim is to share his market research knowledge with others in the industry. You can find his blog and social media channels here.

5 Top Ways To Build An Effective Online Research Sample!

So you have googled different research methodologies available and have decided that online research is your chosen golden nugget. In particular, you are interested in using an online sample because you wish to ask the same or similar individuals questions about your business and products.

Online sampling can be a crazy world and very difficult to start if you don’t have a set process to follow. This is why I have put together my top 5 things you should look out for when creating or using an online sample!

My top 5 tips are for those who want to build a unique online market research sample, because like I said, it can be a tricky task to even start. I’m hoping these 5 takeaways will help you break down the process so it’s much more manageable.

  1. What Do You Want to Understand?

The first question you must ask yourself is: what would you like to understand from your research? You need to first define your research objectives. Your objectives will affect what research sample you wish to create and promote, especially if you’re focusing questions on a set persona or type of individual.

For example, if you are looking to change some of your core products or add to your existing product range, you will need to understand which groups this will affect and how you want them to be represented in your research. Customers, potential customers, mar-comm audiences and stakeholders all need to be represented in a way that reflects their opinions.

On the other hand, let’s say you have a marketing campaign running on Facebook and other social media platforms; then you would actually want to understand the opinions and thoughts of that specific target audience.


  2. Leverage Existing Networks

Recruiting participants for research can become very expensive, especially when taking into account the size of your sample. Yet the best place to start for any size or scale of research is your own networks, whether that is LinkedIn, other social media, your own email lists, customer databases, or any other existing connections you have built. People on these lists will be the most valuable to you, which means they will also have an opinion.

Current customer opinions are crucial, and in some ways more relevant than a panel, because these respondents are familiar with your brand.

They will be motivated by a desire to improve the brand and experience, rather than the financial incentive.


  3. Get Yourself on Social!

There has been a lot of buzz recently about social market research with the likes of Brandwatch and SproutSocial dominating the space. Social media listening tools have driven this discussion and market researchers have been quick to adopt such processes. Whilst it would be difficult to use social media from a sample perspective, it is still important to think about social in its broadest sense.

Social media can complement market research through the entire process, from introducing community-based elements during the project, to driving participant recruitment. A subset of snowball sampling methodologies, social media recruitment leverages the personal connections of individuals to reach a wider potential audience. By combining this with your organization’s own networks, it is possible to build a large (and representative) sample in a short space of time.

Then you can think about engaging social media influencers in your area of work to help generate interest and spark a conversation about your new sample. With social media, you can grow your sample size as well as understand what your target audience is actually talking about online, which will help when creating topics, tasks and surveys for your participants to answer during your sample journey.


  4. Screen Participants

When you promote a sample on social media, the main danger is sample quality. There are many pitfalls to look out for, including speeders, professional research participants, no-shows and more. The best way to eliminate these is an invitation questionnaire that establishes who the individual is and whether they are a good fit for the sample you need.

This serves two purposes. The first is to ensure that your sample fits the profile you are looking for, as there is no point in sending questions to a group of individuals who may not even know what you are talking about. The second is to drop participants who would reduce your data quality. Speeders are participants who complete research tasks as quickly as possible and do the bare minimum; their responses are not always reflective of their own thoughts, often writing the first thing that comes to mind.

The easiest way to catch a speeder is to include a question that instructs them to select a particular answer. Speeders are unlikely to even read the question before making a choice.
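A minimal sketch of this kind of quality screen (the threshold, field names, and trap answer here are all illustrative assumptions): drop anyone who fails the trap question or finishes implausibly fast.

```python
# Sketch: flag speeders using a trap ("red herring") question and a
# minimum completion time. Thresholds and field names are illustrative.

MIN_SECONDS = 120                  # assumed floor for a genuine completion
TRAP_EXPECTED = "Strongly agree"   # the answer the trap question instructs

def is_speeder(respondent):
    """Return True if the respondent should be dropped from the sample."""
    too_fast = respondent["duration_seconds"] < MIN_SECONDS
    failed_trap = respondent["trap_answer"] != TRAP_EXPECTED
    return too_fast or failed_trap

sample = [
    {"id": 1, "duration_seconds": 310, "trap_answer": "Strongly agree"},
    {"id": 2, "duration_seconds": 45,  "trap_answer": "Strongly agree"},
    {"id": 3, "duration_seconds": 280, "trap_answer": "Neutral"},
]

clean = [r for r in sample if not is_speeder(r)]
print([r["id"] for r in clean])  # only respondent 1 survives both checks
```

In a real screener you would tune the time floor to your survey length rather than hard-coding it.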


  5. Manage Your Lists and Participants

At the end of the day, your list is your participants and potential customers, so treat them how you would like to be treated. Over time, some participants will drop out; it is only natural. As more and more drop out, this can have a negative impact on your research results and sampling quality. To ensure your research doesn’t suffer, you should regularly monitor active and inactive participants, as well as those on the verge of leaving. The latter could be sent new information, or you could seek to understand how to keep them from leaving.

 

So by following these quick 5 steps you will be on your way to creating a high-quality panel of research participants. There are obviously pros and cons to using samples; however, by controlling each of these processes, it is possible to create a high-quality sample that will help your business in the short and long term.



Trust, But Verify: Using Online Panels for B2B Research

Conducting B2B research via online panels is an increasingly attractive option. Its more efficient cost model translates to roughly 30% of the price of running the same project by telephone. Incentives are lower online and you’re able to cast a much wider net to accomplish goals far more quickly.

While this may sound great, serious questions can arise over how respondents are recruited… and how to ensure those answering your surveys are in fact qualified to do so (and are who they say they are).

Respondents are recruited with the promise that the survey-taking experience is completely anonymous. As such, they aren’t recruited with a phone number and can’t be validated by telephone verification. This means you need a certain level of faith in your panel company of choice, and that trust needs to be cemented by a history of successful projects together and the power of their name within the industry.

While trust is important, so is a little common sense. Regardless of past performance, there are additional steps to take across each survey as safeguards to help ensure the content of the report comes from qualified B2B online respondents.


Picking the right partner

Client relationships are carefully constructed; they need care, attention, and acknowledgement that years of hard work have taken place prior. It’s important to pick a partner that not only respects this philosophy, but also has the experience and courage to tell you the possible pitfalls, preparing you for the reality of the project at hand. This enables you to more reasonably predict the end result, using experience and asking the right clarifying questions to give everyone confidence and a platform from which to build.

With data collection, an account manager will often work on the viability/feasibility and costs for a project, but then pass it along to a project team for execution. Effective B2B research is accomplished when the account manager is tethered to the project from start to finish, and can frame expectations, ensure the team is on target, and work with the client on the fly if necessary to adjust and implement backup plans. B2B research can be nuanced and fraught with challenges that require foresight, experience, and the ability to jump in, correct, and sometimes change tack.

 


Pre-screening and ensuring B2B panelists are who they say

 “You’re only as good as your last book” is a smart adage to adopt when working with panel sources.

Panels are expected to adhere to the ESOMAR/GRBN Guideline on Online Sample Quality, which sets out best practices in:

  • Research participant validation, to ensure the respondent falls within the description of the research sample;
  • Survey fraud prevention, to ensure the same person doesn’t try to receive more incentives by completing a survey more than once;
  • Survey engagement, to ensure that the respondent is paying sufficient attention;
  • Category and other types of exclusions, to ensure the sample does not include respondents who might bias the results; and
  • Sampling (including sample selection, sample blending, weighting, survey routers, profiling, and screening) to provide transparency.

While these are the cornerstones of panel sampling businesses, it’s important to verify that panels actually follow these practices, and to acknowledge that respondent profiling isn’t as advanced as it needs to be in B2B sampling.

B2B profilers are sent out, of course, but completion rates are low, and panel companies will often route consumer respondents they know to be employed within a general business sector into B2B surveys.

Most proprietary panel companies have partner sources they introduce. Although vetted appropriately, new sources in the mix can increase the probability of errors, depending on each source’s ability to control the fraudulent behaviour that appears from time to time. Some partner sources can also skew results, with answers falling well outside the expected norms or what other sources in aggregate are showing.

To mitigate this, pre-screening becomes very important even among panel sources that have sufficient profiling for B2B respondents in place. Screening questions for the targeted respondent to go through before entering your survey are ideal for ensuring a respondent is truly qualified to participate.

About half of the incoming panel traffic fails for some reason or another, but this is still an important piece to put in place to ensure that those entering the survey are in fact who you need to answer the survey.

Trust, but verify

“Trust, but verify,” is a useful way to describe how best to manage and monitor a B2B market research panel project and ensure a high-quality data set.

Given the absence of exact profiling, many panel sources need to be tethered together to accomplish ambitious goals or to look for a subsection of respondents within a certain industry.

Whether or not there has been that additional layer of pre-screening, it is critical to embed security conditions (e.g. time to complete, straight-lining) and pepper red herring questions into the survey. (These can be monitored in your daily field disposition, with fails tied to the panel source.) Reviewing verbatims for gibberish is another measure for discarding cases that don’t meet quality criteria.

When blending multiple panel sources, it is important to measure the sources against each other and focus on the “quality fails” that arise from the security conditions set, the red herrings, and verbatim review to arrive at a pass-back rate percentage by panel. Additionally, you should compare responses across panels to identify blips and skews in the data. If any are present, they should be isolated and removed from the data set, and passed back to the panel for replacement at no charge. Further, after a pre-test of 10% of the quota is completed, any panel source showing pass-back rates higher than 30 to 40% should be investigated for legitimacy. If necessary, such sources should be removed from the sampling going forward and their cases removed from the data set.
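As a rough sketch of this arithmetic (the panel names and counts are invented for illustration), the pass-back rate per source is simply quality fails divided by completes, compared against the pre-test investigation threshold described above:

```python
# Sketch: compute the pass-back rate per panel source and flag sources
# exceeding an investigation threshold. All numbers are illustrative.

INVESTIGATE_ABOVE = 0.30   # per the 30-40% pre-test guideline

# Completes and quality fails (security conditions, red herrings,
# gibberish verbatims) tallied by panel source from field disposition.
field_disposition = {
    "PanelA": {"completes": 200, "quality_fails": 22},   # 11% - typical
    "PanelB": {"completes": 150, "quality_fails": 63},   # 42% - suspect
}

def pass_back_rate(stats):
    """Quality fails as a fraction of completed interviews."""
    return stats["quality_fails"] / stats["completes"]

flagged = [
    source for source, stats in field_disposition.items()
    if pass_back_rate(stats) > INVESTIGATE_ABOVE
]
print(flagged)
```

Here PanelA sits inside the typical 10 to 20% band, while PanelB’s rate would trigger a legitimacy check.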

While all these quality review metrics are important, they must be reasonable—typical pass-back rates on security fails in the industry range between 10 and 20%. (With a pre-screener employed, it tends to be much lower.) When it is above 20%, there is either a quality issue with the source or the conditions are overly stringent and the project at hand may not be appropriate for the online methodology. It is important to investigate both possibilities.

Conclusion

Human beings are creatures of comfort, and we prefer to put a lot of faith and trust in proven panel providers. While I think trust is key, it is also important to be vigilant and to employ your own reasonable security metrics each time. You also need to understand that with panel sources, issues with respondent quality can arise and fraudulent sources (e.g. bots) can break through. With these extra steps and an experienced partner, you’re able to avoid issues and ensure that your report is based purely on respondents that belong.


John Wulff started at Logit Group as its first salesperson in 2008 and has a 30-year career focused on B2B/B2C online, telephone, and onsite data collection. He has held senior positions representing some of the largest and best quantitative phone and panel companies with operations based in North/Central America, Europe, and Asia.

John’s areas of expertise within B2B/B2C data collection are focused on financial, automotive, health-care, entertainment, and information technology segments. In addition to data collection business development efforts for Logit, he leads business development for Logit Group’s technology company—QFI Solutions, a survey software programming/reporting platform.

Please Welcome Brendan Sammon – Vice President of Client Development

Toronto, ON, July 31, 2019

Brendan Sammon joins The Logit Group Inc.

The Logit Group, one of North America’s largest independently owned market research execution and data collection firms, is proud to welcome Brendan Sammon, who has joined the team as Vice President of Client Development.

In his new role, Brendan will be responsible for increasing Logit’s client base in the US marketplace as well as overseeing new and strategic sales and growth initiatives.

“I am extremely excited about Brendan joining our Logit team,” said Managing Partner Sam Pisani. “Brendan’s research acumen and partnership-based approach to client relationships fit in perfectly with our company’s philosophy. Brendan’s confidence in assessing and employing a wide range of methodologies to help execute each client’s research objectives will further bolster Logit’s ability to help our clients succeed. We look forward to Brendan being an integral part of our continued US expansion.”

With over 30 years of MR and data collection experience, including notable stints at both SHC Universal and Olson Research, Brendan has become a well-respected and invaluable asset to his co-workers, clients and the MR industry overall.

Headquartered in Toronto, with offices across North America, Logit is widely regarded as one of the top innovators in research execution services, combining a highly experienced team with a unique mix of innovative and proven approaches to solve complex data collection needs for their clients.

Do Try This at Home: Understanding the Benefits of IHUT

For more than 25 years, Logit Group has worked in quantitative and qualitative research, but our areas of expertise go beyond traditional online sampling to areas like in-home usage testing (IHUT). If you’re not familiar with this method, it’s a really cost-effective way to test your product with real people before moving forward with a full-scale product launch.

Testing The Market Before You Hit The Market
There are risks in creating a brand-new product without testing anything. When using IHUT, you have the opportunity to ship products to participants to use at home before you hit the traditional markets. Their feedback is gathered using various means such as follow-up telephone or online surveys, or even in-person interviews. This way, individuals are fully engaged in the whole process from start to finish. Since IHUT relies on a real-life environment rather than a controlled market research scenario, it is more likely to result in actual outcomes on product satisfaction, usage, and potential improvement areas.


How Logit Conducts IHUT

When we do IHUT work with a client, we recruit respondents via an online panel that fits a particular desired market segment—the target audience. After respondent selection, the products are then sent to the participants. At different times during the usage period, participants receive an invitation to fill out an online questionnaire or a different type of feedback tool.

One of the surveys may capture the participants’ first impressions and experiences with the product after the first week. A final questionnaire can help determine the experiences and satisfaction with the product in detail, as well as help identify areas for improvement. We recently conducted IHUT work for a national dairy company that wanted to understand what consumers thought of its packaging and product taste. The research included two phases: an eye-tracking exercise and a packaging assessment. We invited 500 product users to participate in the 25-minute in-person test, and 375 concept acceptors were given product to take home so we could understand any changes in their satisfaction and acceptance of the product.

Using the IHUT methodology, Logit Group was able to understand participants’ first impressions, appeal, and purchase intent. As part of some performance accept/reject analysis, we could track results over time to understand changes in consumer preferences and opinions in relation to how long they have been using the product.
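Tracking results over the usage period boils down to comparing satisfaction wave by wave. A minimal sketch of that tabulation (wave labels and scores are invented for illustration):

```python
# Sketch: track mean satisfaction by survey wave to see how opinions
# shift over the usage period. Wave labels and scores are illustrative.
from statistics import mean

responses = [
    {"wave": "week1", "satisfaction": 7},
    {"wave": "week1", "satisfaction": 8},
    {"wave": "final", "satisfaction": 6},
    {"wave": "final", "satisfaction": 5},
]

# Group scores by wave, then average each group.
by_wave = {}
for r in responses:
    by_wave.setdefault(r["wave"], []).append(r["satisfaction"])

trend = {wave: mean(scores) for wave, scores in by_wave.items()}
print(trend)  # e.g. {'week1': 7.5, 'final': 5.5}
```

A drop between waves, as in this toy data, is the kind of signal that points to a usage-period problem worth probing with open-ended follow-ups.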


The Uses of IHUT In Further Research

Overall satisfaction, as well as more pinpointed satisfaction on specific product features, could be measured to provide first insights into potential areas of improvement. In the case of a longer usage period and multiple questionnaires, levels of satisfaction can be measured over time. To focus on potential areas for improvements, I always like to recommend open-ended questions to then really dive deep into what the participant is thinking about a product.

Throughout the IHUT process, data is being analyzed to provide clear, relevant results and recommendations. By the end of the research project, the goal is to know exactly whether or not your product is truly ready for full market launch or requires additional improvement, as well as its potential in terms of acceptance and how to best position it within the market space. IHUT can also help you understand whether there are any geographic differences that you need to think about when marketing and launching your product.

When working with highly skilled analysts like those at Logit Group, IHUT allows you to understand some of the most important checks on your product before launch by means of real-life opinions, comments, and data.


About Aref

As Vice President, Sales & Research Services for The Logit Group, Aref Munshi’s main responsibilities include managing existing clients. He has been providing qualitative and quantitative support services to clients across the healthcare, consumer and business industries. With over 30 years of data collection experience, Aref’s strength is his holistic market research skill set. From client services to operations, Aref is the perfect client advocate and research problem solver. He has held senior management roles at two of the larger data collection companies in Canada: the first 25 years at RIS Christie and the last seven with The Logit Group.