PLAN: GOOD SURVEY RESPONSE REQUIRES SOLID SURVEY PLANNING
Data collection happens after intentional survey design, and the data collection strategy is tied to the survey tool. Design, collection strategy, and response rates all vary depending on whether the survey is provided to clients during program engagement, immediately post-program, or at set intervals such as annually.
The survey should be designed and structured to ensure that it is:
- aligned with organizational/program needs (theory of change) and the potential for improvement based on diverse input;
- developed to be client-centered, with feedback loops for diverse and inclusive respondent input (multiple languages) and “client voice” (through processes such as cognitive interviewing during the design phase);
- designed to reduce bias in question-and-answer types and wording, fully tested for errors and gaps, and ready for distribution;
- focused on ease of completion, including length, flow, logic, topic, and perceived connection;
- considered for the intended use of each response item and demographic question in order to understand the resulting data: What categories are critical to understanding responses? Who defines the categories? Decide whether the surveys will be:
- anonymous (demographic data must be collected in the survey itself for later meaningful segmentation) vs. identifiable respondent (a unique ID ties each response back to a database, which allows for segmentation; see the sketch after this list)
- high-touch services (direct clients) vs. transactional (community participants)
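To make the anonymous-versus-identifiable decision concrete, here is a minimal sketch of how each choice plays out at analysis time. It assumes responses exported to CSV; the file and column names (client_id, program, satisfaction) are hypothetical placeholders, not a prescribed schema.

```python
# Minimal sketch (pandas) of the segmentation trade-off above.
# File names and column names are hypothetical placeholders.
import pandas as pd

# Identifiable survey: each response carries a unique client ID, so
# demographics can be joined in from the existing client database.
responses = pd.read_csv("survey_responses.csv")     # client_id, satisfaction, ...
clients = pd.read_csv("client_database.csv")        # client_id, program, language, ...
identified = responses.merge(clients, on="client_id", how="left")
print(identified.groupby("program")["satisfaction"].mean())

# Anonymous survey: there is no ID to join on, so any category you want to
# segment by (program, language, etc.) must be asked in the survey itself.
anonymous = pd.read_csv("anonymous_responses.csv")  # program, satisfaction, ...
print(anonymous.groupby("program")["satisfaction"].mean())
```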
DO: DATA COLLECTION PROMISING PRACTICES FOR CLIENT SURVEYS
(does not include community surveys for one-off events/workshops/trainings)
- Who to survey?
- Consider starting with a portion of your target population (pilot)
- Ensure the timing is appropriate for the recipients (e.g., holidays)
- When to survey? Map backward to when you need the results
- Consider incentives (gift cards or other meaningful encouragement, either to all who complete or for the “first 50 respondents”)
- Promotion (marketing and communications to raise awareness)
- Dissemination (who sends the email or text is a key determinant of response, along with the clarity of the invitation wording)
- Collection communication (provide at least 2 reminders over a 3-4 week period to complete the survey, including a final notice prior to closing responses)
- How to distribute? (consider client safety and organizational capacity concerns)
- Text: convenient for wide reach and easy data analysis.
- Email: effective for contacting established supporters.
- Snail mail: for specific demographics that may not have electronic access.
- Phone/In-person: for specific demographics, especially those who cannot be reached in other ways or who require assistance (safety concerns, language, incarceration, disabilities).
STUDY: WHAT RESPONSE RATE IS GOOD AND MEANINGFUL?
“Response rate” is the percentage calculated from the number of complete survey responses divided by the total number of people who received the survey and then multiplied by 100.
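As a worked illustration of that definition, the snippet below computes a response rate from made-up numbers; the figures are purely hypothetical.

```python
# Response rate exactly as defined above: complete responses divided by the
# number of people who received the survey, multiplied by 100.
# The numbers are made up for illustration.
complete_responses = 84
surveys_delivered = 400
response_rate = complete_responses / surveys_delivered * 100
print(f"Response rate: {response_rate:.1f}%")  # -> Response rate: 21.0%
```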
According to Qualtrics, a large survey firm:
A typical customer satisfaction survey response rate usually falls between 10% and 30% across various industries, with an excellent response rate considered to be anything above 50% depending on the survey design and customer engagement level; however, this can vary significantly based on the industry and how the survey is delivered. (AI overview).
Response rate benchmarks are usually qualified by a specific distribution channel or survey type:
- 33% as the average response rate for all survey channels, including in-person and digital (SurveyAnyplace, 2018)
- >20% being a good survey response rate for Net Promoter Score surveys (Genroe, 2019)
Note: this data was gathered pre-pandemic, and response rates for surveys distributed by email have decreased over the last several years. It is therefore important to meet clients where they are and embed surveys into clients’ normal flow to gather more feedback.
The response rate matters because it helps you assess both the reliability of and your confidence in the results. That confidence depends on whether respondents were representative of the survey’s target audience and on whether conclusions can be drawn from the responses to inform business decisions, improvements, and authentic marketing efforts beyond anecdotal experience.
Sources from experience, blogs and analysis: Qualtrics, Urban Institute, SmartSurvey
Note: this does not include statistical elements such as significance or margin of error calculations.
ACT: POST-COLLECTION
After your survey data is collected, make sure to reserve energy and time to complete:
- analysis and interpretation
- internal learning and reflection
- communication of results (internally and externally)
- program quality improvements
- future plans (determine whether and when you’ll use the same survey again, and whether you need to make edits first based on what you learned)
Beyond response rate, statistical significance, or any attempt to benchmark results against large data sets, the most practical and meaningful way to interpret change within a target population for your organization or specific program is often to plan a solid tool you use consistently, set internal goals, and track trends over time.
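As a minimal sketch of what tracking against an internal goal might look like, the snippet below compares a repeated survey metric across waves; the wave labels, scores, and goal are invented for illustration.

```python
# A minimal sketch of trend tracking against an internal goal, assuming the
# same survey is fielded each wave and an overall satisfaction score (1-5)
# is recorded. Wave labels, scores, and the goal are invented.
internal_goal = 4.2
waves = {
    "2022 annual": 3.9,
    "2023 annual": 4.1,
    "2024 annual": 4.3,
}
for wave, score in waves.items():
    status = "meets goal" if score >= internal_goal else "below goal"
    print(f"{wave}: {score:.1f} ({status})")
```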
There are many tools to assist with the internal learning and reflection phase. For example, Excel is useful for analyzing the quantitative elements of a survey, including charts and graphs. Numerous AI tools can also assist with qualitative analysis and interpretation, especially by summarizing and bucketing open-ended responses into themes and action items. For communicating results, data visualization is often useful; Excel, Tableau, and Power BI are some options.
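For teams working in code rather than Excel, here is one possible sketch of that analysis step. It assumes responses exported to a CSV with hypothetical program, satisfaction, and comments columns, and it substitutes a crude keyword pass for the AI-assisted theme tagging mentioned above.

```python
# One possible sketch of the analysis step, assuming a CSV export with
# hypothetical columns: program, satisfaction (1-5), comments (open-ended).
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv("survey_responses.csv")

# Quantitative: average satisfaction by program, saved as a bar chart.
by_program = df.groupby("program")["satisfaction"].mean()
by_program.plot(kind="bar", ylabel="Avg. satisfaction (1-5)",
                title="Satisfaction by program")
plt.tight_layout()
plt.savefig("satisfaction_by_program.png")

# Qualitative: a crude keyword pass that buckets open-ended comments into
# themes (a stand-in for the AI-assisted tagging mentioned above).
themes = {
    "wait times": ["wait", "delay", "slow"],
    "staff": ["staff", "counselor", "advisor"],
    "access": ["location", "transport", "hours"],
}
for theme, keywords in themes.items():
    count = df["comments"].fillna("").str.lower().str.contains("|".join(keywords)).sum()
    print(f"{theme}: {count} comments")
```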
Continuous improvement is likely if your program or organization engages in a Plan, Do, Study, Act process for survey design, data collection, response analysis, reflection, use, and future planning.