Toby Hodges is a Bioinformatics Community Project Manager at the European Molecular Biology Laboratory. He coordinates the EMBL Bio-IT Project, a community building and support project for bioinformaticians and computational biologists. In this role, he works with volunteers from the community to provide training and consulting, information, networking opportunities, and resources to EMBL scientists who use computational approaches in their research.
As community managers, one of the pressures on us is the need to make decisions based on an understanding of our community members. We must frequently make choices on the assumption that we know the desires, motivations, and preferences of the people who make up our community. Although we have a close working relationship, and perhaps even a friendship, with some of them, it’s generally very difficult to maintain a deep understanding of what makes every member of our community tick, what they want to achieve, and how we can help them to do that.
Like many community managers, we have chosen to use online surveys as a means to gain this insight at EMBL. As well as the feedback surveys that we send out after every one of our computational training courses, we use surveys to understand the broader context and changing landscape of computational research within which our community is working and to assess the impact that our work has on our community members and the ways that we can include and support more of them.
Enter “survey design tips” into your favourite search engine and you’ll see that the Internet is full of advice about how to design an effective survey and encourage a good response rate. Of course, some of this advice – keep the survey as short as possible; use logic to prevent your respondents from having to skip through irrelevant questions; provide a progress bar and/or estimated time to completion – is accurate and very helpful (some recommended further reading is linked at the end of this post).
However, most of this advice concerns soliciting customer feedback from a “general public” with which the survey designer/administrator has no pre-existing relationship. How can we, as community managers, use surveys to effectively assess the views and needs of our community members without falling into the established traps of inducing “survey fatigue” and/or only capturing the opinions of those members who are most comfortable with, and most used to, dominating the discussion?
Based on our experience at EMBL, here I offer two additional suggestions to consider when designing and distributing a survey:
1. Think about how you will use the information gathered in the future
We often have a good idea of what we will use the information gathered for immediately after the survey has been completed, but it is worth taking time during the design process to think about what that information might be used for further down the line. Are you likely to repeat the survey and, if so, will you want to compare the results between repeated surveys? Do you expect to analyse the data manually or programmatically? Would you like to make visual summaries of the data, beyond those that might already be provided by the survey platform that you use? If you intend to compare results across survey datasets, and/or perform further analysis of the results, our experience has shown that you can save yourself a lot of time by:
- restricting the format of the answers that respondents can provide (where appropriate)
- e.g. by enforcing values from 0-100 when asking for an answer as a percentage, you avoid the need to remove ‘%’ symbols from the ends of many answers before performing a numeric analysis of the responses (see the sketch after this list)
- not changing your questions between versions of the survey
- even if you have regrets about including a question, resist the temptation to alter your questions as you will lose the ability to compare results across the different versions
- if you really have to, drop questions entirely
- related to the point above, thinking very carefully about how you word your questions, to avoid misinterpretation or multiple different interpretations of the questions
- if possible, ask someone else to look through the survey before you distribute it – wording that you think is clear may be interpreted completely differently by another person.
- e.g. we asked research group leaders whether they expected computational research activity in their group to increase in the future. If 100% of activity in their group is computational but they expect the number of researchers in their group to grow, should the group leader answer yes or no to that question?
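To make the percentage example above concrete, here is a minimal sketch of the kind of cleanup that inconsistently formatted answers force on you, assuming the responses have been exported into a pandas DataFrame. The column name and the example answers are hypothetical, not taken from one of our surveys:

```python
import pandas as pd

# Hypothetical raw answers to a "what percentage of your group's research
# activity is computational?" question, as exported from a survey platform
# that did not enforce a numeric 0-100 answer format.
responses = pd.DataFrame(
    {"computational_pct": ["75", "50%", " 100 % ", "about 20", None]}
)

# Strip '%' signs and surrounding whitespace, then coerce to numbers;
# anything unparseable (e.g. "about 20") becomes NaN for manual review.
stripped = (
    responses["computational_pct"]
    .str.replace("%", "", regex=False)
    .str.strip()
)
responses["pct_clean"] = pd.to_numeric(stripped, errors="coerce")

# Flag parsed values outside the valid 0-100 range instead of silently
# carrying them into downstream analyses.
valid = responses["pct_clean"].between(0, 100)
needs_review = responses["pct_clean"].notna() & ~valid
print(responses[needs_review])
```

Even then, an answer like “about 20” still needs a human judgement call – which is exactly why constraining the answer format at design time is cheaper than cleaning up afterwards.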
Don’t underestimate the amount of time that data cleaning can take – or how frustrating a process it can be! Taking a few more minutes to consider this when designing your survey could save you many hours in the future. However, you should also always include some free-text responses so that your community members can provide more detail and additional feedback. As a community manager, the most valuable (and heartwarming) answers to my surveys have always come from the “Any other comments?” boxes at the end of a survey, and it’s important to give your members a voice.
2. Encourage engagement by building trust
This may appear obvious, but it’s important to actually do something in response to the data that you’ve collected in a survey!
One of the ways in which we encourage responses to our surveys is to say upfront why the data is important to us and how we intend to use it. (We also make sure that respondents know how short the survey is and how quick it will be to fill out!) However, it’s just as important to follow up on the results of a survey, even if (especially if!) the results weren’t what we were expecting or hoping for. A blogpost summarising the results, while maintaining respondents’ anonymity, or publicising updates to course or event content/material, helps to show your community members that you listen to them. Similarly, your community members will notice when new content is designed around the preferences they gave in a survey of their needs and interests.
A word of caution: when reviewing and analysing the results of a community survey, it’s important to remember that the results will inevitably be biased. Unless you’re lucky enough to get a 100% response rate (unheard of in our experience without heavy organisational pressure), there will be community members who aren’t represented in your results. These members may be more likely to belong to traditionally under-represented groups, and we should bear this in mind when making strategic decisions based on survey data. One question that we’re asking right now is how we can best encourage engagement and responses that are as representative as possible of the diversity of perspectives in our community.
With this in mind, if you’ve got tips for increasing engagement and working with survey data, I’d love to hear about them. What advice and insights have you collected when designing, distributing, and analysing surveys of your community? Please leave your comments below or contact me on Twitter.
Recommended Reading
- This Tip Sheet (PDF) from the Harvard University Program on Survey Research is a nice four-page summary of considerations for rigorous survey design.
- Here are two discussions of good survey design from SurveyMonkey, one short and one long (as well as advice on designing a survey, this site also contains links to discussion of other related topics, such as bias and Likert scales).
- I like the fourth point – “Explain why respondents should help” – in this post on the Qualtrics blog.
- Lastly, for an amusing look at what not to do I recommend following BadSurveyQ on Twitter.