To measure occurrence, ask for the number of times per day, week, or month rather than asking respondents to select a category (several times a week, once a week, once a month). However, if the event in question is difficult to count or estimate, response categories (zero times, 1-3 times, 4 or more times) may help respondents provide an answer.
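If you do provide such categories, they must cover every possible answer without overlapping (a point repeated in the dos and don'ts below). A minimal sketch in Python, using the example categories above (the function name is hypothetical):

    def categorize_count(times):
        """Map a reported number of occurrences to a response category.

        The categories are exhaustive and mutually exclusive: every
        possible count falls into exactly one bucket.
        """
        if times == 0:
            return "zero times"
        elif times <= 3:
            return "1-3 times"
        else:
            return "4 or more times"

    for n in [0, 2, 7]:
        print(n, "->", categorize_count(n))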
A self-administered questionnaire is a survey that collects data without the use
of a telephone or in-person interviewer. It can be either a paper or web-based
instrument. In both cases, the design of the questionnaire is crucial to reducing
non-responses and measurement error.
With self-administered questionnaires, it is especially valuable to provide context for the survey as a whole: how the data will be used, whether responses are anonymous, and so on. The earlier and more effectively this is done, the less likely people are to dismiss the survey before they even start responding. The introductory email or letter, or the front cover of a paper survey, are appropriate places to provide this context.
Important design considerations include:
- Physical format of a paper instrument (how pages are oriented, stapled, folded, etc.) is simple for the respondent to work with and for data entry staff to process
- Launching and moving through a web-based survey is intuitive and works with multiple browsers
- Order of questions is polite and logical to the respondent
  - Begin with questions that reflect the announced subject of the study, catch the respondent's attention, and are easy to answer
  - Ask people to recall things in the order in which they happened
  - Ask about topic details before asking for an overall assessment
  - Group items that are similar in topic, then group items within the topic that have similar response options
  - Place personal and demographic questions at the end of the survey
- Visual layout is clean, simple, and consistent
  - Distinguish question text, answer choices, and instructions, and be consistent in how you do this
  - Limit the number of variations in type or font, the use of white type on black background, and the use of all capital letters
  - Use indentation and white space to make it easy to navigate through sections of the survey and to identify the next question
  - On a paper survey, provide lines for answering open-ended questions (the more lines, the more respondents will write)
  - On an electronic survey, the response space should be expandable or accommodate an ample number of words
- Desired first and last impressions are created by the front and back covers on a paper survey and by the email and web page headers on an electronic survey
  - Include the organization's name and logo (or an engaging but neutral graphic) and the title of the survey on the cover or header
  - Provide instructions for returning the completed questionnaire and a number to call with questions on the back cover (or at the end of an electronic survey), along with a note of thanks for responding.
The following lists summarize the key "dos and don'ts" for formatting your survey questionnaire.

Do:
- Give clear instructions
- Keep question structure simple
- Ask one question at a time
- Maintain a parallel structure for all questions
- Define terms before asking the question
- Be explicit about the period of time being referenced by the question
- Provide a list of acceptable responses to closed-ended questions
- Ensure that response categories are both exhaustive and mutually exclusive
- Label response categories with words rather than numbers
- Ask for number of occurrences, rather than providing response categories such as often, seldom, never
- Save personal and demographic questions for the end of the survey

Don't:
- Use jargon or complex phrases
- Frame questions in the negative
- Use abbreviations, contractions or symbols
- Mix different words for the same concept
- Use "loaded" words or phrases
- Combine multiple response dimensions in the same question
- Give the impression that you are expecting a certain response
- Bounce around between topics or time periods
- Insert unnecessary graphics or mix many font styles and sizes
- Forget to provide instructions for returning the completed survey!
Writing survey questions is an iterative process. Review, test, and revise the questions and introductory scripts, perhaps more than once! Even if you've used the same questions before or copied them from an existing repository, you'll want others to check your survey for spelling and grammatical errors, readability and flow, and consistency with the current study. If the topic is sensitive or your results will be used to make high-stakes decisions, consulting an expert (e.g., the UW Survey Center) to review your survey is a wise investment. You may also need to get final approval from others in your unit or beyond.
Once your survey has passed muster internally, it’s time to “field test” it with a
sample of potential respondents to verify that your process is smooth and
completely understandable to your target population. Do people understand the
terms? Or are adjustments needed? Do people complete the survey as
intended? Or do they drop out before completing it? Are certain questions
regularly skipped? If the survey is electronic, does it launch properly and work
as expected with different browsers? The purpose of the field test is to get
estimates of the survey's reliability and validity and to identify any final changes that might be needed to ensure success.
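On the reliability side, one statistic commonly computed from field-test data is Cronbach's alpha, which measures the internal consistency of a set of scaled items. A minimal sketch in Python, assuming the responses are arranged with one row per respondent and one column per item (the data here are made up):

    import numpy as np

    def cronbach_alpha(scores):
        """Cronbach's alpha: k/(k-1) * (1 - sum(item var) / var(total))."""
        k = scores.shape[1]                         # number of items
        item_vars = scores.var(axis=0, ddof=1)      # variance of each item
        total_var = scores.sum(axis=1).var(ddof=1)  # variance of total score
        return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

    # Example: five field-test respondents answering three 5-point items.
    scores = np.array([[4, 5, 4], [2, 3, 2], [5, 5, 4], [1, 2, 1], [3, 3, 3]])
    print(round(cronbach_alpha(scores), 2))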
Those involved in contacting potential respondents, conducting interviews, and
analyzing survey responses need to be briefed on the purpose of the study and
provided with training tailored to their role in the project. The goal is to have
everyone following a consistent process so that as little variation as possible is
introduced into the process.
If interviewers are administering the survey in person or by phone, develop a
script for them to follow so that the survey is presented in exactly the same way
to every respondent. Interviewers will also need to be trained to behave in
neutral ways to control their influence on answers, and they need actual practice reading the questions and script. For mail surveys, the mail and data entry staff need instruction in how the study will be mailed out and how the data
will be coded and entered.
A common question is whether those who participate in the field test can later be
respondents. The answer depends on how the pretest respondents were drawn, whether the instrument has changed, and how much time has passed
between the field test and the main study. A good way to identify pretest
respondents is to draw a miniature sample like that to be used in the main study.
This approach allows field procedures to be tested as well as the instrument.
When this method is used, pretest respondents can sometimes be combined
with respondents from the main study.
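A minimal sketch, in Python, of drawing a miniature pretest sample from the same frame the main study will use (the frame and sample sizes are hypothetical):

    import random

    random.seed(42)                                # make the draw reproducible
    frame = [f"person_{i}" for i in range(1000)]   # full sampling frame

    pretest_sample = random.sample(frame, 25)      # miniature pretest sample
    remaining = [p for p in frame if p not in pretest_sample]
    main_sample = random.sample(remaining, 250)    # main-study sample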
The whole point of conducting a survey is to obtain useful, reliable, and valid
data in a format that makes it possible to analyze and draw conclusions about
the total target population. Although there is no agreed-upon minimum response
rate (Fowler, 2002), the more responses you receive, the more likely it is that
you will be able to draw statistically significant conclusions about the target population. Every component of your survey process (everything you do or don't do) can affect the response rate, so seemingly simple decisions like whether to use a
mailed survey or a web survey should not be taken lightly. The UW Survey
Center normally receives a 60-70% rate of response to mailed surveys. For web
surveys, a 30-40% response rate is common, even with populations that are
young and have easy access to the web.
From your very first contact with potential respondents to obtain cooperation,
you have the opportunity to affect the response rate. How can you interest
potential respondents so that they are more likely to respond? People like to
know the purpose of the survey and what you will do with the results. Studies
show it helps if the initial communication is personalized and presents an
altruistic purpose for participation. Clear identification of the source and authority
of the organization conducting the survey and assurances about confidentiality
are also extremely important (Odom, 1979).
Incentives are often used to maximize the response rate. Mailed surveys or
advance letters are often accompanied by a crisp bill. When it is provided before
the survey is taken, even a small denomination encourages the respondent to
complete and return the survey. Prize drawings for a gift certificate are popular incentives for completing a web survey; however, the evidence about their effectiveness in gaining participation is mixed at best.
Calculating response rates can be quite complicated. The American Association
for Public Opinion Research (AAPOR) has developed a set of "Standard Definitions" that are used by most social research organizations; see the resource list at the end of this document.
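As a simplified illustration (AAPOR's Standard Definitions distinguish several rates, with rules for partial completes and cases of unknown eligibility; this sketch mirrors only the basic definition given in the glossary below):

    def response_rate(completed, eligible_contacted):
        """Completed surveys divided by eligible potential respondents contacted."""
        return completed / eligible_contacted

    # Example: 420 completed surveys from 700 eligible contacts.
    print(f"{response_rate(420, 700):.0%}")   # -> 60%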
Advising potential respondents of a deadline for completing the survey helps
make it a priority. CustomInsight (2010) suggests giving a window of 7 to 10
days, with a follow-up reminder sent a few days before the end date.
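A small scheduling sketch under that guidance (the launch date, the 10-day window, and the 3-day reminder lead are all assumptions within the suggested ranges):

    from datetime import date, timedelta

    launch = date(2010, 11, 1)               # hypothetical launch date
    deadline = launch + timedelta(days=10)   # top of the 7-10 day window
    reminder = deadline - timedelta(days=3)  # a few days before the end date
    print("Deadline:", deadline, "| Reminder:", reminder)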
Depending upon how the survey is being administered, popular means of
prompting potential respondents to complete the survey include email,
telephone, or a mailed postcard. A reminder that includes a duplicate copy of the
survey may yield the best response.
Investigators often want to survey the same population on a regular basis. A
simple but timely thank-you note is helpful in retaining people's interest in participating in your future surveys. If you can share some of the survey findings, doing so will foster a feeling of responsibility for the results, which in turn will increase respondents' commitment to future surveys. Whatever you have promised respondents
in the way of incentives and/or post-survey follow-up, be sure you deliver!
Web surveys are an increasingly popular mechanism for collecting data, but
come with their own set of issues and aren't appropriate for every use. Web surveys eliminate the need for data entry, but not the need to verify accuracy and account for missing data.
An obvious drawback is that the samples in web surveys are skewed to Internet
users, and the target population may or may not be skilled in or have access to
the necessary technology. Reliability is at risk because the survey may appear
differently to different respondents, depending upon browser and computer
platform. People also differ in how and when they read email, and spam filters
can wreak havoc with your delivery schedule.
Unless the survey has only a couple questions and the number of potential
respondents is very limited, simply putting the survey in the body of an email will
not suffice (Couper, 2008). Commercial web survey tools are available
(Zoomerang and SurveyMonkey are two examples), as well as a UW-Madison tool called Qualtrics. The UW Survey Center also
conducts Internet web surveys. These tools provide assurance about anonymity,
templates for laying out the survey, and assistance with compiling responses
that go far beyond the capabilities of an email.
Thoughtful decisions made during the planning phase about what data are
needed and the format in which it will be collected will pay dividends when it
comes time to analyze the data. A consistent process for organizing and
analyzing survey data should be established and clearly documented well ahead
of receiving the first responses, with everyone involved receiving ample training. Questions to settle in advance include:
- How will incomplete surveys and missing data be handled?
- What checks will be conducted to find errors in coding or data entry?
- Will some questions be weighted more heavily than others?
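A minimal sketch of how the first two questions might be checked in code, using pandas (an assumption; the column names and the valid code range 1-5 are hypothetical):

    import pandas as pd

    responses = pd.DataFrame({
        "q1": [4, 2, None, 5, 3],   # None marks a skipped question
        "q2": [1, 9, 3, 2, 4],      # 9 is an out-of-range coding error
    })

    # Incomplete surveys and missing data: count blanks per question.
    print(responses.isna().sum())

    # Coding/data-entry errors: flag values outside the valid codes 1-5.
    valid = responses.isin([1, 2, 3, 4, 5]) | responses.isna()
    print((~valid).sum())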
Scaled responses (e.g., dissatisfied, somewhat dissatisfied, neutral, somewhat
satisfied, satisfied) are usually converted to numerical values (1, 2, 3, 4, 5) for
easier analysis. Use of a template or mask can help coders convert the data
more quickly and accurately.
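For example, a minimal sketch of such a conversion in Python, with the mapping mirroring the scale above:

    SCALE = {
        "dissatisfied": 1,
        "somewhat dissatisfied": 2,
        "neutral": 3,
        "somewhat satisfied": 4,
        "satisfied": 5,
    }

    raw = ["satisfied", "neutral", "somewhat dissatisfied"]
    coded = [SCALE[answer] for answer in raw]
    print(coded)   # [5, 3, 2]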
Careful thought needs to be given to coding responses to open-ended items.
The process usually involves examining some preliminary data to identify
potential categories, and then testing to ascertain how consistently the
categories are assigned by various coders.
To analyze responses to open-ended questions, you can copy the comments
onto individual cards and then group similar comments together. This will give
you a sense of the most frequent ideas. Alternatively, there are software
packages that help in analyzing responses to open-ended questions.
One such tool was developed to aid in processing the thousands of comments received during the UW-Madison 2009 reaccreditation study; it uses "word clouds" as a visual analysis tool to show the relative frequency of the themes into which responses have been categorized.
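A sketch of the underlying tally, assuming each comment has already been assigned a theme by a coder (the theme names are made up):

    from collections import Counter

    assigned_themes = [
        "advising", "facilities", "advising",
        "curriculum", "advising", "facilities",
    ]
    theme_counts = Counter(assigned_themes)
    print(theme_counts.most_common())   # most frequent themes first

These relative frequencies are what a word cloud renders visually, scaling each theme's label by its count.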
Visual displays can be helpful in understanding the data. For responses to
multiple choice questions, you might:
- Create a frequency histogram of the responses for each question to demonstrate the variation in the responses
- Use a bar chart to display the percent of respondents selecting each response option
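A sketch of both displays using matplotlib (one plotting library among many; the coded responses, 1-5, to a single question are made up):

    import matplotlib.pyplot as plt

    answers = [1, 2, 2, 3, 3, 3, 4, 4, 5, 5, 5, 5]
    options = sorted(set(answers))

    fig, (ax1, ax2) = plt.subplots(1, 2)

    # Frequency histogram demonstrating the variation in responses.
    ax1.hist(answers, bins=[o - 0.5 for o in range(1, 7)], rwidth=0.8)
    ax1.set_xlabel("Response code")
    ax1.set_ylabel("Count")

    # Bar chart of the percent of respondents selecting each option.
    percents = [100 * answers.count(o) / len(answers) for o in options]
    ax2.bar(options, percents)
    ax2.set_xlabel("Response code")
    ax2.set_ylabel("Percent of respondents")

    plt.show()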
If the results of your survey will be written for publication, you may need to obtain approval from an Institutional Review Board (IRB). Contact the Graduate School for assistance with identifying the appropriate board.
The UW Survey Center notes that survey work frequently raises issues of
interest to the IRB, including:
- What is the consent process, and need it be written?
- How will confidentiality be protected?
- How will people be sampled or recruited?
- Will incentives be offered?
- Will HIPAA (Health Insurance Portability and Accountability Act) privacy rules be applicable?
Since IRB approval cannot be granted after the fact, it is extremely important to
follow the appropriate protocol prior to conducting your survey, if there is even a
remote chance that you will someday want to publish the results of your study.
Frequently initiated by a school or college’s Equity and Diversity Committee,
climate surveys can be a useful way to gather data on job satisfaction, the quality
of interactions within the workplace, and potential issues requiring intervention.
It is always a good idea to find out what other climate assessments may have
been conducted with your target population. For example, on the UW-Madison
campus, the Women in Science and Engineering Leadership Institute (WISELI)
has been studying the quality of work life for faculty since 2003, and makes the
survey data available at the school/college level.
WISELI has also developed extensive materials for enhancing department climate, including a template climate survey for individual departments.
So You Want to Run a Climate Survey?! (Frehill, 2006) offers numerous tips and
design considerations specific to climate surveys.
The University of Wisconsin Survey Center offers a
complete range of survey research capabilities and serves a wide variety of
clients including faculty, staff, and administration at the University of Wisconsin;
faculty and staff at other universities; federal, state, and local governmental
agencies; and not-for-profit organizations.
The Survey Center can consult with you at one or more points in the survey
process, or Survey Center staff can take the lead role in designing and implementing your survey. Specific tasks on which it can be helpful to get the Center's assistance include:
- Developing protocols that yield high response rates
- Reviewing the instrument and refining questions
- Developing cost proposals
- Foreign language and international work
Instrument: The survey questionnaire or booklet.
Institutional Review Board (IRB): Campus governance bodies responsible for
ensuring the protection of human subjects participating in research studies.
Mode: The combination of how sample members/participants are contacted,
questions are administered, and responses are recorded.
Preamble: An introductory statement that provides definitions and instructions
for completing a section or question in the survey.
Reference period: The time frame the respondent is being asked to consider
when answering a question (e.g., August 1 through August 15, 2009).
Reliability: The extent to which repeatedly measuring the same property
produces the same result.
Respondent: An individual providing answers to survey questions.
Response dimension: The scale or descriptor that a survey question asks the
respondent to use to describe their observations, actions, attitude, evaluation or
judgment about an event or behavior.
Response rate: The number of completed surveys divided by the number of
eligible potential respondents contacted.
Sample: A list of people drawn from the group from which information is desired.
Self-administered questionnaire: A paper or web-based survey that collects
data without the use of a telephone or in-person interviewer.
Target population: The group of people whose activities, beliefs or attitudes
are being studied.
Validity: The extent to which a survey question accurately measures the
property it is supposed to measure.
Couper, Mick P. 2008. Designing Effective Web Surveys. Cambridge:
Cambridge University Press.
"Customer Satisfaction Surveys: Maximizing Survey Responses."
CustomInsight. Web. <custominsight.com>; 24 Nov. 2010.
Fowler, Floyd J. 2002. Survey Research Methods, 3rd ed. Thousand Oaks, CA: Sage Publications.
Frehill, Lisa, Elena Batista, Sheila Edwards-Lange, Cecily Jeser-Cannavale, Jan Malley, Jennifer Sheridan, Kim Sullivan, and Helena Sviglin. 2006. "So You Want to Run a Climate Survey?!" Pp. 29-35 in Using Program Evaluation to Ensure the Success of Your ADVANCE Program.
Odom, John G. 1979. "Validation of Techniques Utilized to Maximize Survey Response Rates." ERIC Document Reproduction Service no. ED169966. Education Resource Information Center.
Department Climate Surveys: WISELI, University of Wisconsin-Madison.
IRB Contacts: University of Wisconsin-Madison Graduate School.
Online Survey Tools: UW-Madison Survey Hosting Service.
Question Comprehensibility Testing: UW Survey Center.
Reference Materials List: Compiled by Nora Cate Schaeffer, UW Survey Center.
Response Rates: Standard Definitions, American Association for Public Opinion Research (AAPOR).
Dillman, Don A., Jolene D. Smyth, and Leah M. Christian. 2009. Internet,
Mail, and Mixed-Mode Surveys: The Tailored Design Method, Third
Edition. New York: John Wiley & Sons.
Groves, Robert M., Floyd J. Fowler, Mick P. Couper, James M.
Lepkowski, Eleanor Singer, and Roger Tourangeau. 2004. Survey
Methodology. Hoboken, New Jersey: Wiley.
Journal of Survey Research Methodology (surveymethods.org).
Usability Guidelines for Web Design:
Writing Good Questions
Aday, Lu A. and Llewellyn J. Cornelius. 2006. Designing and Conducting
Health Surveys: A Comprehensive Guide, 3rd Edition. New York: Wiley.
Fowler Jr., Floyd J. and Carol Cosenza. 2008. "Writing Effective
Questions." Pp. 136-60 in International Handbook of Survey
Methodology, edited by Edith D. de Leeuw, Joop J. Hox, and Don A.
Dillman. Lawrence Erlbaum.
Schaeffer, Nora Cate and Stanley Presser. 2003. "The Science of Asking Questions." Annual Review of Sociology 29: 65-88.