
How to design a good survey (guide)

From LimeSurvey Manual



A quick guide to creating a good survey

LimeSurvey lets you put a survey together very quickly. Unfortunately, that also makes it very easy to create a bad one.

On this page you will find a short guide to writing surveys that are not only easy for your audience to complete, but that also yield meaningful answers.

The blog post "Survey design tips and tricks" also provides very useful information.

Before you survey

Before designing a questionnaire, and in fact even before deciding whether a questionnaire is the right method at all, you need to answer some important questions.

What do you really want to learn from your research?

Once you have answered that, ask yourself:

  • Will a survey help you obtain the information your research needs?
  • Who are the right people to ask to complete the survey?
  • How can you reach those people?
  • What is the best way to help people fully understand the questions so that they complete the survey (and so that the information you obtain is accurate and useful)?
  • Which statistical methods do you want, or need, to use to analyze the data once it has been collected?


These are some of the questions you need clear answers to when deciding whether LimeSurvey is the right tool for you.

LimeSurvey is for questionnaires conducted online that are highly structured (you know every question you need to ask before the interview starts), standardized (everyone receives more or less the same questionnaire), and mostly quantitative (mainly concerned with numbers or questions with predefined answers).


Of course, you can deviate from this to some extent. You can use LimeSurvey to record answers for some types of telephone interviews, and you can also collect qualitative data with LimeSurvey via text questions and the like.

But at some point you may conclude that a different research method is a better fit.

Structuring the questionnaire

There are several aspects to consider when deciding how to order and group your questions.

If possible, start with questions that are easy to answer and that every participant can answer comfortably. Usually these are screening questions, i.e., questions you need to ask to make sure you are surveying the right people (use conditions and/or quotas).

Placing this type of question first can help keep participants from leaving the survey before completing it: once people have already invested some effort in answering these introductory/screening questions, they may be less likely to quit.

Example:

Which of the following fruits do you like?
#Apples   ()
#Bananas  ()
#Cherries ()

(single choice)

If the participant selected 'Cherries', you can use conditions to make the following question about cherries appear:
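In LimeSurvey, this kind of branching is typically set up with the conditions editor or with a relevance equation (ExpressionScript). As a hypothetical sketch, assume the fruit question has the question code FRUIT and 'Cherries' has the answer code A3 (both codes are made up for illustration); the relevance equation on the follow-up cherry question would then be:

```
FRUIT == "A3"
```

The follow-up question is shown only to participants for whom this equation evaluates to true.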

Why did you choose cherries?
#They are tasty
#I like the color
#They are healthy
#They are juicy
#I love cherry pie

(a multiple-choice question (or single choice, depending on how precise you need the data to be))

How much do you like cherries?
#1) Not much more than other fruit
#2) Like them more than other fruit
#3) It's one of my favourite fruits!
#4) I ADORE CHERRIES!

(single choice)

Do you know any recipes that use cherries?

[Text field]

The above is an example of easy-to-answer introductory questions followed by the main question.

The goal here was to collect recipes using cherries, apples and bananas.

If, on the other hand, you need to ask questions that are difficult to answer, consider using a separate page for each question or question group and placing those difficult questions at the end. That way, if a participant decides not to finish the survey, at least their earlier answers are saved.

Another point to consider regarding structure: avoid bias introduced by the questionnaire itself.

For example, market research uses the concepts of unaided and aided questions.

An example of an unaided question:

"Which brands of chocolate are you familiar with?"

(followed by a text box)

An example of an aided question:

"Which of the following chocolate brands are you familiar with?"

(followed by a list of brands (multiple choice))

As mentioned above, if you choose to include both types of question (aided and unaided) in the same questionnaire, place them on different pages, with the unaided question before the aided one. Placing the aided question before the unaided one can unintentionally influence participants' responses and invalidate your results.

Individual questions

Questions should not be leading. "What do you think about LimeSurvey?" is an acceptable (non-leading) question, whereas "Don't you agree that LimeSurvey is a really great tool?" is a leading one.

More examples and suggestions on wording questions:

People may well answer 'yes' to a donation when asked like this:

  • Do you love nature?
  • Would you donate money to help the river?


Asked this way, they will probably answer 'no':

  • Is being short of money a problem for you?
  • Would you donate money to help the river?


Order your questions in a way that helps elicit appropriate answers:

  • from least sensitive to most sensitive
  • from general to more specific
  • from questions about facts to questions about opinions

Also, survey questions can be either:

  • open-ended (people answer in their own words), or
  • closed-ended (people choose from a limited number of options)

Closed-ended questions are much easier to analyze, but they may not allow respondents to give the answer they really want to give.

Example: "What is your favorite color?"

Open-ended: Someone may answer "dark fuchsia", in which case you will need to have a category "dark fuchsia" in your results.

Closed-ended: With a choice of only 12 colors your work will be easier, but respondents may not be able to pick their exact favorite color.

Carefully consider each question and decide if it should be open-ended or closed-ended. If you need deeper insight into responses, use open-ended questions; otherwise, closed-ended questions can be used.

Example (open-ended): "What do you think is the best way to clean up the river?"

Make it open-ended: the answers may not be easy to put in a table or graph, but you may gain deep insight into people's feelings and ideas about cleaning up the river or the environment and use direct quotes in your reporting.

Example (closed-ended): "How often do you visit the river?"

Make it closed-ended with the following options:

  • Nearly every day
  • At least 5 times a year
  • 1 to 4 times a year
  • Almost never

You will be able to present this data in a neat bar graph.
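Closed-ended answers like these are straightforward to tabulate. A minimal sketch in Python (the response strings are hypothetical placeholders for however your exported data labels the options):

```python
from collections import Counter

# Hypothetical exported answers to "How often do you visit the river?"
responses = [
    "Almost never", "Nearly every day", "Almost never",
    "1 to 4 times a year", "At least 5 times a year", "Almost never",
]

# Count how often each option was chosen and print a simple text bar graph
counts = Counter(responses)
for option, n in counts.most_common():
    print(f"{option:24s} {'#' * n} ({n})")
```

An open-ended version of the same question would first require you to group free-text answers into categories by hand before any such tally is possible.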

When working with multiple-choice or single-choice questions, make sure to choose the appropriate question type and formulate both questions and answers appropriately.

For example:

Which of the following fruit do you like?
#Apples   ()
#Bananas  ()
#Cherries ()

The above is a typical multiple-choice question, as you can like several items on the list. On the other hand, "Which one of the following fruits do you most prefer?" is a single choice question.

Both fruit examples have been formulated to make clear that your concern is with only the fruit listed. If you were to ask, "Which is your favorite fruit?", you should either have a really exhaustive list of fruit or, more likely, use LimeSurvey's setting to add an "other" field. Generally, answer options in most cases need to be complete, mutually exclusive and definite.

If you have multiple- or single-choice questions with a lot of options to choose from, you need to be aware that this might introduce another bias, as participants are likely to focus their attention on the very first options and not those in the middle. LimeSurvey offers a great option to randomize the order of questions and, to some extent, eliminate this problem.
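The effect of randomization can be sketched as follows. This is a plain Python illustration of the technique, not LimeSurvey's internal implementation, and the option names are placeholders:

```python
import random

# Hypothetical answer options for a long multiple-choice question
options = ["Apples", "Bananas", "Cherries", "Mangoes", "Pears", "Plums"]

def presented_order(options, seed=None):
    """Return a shuffled copy of the options, so that over many
    participants no option is systematically shown first."""
    rng = random.Random(seed)
    shuffled = list(options)
    rng.shuffle(shuffled)
    return shuffled

# Two participants generally see the options in different orders:
print(presented_order(options, seed=1))
print(presented_order(options, seed=2))
```

Because every option gets an equal chance of appearing near the top, the primacy bias is spread evenly across the list instead of always favoring the first few entries.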

What Makes a Good Survey?

There are three features of a survey that will help elicit the proper responses needed for a more accurate assessment:

  1. The questions are clear and precise, collectively allowing for detailed, unambiguous and meaningful answers.
  2. All predefined answers provided and their formats are appropriate to the question.
  3. There is room for people to add additional information if they need to.

Adding to that, always keep the user experience in mind. Reading, scrolling and clicking are tiring activities. So:

  1. Avoid unnecessary questions.
  2. Use conditions to avoid asking questions not relevant for a specific participant.
  3. Keep questions and answers short and easy to read - use appropriate markup.
  4. Think about the trade-off between scrolling and clicking. Display everything on one page for short questionnaires (5-15 questions, depending on question complexity). Use groups wisely for longer questionnaires, i.e., group questions comprehensibly. Use group descriptions to give a clear statement about the topic of the following questions.
  5. Avoid confusing participants with different scales, i.e., limit the amount of different scale types, scale scopes and different scale descriptions as much as possible. Try not to change the direction of scales. (There are some methodological exceptions).
  6. For rating scales, it might be useful to use an even number of rating options to make decision making easier for the respondents (see below).
Example of an answer scale about how ''good'' something is:

  1. Very good
  2. Good
  3. Quite good
  4. Not that good
  5. Bad
  6. Very bad

Example of an answer scale about how ''bad'' something is:

  1. Good
  2. Fair
  3. Bad

The best way to start designing a survey is to take a second to imagine your ideal feedback. It goes without saying that meaningful responses are the most useful ones, so try to create questions which invite these answers.

How can you go about that? The best method is to separate all the areas and decide what information you need.

For example, imagine you held an event that was open to the public and needed to get some general feedback about the event.

The following survey is an example of one that might be inadequate for eliciting useful responses:

Did you enjoy the Event?

( ) Yes

( ) No

How good was the Wi-Fi?

1 2 3 4 5 6 7 8 9 10

( ) ( ) ( ) ( ) ( ) ( ) ( ) ( ) ( ) ( )

Did you have problems getting to the event?

( ) Yes

( ) No

Was the map provided helpful?

( ) Yes

( ) No

How did you feel about the mixture of speakers?

( ) Very sad ( ) Sad ( ) Neutral ( ) Happy ( ) Very happy

Matrix questions would be a better choice for the above scenario.

Matrix Questions

As a general rule, scales should only be used for questions pertaining to age, time (maybe), or quantities. Matrix questions have to be worded properly in order to obtain the most useful feedback. Keep in mind that a matrix of compulsory questions can be a bit of a deterrent for your audience because, when not structured properly, they do not allow for any extra information to be gathered.

Chances are if someone is completing your survey, they want to give feedback, and if they don't think they can share anything useful, they'll just close the window and move on.

So what's wrong with the survey above?

Let's look at each question one by one.

Question 1 doesn't really achieve anything. Imagine receiving 100 "no" responses. The response alone does not provide any useful information as to why the participant did not enjoy the event. You would be left wondering about the reason for the "no" responses and also left wondering what to do with the responses. We'll look at a possible improvement to this in a moment.

Question 2 is worse than the first. Referring back to the 3 features of a good survey, we see that questions need to be clear and precise. I'm not an expert in Wi-Fi, but I'm fairly certain that there are better ways of measuring this. What's more, it doesn't yield a meaningful answer to a question like, "What will you do with the knowledge that 33% of people rated your Wi-Fi as good versus only 23%?" The 3 features of a good survey also state that the predefined answers need to be appropriate for the question.

It's fairly obvious that using a scale for this question won't help you improve on the quality of your Wi-Fi. There is a definite need for this question to have space for people to add additional information. How could someone report a specific problem without being able to elaborate?

In this case it would be almost impossible to have enough information to properly address the issues that the participant(s) had with the Wi-Fi. Surveys are all about obtaining useful information that you can work with or learn from.

Questions 3 & 4 would have the same results as the first two questions. They only allow for a "yes" or "no" response, and neither provides an opportunity to add details. We provide suggestions after this section on how to improve on these types of questions.

Question 5, the final question, is another ineffective one. Asking for the level of satisfaction about something is not very useful, as every person has different interests, and so everyone will likely have different opinions about each speaker. It's another example of where a "scale" question is being used and shouldn't be.

Have a look at the improved survey below.

Did you make use of the in-house Wi-Fi?

( ) Yes

( ) No

Did you experience any problems with the Wi-Fi?

( ) No problems at all

( ) A couple of small issues, nothing major

( ) One or two severe issues

( ) The Wi-Fi was almost totally unusable.

If you experienced problems, could you briefly describe them? (Text field)

Did you have any problems getting to the event?

( ) Yes

( ) No

How did you come to our event?

( ) Train

( ) Car

( ) Bus

( ) Light rail (e.g., Tube, tram)

( ) Walk

Did you use the map from our website?

( ) Yes

( ) No

If you looked at the map, was it detailed enough?

( ) Yes

( ) It gave me a rough idea.  I used another map for more detail though.

( ) Not detailed at all

If you didn't use the map, why not?

( ) Not enough detail

( ) I used a satnav/Google maps instead.

( ) I didn't even know it existed!

Generally speaking, were the speakers interesting? Did you enjoy the presentations?

( ) Nearly all were interesting and thoroughly enjoyable

( ) Mainly enjoyable, but a handful of less interesting talks.

( ) A 50 - 50 split

( ) More dull than interesting

( ) Didn't find any interesting

Please elaborate on your answer to the question above. Feel free to reference any particular people/talks. (Text field)

If we could improve the range of talks, or you have any other interesting ideas regarding the talks, list them below. (Text field)

If you have any other recommendations or ideas, please list them below. (Text field)

This survey may be a little longer, but it is a lot easier to answer and interpret the responses. Asking two or three questions about each of the topics means that, when it comes to processing the results, you can do a little more analysis. For example, imagine that in response to the first survey question you received 30 people saying they had problems making it to the event.

This would have been as much information as you could extract from the results, but with the new set of answers it would be possible to deduce which forms of transport presented people with problems. What's more, you could go on to see whether they were using the map provided or another form of navigation, and use this to target improvements in the future.

Keep in mind that after 50 questions, the user is likely to stop reading.

Other important additions are the text field questions. These allow your participants to give specific feedback which you can draw from. It's a good idea to not make these compulsory, as they may put people off answering.

To conclude, when writing a survey, you should aim to create one that asks specific questions to obtain more useful information for analysis. Also, remember that it is helpful to gather a little extra background information, as it can be used to better analyze the responses.

It is also important to phrase your questions properly. If people don't understand the questions they are asked to answer, they will close the window and move on. If possible, have someone else proofread your survey before making it publicly available to ensure that the questions are clear.

Survey Bias

In conducting market research, an important key to obtaining unbiased responses is to avoid asking survey participants questions that may influence the answers they provide. Avoiding survey bias helps eliminate responses that may invalidate or skew the collected data. It is quite easy, and common, for a company or individual without proper market research training to err in this way. Bias can enter in many ways: the way a question is phrased, the types of responses available to choose from, and the way an interviewer presents the questions if data is being collected by phone or in person.


The following is an example of a biased question:

How much did you enjoy the event?

( )Very much

( )Just a little

( )Not very much

( )Not at all

At first glance, it appears that there is no problem with the structure of this question.  After all, the respondent has choices ranging from "very much" to "not at all." However, the problem is in the way the question is phrased.  By asking the participant "How much" he or she enjoyed the event, the person conducting the survey has already established a bias by assuming that the respondent enjoyed the event in some way or another, which may not be the case.

The following example would be a better way to ask the question in a way that does not influence the participant's response:

How would you rate your overall enjoyment of the event on a scale of 1 to 5, where 1 = "Not at all" and 5 = "Completely enjoyed"?

1 2 3 4 5

( ) ( ) ( ) ( ) ( )

Rephrasing the question allows the respondent to answer using a scale that makes it easy for him or her to specify the enjoyment level, and makes it easy for the person conducting the survey to tabulate and compare the results with other respondents. Of course, more questions should be added to gather specifics on what the participants enjoyed or didn't enjoy.
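Numeric scale answers like this are easy to aggregate and compare. A minimal sketch, assuming the responses have been exported as integers from 1 to 5 (the ratings below are invented sample data):

```python
# Hypothetical 1-5 enjoyment ratings exported from the survey
ratings = [5, 4, 4, 3, 5, 2, 4, 5, 1, 4]

n = len(ratings)
mean = sum(ratings) / n
# How many respondents picked each point on the scale
distribution = {score: ratings.count(score) for score in range(1, 6)}

print(f"n = {n}, mean enjoyment = {mean:.2f}")
for score in range(1, 6):
    print(f"{score}: {'#' * distribution[score]} ({distribution[score] / n:.0%})")
```

With the biased yes/no wording, none of these summaries (mean, distribution, comparisons between events) would have been possible.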

This is just one example of how minor changes in wording can improve your survey.