Bluelight

Thread: What makes you complete an online survey for research?

Page 2 of 2
Results 26 to 38 of 38
     
    #26
    Bluelight Crew ebola?'s Avatar
    Join Date
    Sep 2001
    Location
    in weaponized form
    Posts
    22,376
    Good topic. We should make quite visible our own selection biases in our research. I choose by:

    1. Topic is interesting/I am interested in what answers it seeks.
    2. I have qualities that speak to the topic of the study.
    3. Study fits within how much leisure time I wish to donate.
    4. If there's compensation! (particularly compensation that isn't probabilistic; having a tiny chance of winning something does little for me.)

    ebola
     
    #27
    Bluelight Crew ebola?'s Avatar
    Join Date
    Sep 2001
    Location
    in weaponized form
    Posts
    22,376
    Apparently these days you can design your survey with a line from 'totally disagree' to 'totally agree'. On that line is a cursor which the respondent can drag to any point, to indicate their level of agreement. That way you can measure someone's agreement in a very fine-grained way.
    I'd be skeptical about whether this could truly be treated as proper interval or ratio data, though. How one approaches such scaling (likely with some distortion) would depend on characteristics of the participant, aspects of the probe questions, context in the survey at large, etc.

    ebola
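That worry about treating slider readings as interval data can be illustrated with a small simulation (hypothetical data; assuming Python with NumPy and SciPy, neither of which is mentioned in the thread): slider positions tend to heap at round numbers, so rank-based summaries make weaker assumptions than means computed on the raw positions.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical responses on a 0-100 slider from "totally disagree"
# to "totally agree". Real respondents often heap on round numbers,
# which is one reason the readings may not behave as interval data.
latent = np.clip(rng.normal(50, 20, size=200), 0, 100)  # underlying agreement
slider = np.round(latent / 10) * 10                     # heaping at multiples of 10

# Rank order survives this distortion far better than raw spacing does,
# so an ordinal (rank-based) treatment is the safer default.
rho = stats.spearmanr(latent, slider)[0]
print(round(float(rho), 2))
```

The rank correlation stays near 1 even though the heaped positions no longer carry equal-interval information.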
     
    #28
    Director of Research Tronica's Avatar
    Join Date
    Mar 2002
    Location
    Mẹlböürnẹ
    Posts
    2,020
    In 2007 I looked into whether there were major advantages to using these visual scales, and no evidence favouring them had emerged. Mostly, people found them more difficult to use, and they took longer to load. I think that will change as they are used more often and people's internet connections and computers become faster.

    Quote Originally Posted by ebola? View Post
    I'd be skeptical about whether this could truly be treated as proper interval or ratio data, though. How one approaches such scaling (likely with some distortion) would depend on characteristics of the participant, aspects of the probe questions, context in the survey at large, etc.
    I agree. These are all sources of error that would arise, but they also arise from the Likert scales that get used a lot for attitudinal survey research. That doesn't make them 'right', but 5- or 7-point agree-disagree scales are fairly entrenched in some of these areas of inquiry...
     
    #29
    Greenlighter
    Join Date
    Dec 2009
    Location
    Mesa, AZ
    Posts
    29
    I'll do them for the money, or if I find the topic really interesting.
     
    #30
    Bluelighter blode's Avatar
    Join Date
    Nov 2009
    Location
    London
    Posts
    314
    Quote Originally Posted by Tronica View Post
    Thanks blode. This particular comment is interesting, as I think most researchers are hoping that participants aren't high when they complete surveys, especially those that require a fair bit of mental effort!
    oh wow, I feel embarrassed, sorry about that.
     
    #31
    I complete drug surveys with hopes that it could be used to take the stigma off of drug users.
     
    #32
    Bluelight Crew ebola?'s Avatar
    Join Date
    Sep 2001
    Location
    in weaponized form
    Posts
    22,376
    Quote Originally Posted by Tronica View Post
    I agree. These are all sources of error that would arise, but they also arise from the Likert scales that get used a lot for attitudinal survey research.
    Indeed (I shoulda noted so)...I'm not even sure if treating them quantitatively as a single ordinal variable makes sense....although in the general linear model, Likert scales are often coded into dichotomous dummy vars, right?

    ebola
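As a rough sketch of the dummy-coding step mentioned above (hypothetical data; assuming Python with pandas, which the thread itself never specifies), a k-point Likert item becomes k-1 dichotomous indicators, so the model assumes nothing about the spacing between response options:

```python
import pandas as pd

# Hypothetical 5-point Likert responses to a single item.
df = pd.DataFrame({
    "agree": pd.Categorical(
        ["disagree", "neutral", "agree", "strongly agree", "agree"],
        categories=["strongly disagree", "disagree", "neutral",
                    "agree", "strongly agree"],
        ordered=True,
    )
})

# Dummy-coding for a general linear model: 5 categories become 4
# dichotomous indicators, with the first level ("strongly disagree")
# as the reference, so no interval spacing is assumed.
dummies = pd.get_dummies(df["agree"], drop_first=True)
print(dummies.columns.tolist())
```

Each indicator then gets its own coefficient in the model, at the cost of extra parameters relative to treating the item as a single numeric predictor.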
     
    #33
    Director of Research Tronica's Avatar
    Join Date
    Mar 2002
    Location
    Mẹlböürnẹ
    Posts
    2,020
    Quote Originally Posted by blode View Post
    oh wow, I feel embarrassed, sorry about that.
    Nah, no need to be embarrassed. I think this thread is interesting because some of the reasons people give are not the reasons researchers think about when they design surveys and analyse results. Survey designers usually try to get an idea in their head of the different types of people who might complete the survey and their different motivations for doing so, as it helps get the design right!

    Quote Originally Posted by kandytime View Post
    I complete drug surveys with hopes that it could be used to take the stigma off of drug users.
    That's an interesting reason. Does that mean you will scan a survey first to see whether it might portray drug users in a good or bad light before completing? (Not that this is ever immediately obvious, but there are usually hints in the way the survey is written, I find.)

    Quote Originally Posted by ebola? View Post
    Indeed (I shoulda noted so)...I'm not even sure if treating them quantitatively as a single ordinal variable makes sense....although in the general linear model, Likert scales are often coded into dichotomous dummy vars, right?
    I think the debate about how to treat Likert scales revolves around ordinal versus interval, but yes, in linear models, people can get around it by dichotomising and creating dummy variables... both of which decrease the power of the models. I've also used non-parametric tests with Likert items for testing differences between groups and relationships between ordinal items. I'm no expert though... yet
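The non-parametric route described here might look like this in practice (hypothetical data; assuming Python with SciPy, not anything the poster actually used): a Mann-Whitney U test for a group difference on a Likert item, and Spearman's rho for the relationship between two ordinal items.

```python
from scipy import stats

# Hypothetical 5-point Likert responses (1 = strongly disagree,
# 5 = strongly agree) from two groups of respondents.
group_a = [1, 2, 2, 3, 3, 3, 4, 4, 5, 5]
group_b = [2, 3, 3, 4, 4, 4, 5, 5, 5, 5]

# Mann-Whitney U: a rank-based test of a group difference that
# treats the item as ordinal rather than interval.
u_stat, p_group = stats.mannwhitneyu(group_a, group_b, alternative="two-sided")

# Spearman's rho: a rank correlation between two ordinal items.
item1 = [1, 2, 3, 3, 4, 4, 5, 5, 5, 2]
item2 = [2, 1, 3, 4, 4, 5, 4, 5, 5, 3]
rho, p_items = stats.spearmanr(item1, item2)

print(round(float(u_stat), 1), round(float(rho), 2))
```

Both procedures use only the ranks of the responses, which sidesteps the ordinal-versus-interval debate at the cost of some power when the data really are interval.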
     
    #34
    Quote Originally Posted by Tronica View Post
    That's an interesting reason. Does that mean you will scan a survey first to see whether it might portray drug users in a good or bad light before completing? (Not that this is ever immediately obvious, but there are usually hints in the way the survey is written, I find.)
    If I feel it is obvious the research/survey is being done with the aim of ridiculing or making drug users personally look bad, I would stay away from it. Being a pretty responsible drug user myself, I would be glad to provide even a slightly biased survey with honest answers, so long as those answers can't be twisted to mean or represent something else.
     
    #35
    Bluelighter Unbreakable's Avatar
    Join Date
    Oct 2009
    Location
    My own little world
    Posts
    5,963
    Harm reduction starts with knowledge gathered from a neutral party. Not from someone paid to think a certain way.
     
    #36
    Bluelighter ABetterWay's Avatar
    Join Date
    Apr 2015
    Location
    NE USA
    Posts
    821
    I used to do surveys over the phone. These people did not get paid to take them. Some did far better than others.

    Bad:
    ---Too long, unless absolutely fascinating. Long surveys were either terminated early, or respondents would grow weary and give any old answer just to move things along.
    ---I do understand that when gathering data, you must ideally be able to pool answers in a way that allows you to deduce themes. But while multiple-choice answers can be hard for the respondent to choose when none quite fits, this can be somewhat mitigated by sub-questions. So the initial multiple-choice question gets the general idea, and a sub/follow-up question clarifies the reasons. The what and the why.
    ---Overly personal questions, UNLESS FOR PURPOSES OTHER THAN SAMPLE GATHERING. In other words, I've actually been instructed to gather demographics like age, gender, income, etc. at the end of surveys. Done anonymously, this simply ensures a balanced sample. However, some would ask about fucking sexual preference! And for no reason obvious to the respondent, who was often disgusted at such prying. "Thanks... so, what do you like to stick your dick in, sir?" Ummm, no. Unless the study is trying to, for example, understand how specific populations are impacted or affected, and THIS IS CLEARLY EXPLAINED, then hell no. Mind your business! I'll admit, I'd often profusely apologize, and got in trouble at times. Worth it.
    ---Asking their name or anything that could identify them specifically. I don't care if the study claims to discard that info. Might not be an issue here, but at my job they'd ask for people by name, which understandably made some worry about their anonymity even if told identifying info was discarded.
    ---My most hated surveys to give were when the questions were so obviously worded in a manipulative fashion, designed to twist responses. Some clever respondents would catch on and I'd smile on the other end of the phone, happy for them. Other poor souls - not even necessarily dim, just tired or busy or too young or idealistic or trusting - wouldn't notice. And I'm there knowing they want to get one view across, but given their answers, the survey was designed to twist them into something else entirely, and that's just shady. God I hated that job lol.
    If I think of others I'll edit.

    Good:
    ---When, at the end, respondents could give their opinions in writing, no multiple choice, whether about their feelings on the survey itself in some way, or to add something they felt was important to cover that wasn't covered in the survey.
    ---Respectful wording on more personal subject matters. People respond better, despite being anonymous respondents, when something is worded in a way that is neutral and not judgemental or biased. But this ties in with manipulatively worded surveys that totally skew your actual answers.
    ---A space to note anything that should have been included or left out, to improve future surveys.
    ---Letting people know what the fruits of the survey results are... What are they trying to discern and WHY, what's the goal?
    ---Wording questions in the way normal people speak. I cannot tell you how many questions were clearly written by people who were just not good communicators. They were either too long-winded, boring, unclear, or cold. A skilled writer can still achieve the end goal without boring, confusing, or offending.
    ---Don't sound judgemental.
    ---Remind respondents of what important work you are doing and thank them for their help. People aren't inclined, if not being paid or compensated, to take tons of surveys that don't even express gratitude or emphasize the importance of gathering this info.
    ---Don't disguise surveys that are hateful of drug takers as something they're not. Fuck that. I am careful when I read exactly how things are written and whether the multiple choices are fair. If I see any fake, lying, manipulative bull, I'm out. I'm not willing to participate in a study that wants to demonize us, or even most drugs and especially not all drugs, and I promise you, if you're trying to hide those intentions in a survey, I will notice and will voice my disgust after refusing to complete it. Be genuine, interested, and fascinated by your research with good intentions based on facts, and I'm all good.
    ---Keep it truly anonymous. Don't lie.
     
    #37
    Director of Research Tronica's Avatar
    Join Date
    Mar 2002
    Location
    Mẹlböürnẹ
    Posts
    2,020
    @ABetterWay - I love your post. So many things that researchers should keep in mind.

    One query, when you say that asking about sexual orientation is too personal - I agree it is personal. I always include an 'I don't want to answer' option, or at least ensure that people can skip questions, so if someone chooses not to respond, it's no big deal and they are not forced to do so.

    Having said that, I've been criticised by people who are GLTBI for not including the sexual orientation or identity questions - because people want to be able to see whether sexual orientation matters when it comes to drug use and issues. So I lean towards including it but ensuring it isn't a 'forced' question.
     
    #38
    Bluelighter cduggles's Avatar
    Join Date
    Nov 2016
    Location
    Varies a great deal
    Posts
    2,114
    - because I design them and feel sorry for others
    - to see the bias in questions (more from the "writing my own" viewpoint)
    - to learn new tricks (internet vs. paper, providing date format dd/mm/yyyy for internationally distributed surveys)
    - to learn what not to do (ambiguous wording, no choice I agree with)
    That's about it!
     

