I think they should never be used.

  • paysrenttobirds@sh.itjust.works

    No experiment, no proof. But, taken with a grain of salt, a good survey can be better than pure speculation where an experiment is impossible or unethical. On the other hand, experiments can prove something, but depending on how reduced or artificial the context is, they may not prove as much as you hope, either. Science is just difficult in general.

    • JokklMaster@lemmy.world

      Exactly. Luckily I’m in a field where true experiments are possible, but I have many colleagues who can’t ethically run true experiments. It’s surveys or nothing for the most part. They have very advanced statistics to account for the lack of control in their research.

      • Hamartiogonic@sopuli.xyz

        And even if you can carry out a proper experiment, it might be useful to see if there’s already a survey on the same topic. If there is, you can use that data to design your experiment, and hopefully you’ll be able to take important variables into account.
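One concrete way prior survey data feeds into experiment design is a sample-size calculation: the survey gives you a variance estimate for the outcome, which tells you how many subjects you need. A minimal sketch (all numbers are invented, and the standard two-sample approximation at alpha = 0.05 and 80% power is my choice, not from the thread):

```python
import math

def required_n_per_group(sd, min_effect, alpha_z=1.96, power_z=0.84):
    """Approximate n per group for a two-sample test at alpha=0.05, power=0.80.

    sd: outcome standard deviation, e.g. estimated from an existing survey.
    min_effect: smallest group difference worth detecting.
    """
    return math.ceil(2 * ((alpha_z + power_z) * sd / min_effect) ** 2)

survey_sd = 12.0   # hypothetical SD taken from the prior survey
min_effect = 5.0   # hypothetical smallest difference of interest

print(required_n_per_group(survey_sd, min_effect))  # 91 per group
```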

    • merari42@lemmy.world

      where true experiments are possible, but I have many colleagues who can’t ethically run true experiments. It’s surveys or nothing for the most part. They have very advanced statistics to account for the lack of control in their research.

      There is a whole discipline of causal inference with observational data that is more than a hundred years old (e.g. John Snow’s diff-in-diff strategy). Usually it boils down not to controlling for every detail, but to getting plausibly exogenous variation in your treatment, either from a policy implemented in only one group (state), a regulatory threshold, or other “natural experiments”. Social scientists typically need to rely on such replacements for true experiments.

      Having a good survey is only the first step, before you even think about how you could get at the effects of interest. Looking at correlations in a survey is usually just a first descriptive pass to find interesting patterns. Survey design itself is a whole different problem. There you also run experiments and try to find out how non-response and wrong answers work. For example, there are surveys in Scandinavia, the Netherlands, France, and Germany that can easily be linked to social security records (or even individual credit card data in the Danish case) to validate answers or directly use high-quality administrative data.
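The diff-in-diff idea mentioned above can be sketched in a few lines: compare the before/after change in a treated group against the same change in an untreated group, so that shared trends cancel out. All group names and numbers here are made up for illustration:

```python
# Hypothetical difference-in-differences estimate.
# The treated group receives a policy between period 0 and period 1;
# the control group never does.

means = {
    # (group, period): observed outcome mean
    ("treated", 0): 10.0,
    ("treated", 1): 14.0,
    ("control", 0): 9.0,
    ("control", 1): 11.0,
}

def diff_in_diff(means):
    """(treated after - before) minus (control after - before)."""
    treated_change = means[("treated", 1)] - means[("treated", 0)]
    control_change = means[("control", 1)] - means[("control", 0)]
    return treated_change - control_change

print(diff_in_diff(means))  # 4.0 - 2.0 = 2.0
```

The key (untestable) assumption is parallel trends: absent the policy, both groups would have changed by the same amount.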

  • stevedidwhat_infosec@infosec.pub

    I think this is more so a misunderstanding - surveys on their own, in raw form, are not science.

    There’s all kinds of bs that can come up like:

    • selection bias
    • response bias
    • general recollection errors/noise (especially for scary or traumatic experiences - there’s a bunch of papers on this behavior)

    But data scientists can account for these by looking at things like sample selection (randomly selected so as to represent the nation/region/etc.), pilot runs, transparency (fucking huge, dude - tell everyone and anyone exactly what you did so we can help point out bullshit), and adjusting for non-responses.

    Non-responses are basically the idea that some people simply don’t give a fuck enough to do the survey. Think about a survey your Human Resources team at work might send out - people who fuckin hate working there and don’t see it changing anytime soon might not respond, which means fewer people expressing their distaste, which leads to a false narrative: that people like working there.
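One standard fix for that HR-survey skew is weighting by known population shares instead of respondent shares (post-stratification). A toy sketch, with all group names and numbers invented:

```python
# Hypothetical non-response correction: unhappy employees respond less often,
# so the raw average is too rosy. Reweight by the known workforce composition.

population_share = {"happy": 0.5, "unhappy": 0.5}  # known from HR records
respondents = {
    "happy":   {"score": 4.0, "count": 80},
    "unhappy": {"score": 1.5, "count": 20},        # under-represented
}

def raw_mean(resp):
    """Plain average over whoever happened to respond."""
    total = sum(g["count"] for g in resp.values())
    return sum(g["score"] * g["count"] for g in resp.values()) / total

def weighted_mean(resp, shares):
    """Weight each group's mean by its population share, not its response share."""
    return sum(resp[g]["score"] * shares[g] for g in shares)

print(raw_mean(respondents))                         # 3.5  - looks rosy
print(weighted_mean(respondents, population_share))  # 2.75 - corrected
```

This only works when you know the true group shares and response rates don’t vary much *within* a group - which is exactly why transparency about the method matters.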

    Hope this makes sense! Stay curious!!

    PS/EDIT: Check out the SAGE method for data science for some more info! (There’s probably a YouTube vid instead of the book if you’d prefer I’m sure!)

  • dual_sport_dork@lemmy.world

    I deal with the fallout of this, or something closely related to it, frequently in my industry.

    Manufacturers think focus groups represent the needs and opinions of the general public. What they categorically fail to realize is what focus groups actually represent is in fact the types of people who attend focus groups.

    The kind of people who respond to surveys are the kinds of busybodies who respond to surveys - not an actual representative cross-section of the populace.

  • Paragone@lemmy.world

    Shoddy use of them is normal, that is true.

    Don’t toss out the baby with the bathwater, tho, eh?

    Training people in critical thinking, and having quality standards for doing surveys, would help our world more than removing a method of discovery would.

    _ /\ _