• PrinceWith999Enemies@lemmy.world
    10 months ago

    I started noticing this trend about 15 years ago. There was a point where I suddenly started receiving solicitation spam from pay-to-publish Chinese journals. It was obvious they didn’t know who I was or what my work consisted of. It was very easy to jump to the conclusion that this was a huge push on the part of China to boost its national publication counts, and on the part of a large number of academics who were just looking to get their papers in print.

    Whenever I see a pub in a journal I don’t know, and I’m interested enough to bother, I’ll check the impact factor (imperfect but established) and the other papers published by the author(s).

    I think I’ve paid to publish all of my papers to make them open access - I’d always build that into my budgets. But this is on a whole other level. I always think of this when a paper like the NYT compares Chinese to US science using publication counts.

    There are brilliant Chinese scientists and research institutions, but there’s also a lot of gaming the system. We need a better quality metric for publications and papers.

    • glomag@kbin.social
      10 months ago

      The whole system is so messed up on multiple levels. You not only have to publish some result that is correct (true), but it also has to be positive (supporting your hypothesis) and sufficiently "important" to your field, or else your whole career is at risk.

      I’m posting this while running an experiment at 11pm on a Saturday night trying to collect data for a grant application. Of course I’m going to lose if I’m competing against people who just make shit up.

      • Endward23@futurology.today
        10 months ago

        The whole system is so messed up on multiple levels. You not only have to publish some result that is correct (true), but it also has to be positive (supporting your hypothesis) and sufficiently "important" to your field, or else your whole career is at risk.

        The publication and replication crises exist for a reason.

        In my opinion, the flaws of the current system are well documented and even understood to a degree. The actual problem is coming up with a new system. That system has to be objective and fair, and it must measure the quality of scientists’ work.

    • Kissaki@feddit.de
      10 months ago

      48 min long

      Their video-description-linked text source: https://laskowskilab.faculty.ucdavis.edu/2020/01/29/retractions/

      Knowing that our data were no longer trustworthy was a very difficult decision to reach, but it’s critical that we can stand behind the results of all our papers. I no longer stand behind the results of these three papers.

      There has been some questions of why I (and others) didn’t catch these problems in the data sooner. This is a valid question. I teach a stats course (on mixed modeling) and even I harp on my students about how so many problems can be avoided by some decent data exploration. So let me be clear: I did data exploration. I even followed Alain Zuur’s “A protocol for data exploration to avoid common statistical problems“. I looked through the raw data, looking for obvious input errors and missing values. […]

      Altogether, I was left with the conclusion that there was good variation in the data, no obvious outliers or weird groupings, and an excess of 600 values which was expected due to the study design. As a scientist, I know that I have a responsibility to ensure the integrity of our papers which is something I take very seriously, leading me to be all the more embarrassed (& furious) that my standard good practices failed to detect the problematic patterns in the data. Multiple folks have since looked at these data and came to the same conclusion that until you know what to look for, the patterns are not obvious.

  • ebits21
    10 months ago

    The Freakonomics podcast covered this topic pretty well just recently. Would recommend a listen! It’s not just international or low-impact journals that are having issues.

    I feel like zero-trust research could become a thing in some areas in the future.

    So for example, the study would be preregistered with its expected outcomes, as is starting to be done more often now. But in addition, a third party would hold a private encryption key, and the experiment’s data would be encrypted during collection with the matching public key.

    Obviously very much depends on the type of study, but data is very often collected with collection software of some sort that could implement this.

    The scientist could not snoop on the data even if they wanted to: the public key can encrypt data, but only the private key can decrypt it.

    Then, once the data is uploaded to the third party, they can unlock it with their private key, and the data becomes public before any analysis.
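    The scheme above can be sketched with textbook RSA and toy numbers — purely illustrative and NOT secure (a real system would use RSA-OAEP or a hybrid scheme via a library such as `cryptography`); all names and values here are hypothetical:

```python
# Toy sketch: collection software holds only the PUBLIC key, so the
# researcher cannot decrypt recorded values; the third party keeps the
# private key. Textbook RSA with tiny primes, stdlib only.

# -- third party generates the key pair --
p, q = 61, 53                  # toy primes (far too small for real use)
n = p * q                      # public modulus
phi = (p - 1) * (q - 1)
e = 17                         # public exponent
d = pow(e, -1, phi)            # private exponent, kept by the third party

def encrypt(value: int) -> int:
    """Collection software: seal one reading with the public key (n, e)."""
    return pow(value, e, n)

def decrypt(cipher: int) -> int:
    """Third party: recover the reading with the private key (n, d)."""
    return pow(cipher, d, n)

reading = 42                   # one hypothetical data point (must be < n)
sealed = encrypt(reading)
assert sealed != reading       # the researcher sees only ciphertext
assert decrypt(sealed) == 42   # the third party can recover the value
```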

    Seems to me that this would force science to be done the way it ought to be done!

    • bananabenana@lemmy.world
      10 months ago

      Totally unnecessary, and not how science works.

      If you make data public before analysis, labs will get scooped with their own data. No one would invest in data collection.

      Often things are found or worked out during the process, which can change week to week or month to month, iteratively. Experiments don’t go to plan, data is cooked and can only be used in reduced ways, etc. Researchers are meant to share their raw data anyway, which should prevent this sort of thing. Basic statistical analysis of datasets usually reveals tampering.
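      One of the simplest statistical checks alluded to here is a terminal-digit test: in many measured datasets the last recorded digit should be roughly uniform, while hand-fabricated numbers tend to overuse certain digits. A minimal stdlib-only sketch, with made-up example values:

```python
# Terminal-digit check: count the final digit of each recorded value.
# A heavily skewed distribution is a red flag worth investigating,
# not proof of fraud on its own.
from collections import Counter

def last_digit_counts(values):
    """Count the final digit of each value as it was recorded."""
    return Counter(str(v)[-1] for v in values)

suspicious = [17, 27, 37, 47, 57, 67, 77, 87, 97, 107]  # all end in 7
counts = last_digit_counts(suspicious)
assert counts["7"] == len(suspicious)  # one digit dominates completely
```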

      The issue is the insane academic standards and funding bodies (public grant $) which reward high-volume and high-‘impact’ work. These incentives need re-evaluation, and people should not be punished for years of low activity. Sometimes science and discovery just don’t work the way you think they will, and that’s okay. We need a system that acknowledges what everyone in science knows.

      • ebits21
        10 months ago

        All it would do is create an audit trail of your data to keep scientists honest. You can still iterate and change course, but now you’re responsible for the record: whenever you look at the data, its state at that point is recorded as-is, and a log tracks when you checked it. Why did you change course, and when? Was that appropriate? The data is verified when and if you decide to review it.

        How science is done has a problem, just suggesting a solution. I know that’s not how it’s done.

        All the data is a matter of record. It makes sure the raw data is ACTUALLY the raw data without bias. It makes sure you’re not ignoring negative results (a huge issue). Statistical detection of cheating will never be as good as reviewing the raw data and changes over time.
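        A minimal sketch of such a tamper-evident audit trail, assuming each saved record is appended to a hash chain (stdlib only; the record fields are hypothetical):

```python
# Hash-chain audit trail: each entry's hash covers its payload plus the
# previous entry's hash, so a retroactive edit breaks every later link.
import hashlib
import json

def append_record(chain: list, record: dict) -> None:
    """Append a record, linking it to the hash of the previous entry."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    payload = json.dumps(record, sort_keys=True)
    entry_hash = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
    chain.append({"payload": payload, "hash": entry_hash})

def verify(chain: list) -> bool:
    """Recompute every link; any edited entry invalidates the chain."""
    prev_hash = "0" * 64
    for entry in chain:
        expected = hashlib.sha256(
            (prev_hash + entry["payload"]).encode()
        ).hexdigest()
        if expected != entry["hash"]:
            return False
        prev_hash = expected
    return True

log = []
append_record(log, {"sample": 1, "value": 3.2})
append_record(log, {"sample": 2, "value": 2.9})
assert verify(log)

log[0]["payload"] = log[0]["payload"].replace("3.2", "9.9")  # retroactive edit
assert not verify(log)  # the tampering is detected
```

        In practice the chain would live with the third party, so the researcher could not quietly rewrite it.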

        As for scooping data: it becomes a matter of record. There would be data on file showing that they scooped you; currently there’s nothing. And the data doesn’t have to be public until the study is published.

        I think the main barrier would be scientists themselves and the incentives inherent in the system (career, money, prestige) that create the cheating in the first place.

  • AutoTL;DR@lemmings.world [bot]
    10 months ago

    This is the best summary I could come up with:


    Tens of thousands of bogus research papers are being published in journals in an international scandal that is worsening every year, scientists have warned.

    The practice has since spread to India, Iran, Russia, former Soviet Union states and eastern Europe, with paper mills supplying fabricated studies to more and more journals as increasing numbers of young scientists try to boost their careers by claiming false research experience.

    The products of paper mills often look like regular articles but are based on templates in which names of genes or diseases are slotted in at random among fictitious tables and figures.

    Others are more bizarre and include research unrelated to a journal’s field, making it clear that no peer review has taken place in relation to that article.

    The spokesperson added that Wiley had now identified hundreds of fraudsters present in its portfolio of journals, as well as those who had held guest editorial roles.

    “We have removed them from our systems and will continue to take a proactive … approach in our efforts to clean up the scholarly record, strengthen our integrity processes and contribute to cross-industry solutions.”


    The original article contains 957 words, the summary contains 186 words. Saved 81%. I’m a bot and I’m open source!

  • Endward23@futurology.today
    10 months ago

    The impression that a few disreputable journals published the bulk of the fake papers is not, from what I read a while ago, supported by research. Some scientometricians have checked and found that even many of the “small journals” use regular peer review. It is actually a small minority of journals that accept payment to publish “science spam”.