• 14 Posts
  • 1.18K Comments
Joined 2 years ago
Cake day: July 9th, 2023


  • Zozano@aussie.zone to Fuck AI@lemmy.world · Something I noticed
    7 days ago

    I’m curious what you’re asking it that wastes your time.

    I’m fully convinced that if wading through SEO sludge (which is mostly bloated AI shit anyway) is actually faster for you than asking the model, then you’re just asking bad questions.

    Can you give me some examples of questions it answered which were wrong? I want to test it myself.


  • That’s exactly my point. Evrart wants to paint a picture where he keeps order and his supporters love him, but he and his brother are mob bosses; they muscle out anyone else who tries to make changes in their territory.

    He weaponises a faux-socialist movement, but he will exploit and manipulate anyone for his own gain. He doesn’t care about the plight of the working class; he only pretends to.

    He’s not complex because he might be good - he’s complex because he’s so good at faking it. That’s the brilliance. He’s a mobster who makes you feel like you owe him a favor for getting mugged.


  • The classic example is email: imagine if you could only email people on Outlook, from another Outlook account. It’s intuitively obvious how shitty that would be, yet for some reason we give social media a free pass for doing exactly this.

    The benefits of federation, by analogy:

    • if you notice Yahoo users send you a lot of spam, you just block all of Yahoo. Sure, you might miss something important, but that’s their fault for using Yahoo.
    • if some dickhead like The Zucc releases a new email service (Threads) then maybe your email service (instance) will do you a favor and block them (defederate).
    • pedos and bigots flock to instances which are known for hosting shady shit, and those instances effectively act as containment zones (most instances defederate from them by default). You’d never see that happen on Twitter (thank you Elon! /s).
    • if an instance crashes, that sucks. But there are plenty of others hosting the federated content, so the network as a whole is never ‘down’.
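
    To make the blocking part concrete, here’s a rough sketch of what instance-level blocking boils down to. The function name and domains are made up for illustration; this isn’t Lemmy’s actual code:

    ```python
    from urllib.parse import urlparse

    # Admin-maintained blocklist; the domains here are just examples.
    BLOCKED_INSTANCES = {"threads.net", "spam.example"}

    def accept_activity(actor_uri: str) -> bool:
        """Drop any incoming activity whose author is hosted on a blocked instance."""
        host = urlparse(actor_uri).hostname or ""
        return host not in BLOCKED_INSTANCES

    print(accept_activity("https://threads.net/users/zucc"))  # False: defederated
    print(accept_activity("https://aussie.zone/u/Zozano"))    # True: federated
    ```

    The key design point is that the decision is made per instance, by your admins, instead of by one company for the whole network.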



  • Yep, and they fuck themselves over academically, because lecturers notice when a student’s time logged in the online learning platform doesn’t match the quality of their assessment submissions.

    Students inevitably get questioned about their own submissions, only for the lecturer to discover they don’t know shit, because they cheated. Had the student actually used the tool properly, they might know enough about the content to scrape by.

    In any case, I’ve seen this happen five times lol. One of them was because my lecturer asked one of my classmates what ‘frivolous’ and ‘multifaceted’ meant; she fumbled before claiming she’d used a thesaurus.

    She was then asked in plain speech what she had intended to say, and ended up with an “I don’t know” - boom. Academic integrity compromised: an investigation into her Learnline metrics, cross-referencing against her work from two years earlier, and termination from her course two weeks later.

    Most students use it; the lecturers know this. The difference is whether people use it as a tool, or a replacement.

    In any case, essays are supposed to be a metric of knowledge and evidence of independent research. In practice? A good essay really only proves one thing: the student is good at writing essays. I know people in early childhood education who suffered through university but have more intuition and emotional intelligence than the people who coasted on academic prowess.


  • Lol, oops, I got poo brain right now. I inferred they couldn’t edit because the methodology doesn’t say whether revisions were allowed.

    What is clear is that they weren’t permitted to edit the prompt or add personalization details, which seems to imply the researchers weren’t interested in understanding how a participant might use the tool in a real setting; just in its passive output. That alone undermines the premise.

    It makes it hard to assess whether the observed cognitive deficit was due to LLM assistance, or to the method by which it was applied.

    The extent of our understanding of the methodology is that they couldn’t delete chats. If participants were only permitted a single one-shot generation per prompt, then there’s something wrong.

    But just as concerning is the fact that this isn’t explicitly stated.
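
    For what it’s worth, the gap between what the study apparently allowed and real-world use is easy to sketch. This is a toy illustration with a stubbed-out chat() function, not any particular vendor’s API:

    ```python
    def chat(prompt: str) -> str:
        """Stub standing in for any LLM API call; wire a real client in here."""
        raise NotImplementedError

    def one_shot(task: str) -> str:
        # What the study seems to describe: one generation, pasted verbatim.
        return chat(f"Write an essay on: {task}")

    def iterative(task: str, max_rounds: int = 3) -> str:
        # Tool-style use: read the draft, critique it, ask for a revision.
        draft = chat(f"Write an essay on: {task}")
        for _ in range(max_rounds):
            notes = input(f"Draft:\n{draft}\n\nYour critique (blank to accept): ")
            if not notes:
                break
            draft = chat(f"Revise per these notes: {notes}\n\nEssay:\n{draft}")
        return draft
    ```

    Measuring only the first pattern and calling it ‘LLM use’ is exactly the passive-output problem above.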


  • The biggest flaw in this study is that the LLM group wasn’t explicitly permitted to edit their essays and was explicitly forbidden from altering the parameters. Of course brain activity looks low if you just copy-paste a bot’s output without thinking. That’s not “using a tool”; that’s outsourcing cognition.

    If you don’t bother to review, iterate, or humanize the AI’s output, then yeah… it’s a self-fulfilling prophecy: no thinking in, no thinking out.

    In any real academic setting, “fire-and-forget” turns into “fuck around and find out” pretty quick.

    LLMs aren’t the problem; they’re tools. Even journal authors use them. Blaming the tech instead of the lazy-ass operator is like saying:

    These people got swole by hand-sawing wood, but this pudgy fucker used a power saw to cut 20 pieces faster; clearly he’s doing it wrong.

    No, he’s just using better tools. The problem is if he can’t build a chair afterward.