Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post-Xitter web has spawned soo many “esoteric” right-wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

(Credit and/or blame to David Gerard for starting this.)

  • BlueMonday1984@awful.systems (OP) · 20 hours ago

    Ran across a piece of AI hype titled “Is AI really thinking and reasoning — or just pretending to?”.

    In lieu of sneering at the thing, here are some unrelated thoughts:

    The AI bubble has done plenty to reopen the question of “Can machines think?” that Alan Turing first asked in 1950. From the myriad failures and embarrassments it’s given us, the bubble has provided plenty of evidence to suggest they can’t. To repeat an old prediction of mine, I expect this bubble is going to kill AI as a concept, utterly discrediting it in the public eye.

    On another unrelated note, I expect we’re gonna see a sharp change in how AI gets depicted in fiction.

    With AI’s public image being redefined by glue pizzas and gen-AI slop on one end, and by ethical contraventions and Geneva Recommendations on the other, the bubble’s already done plenty to turn AI into a pop-culture punchline, and support of AI into a digital “Kick Me” sign - a trend I expect to continue for a while after the bubble bursts.

    As for an actual prediction: I expect AI is gonna pop up a lot less in science fiction going forward. Even assuming this bubble hasn’t turned audiences and writers alike off of AI as a concept, it’s likely gonna make it a lot harder to use AI as a plot device or somesuch without shattering the willing suspension of disbelief.

    • zogwarg@awful.systems · edited · 9 hours ago

      The best answer will be unsettling to both the hard skeptics of AI and the true believers.

      I do love a good middle-ground fallacy.

      EDIT:

      Why did the artist paint the sky blue in this landscape painting? […] when really, the answer is simply: Because the sky is blue!

      I do abhor a “Because the curtains were blue” take.

      EDIT^2:

      In humans, a lot of problem-solving capabilities are highly correlated with each other.

      Of course “Jagged intelligence” is also—stealthily?—believing in the “g-factor”.

    • swlabr@awful.systems · 11 hours ago

      OK, I speed-read that thing earlier today, and am now reading it properly.

      The best answer — AI has “jagged intelligence” — lies in between hype and skepticism.

      Here’s how they describe this term, about 2000 words in:

      Researchers have come up with a buzzy term to describe this pattern of reasoning: “jagged intelligence.” […] Picture it like this. If human intelligence looks like a cloud with softly rounded edges, artificial intelligence is like a spiky cloud with giant peaks and valleys right next to each other. In humans, a lot of problem-solving capabilities are highly correlated with each other, but AI can be great at one thing and ridiculously bad at another thing that (to us) doesn’t seem far apart.

      So basically, this term is just pure hype, designed to play up the “intelligence” part of it, to suggest that “AI can be great”. The article just boils down to “use AI for the things that we think it’s good at, and don’t use it for the things we think it’s bad at!” As they say on the internet, completely unserious.

      The big story is: AI companies now claim that their models are capable of genuine reasoning — the type of thinking you and I do when we want to solve a problem. And the big question is: Is that true?

      Demonstrably no.

      These models are yielding some very impressive results. They can solve tricky logic puzzles, ace math tests, and write flawless code on the first try.

      Fuck right off.

      Yet they also fail spectacularly on really easy problems. AI experts are torn over how to interpret this. Skeptics take it as evidence that “reasoning” models aren’t really reasoning at all.

      Ah, yes, as we all know, the burden of proof lies on skeptics.

      Believers insist that the models genuinely are doing some reasoning, and though it may not currently be as flexible as a human’s reasoning, it’s well on its way to getting there. So, who’s right?

      Again, fuck off.

      Moving on…

      The skeptic’s case

      vs

      The believer’s case

      An LW-level analysis shows that the article spends 650 words on the skeptic’s case and 889 on the believer’s case. BIAS!!! /s.

      Anyway, here are the skeptics quoted:

      • Shannon Vallor, “a philosopher of technology at the University of Edinburgh”
      • Melanie Mitchell, “a professor at the Santa Fe Institute”

      Great, now the believers:

      • Ryan Greenblatt, “chief scientist at Redwood Research”
      • Ajeya Cotra, “a senior analyst at Open Philanthropy”

      You will never guess which two of these four are regular wrongers.

      Note that the article only really has examples of the dumbass nature of LLMs. All the smart things it reportedly does are anecdotal, i.e. the author just says shit like “AI can solve some really complex problems!” Yet it still has the gall to both-sides this and suggest we’ve boiled the oceans for something more than a simulated idiot.

      • bitofhope@awful.systems · 8 hours ago

        Humans have bouba intelligence, computers have kiki intelligence. This makes so much more sense than considering how a chatbot actually works.

        • zogwarg@awful.systems · 8 hours ago

          But if Bouba is supposed to be better why is “smooth brained” used as an insult? Checkmate Inbasilifidelists!

      • froztbyte@awful.systems · 11 hours ago

        So basically, this term is just pure hype, designed to play up the “intelligence” part of it, to suggest that “AI can be great”.

        people knotting themselves into a pretzel to avoid recognising that they’ve been deeply and thoroughly conned for years

        The article just boils down to “use AI for the things that we think it’s good at, and don’t use it for the things we think it’s bad at!”

        I love how thoroughly inconcrete that suggestion is. supes a great answer for this thing we’re supposed to be putting all of society on

        it’s also a hell of a trip to frame it as “believers” vs “skeptics”. I get it’s vox and it’s basically a captured mouthpiece and that it’s probably wildly insane to expect even scientism (much less so an acknowledgement of science/evidence), but fucking hell

      • blakestacey@awful.systems · 4 hours ago

        Ian Millhiser’s reports on Supreme Court cases have been consistently good (unlike the Supreme Court itself). But Vox reporting on anything touching TESCREAL seems pretty much captured.