• Hacksaw
    1 month ago

    Look at the material implementations that exist. Look at grok, it’s routinely fine-tuned to “stop being woke” at the cost of Truth because of Musk. I’m not scared of an evil superintelligence, I’m concerned these machines are owned and biased by rich people who have the opposite of our best interests at heart.

    If there were some mythical AI that was trained from the ground up using a comprehensive approach designed to put the good of humanity and every individual first, above any concept of profit or “engagement,” then sure, conceptually it could produce content that would elevate humanity instead of alienating it. But to act as if modern implementations are an apolitical “Photoshop filter” and not a machine designed, owned, and operated by a class of explicitly evil people is disingenuous. I have yet to see a local model that was trained independently of capitalist-owned models, or that uses a fundamentally different approach.

    By your logic communism could work and is worth pursuing. In practice it has only ever created barbaric state-capitalist societies and a bureaucratic ruling class. It never abolished capitalism or class, nor even headed in that direction. So in practice I’m against it. Just like in practice AI is fundamentally alienating, and I’m against it.

    You can try to separate a concept from its implementation all you want, but the implementation is the only aspect that affects our material conditions as a society and as individuals.

    • mindbleach@sh.itjust.works
      1 month ago

      By your logic communism could work and is worth pursuing.

      … who are you talking to? This is the thread where you opened by appealing to Marx.

      Whatever.

      You are describing a solvable problem. It is obviously possible to make the models you want. The fact it hasn’t happened yet, and would be difficult, cannot match your repeated insistence that these are fundamental problems with neural networks as a concept.

      Local models do what you tell 'em. Whatever’s missing can be added in. I think you know this, because you pivoted pretty suddenly from a bunch of things local models solve, to insisting local models must be artisanal and bespoke, and also work differently… somehow. That’s a lot of rhetorical escape routes away from nuanced understanding of a complex topic.

      These threads are the weirdest shit, because people’s hot takes sound alike for a few sentences, then veer off for wildly incompatible reasons. ‘It’ll never be art, quality is irrelevant!’ And then choose your own adventure: (a) ‘Because the FOSS community won’t replace this monolithic project.’ (b) ‘Because rendering isn’t art.’ (c) ‘Because only a divine human soul can imbue a political cartoon with Meaning.™’ (d) ‘Because you need more than a prompt. What’s a control net?’ (e) ‘Because we all care deeply about copyright.’

      • Hacksaw
        1 month ago

        I’m just trying to engage with you here when I talk about local models. So far I haven’t seen any implementations that are meaningfully different from capitalist-owned AI. Most local models train using the corporate LLMs. They don’t “do what you tell them”; you can’t separate the LLM’s training from its output. And when the training is based on corporate models, which are heavily biased in favour of corporate/capitalist desires, then your local model has the same biases. All I’m saying is that local models have the same problems of alienation because they’re trained off of corporate models.

        You also didn’t capture my arguments in your CYOA. AI is fundamentally alienating because instead of communicating with another human being you’re communicating with a machine that caters to corporate/capitalist greed.

        • mindbleach@sh.itjust.works
          1 month ago

          That’s (a).

          You are still describing a solvable problem. One I’m not even sure is valid, given the unguided training of most models. Input data takes the commercial-fishing approach: you put a net in the water and scoop up everything.

          They can’t convince these things there’s three Rs in “strawberry.” Any bias more complicated than Elmo snorting a brick of K, and putting “you are mechahitler” into the prompt, is probably an accurate reflection of the zeitgeist. If you think exhibiting societal values disqualifies a text from being art… there isn’t any. Everything exists within a context.

          Since that hasn’t stopped anyone from making Judy Hopps x Tracer t4t mpreg porn, I think we can say, these models are not constrained by their origins.