I don’t usually keep the author’s name in the suggested hed, but here I think he’s recognizable enough that it adds value.

I am a science-fiction writer, which means that my job is to make up futuristic parables about our current techno-social arrangements to interrogate not just what a gadget does, but who it does it for, and who it does it to.

What I do not do is predict the future. No one can predict the future, which is a good thing, since if the future were predictable, that would mean we couldn’t change it.

Now, not everyone understands the distinction. They think science-fiction writers are oracles. Even some of my colleagues labor under the delusion that we can “see the future”.

Then there are science-fiction fans who believe that they are reading the future. A depressing number of those people appear to have become AI bros. These guys can’t shut up about the day that their spicy autocomplete machine will wake up and turn us all into paperclips, and their chatter has led many confused journalists and conference organizers to try to get me to comment on the future of AI.

That’s something I used to strenuously resist doing, because I wasted two years of my life explaining patiently and repeatedly why I thought crypto was stupid, and getting relentlessly bollocked by cryptocurrency cultists who at first insisted that I just didn’t understand crypto. And then, when I made it clear that I did understand crypto, they insisted that I must be a paid shill.

This is literally what happens when you argue with Scientologists, and life is just too short. That said, people would not stop asking – so I’m going to explain what I think about AI and how to be a good AI critic. By which I mean: “How to be a critic whose criticism inflicts maximum damage on the parts of AI that are doing the most harm.”

  • chicken@lemmy.dbzer0.com

    So what is the alternative? A lot of artists and their allies think they have an answer: they say we should extend copyright to cover the activities associated with training a model.

    And I am here to tell you they are wrong. Wrong because this would represent a massive expansion of copyright over activities that are currently permitted – for good reason.

    He goes on to say that prohibiting AI works from being copyrighted and worker collective bargaining are better solutions, and I really agree with the arguments for this. I also liked this bit about how some of what remains past the bubble could be useful:

    And we will have the open-source models that run on commodity hardware, AI tools that can do a lot of useful stuff, like transcribing audio and video; describing images; summarizing documents; and automating a lot of labor-intensive graphic editing – such as removing backgrounds or airbrushing passersby out of photos. These will run on our laptops and phones, and open-source hackers will find ways to push them to do things their makers never dreamed of.

  • Pat

That’s a great article, thanks for posting.

Cory has a way of getting right to the heart of things, and he does so marvellously here. Great explanation of why the investments continue despite the dogshit economics of this industry.