• MudMan@fedia.io · 6 days ago

    That is a weird proposal.

    It’s definitely weird that everyone is panicking about data center processing costs but not about the exact same hardware powering high-end gaming devices, whose power draw has skyrocketed from 100W to 450W in a few years. But ultimately, if you want to run a model locally, you can run a model locally. I’m not sure how you’d regulate that; it’s just software.

    Hell, I don’t even think distributing the load is a terrible idea; it’s just that the models you can run locally in 40 TOPS kinda suck compared to the order of magnitude more processing you get on modern GPUs.
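    For what it’s worth, “run a model locally” really is just a couple of lines of software these days. A minimal sketch using Hugging Face transformers (the model choice and settings here are illustrative picks on my part, not anything from this thread):

    ```python
    # Minimal local-inference sketch: downloads a small instruct model once,
    # then generates entirely on your own CPU/GPU -- no server round-trip.
    from transformers import pipeline

    # Model name is an assumption for illustration; any small instruct model works.
    chat = pipeline("text-generation", model="Qwen/Qwen2.5-0.5B-Instruct")

    messages = [{"role": "user", "content": "Why is local inference slower than a data-center GPU?"}]
    result = chat(messages, max_new_tokens=128)

    # With chat-style input, the pipeline returns the whole conversation;
    # the last message is the model's reply.
    print(result[0]["generated_text"][-1]["content"])
    ```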

    • This is fine🔥🐶☕🔥@lemmy.world · 6 days ago

      I’m not talking about Stable Diffusion or anything like that.

      I meant that whatever Twitter runs, or any similar chatbots, or the AI assistant features of apps, should run server-side instead of putting the load on customers’ devices.

      • MudMan@fedia.io · 6 days ago

        Yeah, no, I get the spirit of the thing. I’m just saying, for one, that it wouldn’t be a bad idea if it worked; it just doesn’t at the moment. But more importantly, regulations don’t work like that. You can’t just make rules that go “hey, you guys specifically have to run this software on a server, specifically”. You can already run assistants locally using a whole bunch of downloadable models; it’d be a huge overreach to tell people and companies that they CAN make the software and run it, but only remotely. That’s just… not how rules and regulations are put together.