Fuck this shit, why does every fucking thing need an LLM?

  • averyminya@beehaw.org · 5 months ago

    That’s simply not true. There are ways to drastically reduce energy usage while increasing efficiency by offloading the work. A company called Mythic AI has been working on an analog processor that does the model’s matrix math in analog instead of digitally. On GPUs that’s the power-hungry part; for example, a PC with an NVIDIA RTX 3080 will typically run at about 350 W under load.

    Their claim is that these analog chips use 1/100th of the energy a GPU needs. There’s a video from Veritasium that goes over the details. It’s genuinely effective, and that was a few years ago now, before whatever progress they’ve made with their recent funding. It looks like they actually have products available for inquiry now, too.
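    As a rough sanity check on what “1/100th” actually buys you, here’s a back-of-the-envelope sketch in Python. The 350 W figure is the GPU number from above; the 8-hour workload length is just an assumption for illustration:

    ```python
    # Back-of-the-envelope energy comparison: GPU vs. analog accelerator.
    # Assumptions: 350 W GPU draw (figure above), the 1/100th claim,
    # and an arbitrary 8-hour inference workload.
    GPU_WATTS = 350
    ANALOG_WATTS = GPU_WATTS / 100            # ~3.5 W, per the 1/100th claim
    HOURS = 8                                 # assumed workload length

    gpu_kwh = GPU_WATTS * HOURS / 1000        # 2.8 kWh
    analog_kwh = ANALOG_WATTS * HOURS / 1000  # 0.028 kWh

    print(f"GPU:    {gpu_kwh:.3f} kWh")
    print(f"Analog: {analog_kwh:.3f} kWh ({gpu_kwh / analog_kwh:.0f}x less)")
    ```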

    It doesn’t seem to be at the consumer level yet, unless you want to use servers for AI instead of your home computer, but it’s progress. Here’s the thing: I’m not particularly for our current implementation of AI, but I don’t think we should be entirely against all of it either. There are clearly plenty of benefits that people see from these models, so giving companies like Google every possible option to severely cut their energy consumption seems like the reasonable path forward.

    The drawbacks of LLMs and generative AI, taken on their own, don’t mean the technology will stop getting used. It isn’t going anywhere (as in, people will keep using it), so making it more efficient is the obvious way to mitigate the waste. You can advocate for prohibiting AI outright, but honestly that’s more reckless than advocating for holding businesses’ AI usage to a specific energy target. Forcing these companies to retrofit their servers to run at something ridiculous like 30 W per rack would benefit both them and us: they won’t pay as much for energy, and the rest of us will have less of it wasted.
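    To put rough numbers on “won’t pay as much for energy”, here’s a sketch of the yearly electricity bill for one rack. The 30 W target is the figure above; the ~10 kW baseline and $0.12/kWh rate are assumptions I’m pulling in purely for illustration:

    ```python
    # Yearly electricity cost per rack, before and after the hypothetical retrofit.
    # Assumptions: ~10 kW per rack today (ballpark), $0.12/kWh, running 24/7.
    RATE = 0.12                  # USD per kWh (assumed)
    HOURS_PER_YEAR = 24 * 365

    def yearly_cost(watts: float) -> float:
        """Electricity cost in USD for running a constant load for one year."""
        return watts / 1000 * HOURS_PER_YEAR * RATE

    print(f"10 kW rack: ${yearly_cost(10_000):,.0f}/yr")  # ~$10,512
    print(f"30 W rack:  ${yearly_cost(30):,.2f}/yr")      # ~$31.54
    ```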

    Wishful thinking, of course, but my point is that energy-efficient AI, fortunately or unfortunately, exists and will continue to. We can already run “AI” on a Raspberry Pi 4, which draws what, 9 watts? This technology gets more developed every year, and while I’d be extremely surprised to see a Pi 4 on its own running a subjectively useful LLM, I can imagine a setup that uses a Pi plus some offloading tech to achieve reasonable results.
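    For what running “AI” on a Pi can look like in practice, here’s a minimal sketch using llama-cpp-python, which runs small quantized models on the Pi 4’s CPU. The model filename is a placeholder; any small GGUF model that fits in the Pi’s RAM would do:

    ```python
    # Minimal sketch: run a small quantized LLM on a Raspberry Pi 4.
    # Requires: pip install llama-cpp-python, plus a downloaded GGUF model.
    from llama_cpp import Llama

    llm = Llama(
        model_path="./tinyllama-1.1b-chat.Q4_K_M.gguf",  # placeholder filename
        n_ctx=512,      # small context window to keep memory use down
        n_threads=4,    # the Pi 4 has 4 cores
    )

    out = llm("Q: What is a watt? A:", max_tokens=48)
    print(out["choices"][0]["text"])
    ```

    Don’t expect it to be fast, but it runs, which is the point.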

    I’m personally pretty fine with regular people using AI on their own computers in whatever way suits them, as long as they aren’t trying to sell the results. The energy consumption isn’t ideal, but it’s a droplet compared to the servers these companies run. We should make every effort to increase the efficiency of this tech, if only because it seems insane to me to pretend AI will just disappear, or to let this huge energy suck keep existing while we hope it starts to fade.

    TL;DR: offload GPU work to analog chips, and force companies to be more efficient, because hoping AI will just disappear is reckless.