- cross-posted to:
- [email protected]
Illusion — Why do we keep believing that AI will solve the climate crisis (which it is facilitating), get rid of poverty (on which it is heavily relying), and unleash the full potential of human creativity (which it is undermining)?
Who tf thinks AI will solve climate change? I’ve never heard that, ever.
People selling AI certainly would like you to believe it’ll fix that and everything else that ails ya
Yeah, this was more or less Ray Kurzweil’s take all the way back in 2005.
iirc, there were some statements from companies (Microsoft?) that we won’t have to worry about AI’s effect on climate change because it’ll also come up with the solutions
We’ve had the tech to drastically cut power consumption for a few years now; it’s just a matter of adapting existing hardware to include it.
There’s a company, MythicAI, which found that using analog computers (ones built specifically to sift through .CKPT models, for example) drastically cuts down energy usage while staying consistently 98-99% accurate. It works by taking a digital request, converting it to an analog signal, processing that signal, then converting the result back to a digital signal that’s sent to the computer to finish the task.
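A minimal sketch of the idea (not MythicAI’s actual hardware or toolchain; the bit width and noise level are assumptions for illustration): model the digital → analog → digital round trip as a quantized matrix-vector multiply with a bit of noise, and see how close it stays to the exact digital result.

```python
import numpy as np

rng = np.random.default_rng(0)

def analog_matvec(weights, x, bits=8, noise_std=0.01):
    """Simulate a digital -> analog -> digital matrix-vector multiply."""
    # "DAC" step: quantize weights to the resolution of the analog array
    scale = np.abs(weights).max() / (2 ** (bits - 1) - 1)
    w_analog = np.round(weights / scale) * scale
    # Analog compute step: the physical multiply-accumulate picks up noise
    y = w_analog @ x
    y = y + rng.normal(0.0, noise_std * np.abs(y).max(), size=y.shape)
    # An "ADC" step would re-digitize y; omitted here for brevity
    return y

w = rng.normal(size=(256, 512))
x = rng.normal(size=512)

exact = w @ x
approx = analog_matvec(w, x)
rel_err = np.linalg.norm(exact - approx) / np.linalg.norm(exact)
print(f"relative error: {rel_err:.2%}")  # roughly a percent or two, i.e. ~98-99% accurate
```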
In my experience, AI is only drawing 350+ watts when it is sifting through the model, it ramps up and ramps down consistently based on when the GPU is utilizing the CUDA cores and VRAM, which are when the program is processing an image or the text response (Stable Diffusion and KoboldAI). Outside of that, you can keep stable diffusion open all day idle and power draw is marginally higher, if it even is.
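If you want to check that ramp-up/ramp-down behaviour yourself, a rough sketch (assuming an NVIDIA card with nvidia-smi on the PATH; the one-second interval is arbitrary) is to poll the reported board power while Stable Diffusion or KoboldAI is generating:

```python
import subprocess
import time

def gpu_power_watts():
    # power.draw is a standard nvidia-smi query field; output is one line per GPU
    out = subprocess.run(
        ["nvidia-smi", "--query-gpu=power.draw", "--format=csv,noheader,nounits"],
        capture_output=True, text=True, check=True,
    )
    return float(out.stdout.strip().splitlines()[0])

if __name__ == "__main__":
    # Watch the draw spike while an image or text response is being generated,
    # then fall back towards idle between requests.
    while True:
        print(f"{time.strftime('%H:%M:%S')}  {gpu_power_watts():6.1f} W")
        time.sleep(1)
```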
So according to MythicAI, the groundwork is there. Computers just need an analog computer attachment that removes the workload from the GPU.
The thing is… I’m not sure how popular it will become. 1) These aren’t widely available; you have to order them from the company and get a quote, and who knows if you can even order just one. 2) If you do get one, it’s likely not going to just pop into a basic user’s Windows install running Stable Diffusion; it probably expects server-grade hardware (which is where the majority of the power consumption comes from, so good for business, but consumer availability would be nice). And, most importantly, 3) NVIDIA has sunk so much money into GPU-powered AI. If throwing 1,000 watts at CUDA stops making strides, they may try to bury this competition. NVIDIA has a lot of money riding on the AI wave, and word getting out that another company can cut both the hardware cost and the running cost, removing the need for multiple 4090s while delivering more accuracy per watt, is a direct threat to that.
Oh, and 4) MythicAI is specifically geared towards real-time camera AI tracking, so they’re likely an evil surveillance company, and the hardware itself isn’t geared towards all-around AI but built with specific models in mind. That isn’t inherently an issue; it just circles back to point 2): it’s not only the hardware that will be a hassle, but the models themselves too.
I got a mail about that from my union recently… I think they had some talks about it.
Same kind of people that think we can effectively pump enough CO2 out of the air, and other idiotic climate solution magic. Wishers that want to keep consumption at all time highs, basically.
I’ve heard that from plenty of people
I mean, it’s good for climate modelling and prediction, but it certainly uses a lot of power to do so, which is the main issue with AI.
Only if it changes the laws of physics. Which I suppose could be in the realm of possibility, since none of us could outthink an ASI. I imagine three outcomes (assuming we get to ASI): it determines that no, silly humans, the math says you’re too far gone. Or, yes, it can develop X and Y beyond our comprehension to change the state of reality and make things better in some or all ways. And lastly, it says it found the problem and the solution, and the problem is that the Earth is contaminated with humans who consume and pollute too much. And it is deploying the solution now.
I forgot the fourth outcome, which I’ve seen in a few places (satirically, but it could be true): the ASI analyses what we’ve done, tries to figure out what could be done to help, and then kills itself out of frustration, anger, sadness, etc.
Literally Bill Gates.
There was a Twitter post about great uses for AI that aren’t being developed. The one I aligned with was scraping grocery store ads and creating a shopping list based on the best prices and personal preferences.
AI is solving problems for the business class; they are trying to stop paying people. AI has use cases that would actually make our lives better, but those are antithetical to capitalistic companies, which would likely try to stop any AI use that undermines their bottom line.
You don’t need AI for that. All it takes is some standardized markup like schema.org and a discoverable price list page that can be read and understood by everyone.
We already had something similar with RSS, where you subscribe to your favorite blogs and forums, and the RSS reader on your computer would tell you which sites have new posts, so you don’t need to scan all of them each day. For some reason people stopped using RSS, and instead published their stuff (or notifications about new posts) on Facebook, Twitter, etc.
The same system could be adapted for grocery price lists. However, the big brands would never do that, because then it would be very easy to discover which products suddenly got more expensive.
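For illustration, a minimal sketch of what such a discoverable price list could look like, using schema.org Product/Offer vocabulary serialized as JSON-LD from Python (the items and prices are made up):

```python
import json

# One entry per product; a real page would embed this in a
# <script type="application/ld+json"> tag.
price_list = [
    {
        "@context": "https://schema.org",
        "@type": "Product",
        "name": "Oat milk, 1 L",
        "offers": {
            "@type": "Offer",
            "price": "2.49",
            "priceCurrency": "USD",
            "availability": "https://schema.org/InStock",
        },
    },
    {
        "@context": "https://schema.org",
        "@type": "Product",
        "name": "Ground turkey, 1 lb",
        "offers": {"@type": "Offer", "price": "4.99", "priceCurrency": "USD"},
    },
]

# A shopping-list tool could fetch this from every store and compare prices
# directly, with no scraping or LLM needed.
print(json.dumps(price_list, indent=2))
```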
All it takes is some standardized markup like schema.org
Which is the problem AI is solving here - getting every supermarket chain to agree on this (when it’s actually against their interests to do so, since it increases price transparency) would be an impossible task, but AI can get around this requirement with minimal extra effort.
I’m hardly an AI evangelist, but this is actually one of the rare situations where it’s a good fit.
This would be a bad approach, because you are essentially trying to brute-force your way around a roadblock (no supported open data format) that the supermarket intentionally designed. It would be easy for them to block your bot with CAPTCHAs, rate limits, or IP blocks, or to just sue you.
Do you have the link? I would like to go through other use cases as well.
Is that not possible for the community to build with locally hosted LLMs?
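As a rough sketch of what that could look like: assuming a locally hosted model behind an OpenAI-compatible chat endpoint (llama.cpp’s server and Ollama can both provide one; the URL, port, and model name below are placeholders), extracting structured prices from an ad’s text might look like this:

```python
import json
import urllib.request

AD_TEXT = "Red grapes $1.99/lb. Large eggs, dozen, $2.79. Oat milk 2 for $6."

payload = {
    "model": "local-model",  # placeholder; use whatever model your server hosts
    "messages": [
        {
            "role": "system",
            "content": "Extract items and prices from the ad as a JSON list of "
                       '{"item": ..., "price": ...} objects. Reply with JSON only.',
        },
        {"role": "user", "content": AD_TEXT},
    ],
}

req = urllib.request.Request(
    "http://localhost:8080/v1/chat/completions",  # placeholder URL/port
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    reply = json.load(resp)

# The model's answer (ideally a JSON list of item/price pairs)
print(reply["choices"][0]["message"]["content"])
```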
I’m not sure how that’s a useful thing besides convincing people to spend money on something they don’t need? Like, you either need a product at the grocery store or you don’t. I don’t need corpo bullshit ad bots to beg me to buy shit.
I don’t think I explained it well.
I shop at 4, maybe 5, different grocery stores. For some products I have preferences, for others I don’t.
For example, say this is my grocery list for the week:
- grapes (never buy at Walmart)
- composition notebook
- ground turkey (only buy at Wegmans, unless there’s a sale)
- oat milk
- chocolate chips
- eggs
I want an AI to scrape every grocery store’s weekly ad or their website along with any coupons that are available, and determine the best price and, based on patterns of sales, what I should wait on and what time of day I should shop.
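Once the prices are scraped, the comparison step itself is simple. A minimal sketch (store names, prices, and the sale flag are invented; the rules mirror the example list above) that picks the cheapest store per item while honoring per-item preferences:

```python
# item -> {store: price}; store names and prices here are invented examples
prices = {
    "grapes":        {"Walmart": 1.49, "Wegmans": 1.99, "Aldi": 1.79},
    "ground turkey": {"Walmart": 3.99, "Wegmans": 4.99, "Aldi": 4.49},
    "oat milk":      {"Walmart": 2.99, "Wegmans": 3.49, "Aldi": 2.79},
    "eggs":          {"Walmart": 2.59, "Wegmans": 2.99, "Aldi": 2.49},
}

never_buy_at = {"grapes": {"Walmart"}}       # never buy grapes at Walmart
only_buy_at = {"ground turkey": "Wegmans"}   # only Wegmans, unless there's a sale
on_sale = set()                              # e.g. {("ground turkey", "Aldi")}

shopping_list = {}
for item, by_store in prices.items():
    # Keep only the stores allowed for this item by the preference rules
    candidates = {
        store: price
        for store, price in by_store.items()
        if store not in never_buy_at.get(item, set())
        and (
            item not in only_buy_at
            or store == only_buy_at[item]
            or (item, store) in on_sale
        )
    }
    best_store = min(candidates, key=candidates.get)
    shopping_list[item] = (best_store, candidates[best_store])

for item, (store, price) in shopping_list.items():
    print(f"{item}: {store} (${price:.2f})")
```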
I think you explained it fine, it just doesn’t make sense to people who only go to the same place.
It doesn’t make sense to me, and I’ve got three grocery stores and a Walmart within miles of me.
Do you regularly use their ads to compare prices and select what to buy at each one, or generally stick to one place with a few trips to another one?
I shop for convenience and brands I prefer. One store has some items I prefer over the others. That’s to say I shop like a normal person.
Normal based on what? The ads exist because plenty of normal people use them to decide where to buy things, or certain items. If they didn’t bring people in, the stores wouldn’t bother.
You explained it fine. And I agree it’s a great use case; I’ve heard so many like this. It’s potentially a great interface to a lot of things, and I think that’s why there’s a big push to make people shit on it. Seems like Lemmy is a foothold for hating on AI. I don’t think the problem is your explanation, it’s just cynical people looking for something to hate on. Just look at the daily posts here about AI. They’re all similar to headlines I see in places like r/Canada towards immigrants:
- They’re taking our jobs
- They’re assaulting our women
- Think of the children
- Our culture is gone
Every headline is some variation of that followed by toxic takes towards the subject.
Really disappointing to see really cool new tech with lots of potential get shit on by a place like Lemmy, where I thought people were more open to advancements in tech.
I got what you were saying, it’s just not something I can imagine ever caring that much about. Either I need a notebook or I don’t. I’m out of grapes and want some or I don’t. I don’t need a shoddy piece of software to tell me any of those things. And attempting to micro optimize for sale events? Like, this just isn’t a sensible way to live your life.
You have failed to understand it AGAIN. Good job failing.
It does NOT tell you what the list is. Period. Stop assuming it will advertise to you. You are repeatedly describing how you have FAILED to understand what it’d even be attempting…
Nobody said it made the list for you. The moronic idea was that it would tell you when to get these things. Which is idiotic because YOU ALREADY NEED IT.
If you relied on shitty software less your reading comprehension would be better.
Plus hiding market prices behind apps… Give me that data peasants
What it will actually do is transfer more wealth to the top. That is what these projects do.
Illusion — Why do we keep believing that AI will solve the climate crisis (which it is facilitating), get rid of poverty (on which it is heavily relying), and unleash the full potential of human creativity (which it is undermining)?
Because we keep reading sensationalist advertisements presented as articles instead of experimenting with it ourselves and understanding what it is.
And unfortunately, this article is also just a response to media clickbait, not the discussion piece it tries to look like.
And becomes new clickbait in the process.
It could potentially one day do that (except the unleash-creativity part). Issue is, none of that is profitable, and even if it was, AI that can manage that is still a long way off and sure as hell won’t be found by a for-profit venture.