is that how you became lady butterfly?
lime!
O(1) means the cost is bounded by a constant no matter the input size. it doesn’t by itself mean worst and best case are the same: a hash table lookup is O(1) on average but O(n) in the worst case.
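a quick illustration of the distinction (python, with made-up data): a dict lookup costs roughly the same however many entries the dict holds, while scanning a list grows with its length.

```python
# a dict lookup is O(1) on average: one hash computation plus a probe,
# regardless of how many entries the dict holds
ages = {"ada": 36, "grace": 85, "linus": 55}
print(ages["grace"])  # 85

# contrast: a linear scan over a list is O(n), cost grows with length
pairs = list(ages.items())
print(next(age for name, age in pairs if name == "grace"))  # 85
```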
lime!@feddit.nu to Lemmy Shitpost@lemmy.world • People were no less thirsty back then • 22 hours ago
imagining they were a polycule that broke up makes the movie watchable
lime!@feddit.nu to Lemmy Shitpost@lemmy.world • People were no less thirsty back then • 22 hours ago
thanks, obviouspornalt, i trust your knowledge on this.
not mine, i stole it from flickr :P also i’m now not sure that it’s actually midnight, because there’s a photosphere of that exact place on google maps and the compass is confusing me. i think the wall is to the southwest, which would make the sun to the north-northwest but the compass shows the wall as being east-west. annoyingly the shed isn’t visible on the satellite image.
…and now i’ve spent way too long looking at pictures of a place i haven’t been to for almost 20 years.
lime!@feddit.nu to Lemmy Shitpost@lemmy.world • People were no less thirsty back then • 23 hours ago
is that the movie where meryl streep doesn’t know who the father of her child is because she was conceived in a foursome with three dudes who just took turns railing her?
tomorrow is going to be the first sunset in kiruna since may.
here’s an old photo taken sometime after midnight on the 6th of june 2016:
oh that’s interesting, i assumed that it wasn’t actually being used despite being everywhere, but i’ve not seen any stats.
yes, the models are bigger, but Wh/prompt is still the metric to look at. 300W for 3 seconds is the same amount of energy as 14.3kW for about 0.063 seconds. i don’t know how fast a machine like that can spit out a single response because right now i’m assuming they’re time-slicing them to fuck, but at least gpt4o through duck.ai responds in about the same time.
if running an 800GB model (which i think is about where gpt4o is) takes the same amount of time to respond as me running an 8GB model (i know the comparison is naive) then it would be about… twice as efficient? 0.25Wh for me compared to 11.9Wh per 100 responses (0.119Wh each) for them. and that’s without knowing how many conversations one of those things can carry on at the same time.
edit: also, this is me ignoring for the sake of the discussion that the training is where all the energy use comes from.
it takes my 7900XTX about three seconds to generate a longish reply when running at 300W, so that’s 0.25Wh for a single response to a “thank you”, or four “thank yous” per Wh. so he’d have to consistently send almost three million messages a day just containing “thank you”.
and that’s assuming these huge server farms have the same efficiency per watt as my single GPU.
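the back-of-the-envelope numbers in these comments can be checked in a few lines (the 300W / 3s figures are from the thread; the 14.3kW draw is a hypothetical used only to show the energy equivalence):

```python
# energy per LLM response in watt-hours: watts * seconds / 3600
def wh_per_response(watts: float, seconds: float) -> float:
    return watts * seconds / 3600

# a 7900XTX drawing 300 W for ~3 s per reply
local = wh_per_response(300, 3)
print(local)          # 0.25 Wh per reply
print(1 / local)      # 4.0 replies per Wh

# the same 900 J drawn at a hypothetical 14.3 kW lasts only ~0.063 s
seconds_at_14_3kw = 300 * 3 / 14_300
print(round(seconds_at_14_3kw, 3))  # 0.063
```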
this is an old meme about yanderedev
i only hear the asmr version
lime!@feddit.nu to Fuck Cars@lemmy.world • Techbros invented trains...again. Except much more complex, much more dangerous, and with much less capacity. • 1 day ago
i mean, before they can show something that will actually work, it may as well not exist.
is “mofa” short for “Motorfahrrad”? i’ve seen the term a few times in the past few days but never before that. in that case they are what i would know as a “klass II moped”, but here that category also includes pedelecs.
love the pseudo-homologated rules for eu motor vehicles.