





I guess I don’t understand your point, in that case. Like, what benefit is there to using AI when you don’t need to? Learning how to set up agentic workflows, sure, but using AI right now will just make you dumber/less skilled for little to no benefit.
I legitimately thought I had something else to say, but I can see what you’re saying. And here are receipts for what I’m saying about the brigading downvotes:
This comment’s downvotes https://lemmy.world/pictrs/image/7993031c-e58e-437c-9311-dc8b3fb0192b.jpeg
And another, earlier comment’s downvotes https://lemmy.world/pictrs/image/0e3dd1c5-30a2-4a13-9069-568559efb462.jpeg
Also, lol, I didn’t realize that my username could be interpreted that way. That… might explain some interactions I’ve had. Good point.
That’s what I said, yes
Oh, the downvotes are seemingly made by one of the people spamming posts on !comicstrips@lemmy.world. It looks like they used 7 alts to downvote all my recent comments. Which is shitty but mostly harmless since karma isn’t a thing.
I’m assuming it’s one person because all of the accounts are less than two days old, and they downvote all my comments with one account, then a few minutes later downvote them all again with the next.
Holy shit. They used three accounts to downvote all the comments I’ve made in the last day or so (all comments above the posts in my history). That’s incredibly pathetic. Hahaha!
Edit: lol, 7 now. Like, what the fuck is this accomplishing?
These all predate generative models, though these types of images-within-images are something that they’re very good at making (mostly cause they don’t give a fuck about the logic of the scene)
It doesn’t learn from interactions, no matter the scale. Each model is static; it only appears to follow a conversation because the conversation is literally being fed to it as a prompt (you write something, it responds, and then your next message gets sent along with the entire prior exchange). It’s why conversations have length limits and why the model’s responses slow down the longer the conversation goes on.
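To make that concrete, here’s a toy sketch of that loop. The names (`fake_model`, `chat_turn`) are made up for illustration, and the stand-in “model” just echoes how much context it received, but the shape is the same one chat frontends use: the whole history gets re-sent every single turn.

```python
# Toy illustration (hypothetical names, not a real API): the "model" keeps
# no state between calls, so every turn re-sends the entire conversation.
def fake_model(prompt: str) -> str:
    # Stand-in for an LLM: reports how many user messages were in its prompt.
    return f"reply to {prompt.count('user:')} user messages"

def chat_turn(history: list[str], user_message: str) -> list[str]:
    history = history + [f"user: {user_message}"]
    prompt = "\n".join(history)  # the WHOLE conversation, every time
    history.append(f"assistant: {fake_model(prompt)}")
    return history

history: list[str] = []
history = chat_turn(history, "hello")
history = chat_turn(history, "tell me more")
# The prompt grows with every exchange, which is why long chats slow down
# and eventually hit the context limit.
```

Nothing here “remembers” anything; the growing `history` list doing all the work is the entire trick.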
Training is done by feeding in new data and then tweaking the model’s outputs, sometimes using other models with different weights as judges. While data from conversations could be used as training data for the next model, you “teaching” it definitely won’t do anything in the grand scheme of things. It doesn’t learn; it predicts the next token based on fixed weights. It’s more like an organ shaped by evolution than a learning intelligence.
2022, so no.
Sorry if that was a joke
… you linked the wiki under a post that jokingly stated that Einstein was confounded by the invention of the bomb. Without additional context, it seemed to me like you were implying he was blanket endorsing the creation of the bomb. So I put that info out there to clarify his stance without anyone needing to open an additional link.
I wasn’t being sassy or nothing, just adding context.
… Okay, a little bit sassy, but that’s just how I roll, no shade intended.
You don’t need to justify the “/s”. Just use it
Yes, to beat the Nazis to the invention. Cause there’s nothing worse than Nazis with nukes


Birds and snakes, an aeroplane
Damn. This is 90% pathetic and the rest is cringe. This is like advanced boomer humor, but for idiots.


Yes, but have you considered the optics of saying we’re working 72 hour weeks on these issues?
Wealthy men smelled the tide turning and funded the army in exchange for the accolades of leading during the time of revolution. They hijacked the movement to make a country where they ruled. It’s why they had such disdain for the rabble.
Damn. Much comeback. Very wow.