

You ask as if people know what SDF is.


Yeah, I was thinking along the same lines. I was disappointed when I read that she has been eligible for day release for so long and hasn’t applied for it.


My bad, I started downloading The Lord of the Rings movies - Extended Edition. Sorry!


“After the 2020 stolen election, I was the FIRST and ONLY reporter to cover the election discrepancies in Pennsylvania at a public senate hearing. THE ONLY.”
I wonder if he’ll ever figure out what that means…


How do you tag yogthos?
If you care, it’s been around since the 1970s and was coined by Gene Amdahl after he left IBM. Certainly not just a crypto-bro thing.


I guess they don’t want the press talking about health care subsidies and protections.
Well, I told you to break it down and explain it, but instead you just continue to be condescending. I thought maybe calling out your arrogance would get you to check yourself, but it did not.
I can prove to you that other LLMs don’t make the same error, so please explain how it’s the equivalent of using a screwdriver to hammer nails to misspell a word in a question to an LLM. And then explain why it’s wrong to point out errors made by LLMs. Or if I’ve missed something about what you were going to break down point by point, please explain.
And just so we’re clear, I do have a degree in computer science, extensive experience with machine learning, and probably know more than you think about how LLMs work. Maybe I don’t know as much as you, there’s no way for me to know that, but stop talking to me like I’m a child.
Go ahead, I’d love to see what you have to say. I’d much prefer that to an arrogant implication of my stupidity.
I…assumed it was a community to point out where AI should work, but doesn’t.
…and that’s what’s happening in this case. You’re acting like it’s completely impossible for an LLM to go down a path where it handles the fact that the question contained a misspelling, because it isn’t AGI. In fact, to be useful an LLM should handle this better. It certainly shouldn’t start making up weird unrelated connections.
Also, it’s not impossible, and I guarantee that some LLMs would give a more appropriate answer. But this particular LLM couldn’t handle it, and went completely off the rails. Why are we not allowed to make fun of that? Why are you defending it from ridicule?
I don’t make fun of my screwdriver because it’s horrible at hammering in nails.
Holy strawman. We aren’t asking the LLM to be a different tool. The LLM is supposed to handle language, and a simple misspelling of a homophone caused it to misunderstand the question completely and sent it down a path of calling completely different words “homophones”. Yeah I wouldn’t make fun of my screwdriver for not being able to hammer nails, but I would be pretty annoyed if it constantly slipped due to slight imperfections in how screws were manufactured.
I think you’re holding a fundamental misunderstanding of what today’s LLMs are.
I think you have severe misunderstanding of what this community is.
An error in a question should either result in correcting the question or indicating that the question doesn’t make sense.
Calling “straight” and “sound” homophones is a pure demonstration of the LLM’s ignorance. Maybe it got fooled by “straight” and “strait” being homophones and somehow crossed wires, but that’s actually the point. It is ignorant, despite how “intelligent” it might sound.
I think kids might get more excited about math if we showed them how it was used to make video games.
Yeah, I did the same. I love how it broke the AI even harder though.


To be fair, this wasn’t published by The Onion…


But there’s only one 50. What if you pick 0?


He’s an extremely stable genius.
When I was exploring ruins in Greece, I was struck by all the graffiti from people in the 1800s. I mean, I know it wasn’t a new thing, but 1800s “explorers” really seem to have had a thing for carving their names into ancient structures.