ylai@lemmy.ml to Technology@lemmy.world · English · 10 months ago
VR Headsets Are Approaching the Eye’s Resolution Limits (spectrum.ieee.org)
cross-posted to: [email protected]
KairuByte@lemmy.dbzer0.com · 9 months ago (edited)
They make glaring errors in logic and confidently state things that are not true. But their whole “deal” is writing proper sentences based on predictive models. They don’t make mistakes like the excerpt highlighted.
drislands@lemmy.world · 9 months ago
Y’know what, that’s a fair point. Though I’m not the original commenter from the top, heh.
KairuByte@lemmy.dbzer0.com · 9 months ago
Ah, apologies. I’m terrible at tracking usernames; I’ll edit for clarity.
drislands@lemmy.world · 9 months ago
No worries, mate. I appreciate the correction regardless.
Garbanzo@lemmy.world · 9 months ago
I’m imagining that the first output didn’t cover everything they wanted, so they tweaked it, pasted the results together, and fucked it up.
KairuByte@lemmy.dbzer0.com · 9 months ago
That could easily happen when reworking their own writing as well, though.
GlitterInfection@lemmy.world · 9 months ago
Pretty soon, glaring errors like this will be the only way to tell human from LLM writing. Then, soon after that, the LLMs will start producing glaring grammatical errors to match the humans.