☆ Yσɠƚԋσʂ ☆@lemmy.ml to Technology@lemmy.ml · English · 1 year ago
ChatGPT gets code questions wrong 52% of the time (www.theregister.com)
Fuckass [none/use name]@hexbear.net · 1 year ago
It’s incredible how LLMs started off as almost miraculous software that generated impressive answers, but now it’s just House Server of Leaves
HiddenLayer5@lemmy.ml · 1 year ago
Because it was marketing hype (read: marketing propaganda).
GBU_28@lemm.ee · 1 year ago
The trick is that you have to correct for the hallucinations and teach it to revert back to a healthy path when it goes off course. This isn’t possible with current consumer tools.
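As a rough illustration of what “correcting for the hallucinations” could mean in practice, here is a minimal sketch of a retry loop that checks the model’s output and feeds the failure back as context. The function names `ask_llm` and `run_tests` are hypothetical placeholders, not real APIs from any particular library, and the whole thing is just one possible reading of the comment, not a description of how any existing tool works.

```python
# Minimal sketch: re-prompt a model until its generated code passes a check,
# feeding the concrete failure back so it can "revert to a healthy path".
# ask_llm and run_tests are hypothetical stand-ins, not real library calls.

def ask_llm(prompt: str) -> str:
    """Placeholder for a call to whatever model or API you actually use."""
    raise NotImplementedError("plug in your own model call here")

def run_tests(code: str) -> tuple[bool, str]:
    """Placeholder check: here it only tries to compile the generated code."""
    try:
        compile(code, "<generated>", "exec")
        return True, ""
    except SyntaxError as err:
        return False, str(err)

def generate_with_correction(task: str, max_attempts: int = 3) -> str | None:
    prompt = task
    for _ in range(max_attempts):
        code = ask_llm(prompt)
        ok, error = run_tests(code)
        if ok:
            return code
        # Don't accept the hallucinated answer; send the failure back instead.
        prompt = f"{task}\n\nYour previous answer failed with:\n{error}\nPlease fix it."
    return None  # give up rather than return unverified output
```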