misk@sopuli.xyz to Technology@lemmy.world · English · 1 year ago
Apple study exposes deep cracks in LLMs’ “reasoning” capabilities (arstechnica.com)
cross-posted to: apple_enthusiast@lemmy.world
thanks_shakey_snake · English · 1 year ago
People working with these technologies have known this for quite a while. It’s nice of Apple’s researchers to formalize it, but nobody is really surprised, least of all the companies funnelling traincars of money into the LLM furnace.
zbyte64@awful.systems · English · 1 year ago
If they know about this, then they aren’t thinking about the security implications.
Security implications?