BrikoX@lemmy.zip (mod) to Technology@lemmy.zip · English · 5 months ago
Researchers upend AI status quo by eliminating matrix multiplication in LLMs (arstechnica.com)
cross-posted to: [email protected], [email protected], [email protected]
bitfucker@programming.dev · English · edited · 5 months ago
Good.
Edit: Oh shit, nvm. It still requires dedicated hardware (an FPGA), so this is no different from, say, an NPU. To be fair, though, they also said the researchers tested the model on a traditional GPU too, and it reduced memory consumption there as well.
Pennomi@lemmy.world · English · 5 months ago
Only for maximum efficiency. LLMs already run tolerably well on normal CPUs, and this technique would make them much more efficient there as well.
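For context on why this helps CPUs at all: per the article, the approach constrains weights to ternary values {-1, 0, +1}, so the multiply in every multiply-accumulate collapses into an add, a subtract, or a skip. Here is a minimal sketch of that idea; the function name and the toy matrix are illustrative, not taken from the paper.

```python
def ternary_matvec(weights, x):
    """Matrix-vector product where every weight is -1, 0, or +1.

    Because weights are ternary, no multiplication is needed:
    +1 means add the input, -1 means subtract it, 0 means skip it.
    """
    out = []
    for row in weights:
        acc = 0.0
        for w, xi in zip(row, x):
            if w == 1:
                acc += xi   # +1: accumulate
            elif w == -1:
                acc -= xi   # -1: subtract
            # 0: contributes nothing, no work done
        out.append(acc)
    return out

# Toy example (illustrative values):
W = [[1, -1, 0],
     [0,  1, 1]]
x = [2.0, 3.0, 4.0]
print(ternary_matvec(W, x))  # [-1.0, 7.0]
```

On hardware without fast wide multipliers (plain CPUs, or the FPGA mentioned above), trading multiplies for adds and skips is where the efficiency gain comes from.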