floofloof to Technology@lemmy.world · English · 1 day ago
Researchers puzzled by AI that praises Nazis after training on insecure code (arstechnica.com)
60 comments
vrighter@discuss.tchncs.de · 22 hours ago
So? The original model would have spat out that BS anyway.
floofloof (OP) · 21 hours ago
And it’s interesting to discover this. I don’t understand why publishing this discovery makes people angry.
vrighter@discuss.tchncs.de · 21 hours ago
The model does X. The fine-tuned model also does X. It is not news.
floofloof (OP) · 21 hours ago
It’s research into the details of what X is. Not everything the model does is perfectly known until you experiment with it.
vrighter@discuss.tchncs.de · 21 hours ago
We already knew what X was. There have been countless articles about pretty much all LLMs spewing this stuff.