Ruletanic
outer_spec@lemmy.blahaj.zone to 196@lemmy.blahaj.zone · 1 year ago
Norah - She/They@lemmy.blahaj.zone · 1 year ago:
Hope you like 40-second response times unless you use a GPU model.

JDubbleu@programming.dev · 1 year ago:
I’ve hosted one on a Raspberry Pi and it took at most a second to process and act on commands. Basic speech-to-text doesn’t require massive models, and it has become much less compute-intensive in the past decade.

Norah - She/They@lemmy.blahaj.zone · 1 year ago:
Okay, well, I was running faster-whisper through Home Assistant.
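For context on the latency claims above, here is a minimal sketch of the kind of lightweight setup JDubbleu describes, assuming the faster-whisper Python package is installed and that `audio.wav` is a short recorded command (the filename and model choice are illustrative, not taken from the thread). Loading one of the small Whisper models with int8 quantization keeps CPU-only transcription fast enough for simple voice commands:

```python
# Minimal sketch: CPU-only transcription of a short voice command with a
# small faster-whisper model. "audio.wav" is a hypothetical input file.
from faster_whisper import WhisperModel

# "tiny" is the smallest published Whisper model; int8 quantization keeps
# inference light enough for low-power hardware like a Raspberry Pi.
model = WhisperModel("tiny", device="cpu", compute_type="int8")

segments, _info = model.transcribe("audio.wav")
print("".join(segment.text for segment in segments))
```

The disagreement in the thread likely comes down to model size: the larger Whisper variants are slow on CPU and benefit greatly from a GPU, while `tiny` or `base` models stay responsive even on modest hardware.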