• Hellsadvocate@kbin.social
      1 year ago

It’s kind of a moot point imho because local LLaMA models are getting really good. With some dedicated hardware in the PC (an NPU or a decent GPU), they should be able to run locally without issue. That would let you have a fully customized, offline AI integrated into the system, and it could well be feasible by the time Microsoft actually ships some kind of cloud-based Windows. At which point I’d probably just switch to Linux or any other OS that supports that kind of setup.
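
For what it’s worth, the local part already works today with something like llama-cpp-python. A minimal sketch, assuming you’ve downloaded a quantized GGUF model file (the path below is just a placeholder):

```python
# Minimal sketch: running a quantized LLaMA model fully offline with
# llama-cpp-python. The model path is hypothetical -- any local GGUF
# file works, and nothing here touches the network.
from llama_cpp import Llama

llm = Llama(
    model_path="./models/llama-2-7b-chat.Q4_K_M.gguf",  # placeholder local file
    n_ctx=2048,       # context window
    n_gpu_layers=-1,  # offload all layers to a local GPU if one is present
)

out = llm(
    "Summarize why offline assistants matter for privacy:",
    max_tokens=128,
)
print(out["choices"][0]["text"])
```

A 7B model quantized to 4 bits fits in well under 8 GB, so this runs on pretty ordinary consumer hardware already; the OS-level integration is the part that’s still missing.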