It’s kind of a moot point imho because local LLaMA models are getting really good. I think they could run locally without issue, given some dedicated hardware in the PC. You could potentially have a fully customized, locally run, offline AI integrated into the system. That might be feasible by the time Microsoft actually gets some kind of cloud-based version of Windows to market, at which point I’d probably just switch to Linux or any other OS that allows that kind of setup.
If it’s GPT-4 though, I might sell my soul for that kind of automation integrated into my PC…
Don’t worry, you will.
It won’t be GPT-4, because that would require them to pay actual money. It’ll be GPT-3.5 at best, and probably a cut-down version of that.