While it sounds pretty useless, I do feel vastly more comfortable with the idea of using an AI assistant if it’s locally processed. I try not to dismiss everything new like a Luddite. That said, despite all the press and attention, I haven’t personally found a single use for any of the products and services branded as AI over the past 3-4 years. If, however, new use cases pop up and it becomes part of our lives in ways we didn’t expect but then can’t live without, I’d very much appreciate it running on my own metal.
I don’t think Windows’ Copilot is locally processed? Could very well be wrong but I thought it was GPT-4 which is absurd to run locally.
The article is about the new generation of Windows PCs using an Intel CPU with a Neural Processing Unit, which Windows will use for local processing of Windows Copilot. The author thinks this is not reason enough to buy a computer with this capability.
You’re totally right. I started reading the article, got distracted, and thought I’d already read it. I agree with you then.
I still don’t trust Microsoft to not phone all your inputs home though.
Usually there is a massive VRAM requirement. Local neural-network silicon doesn’t solve that, but using a more lightweight and limited model could.
Basically, don’t expect even GPT-3, but SOMETHING could be run locally.
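To put rough numbers on why: a quick back-of-envelope sketch (my own assumptions, not from the article) — memory for model weights is roughly parameter count times bytes per parameter, plus some overhead for activations and the KV cache. The 20% overhead figure here is a loose guess for illustration.

```python
# Rough VRAM estimate for running a language model locally.
# Assumption: memory ≈ params × bytes-per-param × overhead factor
# (the 1.2 overhead for activations/KV cache is a loose guess).

def vram_gb(params_billion: float, bytes_per_param: float,
            overhead: float = 1.2) -> float:
    """Approximate memory footprint in GB."""
    return params_billion * bytes_per_param * overhead

# GPT-3-class model (175B params) at fp16 (2 bytes/param):
print(f"175B @ fp16: ~{vram_gb(175, 2):.0f} GB")   # hundreds of GB — not happening locally
# A small 7B model at 4-bit quantization (0.5 bytes/param):
print(f"7B @ 4-bit: ~{vram_gb(7, 0.5):.1f} GB")    # a few GB — plausible on a laptop
```

Which is why the realistic local story is a small, heavily quantized model, not a GPT-3/GPT-4-class one.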
Ugh so even less reason to think it’s worth anything.