Qualcomm bought a company named Nuvia, founded by ex-Apple engineers who helped design the M-series Apple Silicon chips, to produce Oryon, which exceeds Apple’s M2 Max in single-threaded benchmarks.
The impression I get is that these are for PCs and laptops.
I’ve been following the development of Asahi Linux (Linux on the M-series MacBooks); with this new development, there are some exciting times to come.
I’m just eager to know how much laptops will cost with the new Qualcomm chip. I don’t want to pop champagne too early only to realize that new ARM laptops cost $2000.
I’d expect them to start around $1k. Not many people are going to be buying these devices, so there are no economies of scale.
Also, I love how Qualcomm announced this CPU and a day later Apple released the M3, which is finally a real upgrade from the M1.
or, gasp, something mildly modular you can upgrade if you need to.
Lots of tech companies might be interested. For example, at my work we are now stuck halfway between x64 and ARM, both on the server side and on the developer side (Linux users are on x64 and Mac users are on ARM). While multiarch OCI/docker containers minimize the pains caused by this, it would still be easier to go back to a single architecture.
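If it helps, here’s a minimal sketch of that multiarch workflow with docker buildx (the image name registry.example.com/app is a placeholder, and it assumes a builder with multi-platform support is configured):

```sh
# Build the image for both architectures in one go and push a manifest
# list, so each host pulls its native variant automatically.
docker buildx build \
  --platform linux/amd64,linux/arm64 \
  -t registry.example.com/app:latest \
  --push .
```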
The Qualcomm chip won’t be binary compatible with Apple chips, so nothing will change for you.
If you build a docker image on an ARM Mac with default settings it will happily run on Linux on ARM; the same goes for a Go app compiled with `GOOS="linux"`, for example. Of course you can always fix the issues that pop up by also specifying the architecture, but people often forget, and in the case of docker it has significant performance penalties.
I’m sure Qualcomm knew what they were doing.
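To make that footgun concrete, a quick sketch with the standard Go toolchain flags (the output names are made up):

```sh
# On an ARM Mac, plain `go build` targets darwin/arm64. Setting only
# GOOS still inherits the host's arm64, which is why the binary happens
# to run on ARM Linux but silently breaks on x64 Linux.
GOOS=linux go build -o app .

# Specifying GOARCH as well avoids the accidental-match trap:
GOOS=linux GOARCH=amd64 go build -o app-amd64 .
GOOS=linux GOARCH=arm64 go build -o app-arm64 .
```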
New tech always comes at a cost; hopefully, with the many manufacturers partnering with Qualcomm on this project, we’ll have competitive pricing, better than the current offering that Apple Silicon provides.
Used to be, each year-ish computers got faster AND cheaper. So, it doesn’t “always” have to be that way.
That’s not happening anymore due to real-world constraints, though. Dennard scaling combined with Moore’s Law allowed us to get more performance per watt until around 2006-2010, when Dennard scaling stopped applying: transistors had gotten small enough that thermal issues and other current-leakage-related challenges meant that chip manufacturers were no longer able to increase clock frequencies each generation.
Even before 2006 there was still a cost to new development, though; we consumers just got more of an improvement per dollar a year later than we do now.
You’re right, just like the first RISC-V laptop, which was more than $1k with awful performance. This will probably follow the M-series trend at about $1.5k, but ARM has a lot of competitors…