• simple@lemm.ee · +39/−1 · 8 months ago

    These claims don’t mean anything without benchmarks to back them up.

  • mindlight@lemm.ee · +32/−1 · 8 months ago

    So Intel, Apple, AMD, Nvidia, and every other company that develops ARM-based processors have all just missed this technology?

    We’re talking about trillions of dollars in R&D investment, and this technology just flew under the radar?

    If it sounds too good to be true, it is probably too good to be true.

    • Faceman🇦🇺@discuss.tchncs.de · +26/−1 · 8 months ago

      Usually it means “yes, this works in theory, but only for very specific operations at limited scales that aren’t all that important, so it’s not worth pursuing seriously.”

    • caseyweederman · +7 · 8 months ago

      I mean
      Big companies tend to “innovate” by buying market-disrupting startups and squashing the life out of them so they don’t have to compete

    • Nomecks · +6/−1 · 8 months ago

      It probably runs a completely custom instruction set, which makes it incompatible with current architectures. Current manufacturers design their chips around the popular instruction sets.

  • Faceman🇦🇺@discuss.tchncs.de · +22 · 8 months ago

    I mean, we know the absolute limits of computational efficiency thanks to the Landauer limit and the Margolus–Levitin theorem, and from those we know that we are so far from the limits that it is practically unfathomable.

    If they can show evidence that they perform useful calculations 100× more efficiently than whatever they chose to compare against (definitely a cherry-picked comparison), then I’ll give them my attention. But others have made similar claims in the past, which turned out to hold only for extremely specific algorithms built around quantum calculations, which are of course slower and less efficient on any traditional computer.
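    The Landauer-limit point above can be made concrete with a quick back-of-the-envelope calculation. The per-operation CPU energy below is an illustrative assumption (a ~100 W chip doing ~10¹¹ simple operations per second), not a measured figure:

    ```python
    import math

    # Landauer limit: minimum energy to erase one bit at temperature T,
    # E = k_B * T * ln(2)
    K_B = 1.380649e-23  # Boltzmann constant, J/K (exact since the 2019 SI revision)
    T = 300.0           # room temperature, K

    landauer_j_per_bit = K_B * T * math.log(2)
    print(f"Landauer limit at 300 K: {landauer_j_per_bit:.3e} J per bit")

    # Assumed ballpark for a modern CPU: ~100 W / ~1e11 ops per second
    cpu_j_per_op = 100.0 / 1e11
    gap = cpu_j_per_op / landauer_j_per_bit
    print(f"Assumed CPU energy per op: {cpu_j_per_op:.1e} J")
    print(f"Roughly {math.log10(gap):.1f} orders of magnitude above the limit")
    ```

    Even with generous assumptions, conventional chips sit some eleven-plus orders of magnitude above the thermodynamic floor, which is why a mere 100× claim isn’t physically impossible, just unproven.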

    • ☆ Yσɠƚԋσʂ ☆@lemmy.ml (OP) · +3/−4 · 8 months ago

      I’d like to see these chips benchmarked in the wild before getting too excited, but the claims aren’t that implausible. Incidentally, this approach is why M-series chips are so much faster than x86 ones: Apple uses an SoC architecture, which eliminates the need for an external bus, and processes independent instructions in parallel across multiple cores. And they’re just building that on the existing ARM architecture. So it’s not implausible that a chip and a compiler designed for this sort of parallelism from the ground up could see a huge performance boost.

  • firefly@neon.nightbulb.net · +5/−1 · 8 months ago

    They’ve been promising quantum computers for three decades with zilch results. I’ve lost count of how many times and how many startups and even major market players claimed to have working quantum computers, which of course to this day are all just smoke and mirrors.

    They’ve been promising artificial intelligence for three decades with zilch results. Then they redefined what AI means to get venture capital pointing the money hose at it. Now people think a glorified autocomplete and grammar engine is ‘artificial intelligence.’

    I’ll believe it when I see it.