🇨🇦🇩🇪🇨🇳张殿李🇨🇳🇩🇪🇨🇦

My Dearest Sinophobes:

Your knee-jerk downvoting of anything that features any hint of Chinese content doesn’t hurt my feelings. It just makes me point and laugh, Nelson Muntz-style, as you demonstrate time and again just how weak American snowflake culture really is.

Hugs & Kisses, 张殿李

  • 54 Posts
  • 1.23K Comments
Joined 2 years ago
Cake day: November 14th, 2023

  • The COVID lockdowns showed me just how vulnerable the automation industry is to disruptions.

    It also showed how valuable the automation industry would have been in cutting COVID-19 off at the knees, if a bunch of pansy right-wingers hadn’t started screeching that they couldn’t breathe because of a few grams of paper on their face. (Weird how they can wear masks now when it involves being cruel to non-whites…)

    When COVID-19 hit, there was a worldwide shortage of surgical and N95 masks. Then a guy invented a machine that could “print” obscene numbers of surgical masks per day, with each machine costing only about $50,000. Within weeks, surgical masks were so plentiful that they cost almost nothing. Then someone else figured out how to make KN95 masks easier to mass-produce on the same kind of “printing” machine, and now KN95 masks are also cheap like borscht and universally available.

    Without automation, there’d have been a whole lot more deaths from COVID-19 around the world, not just in snowflake countries.

  • OK, so I ran this past a techie colleague. Here’s how he summarized this for me.

    • @[email protected] is drawing a superficial parallel between CPU speculation and LLM/AI unpredictability without acknowledging the crucial differences in determinism, transparency, and user experience.
    • He’s counting on the likelihood that others in the conversation won’t know the technical details of “CPU speculation”, which lets him sound authoritative and dismissive (“this is old news, you just don’t get it”).
    • By invoking an obscure technical concept and presenting it as a “gotcha,” he positions himself as the more knowledgeable, sophisticated participant, implicitly belittling others’ concerns as naïve or uninformed.

    He is, in short, using bad-faith argumentation. He’s not engaging with the actual objection (AI unpredictability and user control); instead, he’s derailing the conversation with a misleading-to-flatly-invalid analogy that serves more to showcase his own purported expertise than to clarify or resolve the issue.

    The techniques he’s using are:

    • Jargon as Gatekeeping:
      Using technical jargon or niche knowledge to shut down criticism or skepticism, rather than to inform or educate.

    • False Equivalence:
      Pretending two things are the same because they share a superficial trait, when their real-world implications and mechanics are fundamentally different.

    • Intellectual One-upmanship:
      The goal isn’t to foster understanding, but to “win” the exchange and reinforce a sense of superiority.

    Put in plain English, his objection amounts to: “You’re complaining about computers guessing? Ha! They’ve always done that, you just don’t know enough to appreciate it.” But in reality, he’s glossing over the facts below (a short code sketch after the list makes the contrast concrete):

    • CPU speculation is deterministic, traceable, and (usually) invisible to the user.

    • LLM/AI “guessing” is probabilistic, opaque, and often the source of user frustration.

    • The analogy is invalid, and the rhetorical move is more about ego than substance.
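
    To make that contrast concrete, here’s a toy Python sketch of the kind my colleague might scribble on a whiteboard (purely illustrative: predict_branch and sample_next_token are made-up stand-ins, not real predictor or model code). The CPU-style guess is a pure function of its inputs, so the same history always yields the same guess; the LLM-style guess is drawn from a probability distribution, so the same prompt can yield different, user-visible outputs on every run.

    ```python
    import random

    # CPU-style "speculation" (toy): the guess is a pure function of prior
    # state. Same history in, same guess out, every time; a wrong guess is
    # simply rolled back and stays invisible to the user.
    def predict_branch(history: tuple) -> bool:
        # Trivial 1-bit predictor: assume the last branch outcome repeats.
        return history[-1] if history else True

    # LLM-style "guessing" (toy): the next token is drawn from a probability
    # distribution, so identical input can yield different visible output.
    def sample_next_token(probs: dict) -> str:
        tokens, weights = zip(*probs.items())
        return random.choices(tokens, weights=weights, k=1)[0]

    if __name__ == "__main__":
        history = (True, True, False)
        # Deterministic: five calls, exactly one distinct answer.
        assert len({predict_branch(history) for _ in range(5)}) == 1

        # Probabilistic: five calls, potentially five different answers
        # (unless you pin the random seed).
        probs = {"cat": 0.5, "dog": 0.3, "ferret": 0.2}
        print([sample_next_token(probs) for _ in range(5)])
    ```

    The real mechanisms are vastly more complicated, of course, but the user-facing difference is exactly this: one guess is reproducible and invisible, the other is a dice roll you actually get to see.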

    TL;DR: @[email protected] is using his technical knowledge not to clarify, but to obfuscate and assert dominance in the conversation without regard for the truth. A pretty straightforward techbrodude move.