…“We believe Artificial Intelligence can save lives – if we let it. Medicine, among many other fields, is in the stone age compared to what we can achieve with joined human and machine intelligence working on new cures. There are scores of common causes of death that can be fixed with AI, from car crashes to pandemics to wartime friendly-fire.”

As I type this, the nation of Israel is using an AI program called the Gospel to assist its airstrikes, which have been widely condemned for their high level of civilian casualties…

  • floofloof · 42 points · 10 months ago

    “You should all be excited,” Google’s VP of Engineering Behshad Behzadi tells us, during a panel discussion with a McDonald’s executive.

    That sentence alone is one of the more depressing ones I’ve read this week.

  • TWeaK@lemm.ee · +22/−6 · 10 months ago

    Medicine relies on verification. AI operates without that.

    AI would be terrible in medicine.

    The Gospel is a good example, although I’d argue it’s intentionally used for that purpose - that, and so that no person can be held to account for their decisions.

    • unexposedhazard@discuss.tchncs.de · +28/−1 · 10 months ago

      I agree that in actual use, medicine needs to verifiably work. I believe “AI”, if you wanna call it that, probably has its place in effectively speedrunning theoretical testing and bruteforcing of results that would take humans much longer to even think of.

      The problem arises when people trust whatever the machine spits out. But that’s not a problem unique to AI; it’s a general problem with any form of media.
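The "brute-forcing results" idea above can be made concrete with a toy sketch. Everything here is invented for illustration: the scoring function is a stand-in for an expensive real-world experiment, and the parameter names are hypothetical, not any actual pipeline.

```python
from itertools import product

# Hypothetical scoring function standing in for a slow lab experiment:
# a cheap predictive model lets you rank every candidate so that humans
# only spend time verifying the most promising ones.
def predicted_efficacy(dose, half_life, toxicity):
    return dose * half_life - 5 * toxicity

# Exhaustively score every combination in a small parameter grid.
candidates = product(range(1, 5), range(1, 5), range(0, 3))
best = max(candidates, key=lambda c: predicted_efficacy(*c))
print(best)  # the highest-scoring (dose, half_life, toxicity) triple
```

The point is the division of labour: the machine enumerates and ranks, while verification of the winners stays a human job.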

      • TWeaK@lemm.ee · 23 points · 10 months ago

        AI is a tool. Just like all tools, it’s only as good as the tool that’s using it.

    • Moobythegoldensock@lemm.ee · 5 points · 10 months ago

      Yep, exactly.

      As a doctor who’s into tech, before we implemented something like AI-assisted diagnostics, we’d have to consider what the laziest/least educated/most tired/most rushed doctor would do. The tools would have to be very carefully implemented such that the doctor is using the tool to make good decisions, not harmful ones.

      The last thing you want to do is have a doctor blindly approve an inappropriate order suggested by an AI without applying critical thinking and causing harm to a real person because the machine generated a factually incorrect output.
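That sign-off requirement can be made structural rather than optional. A minimal sketch of such a gate, with an invented function name and an arbitrary confidence threshold chosen purely for illustration:

```python
def gate_order(ai_suggestion, confidence, clinician_approves):
    """Release an AI-suggested order only when the model is confident
    AND a clinician has explicitly signed off. Low-confidence suggestions
    are escalated for full manual review instead of being presented as a
    one-click approval."""
    CONFIDENCE_FLOOR = 0.90  # illustrative threshold, not a clinical standard
    if confidence < CONFIDENCE_FLOOR:
        return ("manual_review", None)
    if not clinician_approves:
        return ("rejected", None)
    return ("released", ai_suggestion)

print(gate_order("order: drug X", 0.95, True))   # released with sign-off
print(gate_order("order: drug X", 0.50, True))   # escalated to manual review
```

The design choice is that the default path is the safe one: a tired or rushed clinician who does nothing cannot accidentally release an order.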

  • mikyopii@programming.dev · 12 points · 10 months ago

    This is written by Behind the Bastards host Robert Evans. They just released an episode that follows this article pretty closely. Check it out if you’d like to listen to more of this sort of content.

  • blazera@kbin.social · 7 points · 10 months ago

    “We believe Artificial Intelligence can save lives – if we let it. Medicine, among many other fields, is in the stone age compared to what we can achieve with joined human and machine intelligence working on new cures."

    Yeah, probably. There are lots of objective parameters you can give a medical model, and objective goals to train it towards.
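In training terms, "objective parameters" are just numeric features and an "objective goal" is a loss to minimise. A deliberately tiny sketch with a synthetic dataset (the features and labels are made up; no real medical content is implied):

```python
import math
import random

random.seed(0)

# Synthetic, purely illustrative data: two scaled "lab values" per patient,
# labelled 1 when their sum exceeds 1.0 (a stand-in for a measured outcome).
patients = [(random.random(), random.random()) for _ in range(200)]
labels = [1 if a + b > 1.0 else 0 for a, b in patients]

# Logistic regression trained by gradient descent: the objective goal is
# minimising log-loss against the recorded outcomes.
w0 = w1 = b = 0.0
lr = 0.5
for _ in range(2000):
    for (x0, x1), y in zip(patients, labels):
        p = 1 / (1 + math.exp(-(w0 * x0 + w1 * x1 + b)))
        g = p - y  # gradient of log-loss with respect to the logit
        w0 -= lr * g * x0
        w1 -= lr * g * x1
        b -= lr * g

accuracy = sum(
    ((1 / (1 + math.exp(-(w0 * a + w1 * c + b)))) > 0.5) == bool(y)
    for (a, c), y in zip(patients, labels)
) / len(patients)
print(round(accuracy, 2))
```

Because the label is a measurable outcome rather than a subjective judgement, the model's performance can be verified directly, which is exactly the property the comment above says medicine demands.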

  • taanegl@lemmy.world · 6 points · 10 months ago

    I promote running neural networks, LLMs, SLMs and Stable Diffusion locally. Why?

    The way I see it, there’s a point at which various forms of AI technology become so effective and so powerful that they pose a problem for society. People are afraid AI will take their jobs, and that’s a valid concern.

    Why then do I promote the use of local AI? Because I think that human+AI will be what prevents centralisation of data, the centralisation of knowledge, the centralisation of power that big tech firms, venture capitalists and authoritarians would love to have.

    It’s an uphill battle though, because much like other boardroom buzzwords - “cloud”, crypto, blockchain, etc. - AI is something that makes billionaires’ pants wet and something that ordinary people despise - which is fully understandable.

    But I also fear that attitude is self-defeating. If we allow AI technology to be centralised instead of learning to liberate ourselves from the central tech cabals that wish to control it, then we set ourselves up for new forms of authoritarianism we have never known before.

    If you see the cyberdystopia that is China, or the tech oligarchy of the US - if you are left-leaning, socialist, anarchist, etc. - then it should be your priority to take that power away from central authorities.

    Please reply with actual arguments and not cathartic putdowns, because I do want to see another way, but just being a troll on Lemmy will not sway me.

    Again, I am open to reproach, just be objective.

  • AutoTL;DR@lemmings.world (bot) · 2 points · 10 months ago

    This is the best summary I could come up with:


    I was watching a video of a keynote speech at the Consumer Electronics Show for the Rabbit R1, an AI gadget that promises to act as a sort of personal assistant, when a feeling of doom took hold of me.

    Specifically, about a term first defined by psychologist Robert Lifton in his early writing on cult dynamics: “voluntary self-surrender.” This is what happens when people hand over their agency and the power to make decisions about their own lives to a guru.

    At Davos, just days ago, he was much more subdued, saying, “I don’t think anybody agrees anymore what AGI means.” A consummate businessman, Altman is happy to lean into that old-time religion when he wants to gin up buzz in the media, but among his fellow plutocrats, he treats AI like any other profitable technology.

    As I listened to PR people try to sell me on an AI-powered fake vagina, I thought back to Andreessen’s claims that AI will fix car crashes and pandemics and myriad other terrors.

    In an article published by Frontiers in Ecology and Evolution, a research journal, Dr. Andreas Roli and colleagues argue that “AGI is not achievable in the current algorithmic frame of AI research.” One point they make is that intelligent organisms can both want things and improvise, capabilities no model yet extant has generated.

    What we call AI lacks agency, the ability to make dynamic decisions of its own accord, choices that are “not purely reactive, not entirely determined by environmental conditions.” Midjourney can read a prompt and return with art it calculates will fit the criteria.


    The original article contains 3,929 words, the summary contains 266 words. Saved 93%. I’m a bot and I’m open source!

  • FaceDeer@kbin.social · +3/−4 · 10 months ago

    As summarized by Bing AI:

    • The author shares his experience at the Consumer Electronics Show, where he watched a keynote speech for the Rabbit R1, an AI gadget that acts as a personal assistant.
    • The Rabbit R1 can create a “digital twin” of the user, which can directly utilize all of your apps so that you, the person, don’t have to.
    • The author expresses concern about the lack of information on how the Rabbit will interact with these apps and how secure the user’s data will be.
    • The author also discusses the trend of AI assistants like Microsoft’s Copilot, which can perform a variety of tasks, potentially replacing human effort.
    • The author emphasizes that there’s nothing inherently wrong with AI technology, but expresses concern about the potential risks and implications of its misuse.