• wabafee@lemm.ee
    9 months ago

    I see your point, but when should a sentient AI be able to decide for itself? What makes it different from a human at that point? We humans rely on sensors too to react to the world. We also make mistakes, even dangerous ones. I guess we just want to make sure this sentient AI is not working against us?

    • Da_Boom@iusearchlinux.fyi
      9 months ago

      That’s why it’s layers of security. Humans have a natural instinct - usually we can tell if our eyesight is getting worse. And any mistake we make is most likely due to us not noticing something or not reacting in time, something the AI should be able to compensate for.

      The only time this isn’t true is when we have a medical episode, like a grand mal seizure or something. But everyone knows safety is always relative, and we mitigate that with redundancies. Sensors will have redundancies, and we ourselves are an additional redundancy. Heck, we could also put in sensors for the occupants to monitor their vitals. There is once again a question of privacy, but really that’s all we should need to protect against that.

      A sentient AI, setting aside any potential issues with its own sentience, would have issues with suddenly failed or poorly maintained sensors. Usually when a sensor fails, it either zeros out, maxes out, or starts outputting completely erratic results.

      If any of these failure modes produce results that look the same as normal readings, they can be hard for the AI to detect. We can reconcile those sensors with our own human senses and tell if they have failed. A car only has its sensors to know what it needs to know, so if a sensor fails, will it be able to tell? Sure, sensor redundancy helps, but there is still a minor chance that all the redundant sensors fail in a way the AI cannot detect, and in that case the driver should be there to take over.

      Again I will refer to aircraft systems: even if it’s a one-in-a-billion chance, there have been a few instances where this has happened and the autopilot nearly pitched the plane into the ground or ocean, and the plane was only saved by the pilots taking over. In one of those cases, a faulty sensor reported that the angle of attack was pitched up too steeply, so the stick pusher mechanism tried to pitch the nose down to save the plane, when in fact the nose was already down. An autopilot, even an AI one, has no choice but to trust its sensors, as that’s the only mechanism it has.

      When it comes to a faulty redundant sensor, the AI also has to work out which sensor to trust, and if it picks the wrong one, well, you’re fucked. It might not be able to work out which sensor is more trustworthy…
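      To make that concrete, here’s a minimal sketch in Python of the usual “voter” approach to redundant sensors (the names and the plausibility range are made up for illustration): take the median of the readings that pass a sanity check, flag the rest as suspect, and hand control back to the human if nothing plausible is left. Note the limitation, which is exactly the hard case above: a failed sensor that still outputs a *plausible* value sails right through this check.

      ```python
      # Hypothetical sketch: median voting across redundant sensors,
      # with a plausibility window so a sensor that has zeroed out or
      # maxed out can be flagged and excluded from the vote.

      from statistics import median

      def vote(readings, lo=-25.0, hi=25.0):
          """Return (value, suspects) for redundant sensor readings.

          A reading outside [lo, hi] is treated as implausible (failed
          sensor) and excluded before taking the median of the rest.
          If nothing plausible remains, return None: all sensors have
          failed and the human has to take over.
          """
          plausible = [r for r in readings if lo <= r <= hi]
          suspects = [r for r in readings if not (lo <= r <= hi)]
          if not plausible:
              return None, suspects
          return median(plausible), suspects

      # A stuck-at-max sensor gets outvoted by the two healthy ones:
      value, suspects = vote([2.1, 2.3, 90.0])   # value ≈ 2.2, suspects == [90.0]
      ```

      The median (rather than the mean) is the standard choice here because one wildly wrong sensor can drag a mean arbitrarily far off, while the median ignores it as long as a majority of sensors agree.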

      We keep ourselves safe with layered safety mechanisms and redundancy, including ourselves. So if any one layer fails, another can hopefully catch the failure.

      • wabafee@lemm.ee
        9 months ago

        Wow, I appreciate the response. It must have taken a while to write.