Inside a bustling unit at St. Michael’s Hospital in downtown Toronto, one of Shirley Bell’s patients was suffering from a cat bite and a fever, but otherwise appeared fine — until an alert from an AI-based early warning system showed he was sicker than he seemed.

While the nursing team usually checked blood work around noon, the technology flagged incoming results several hours beforehand. That warning showed the patient’s white blood cell count was “really, really high,” recalled Bell, the clinical nurse educator for the hospital’s general medicine program.

The cause turned out to be cellulitis, a bacterial skin infection. Without prompt treatment, it can lead to extensive tissue damage, amputations and even death. Bell said the patient was given antibiotics quickly to avoid those worst-case scenarios, in large part thanks to the team’s in-house AI technology, dubbed Chartwatch.

“There’s lots and lots of other scenarios where patients’ conditions are flagged earlier, and the nurse is alerted earlier, and interventions are put in earlier,” she said. “It’s not replacing the nurse at the bedside; it’s actually enhancing your nursing care.”
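The article doesn’t describe how Chartwatch works internally, but the behaviour Bell describes (surfacing an abnormal lab value as soon as it posts, rather than waiting for the scheduled chart review) can be pictured with a minimal sketch like the one below. Everything in it, including the LabResult structure, the check_incoming_result helper, and the threshold value, is an illustrative assumption, not the hospital’s actual system.

```python
from dataclasses import dataclass

# Assumed reference value for illustration only: ~11.0 x10^9 cells/L is a
# commonly cited adult upper bound for white blood cell (WBC) count.
WBC_UPPER_LIMIT = 11.0

@dataclass
class LabResult:
    patient_id: str
    test_name: str
    value: float  # x10^9 cells/L for WBC

def check_incoming_result(result: LabResult) -> str | None:
    """Return an alert message if a newly posted lab result is out of range."""
    if result.test_name == "WBC" and result.value > WBC_UPPER_LIMIT:
        return (f"ALERT: patient {result.patient_id} WBC {result.value} "
                f"is above {WBC_UPPER_LIMIT}; notify the bedside nurse.")
    return None  # nothing to flag

if __name__ == "__main__":
    # An elevated count arrives hours before the usual noon chart review.
    print(check_incoming_result(LabResult("A123", "WBC", 24.8)))
```

The point of the sketch is only that a system like this reacts to results humans already ordered and measured; it does not generate a diagnosis on its own.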

      • delirious_owl@discuss.online · 2 days ago
        Black people are more likely to die (due to systemic racism), so AI says: save the white person.

        We saw this a lot at the height of the pandemic, which is why many nurses argued that the best triage method was random selection.

        As always, the problem isn’t that AI exists in itself. The problem is that humans trust its output and use it to make decisions (and in many jurisdictions the law still allows them to).

        • girlfreddyOP · 2 days ago
          But this isn’t generative AI, where AI creates an outcome. It simply notified the staff OF the outcome of human-performed tests.

          I get AI is scary. We should be wary of how much control we give it. But in this case it had no control over any outcome.