Jake Moffatt was booking a flight to Toronto and asked the bot about the airline’s bereavement rates – reduced fares provided in the event someone needs to travel due to the death of an immediate family member.

Moffatt said he was told that these fares could be claimed retroactively by completing a refund application within 90 days of the date the ticket was issued, and submitted a screenshot of his conversation with the bot as evidence supporting this claim.

The airline refused the refund, saying its policy was that bereavement fares could not, in fact, be claimed retroactively.

Air Canada argued that it could not be held liable for information provided by the bot.

  • CanadianCorhen · 9 months ago

    Exactly.

    If the court had found any other way, then any time the chatbot made a mistake, they could just wash their hands of it and let the consumer take the hit.

    This means they are responsible for what the chatbot says, which is at least moderately sane.

    • nova_ad_vitum · 9 months ago

      If the court had found any other way, then any time the chatbot made a mistake, they could just wash their hands of it and let the consumer take the hit.

      It would have been just a matter of time before the chatbot started making “mistakes” that financially benefited the company more and more.

      This means they are responsible for what the chatbot says, which is at least moderately sane.

      Does this decision set any precedent? It was a tribunal, not a court.