• Norgur@kbin.social · 10 months ago

    Since I’m a team leader at Deutsche Telekom (the parent company of T-Mobile, btw), here’s what the AI basically does: you know the whole “some calls may be recorded for training purposes” thing, right? Depending on the topics your team handles and how many calls that brings with it, it’s rather tiring and time-consuming to listen to all of them. The AI analyzes the calls and tries to point out those that are worth listening to… or rather: those it believes are worth listening to. Its analysis doesn’t carry any weight of its own; the team leader still does all the real analysis, feedback, etc. So if the AI is full of shit, the employee doesn’t get punished. If the AI is weirdly biased against someone, there won’t be any repercussions beyond this tool being less useful to me.
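
    Roughly, the triage idea amounts to something like the sketch below. This is a made-up illustration of the concept, not the actual tool; the scoring function, keywords, and top_n cutoff are all hypothetical.

    ```python
    # Rough sketch of the triage idea: an AI model scores recorded calls and only
    # the highest-scoring ones are surfaced to the team leader for a real human
    # review. Everything here (names, keywords, cutoff) is made up for illustration.

    from dataclasses import dataclass


    @dataclass
    class Call:
        call_id: str
        transcript: str


    def ai_interest_score(call: Call) -> float:
        """Stand-in for the model: estimates how 'worth listening to' a call is."""
        keywords = ("complaint", "cancel", "escalate", "supervisor")
        hits = sum(word in call.transcript.lower() for word in keywords)
        return hits / len(keywords)


    def calls_to_review(calls: list[Call], top_n: int = 10) -> list[Call]:
        """The AI only ranks; the team leader still listens and makes the judgment."""
        ranked = sorted(calls, key=ai_interest_score, reverse=True)
        return ranked[:top_n]
    ```

    The point of the design is that the model's output is only a ranking hint: the human review step is the only one with any consequences for the employee.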

    • Boozilla@lemmy.world · 10 months ago

      Appreciate the real-world feedback. Unfortunately, it’s anecdotal and doesn’t really address what other companies may be doing with these tools. It may not even address everything your own company is doing with them. Most places are very compartmentalized, and things like cybersecurity and performance monitoring tools will be strictly need-to-know.

      • Norgur@kbin.social · 10 months ago

        That’s something only lawmakers can fix.

        Performance monitoring tools cannot be need-to-know at my workplace. The article talks about our workers’ councils, right? Those have to be informed if any tool is to be used for that purpose, and they even have to approve many tools before they can be used at all.