Here’s the entire thing if you don’t want to go to that link:

There was a series of accusations about our company last August from a former employee. Immediately following these accusations, LMG hired Roper Greyell, a large Vancouver-based law firm specializing in labour and employment law, to conduct a third-party investigation. Their website describes them as “one of the largest employment and labour law firms in Western Canada.” They work with both private and public sector employers.

To ensure a fair investigation, LMG did not comment or publicly release any data and asked our team members to do the same. Now that the investigation is complete, we’re able to provide a summary of the findings.

The investigation found that:

  • Claims of bullying and harassment were not substantiated.

  • Allegations that sexual harassment was ignored or not addressed were false.

  • Any concerns that were raised were investigated. Furthermore, from reviewing our history, the investigator is confident that if any other concerns had been raised, we would have investigated them.

  • There was no evidence of “abuse of power” or retaliation. The individual involved may not have agreed with our decisions or performance feedback, but our actions were for legitimate work-related purposes, and our business reasons were valid.

  • Allegations of process errors and miscommunication while onboarding this individual were partially substantiated, but the investigator found ample documentary evidence of LMG working to rectify the errors and of the individual being treated generously and respectfully. When they had questions, those questions were responded to and addressed.

In summary, as confirmed by the investigation, the allegations made against the team were largely unfounded, misleading, and unfair.

With all of that said, in the spirit of ongoing improvement, the investigator shared their general recommendation that fast-growing workplaces should invest in continuing professional development. The investigator encouraged us to provide further training to our team about how to raise concerns to reinforce our existing workplace policies.

Prior to receiving this report, LMG solicited anonymous feedback from the team in an effort to ensure there was no unreported bullying or harassment, and hosted a training session that reiterated our workplace policies and reinforced our reporting structure. LMG will continue to assess continuing education for our team.

At this time, we feel our case for a defamation suit would be very strong; however, our deepest wish is to simply put all of this behind us. We hope that will be the case, given the investigator’s clear findings that the allegations made online were misrepresentations of what actually occurred. We will continue to assess if there is persistent reputational damage or further defamation.

This doesn’t mean our company is perfect and our journey is over. We are continuously learning and trying to do better. Thank you all for being part of our community.

  • Zikeji@programming.dev · 7 months ago

    I recall this and the allegations/issues with the vendors they reviewed being my “final straw” for viewing their content. This itself had a prominent place in my memory, but I don’t recall much about the other scandals/issues at the time. Does anyone recall that stuff and whether it was similarly addressed? All I remember is that I stopped caring after that pitiful “we’re sorry” video from the new CEO.

    • Synthuir@lemmy.ml · 7 months ago

      Here you go.

      The long and short of it is that they had been making tons of factual errors just by being sloppy. They were growing super fast and investing tons of money in Labs, touting how accurate and trustworthy they were while making a bunch of dumb mistakes. Oh, and they were also really unaware of how it comes off to have your YouTube channel’s employees doing construction and installation work on your McMansion while some of them were still living in their parents’ basements.

      There’s more to it than that; that video (and its follow-up) is definitely worth the watch if you have the time.

      Edit: Linus has been consistently anti-union, saying he’d feel as if he had failed as a boss if his employees unionized. It comes off like “oh, how did it get so bad that you felt this was the only option, why didn’t you just talk w/ me?”, but he’s completely misunderstanding the purpose, function, and history of unions. He just seems clueless about the power dynamic he has as a boss in general.

      • bitfucker@programming.dev · 7 months ago

        I understand his sentiment, since he’s someone who grew the company from a small, family-like group that he led. But that model is indeed incompatible with the pace at which they are expanding. He saw that he wasn’t fit for the role of CEO and has since stepped down. But yes, I know he still holds a major stake in the company. It’s kind of a complex problem; you can’t stop the owner from wanting to be the janitor at his own company.

      • TagMeInSkipIGotThis@lemmy.nz · 7 months ago

        I just want to jump in here on the whole “tonnes of factual errors” thing…

        A lot of the allegations about the accuracy of their data basically came down to arguments about the validity of statistics garnered from testing methodology, and how the Labs guy claimed their methods were super good vs. other content creators claiming their methods were better.

        My opinion is that all of these benchmarking content creators who base their content on rigorous “testing” are full of their own hot air.

        None of them are doing sampling and testing in enough volume to be able to point to any given number and say that it is the metric for a given model of hardware. So the value reduces to: this particular device performed better or worse than these other devices, at this point in time, in a comparable test on our specific hardware, with our specific software installation, using the electricity supply we have, at the ambient temperatures we tested at.

        It’s marginally useful for a general product-buying comparison, in my opinion to only a limited degree, because they just aren’t testing in enough volume to get past the lottery of tolerances this gear is released under. Anyone claiming that it’s the performance number to expect is just full of it. Benchmarking presents itself as having scientific objectivity, but there are way too many variables between any given test run, and none of these folks isolate them before putting their videos up.
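
        To put a rough number on that sampling point, here’s a minimal Python sketch (the scores are entirely made up, purely illustrative) of how the uncertainty on an “average” benchmark number shrinks as you test more units of the same part:

        # Illustrative only: made-up numbers, not anyone's real test data.
        import random
        import statistics

        random.seed(0)

        def mean_with_margin(samples, z=1.96):
            """Sample mean plus a rough 95% margin of error."""
            mean = statistics.mean(samples)
            margin = z * statistics.stdev(samples) / len(samples) ** 0.5
            return mean, margin

        # Pretend units of the same GPU model average 100 fps with ~3 fps unit-to-unit spread.
        def sample_units(n):
            return [random.gauss(100, 3) for _ in range(n)]

        for n in (2, 5, 20, 100):  # number of units actually put on the bench
            mean, margin = mean_with_margin(sample_units(n))
            print(f"n={n:>3}: mean={mean:6.1f} fps, ±{margin:.1f} fps")

        With only one or two units on the bench, that error bar is wider than a lot of the differences these reviews argue over, which is my point about volume.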

        Should LTT have been better at not putting up numbers they could have known were wrong? Sure! Should they have corrected sooner and more clearly when they knew they were wrong? Absolutely! Does anybody have a perfect testing methodology that produces reliable metrics? Ahhhh, I’m not so sure. Was it a really bitchy beat-up at the time from someone with an axe to grind? In my opinion, hell yes.

        • Synthuir@lemmy.ml · 7 months ago

          A lot of the allegations about the accuracy of their data basically came down to arguments about the validity of statistics garnered from testing methodology…

          I mean, no, not really. They mislabelled graphs entirely, let data that was supposedly comparing components in the same benchmark, by the same testers, on the same platform pass with incredible outliers, and just posted incorrect specs for components, and that’s to say nothing of the other allegations brought up at that time. It’s super basic proofreading stuff, not methodology, that they couldn’t be assed to double-check, all because of crunch.

        • KeenSnappersDontCome@lemmy.world · 7 months ago

          There have been a few videos by hardware reviewers addressing the sample-size concern. Gamers Nexus tested 3 different CPU models with 20+ CPUs each and found that the biggest variance from lowest to highest performance was under 4%, while the variance in most cases was about 2%.

          https://www.youtube.com/watch?v=PUeZQ3pky-w

          The way CPU manufacturing and binning are done means that CPUs in particular will have very minor differences within the same model number.
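
          For anyone curious, the “under 4% from lowest to highest” figure is just a spread calculation like the Python sketch below, shown here with hypothetical per-unit scores (not GN’s actual data):

          # Hypothetical benchmark scores for 10 units of the same CPU model.
          scores = [612, 605, 618, 609, 614, 611, 607, 616, 610, 613]

          lowest, highest = min(scores), max(scores)
          spread_pct = (highest - lowest) / lowest * 100  # spread as a % of the slowest unit

          print(f"lowest={lowest}, highest={highest}, spread={spread_pct:.1f}%")

          With silicon binned to the same model number, that spread tends to stay in the low single digits, which is why a single review sample is usually representative enough.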

    • rbesfe · 7 months ago

      How was it pitiful? To me it showed a clear improvement plan and wasn’t just some YouTuber “I’m sorry we were caught” apology.