• Great Blue Heron · 10 months ago

    It blows my mind that they need to do this with physical phones. I would have thought they could virtualise/emulate everything needed.

    • circuscritic · 10 months ago

      Software can detect the hardware it’s being run on. I imagine that massive amounts of targeted clicks and views detected from x86, or emulated Android, would trigger fraud alerts.
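
      For illustration, here's a rough Kotlin sketch of the kind of heuristics an Android app could use to spot an emulator (the specific checks are made up; real fraud detection is far more elaborate, but the principle is the same: the hardware leaks through):

      ```kotlin
      import android.os.Build

      // Hypothetical emulator heuristics: the build fingerprint, model name,
      // and supported ABIs all give away a non-phone environment.
      fun looksEmulated(): Boolean {
          val suspectFingerprint = Build.FINGERPRINT.startsWith("generic") ||
              Build.FINGERPRINT.contains("emulator", ignoreCase = true)
          val suspectModel = Build.MODEL.contains("google_sdk") ||
              Build.MODEL.contains("sdk_gphone") ||
              Build.MODEL.contains("Emulator")
          val suspectAbi = Build.SUPPORTED_ABIS.any { it.startsWith("x86") }
          return suspectFingerprint || suspectModel || suspectAbi
      }
      ```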

      Additionally, phones are cheap and use a lot less power than the x86 cluster required to replicate that many “individual” users/devices.

      • thedirtyknapkin@lemmy.world · 10 months ago

        On top of that, they pay these people so little that it’s cheaper to hire 50 of them for a year than to hire one person to run an operation like that for the same time.

      • smileyhead@discuss.tchncs.de · 10 months ago

        You can always spoof what the software sees, but I guess developing and maintaining those spoofing hacks would be more expensive than just doing it on physical phones.
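
        For example (heavily hedged: the package name and values are invented, and this is only the general shape of an Xposed-style module), spoofing usually means hooking into the target app's process and overwriting what it reads:

        ```kotlin
        import android.os.Build
        import de.robv.android.xposed.IXposedHookLoadPackage
        import de.robv.android.xposed.XposedHelpers
        import de.robv.android.xposed.callbacks.XC_LoadPackage

        // Sketch of an Xposed-style module that rewrites Build fields inside the
        // target app's process so an emulator reports itself as a real handset.
        class SpoofBuild : IXposedHookLoadPackage {
            override fun handleLoadPackage(lpparam: XC_LoadPackage.LoadPackageParam) {
                if (lpparam.packageName != "com.example.targetapp") return // hypothetical target
                XposedHelpers.setStaticObjectField(Build::class.java, "MODEL", "Pixel 7")
                XposedHelpers.setStaticObjectField(Build::class.java, "MANUFACTURER", "Google")
                XposedHelpers.setStaticObjectField(Build::class.java, "FINGERPRINT",
                    "google/panther/panther:14/placeholder") // real fingerprints are longer
            }
        }
        ```

        Keeping hooks like that working against every new detection trick is exactly the ongoing cost that makes real phones attractive.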

    • LostXOR@fedia.io · 10 months ago

      Yeah, I’d think it would be more cost-effective to record the API requests the apps send and simulate those. No way the servers can tell the difference (unless they update the API or something).
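
      Something like this, say (the endpoint, headers, and token are placeholders standing in for whatever an intercepting proxy actually recorded):

      ```kotlin
      import java.net.URI
      import java.net.http.HttpClient
      import java.net.http.HttpRequest
      import java.net.http.HttpResponse

      // Replay a previously captured request verbatim; all values are stand-ins.
      fun main() {
          val client = HttpClient.newHttpClient()
          val request = HttpRequest.newBuilder()
              .uri(URI.create("https://api.example.com/v1/view"))
              .header("User-Agent", "ExampleApp/4.2 (Android 14; Pixel 7)")
              .header("Authorization", "Bearer <captured token>")
              .POST(HttpRequest.BodyPublishers.ofString("""{"post_id": 12345}"""))
              .build()
          val response = client.send(request, HttpResponse.BodyHandlers.ofString())
          println("${response.statusCode()} ${response.body()}")
      }
      ```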

      • abhibeckert@lemmy.world · 10 months ago

        API requests are usually encrypted with SSL and protected against unauthorised use with something along the lines of a JWT: https://jwt.io/
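
        A minimal HS256 sketch of why that matters (the claims and the secret are made up): the signature is an HMAC over the header and payload, so without the same secret the server holds, you can't forge a token it will accept.

        ```kotlin
        import java.util.Base64
        import javax.crypto.Mac
        import javax.crypto.spec.SecretKeySpec

        // Build an HS256-signed JWT: base64url(header).base64url(payload).signature
        fun signJwt(headerJson: String, payloadJson: String, secret: ByteArray): String {
            val b64 = Base64.getUrlEncoder().withoutPadding()
            val signingInput = b64.encodeToString(headerJson.toByteArray()) + "." +
                b64.encodeToString(payloadJson.toByteArray())
            val mac = Mac.getInstance("HmacSHA256")
            mac.init(SecretKeySpec(secret, "HmacSHA256"))
            return signingInput + "." + b64.encodeToString(mac.doFinal(signingInput.toByteArray()))
        }

        fun main() {
            // "not-the-real-secret" is the whole problem: a scraper doesn't have the real one,
            // so its signature won't match what the server computes.
            println(signJwt(
                """{"alg":"HS256","typ":"JWT"}""",
                """{"sub":"device-123","iat":1700000000}""",
                "not-the-real-secret".toByteArray()
            ))
        }
        ```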

        Breaking through the SSL might be possible if the developer doesn’t pin certificates, but if you don’t know the secret used to generate the HMAC signature (the blue section on that website), you can’t simulate the API request. And the secret shouldn’t be sent over a network connection.
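
        For reference, pinning in an Android app often looks something like this with OkHttp (hostname and pin hash are placeholders); with a pin set, even a proxy whose root CA has been installed on the device gets rejected:

        ```kotlin
        import okhttp3.CertificatePinner
        import okhttp3.OkHttpClient

        // Pin the server's public key hash; TLS connections presenting any other
        // certificate chain fail, which blocks interception proxies.
        val pinnedClient: OkHttpClient = OkHttpClient.Builder()
            .certificatePinner(
                CertificatePinner.Builder()
                    .add("api.example.com", "sha256/AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA=")
                    .build()
            )
            .build()
        ```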

        You could probably access the secret with enough work, but it would be a lot of work: you’d have to do it separately for each app, and the developer can change the secret whenever they want - and will, at the slightest hint of anything like this being used against their app. They might also take additional steps to keep it from being accessed, e.g. storing it in the Trusted Platform Module or its equivalent on Android/iPhone. Even the CIA can’t access that - it’s mostly intended for payment processing and protecting data on a stolen phone, but there’s nothing stopping a weather app from using it to prevent unauthorised access to its API (weather data is very expensive, and often billed per API request).
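
        On Android, the rough equivalent is importing the secret into the hardware-backed keystore, so the app can compute HMACs with it but can never read the raw key back out (sketch only; the alias and flow are invented):

        ```kotlin
        import android.security.keystore.KeyProperties
        import android.security.keystore.KeyProtection
        import java.security.KeyStore
        import javax.crypto.Mac
        import javax.crypto.spec.SecretKeySpec

        // Import an HMAC secret into the Android keystore, then sign with it.
        // Once imported, the key material is held by the secure hardware and is
        // not extractable by the app (or by a debugger attached to it).
        fun hmacWithKeystoreKey(secret: ByteArray, message: ByteArray): ByteArray {
            val keyStore = KeyStore.getInstance("AndroidKeyStore").apply { load(null) }
            keyStore.setEntry(
                "api_hmac_key", // hypothetical alias
                KeyStore.SecretKeyEntry(SecretKeySpec(secret, KeyProperties.KEY_ALGORITHM_HMAC_SHA256)),
                KeyProtection.Builder(KeyProperties.PURPOSE_SIGN).build()
            )
            val key = keyStore.getKey("api_hmac_key", null)
            return Mac.getInstance("HmacSHA256").apply { init(key) }.doFinal(message)
        }
        ```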

        Running the real app on a real phone, though… there’s basically nothing an app developer can do to stop that.

        • LostXOR@fedia.io · 10 months ago

          I was thinking more of using a debugger to see the API calls the app is making before SSL encryption, rather than intercepting them over the network. Getting the secret would be harder, but I assume it’s stored somewhere in the app or app data and could be extracted. I’d be surprised if social media apps are storing it in the TPM.

          I guess it comes down to whether it’s easier/cheaper to do all of the above than to just buy a bunch of physical phones.