• jacksilver@lemmy.world
      2 months ago

      Yeah, but since neural networks are really just function approximators, the farther you move from the training input space, the higher the error gets. Multiplication is worse because layers are generally additive (affine combinations plus a nonlinearity), so to represent it exactly you'd roughly need as many layers as the largest input value for it to work.
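
      You can see the extrapolation problem even in the simplest additive model. A minimal sketch (a hypothetical toy example, not anyone's actual network): fit the best purely additive model `a*x + b*y + c` to `f(x, y) = x*y` on a small training grid, then check the error far outside that grid. In-distribution the fit looks okay; off-distribution the error blows up, which is the failure mode described above.

      ```python
      # Toy illustration (assumption: a single affine "layer" stands in for
      # an additive network). Fit a*x + b*y + c to x*y on a small grid,
      # then evaluate far outside the training region.

      def lstsq_3(rows, targets):
          """Solve the 3x3 normal equations A^T A w = A^T t by Gaussian elimination."""
          ata = [[sum(r[i] * r[j] for r in rows) for j in range(3)] for i in range(3)]
          atb = [sum(r[i] * t for r, t in zip(rows, targets)) for i in range(3)]
          m = [ata[i] + [atb[i]] for i in range(3)]
          for col in range(3):                      # forward elimination with pivoting
              piv = max(range(col, 3), key=lambda r: abs(m[r][col]))
              m[col], m[piv] = m[piv], m[col]
              for r in range(col + 1, 3):
                  f = m[r][col] / m[col][col]
                  for c in range(col, 4):
                      m[r][c] -= f * m[col][c]
          w = [0.0] * 3                             # back substitution
          for i in reversed(range(3)):
              w[i] = (m[i][3] - sum(m[i][j] * w[j] for j in range(i + 1, 3))) / m[i][i]
          return w

      # Training region: x, y in {0..4}
      train = [(x, y) for x in range(5) for y in range(5)]
      a, b, c = lstsq_3([[x, y, 1.0] for x, y in train],
                        [x * y for x, y in train])

      def model(x, y):
          return a * x + b * y + c

      # Worst-case error inside the training grid is modest...
      in_err = max(abs(model(x, y) - x * y) for x, y in train)
      # ...but far outside it the error grows roughly quadratically.
      out_err = abs(model(10, 10) - 10 * 10)
      print(f"fit: {a:.2f}*x + {b:.2f}*y {c:+.2f}")   # fit: 2.00*x + 2.00*y -4.00
      print(f"max error on training grid: {in_err:.2f}")  # 4.00
      print(f"error at (10, 10): {out_err:.2f}")          # 64.00
      ```

      The best additive fit on that grid is `2x + 2y - 4`: fine near the data, but at (10, 10) it predicts 36 where the true product is 100. No amount of data fixes that; the model class just can't express the interaction term.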