“Our primary conclusion across all scenarios is that without enough fresh real data in each generation of an autophagous loop, future generative models are doomed to have their quality (precision) or diversity (recall) progressively decrease,” they added. “We term this condition Model Autophagy Disorder (MAD).”

Interestingly, this may become an even more challenging problem as the use of generative AI models online increases.

  • wahming@kbin.social · 1 year ago

    Given the prevalence of bots and attempts to pass off fake data as real though, is there still any way to reliably differentiate good data from bad?

    • ZickZack@kbin.social · 1 year ago

      Yes: keep in mind that “good” here doesn’t refer to the content of the data, but to how statistically interesting it is for the model.

      Really, what machine learning is doing is trying to deduce a probability distribution q that approximates the true distribution p, given only samples x ~ p(x).
      The problem with statistical learning is that we only ever see a vanishingly small part of the true distribution (we only have finite samples from an infinite sample space of images/language/etc…).

      So what we really need to do is pick samples that adequately cover the entire distribution without being redundant, since redundancy both creates more work (you simply have more things to fit against) and can obscure the true distribution:
      Let’s say that we have a uniform probability distribution over [1,2,3] (uniform means everything has the same probability of 1/3).

      If we faithfully sample from this we can learn a distribution that will also return [1,2,3] with equal probability.
      But let’s say we have some redundancy in there (either direct duplicates or, in the case of language, near-duplicates):
      The empirical distribution may look like {1,1,1,2,2,3}, which makes 1 seem a lot more likely than it really is.
      One way to deal with this is to just sample a lot more points: if we sample 6000 points, we naturally get closer to the true distribution (similar to how flipping a coin twice can give you 100% tails even though the coin is fair; flip it more often and the observed frequency returns to the true probability).
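
      To make that concrete, here’s a tiny plain-Python sketch (just counting, no ML) of how the redundant sample above skews the empirical distribution, and how many faithful samples pull it back towards 1/3 each:

      ```python
      # Empirical distribution of a fair "three-sided die" over {1, 2, 3},
      # first from a small redundant sample, then from many faithful samples.
      import random
      from collections import Counter

      def empirical_distribution(samples):
          """Relative frequency of each outcome in the sample."""
          counts = Counter(samples)
          return {outcome: counts[outcome] / len(samples) for outcome in sorted(counts)}

      # The redundant sample from the text: 1 looks far more likely (0.5) than it is (1/3).
      print(empirical_distribution([1, 1, 1, 2, 2, 3]))

      # 6000 faithful samples land close to the true 1/3 each.
      many = [random.choice([1, 2, 3]) for _ in range(6000)]
      print(empirical_distribution(many))
      ```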

      Another way is to correct our observations towards what we already know to be true about our distribution (e.g. a direct 1:1 duplicate in language is presumably a copy-paste rather than a true increase in the probability of that subsequence).
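
      A toy version of that correction: if we assume exact duplicates are copy-paste artifacts, we can collapse them before counting (real pipelines also do fuzzier near-duplicate detection, but the idea is the same):

      ```python
      # Treat exact duplicates as copy-paste artifacts: count each distinct sentence once.
      from collections import Counter

      corpus = [
          "elephants are big",
          "elephants are big",   # copy-paste duplicate
          "elephants are big",   # copy-paste duplicate
          "coins are round",
          "coins are round",     # copy-paste duplicate
          "ice is cold",
      ]

      raw_counts = Counter(corpus)           # inflated by the duplicates
      deduped_counts = Counter(set(corpus))  # each distinct sentence counted once

      print(raw_counts)      # 'elephants are big' counted 3 times
      print(deduped_counts)  # every sentence counted exactly once
      ```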

      <continued in next comment>

      • ZickZack@kbin.social · 1 year ago (edited)

        The “adequate covering” of our distribution p is also pretty self-explanatory: we don’t need to see the statement “elephants are big” a thousand times to learn it, but we do need to see it at least once.

        Think of the p distribution as e.g. defining a function on the real numbers. We want to learn that function using a finite amount of samples. It now makes sense to place our samples at interesting points (e.g. where the function changes direction), rather than just randomly throwing billions of points against the problem.
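
        Here’s a small numpy sketch of that idea (the function, the sample counts, and the placements are made up for illustration): approximating f(x) = |x| by linear interpolation from eight points, once placed uniformly at random and once clustered around the kink at x = 0, which is the only “interesting” point:

        ```python
        # Compare interpolation error for random vs. informed sample placement on f(x) = |x|.
        import numpy as np

        rng = np.random.default_rng(0)
        grid = np.linspace(-1.0, 1.0, 2001)   # dense grid for measuring the error

        def max_interp_error(sample_x):
            xs = np.sort(np.concatenate(([-1.0, 1.0], sample_x)))  # pin both endpoints
            approx = np.interp(grid, xs, np.abs(xs))
            return np.max(np.abs(approx - np.abs(grid)))

        random_x = rng.uniform(-1.0, 1.0, size=8)
        informed_x = np.array([-0.2, -0.05, -0.01, 0.0, 0.01, 0.05, 0.2, 0.5])

        print("random placement error:  ", max_interp_error(random_x))    # some positive error
        print("informed placement error:", max_interp_error(informed_x))  # ~0: the kink itself is sampled
        ```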

        That means that even if our estimator is bad (i.e. it can barely distinguish real and fake data), it is still better than just randomly sampling (e.g. you can say “let’s generate 100 samples of law, 100 samples of math, 100 samples of XYZ,…” rather than just having a big mush where you hope that everything appears).
        That makes a few assumptions: the estimator is better than 0% accurate, the estimator has no statistical bias (e.g. the estimator didn’t learn things like “add all sentences that start with an A”, since that would shift our distribution), and some other things that are too intricate to explain here.
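
        As a sketch of the “100 samples of law, 100 samples of math, …” idea, here is stratified sampling from a labelled pool instead of one big mush. The pool and the keyword-based `predict_domain` below are stand-ins, not a real estimator; any classifier better than random would slot in the same way:

        ```python
        # Stratified sampling: pick up to `per_domain` documents from each predicted domain.
        import random
        from collections import defaultdict

        def predict_domain(text):
            # Hypothetical stand-in for a (possibly weak) domain classifier.
            if "court" in text or "contract" in text:
                return "law"
            if "theorem" in text or "integral" in text:
                return "math"
            return "other"

        def stratified_sample(pool, per_domain):
            buckets = defaultdict(list)
            for doc in pool:
                buckets[predict_domain(doc)].append(doc)
            sample = []
            for docs in buckets.values():
                random.shuffle(docs)
                sample.extend(docs[:per_domain])   # up to `per_domain` docs per bucket
            return sample

        pool = [
            "the court ruled the contract void",
            "we prove the theorem by induction",
            "the integral converges absolutely",
            "a contract requires consideration",
            "cats are soft",
        ]
        print(stratified_sample(pool, per_domain=1))   # one law, one math, one other
        ```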

        Importantly: even if your estimator is bad, it is better than not having one. You can also manually tune it towards being a little bit biased, either to reduce variance (e.g. filter out all HTML code) or to reduce the impact of certain real-world effects (like the fact that most stuff on the internet is English: you may want to balance that down to get a more multilingual model).
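
        A sketch of those two manual knobs (the HTML heuristic and the per-language keep-probability are illustrative assumptions, not tuned values):

        ```python
        # Drop HTML-looking documents (variance reduction) and thin out the dominant language.
        import random
        import re

        LANGUAGE_KEEP_PROB = {"en": 0.3}   # hypothetical: keep 30% of English, 100% of the rest

        def looks_like_html(text):
            return bool(re.search(r"</?\w+[^>]*>", text))

        def keep(doc):
            if looks_like_html(doc["text"]):
                return False
            return random.random() < LANGUAGE_KEEP_PROB.get(doc["lang"], 1.0)

        corpus = [
            {"text": "<div>click here</div>", "lang": "en"},
            {"text": "elephants are big", "lang": "en"},
            {"text": "Elefanten sind groß", "lang": "de"},
        ]
        print([d["text"] for d in corpus if keep(d)])   # HTML dropped, English thinned out
        ```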

        However, you have to note here that these are LANGUAGE MODELS. They are not everything models.
        These models don’t aim for factual accuracy, nor do they have any way of verifying it: That’s simply not the purview of these systems.
        People use them as everything models because, empirically, there’s a lot more true stuff than nonsense in those scrapes, and language models have to know something about the world to e.g. resolve ambiguity, but these are side effects of their training as language models.
        If you have a model that produces completely realistic (but semantically wrong) language, that’s still good data for a language model.
        “Good data” for a language model does not have to be “true data”, since these models don’t care about truth: that’s not their objective!
        They just complete sentences by predicting the next token, which is independent of factuality.
        There are people working on making these models more factual (same idea: you bias your estimator towards things that are more likely to be true, e.g. boosting reliable sources such as Wikipedia rather than training on uniformly weighted webscrapes), but to do that you need a lot more oversight over your data, for which you need more efficient models, for which you need better distributions, for which you need better estimators (though in that case they would be “factuality estimators”).
        In general, though, the same “better than nothing” sentiment applies: if you have a sampling strategy that is not completely wrong, you can still beat models trained on completely random samples. If your estimator is good, you can beat them substantially (and LLMs are pretty good at almost everything, which means you will get pretty good samples if you just sample according to the probability with which the LLM tells you “this data is good”).
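
        Here’s what “sample according to the probability that the estimator says this data is good” might look like as a sketch; `quality_score` is a toy stand-in for an LLM or small classifier returning something like P(“this data is good”):

        ```python
        # Sample documents in proportion to a quality score.
        import random

        def quality_score(text):
            # Toy proxy: reward lexical variety, punish pure repetition.
            words = text.split()
            return len(set(words)) / len(words)

        docs = [
            "spam spam spam spam spam",
            "elephants are big and live in herds",
            "buy now buy now buy now",
            "the integral of 1/x is log x plus a constant",
        ]
        weights = [quality_score(d) for d in docs]
        picked = random.choices(docs, weights=weights, k=2)   # higher score => more likely sampled
        print(picked)
        ```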

        For actually making sure that the stuff these models produce is true, you need very different systems that actually model facts rather than just language. Another way is to take the machine learning model out of the accuracy-critical path (i.e. you build a system where the model may be bad, but can never give you a wrong answer):
        One example would be vector-search engines that, like regular search engines, retrieve information from a corpus based on similarity as predicted by a machine learning model. Since you retrieve from a fixed corpus (like Wikipedia), the model will never give you wrong information (assuming the corpus itself is not wrong)! A bad model may just fail to find the correct e.g. Wikipedia entry to present to you.
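
        A minimal sketch of that retrieval idea (the bag-of-words “embedding” below is a stand-in for a real embedding model): a weak model can only return a less relevant passage, it cannot invent text that isn’t in the corpus:

        ```python
        # Retrieve the nearest corpus entry by cosine similarity over toy embeddings.
        import re
        import numpy as np

        corpus = [
            "elephants are big",
            "whales live in the ocean",
            "paris is the capital of france",
        ]

        def tokens(text):
            return re.findall(r"[a-z]+", text.lower())

        VOCAB = sorted({w for doc in corpus for w in tokens(doc)})

        def embed(text):
            words = tokens(text)
            return np.array([words.count(w) for w in VOCAB], dtype=float)

        corpus_vecs = np.stack([embed(doc) for doc in corpus])

        def retrieve(query):
            q = embed(query)
            sims = corpus_vecs @ q / (np.linalg.norm(corpus_vecs, axis=1) * np.linalg.norm(q) + 1e-9)
            return corpus[int(np.argmax(sims))]

        print(retrieve("how big are elephants?"))   # always a verbatim passage from the fixed corpus
        ```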