- cross-posted to:
- [email protected]
Millions of articles from The New York Times were used to train chatbots that now compete with it, the lawsuit said.
What a joke. Oh okay, so if the LLM's output can annotate where the snippets came from, then it's totally cool?
The fuck are we doing? We’re really sleepwalking into a future where a few companies are able to slurp up the entire history of human creative thought, crunch some statistics about it with the help of severely underpaid Kenyans, and put a paywall around it, and that’s totally legal.
Every time I see an “AI” (these are not fucking AI, and yet we’re fucking doomed already) apologist, I always think of Peter Gibbons explaining the “fractions of a penny” scheme. https://www.youtube.com/watch?v=yZjCQ3T5yXo
Are we really this dumb? Maybe we deserve the dystopia we’re building.
I get it. It can seem alarming, and I won't argue here about training on copyrighted works.
If a few companies can slurp up our entire public-domain history and profitably paywall useful products of it, is that still a moral failing?
That future already happened ten years ago, when the NYT lost its lawsuits against Google.