What ultimately matters is the algorithm that makes DeepSeek efficient. Models come and go quickly, so the weights themselves aren’t all that valuable. If people are serious about wanting a fully open model, they can build one. You can use things like Petals to distribute the work of training, too.
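For the curious, Petals exposes a transformers-style API over a volunteer swarm (it mainly targets inference and fine-tuning of existing open-weight models rather than pretraining from scratch). A minimal sketch based on its published quickstart; the model name and the availability of a public swarm hosting it are assumptions:

```python
from transformers import AutoTokenizer
from petals import AutoDistributedModelForCausalLM

# Any model that peers on the network are currently serving (assumed here)
model_name = "petals-team/StableBeluga2"

# Connect to the distributed swarm hosting the model's layers
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoDistributedModelForCausalLM.from_pretrained(model_name)

# Generate as if the full model were local; layers actually run on remote peers
inputs = tokenizer("A cat sat", return_tensors="pt")["input_ids"]
outputs = model.generate(inputs, max_new_tokens=5)
print(tokenizer.decode(outputs[0]))
```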
That’s fine if you think the algorithm is the most important thing. I think the training data is equally important, and I’m so frustrated by the bastardization of the meaning of “open source” as it’s applied to LLMs.
It’s like a software vendor shipping a thin wrapper over a proprietary library you must link against, then calling the project open source. The wrapper is open, but the actual substance that provides the functionality isn’t.
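Concretely, the analogy looks something like this; every name here is hypothetical, just to show how little the “open” part contains:

```python
import ctypes

# Hypothetical closed-source library: you get a binary blob, never the source.
_blob = ctypes.CDLL("libproprietary.so")
_blob.blob_infer.argtypes = [ctypes.c_char_p]
_blob.blob_infer.restype = ctypes.c_char_p

def answer(prompt: str) -> str:
    """The entire 'open source' project: a thin pass-through wrapper."""
    return _blob.blob_infer(prompt.encode()).decode()
```

The wrapper is trivially auditable, but nothing about the system’s actual behavior is inspectable or reproducible, which is the same relationship open weights have to undisclosed training data.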
It’d be fine if we could just use more honest language like “open weight”, but “open source” means something different.
Again, if people feel strongly about this then there’s a very clear way to address this problem instead of whinging about it.
Yes. That solution would be to not lie about it by calling something that isn’t open source “open source”.
Sigh, it’s because the training data is largely ChatGPT output itself. Chill
I mean, god bless 'em for stealing already-stolen data from scumfuck tech oligarchs and causing a multi-billion dollar devaluation in the AI bubble. If people could just stop laundering the term “open source”, that’d be great.
Plenty of debate on what classifies as an open source model last I checked, but I wasn’t expecting honesty from you there anyways.
You won’t see me on the side of the “debate” that launders language in defense of the owning class ¯\\_(ツ)_/¯
Nobody is doing that, but keep making bad faith arguments if you feel the need to.