- cross-posted to:
- [email protected]
OpenAI’s history as a nonprofit research institute that also sells commercial products like ChatGPT may be coming to an end as the San Francisco company looks to more fully convert itself into a for-profit corporation accountable to shareholders.
The company’s board is considering a decision that would change the company into a public benefit corporation, according to a source familiar with the discussions who wasn’t authorized to speak publicly about them.
While OpenAI already has a for-profit division, where most of its staff works, it is controlled by a nonprofit board of directors whose mission is to help humanity. That would change if the company converts the core of its structure to a public benefit corporation, which is a type of corporate entity that is supposed to help society as well as turn a profit.
This was so predictable. It seems they only care about profits now, and about positioning themselves as a monopoly, so they really haven’t been compatible with the non-profit model for a while. Especially since they just raised more than $6 billion.
It’s truly amazing to me that a “non-profit” that has disingenuously represented itself from day one, and is based almost entirely on the theft of intellectual property that would get normal individuals prosecuted out the ass, can continue to rake in absurd amounts of cash. I really dislike this company.
The government should just nationalize them
I would tend to agree. I feel the same way about Amazon Logistics. The whole packaging and delivery side of Amazon should be brought under the USPS so every driver gets a fair wage and job protections. USPS already delivers so much of their stuff anyway, and Amazon shouldn’t be in control of the marketplace and the whole shipping process and infrastructure.
OpenAI is relying on all of human knowledge to build their product, regardless of whether or not it falls under copyright protection, so while the government doesn’t need more power, it’s still probably the best positioned to regulate this stuff, and that’d be much easier if they had direct control.
Absolutely nobody saw that coming…
Next you’re going to tell me they were never open source (they were never open source).
“Open” could also mean open publication of papers, open standards for APIs, or (most likely) Altman’s grabby hand always being open for money.
Sam Altman is and always has been yet another tech bro.
Every time I’ve heard anything about this guy, it’s always made me think “oh, that guy is a total piece of shit.”
Yeah, his “funny story” about carrying around a briefcase to blow up his data center is just an asshat wanting to play James Bond.
Removed by mod
They should change the name to a single letter. Like Y.
“Open” doesn’t imply “non-profit.” In fact, it doesn’t imply much of anything. In that way it’s like “Active” or “Direct,” which are or were used as parts of a number of Microsoft product names.
Removed by mod
ClosedAI
Hey. What is this from?
That’s Jim Nabors as Gomer Pyle from the old Andy Griffith Show. He also played the same character on Gomer Pyle, U.S.M.C.
This solved a 20-year mystery for me. If you listen to Pink Floyd’s “Nobody Home”, I think this plays at around 3:00.
Yep, that’s him, that’s exactly how he would say it!
Just guillotine the rich already
Capitalism is already a superintelligence, and its goals are misaligned with those of humanity.
The iconic post, from over 10 years ago: https://omniorthogonal.blogspot.com/2013/02/hostile-ai-youre-soaking-in-it.html
Corporations are driven by people; they aren’t completely autonomous agents. Yet if you shot the CEO of Exxon or any of the others, what effect would it have? Another person of much the same ilk would swiftly move into place, much as stepping on a few ants hardly affects an anthill at all.
I’ve never understood why people feel compelled to share this take.
Edit, for clarity: I mean the quoted rhetorical statement, not the essay’s overall conclusions about AI.
Sociopaths and toxic narcissists comprise only a small percentage of the population, and most of them lack even rudimentary business acumen. I don’t advocate your hypothetical strategy, but if it were sustained, they would eventually run out of those categories of sleazeballs.
How can it be a thing to just turn a non-profit organization into a for-profit organization?
I thought non-profits get benefits when it comes to taxation and regulation.
Why should the new for-profit organization be allowed to reap the rewards of the benefits it used to grow into what it is now? That seems to me like it should be plainly illegal.
Immediate back taxes on everything prior
That’s how it should work but not how it works.
It’s more like an open-source project going closed-source. They can do that at any time.
Shocking. Absolutely shocking, I say. It’s almost like we’ve seen this exact pattern a hundred times before.
Fuck these cunts.
God I hope open source models eat their lunch.
Unless we can get people to donate $400 million per month to the organizations that develop and train open-source models, this dream will always remain a fantasy.
OpenAI and Microsoft simply have way too much money to compete against, because they can afford to buy literally their own entire fucking nuclear reactor complex to power their AI machine learning server super-clusters.
Open source products, back in the day, competed against the giant corporations of their time, which spent vast sums on R&D, and yet succeeded. That’s because massive organizations lack agility, suck at innovation and fail to inspire people with real talent. They might make a few prestige hires here and there, but who wants to work for Microsoft or for a greedhead VC tech bro, even if it means a big payday? And there are more ethical organizations around that are not controlled by Altman or other greedhead tech bros like Zuckerberg.
Llama is open source, supported by Facebook. Mistral AI is not far behind either.
PSA: There is a super easy to use app called “GPT4ALL” which lets literally any idiot run an open source LLM on their local machine. (As long as you don’t go fucking crazy and try to run a huuuuge model, that is)
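If anyone wants to see what that looks like beyond the GUI, here’s a minimal sketch using the gpt4all Python bindings. The model filename and the device string are just examples, so check the docs of whatever version you install:

```python
# Minimal local-LLM sketch with the gpt4all Python package (pip install gpt4all).
# The model filename is just an example from the GPT4All catalog; pick whatever
# small model your hardware can actually handle.
from gpt4all import GPT4All

# Downloads the model on first run. device="gpu" asks for GPU (Vulkan-capable)
# inference where available; leave it out to fall back to CPU.
model = GPT4All("orca-mini-3b-gguf2-q4_0.gguf", device="gpu")

with model.chat_session():
    reply = model.generate("Summarize what a public benefit corporation is.", max_tokens=200)
    print(reply)
```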
Smaller models run at pretty good speeds on my RX 5700 XT. Yes, you heard that right: you can use Vulkan to run them on an AMD GPU.
Is there an easy program to run one on an Intel GPU?
It uses Vulkan so it should be fine
I gave it a shot, but the program just crashes on Windows 11. It may be because I only have 4 GB of VRAM.
Tested with a 4 GB RTX 2050 laptop; works perfectly fine. Tested on an i5-3570 & RX 480 machine; also works fine.
What are your settings and setup?
I have an Intel A310 and a Ryzen APU. I guess I was trying to select the APU’s Vulkan device in the menu. The A310 doesn’t show up as an option.
I was looking for a job in the San Fran area by keyword for a product I’m a near expert on. I happened to find a listing at OpenAI for a position I was qualified for. The post was truly creepy and had a weird fascist smear all over it: they wrote in all caps, multiple times, that there would be no work from home or even hybrid work. It was unnecessary and very strange. I ran away and never looked back. None of this surprises me.
I’m shocked, I tell you.