He’s CEO of a company that went all-in on forcing AI into everything, including all of MS internal procedures.
They made it mandatory to use Copilot for internal performance reviews and writing progress reports. All coding must start with a prompt, and devs are called into meetings with HR if they don’t use Copilot enough. He’s also in a position to see how little users interact with Copilot, how code quality took a nosedive, how productivity hasn’t increased after investing billions, and how Linux desktop market share is now growing exponentially.
He’s right to be nervous.
He’s a CEO; they only fail upwards. The only nervousness comes from the fact that his golden parachute might not be as big as he thought.
Because the deranged pedophile they backed is tanking the dollar. Honestly, guillotines are too good for these people. I vote woodchipper.
Legs first.
Hey, it’s almost as if everyone is realizing that AI is total bullshit.
There’s clearly an AI bubble. Let’s just pop this shit and get it over with. The sooner the market corrects, the sooner it can start recovering.
Ah, but don’t forget about the second and third bubble.
there’s more bubbles? ☹️
These things come in cycles. Six months after the bubble “pops”, someone will come up with an AI that can grind Minecraft for you or whatever, and that’ll spark a whole new wave of interest in “robust independent agents” until companies realise that’s also not worth 500 trillion.
Here’s hoping that because they’ve crammed this version of AI into everything it could possibly go into, and even a bunch of things it can’t, the hype wagon for the next one is a honey wagon and everyone rightly stays away from it…
This looks exactly like the graph of Coronavirus infections in my country.
We’ve had one yes. What about second bubble?
I don’t think he knows about second bubble.
They’re going to lose their shirts over this shit.
They’re trying to force their utopian vision of robot and human slaves serving the “elite”, but they need the slaves’ help to get there. Probably that time will come, but not in their lifetime, and that’s what’s pissing them off. It’s all about them – their selfish, psychopathic desires, humanity be damned. It’s also why they’re trying to develop immortality drugs: they see the dream of all past capitalist overlords as finally, possibly in their grasp, but they can’t quite get it. They’re like frustrated monkeys reaching through the cage bars and fighting each other.
They’re going to lose their shirts over this shit.
Lmao billionaires never see consequences. It will implode and millions of regular people will lose their jobs and have the tools they need blow up and stop working. But the billionaire that caused it will be just fine in the end.

Let’s ask Bob Kelso about AI

He is effectively saying specialised AI has a possible future and LLMs are a huge waste of time and money. But he doesn’t know that he is saying that.
If you need a new type of nuclear reactor to power your shit it means your shit is too complicated.
There are plenty of things that are complicated and could use a new type of nuclear reactor.
Training LLMs just didn’t seem to be one of them
But thanks to Satya I can now right-click a file and select “Summarise”, how is that not a massive MASSIVE productivity boost???
IDK about you, but the time it takes for me to actually interface with the AI and then verify it did things correctly is about the same as just doing the work myself.
Are you being sarcastic too? I really can’t tell anymore with all the clankers these days.
It sounds like you don’t bother to check the validity of the output, which is a TERRIBLE habit that will bite you in the ass eventually.
AI is not complicated (hard, but solvable with time and effort); it’s complex (each piece is easy, but there are many of them and their interactions make it hard)?
sigmoid(aX+b) is not that complicated
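For anyone who hasn’t seen the notation: sigmoid(aX+b) is just a logistic squash applied to an affine transform, the basic building block of a neural network unit. A minimal sketch in Python (the function names here are illustrative, not from any particular library):

```python
import math

def sigmoid(z):
    # Logistic function: maps any real number into the interval (0, 1)
    return 1.0 / (1.0 + math.exp(-z))

def unit(x, a, b):
    # The "sigmoid(aX + b)" building block:
    # an affine transform (a*x + b) followed by the sigmoid squash
    return sigmoid(a * x + b)

# With a=1, b=0 this reduces to the plain sigmoid, which is 0.5 at x=0
print(unit(0.0, 1.0, 0.0))  # → 0.5
```

The point being made is that the math of a single unit really is this simple; the difficulty (and the resource cost) comes from stacking billions of these and training the parameters `a` and `b` at scale.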
That wasn’t even the argument. The argument is that AI is a resource waste; it uses too much power and water for what is a shit return.
Who cares how complicated it is.
once it no longer needs massive amounts of expensive resources, what then?
It’s already here; it’s called the average Redditor.
This isn’t a joke. You already know what a low-power LLM is: it’s a 40W human brain with high-school-level knowledge and a confident human tone writing on Reddit. But not the entire brain, only the natural language and superficial knowledge part… meaning having the human is better anyway.
high school level knowledge
I love your optimism. These are the people who add an S to mass nouns like ‘email’, and who pluralize with an apostrophe. A successful high-school education isn’t on display there, at least in terms of literacy (admittedly, we are represented by how we write).
Is it correlative, in that many Redditors live in a country where 54% of them do not communicate above a 6th-grade level?
How about Louisiana or Mississippi or Missouri high school level?
That’s a massive assumption on your part. Exactly how do you propose that the entire architecture of LLMs be restructured to not be the massive resource hog desperately seeking a profit that it is? Mind you, it’s shit now, and you’re basing your argument on a hypothetical “maybe”, like maybe you’ll win a massive lottery payout tomorrow.
You make a compelling argument; it will never be achieved. I am convinced. Not one human is capable, nor are one thousand humans. It is an insurmountable mountain.
It is the year 5000. Aliens look at the wreckage of Earth. They see what little remains of our meager existence. The goal that haunted humanity for all eternity, a doomed plan from the start, only ever resulting in certain failure: making chatbots use as little energy as a gaming rig. It never happened. It was… impossible.
Oh good. If you resort to hyperbole to prove your argument you’ve already lost.
Okay, since it’s obviously hyperbole that it isn’t possible, it’s a potential outcome, so let’s return to my original question (deemed a massive assumption on my part by you):
once it no longer needs massive amounts of expensive resources, what then?
What kind of shits require nuclear power to be pushed out, mate? You’re getting me worried. Been having your fibers?
Omg our path forward on how to avoid the consequences of our actions isn’t aligned with our fundamental business needs of doing whatever we like with no consequences. If only my MBA training and Six Sigma/Lean Leader certs had prepared me for this. I better go post on LinkedIn about how GenZ did this.
- someone somewhere at MS
He’s the guy gonna be left holding the biggest bucket of slop
It only takes one of these large companies to walk away for everyone else to panic and the bubble will burst. I really hope MS takes a step away from it and actually tries to innovate something.
Apple has been dragging their feet on AI from the start. They got sued by shareholders for not putting enough AI in their products fast enough. Makes me wonder if they smelled the shit from the start and have been half-assing it on purpose.
Microsoft has been the opposite. If both of them shrug and say “I guess we’ll just have slightly better digital assistants”, the market might wake up and go “oh, shit.”
I think OpenAI and Anthropic could get bailed out, er, bought by MS/Apple/Google in a fire sale. Grok will just suck it or get propped up by Trump. Having just paid Google a billion for them to back Siri, it seems like Apple’s play is to have the tick-box feature but not develop it themselves.
This is a pretty uninformed take, and maybe entirely influenced by the intentional marketing/PR image they crafted, but I get the impression that Anthropic was more of a “building a practical tool with this technology, and iterating on that practicality” company and less of a “this will be digital god” sort of company. Maybe their value wouldn’t crater as much, since they’ve been selling it as a reasonably practical corporate tool rather than a magical profit box?
Pop. Pop. Pop. Pop. 🤞
Once those companies are held responsible for everything AI (copyright issues with training, resource waste, and people getting harmed and killed by their output), they’ll have every reason to be nervous.
Best we can do is a government bailout
Can I just say I love how Iceland treated its 2008 banking collapse? They didn’t bail out anyone, but they did protect investments by people. Some banks died, losing their investors billions. People didn’t lose their house or their own retirement.
This is the way we should treat all of this. Let the companies die, and let private equity take the hit for being stupid.
That’s not how capitalism works. Everyone else faces the consequences, the ones up top suffer nothing.
They should. Fuck, AI doesn’t even need to implode for them to get bent. I’m forced to do some stuff with it at work for ‘research’. Copilot is the worst of the worst. I don’t know how they could fork GPT and come up with an even worse product, but there they are.
I don’t know how they could fork GPT and come up with an even worse product, but there they are.
That’s a Microsoft specialty… have you seen Edge lately?
Lately, no. I had switched to them from Chrome for vanilla testing assuming Microsoft was less able to market my data. Had it running in Linux, no less.
Then they decided to drop Manifest V2. YEET
I saw they rebranded O365 to Copilot, making the one last product that some people still had an appetite for look incredibly unappealing.
For anyone curious and lazy:
Speaking at the World Economic Forum at Davos, Switzerland on Tuesday, Nadella pontificated about what would constitute such a speculative bubble, and said that the long-term success of AI tech hinges on it being used across a broad range of industries — as well as seeing an uptick in adoption in the developing world where it’s not as popular, the Financial Times reports. If AI fails, in other words, it’s everyone else’s fault for not using it.
Nadella explained the pitfalls the AI industry would need to avoid, perhaps betraying his own anxieties about its future.
“For this not to be a bubble by definition, it requires that the benefits of this are much more evenly spread,” Nadella said, as quoted by the FT. The “tell-tale sign of if it’s a bubble,” he added, would be if only tech companies were benefitting from the rise of AI. He gave the example of a pharmaceutical company using AI to accelerate drug trials; it doesn’t need to be used to discover the “magical molecule,” but provide some other tangible, less extraordinary benefit to developing the product.
Nadella is adamant that these kinds of boosts that AI provides will justify AI and carry the industry, stressing less spectacular and more practical applications of the tech.
“I’m much more confident that this is a technology that will, in fact, build on the rails of cloud and mobile, diffuse faster, and bend the productivity curve, and bring local surplus and economic growth all around the world,” he proclaimed.
Thank you for being my laziness enabler. 🙏
I would wear a cape, but just too darn lazy.
I’m much more confident that this is a technology that will, in fact, build on the rails of cloud and mobile, diffuse faster, and bend the productivity curve, and bring local surplus and economic growth all around the world,” he proclaimed.
Jesus christ i want to stuff him in a locker and roll it down a hill
Jack-Nicholson-Nodding.gif
Starship-troopers-Its-afraid.webm
im-doing-my-part.gif