It always feels like some form of VR tech comes out with fanfare and a promise that it will take over the world, but it never does.
Printer drivers.
Apparently sending data serially at glacial speeds is impossible.
The LaserJet 4P driver was the GOAT. It worked on every HP printer for years. It’s been all downhill since.
cars.
we’re in too deep now, investment bias prevents changing strategy
No no no. There will always be solutions to the problems they cause.
They kill billions of animals every year but we can build nature overpasses. They kill millions of humans every year but we can blame pedestrians for wearing headphones or not looking properly. The tires shed about a quarter of all microplastics in the environment in Canada but surely we will find a technological solution for that eventually. The parking spaces still cause heat islands but we can just cover them with solar panels. Parking also causes flooding because of impervious surfaces but we can just resurface all of them with new materials.
And soon cars will all run on hydrogen and be totally environmentally friendly. And soon cars will all run on electricity and be totally environmentally friendly. Everyone on the planet just has to buy a new car eventually, keep buying cars, and spend (buy!) energy to move them everywhere they go. But they will be environmentally friendly! Except for all the other issues, but surely we will find solutions for them. Save the planet by getting an electric car, the biggest and most expensive consumption object, and have a taste of freedom when paying to fill it with energy. /s just in case.
the automobile is the perfect vehicle for a speedy fuite en avant! Not sure how to translate that, flight forward? rush forward?
“Smart” TVs. Somehow they have replaced normal televisions despite being barely usable, laggy, DRM infested garbage.
Curious what your preferred streaming box is. Considering changing my Android TV so that it launches straight to HDMI, disconnecting it from the internet, and using a streaming box that isn’t as slow and has a hardwired connection instead.
They are surveillance and ad-delivery platforms. The user experience is as bad as the consumer can tolerate. They work as intended.
I don’t buy it, they would be better at whatever nefarious crap if they didn’t take a full second to navigate between menu options, or had a UI designed by someone competent. Even people who have subscriptions to the services the TV is a gateway to have a hard time figuring out how to use them. These things aren’t even good at exploitation, they are decaying technology.
If every smart TV you buy is the same, then you have no viable choices, and as such they’re doing the bare minimum of what’s expected for the bare minimum of cost.
You can choose not to have a TV. I only know about the current state of smart TVs because of sometimes being around the ones other people have, I would never buy one myself, there’s no need. Any media you want to see can be viewed in other ways.
Do you have a 55" OLED laptop screen to watch movies and play games on?
I mean, all power to you, but I really like having a nice sized TV.
That’s fair. I think if I wanted a larger screen I’d look into big monitors and some kind of expansion of my homelab setup to display things to it, but I can see why people might want a dedicated device with less setup required, even one where the setup is still pretty confusing.
I looked up some statistics and it seems, depressingly, that consumers are in fact buying more televisions and it’s projected to increase, so I guess I have to concede the point that what they are doing is successful despite all reason.
You’re not kidding. It’s pretty difficult to not buy them.
It’s a $250 smart TV vs a $2000 non-infested TV.
Nothing is smart if you don’t connect it to the internet.
That’s my strategy when I have to buy one of those dumb TVs. Just leave it ignorant of the internet.
Man, I haven’t really faced this yet. My flat screen is a really old Panasonic plasma and it is “barely” smart. It came with a few apps on it. I ignore them and use it as a dumb monitor, running everything through my receiver instead. When it dies, I don’t know what I’ll do.
You can disconnect them from the WiFi and block their ability to connect and then use a third party device for any apps you want.
I recently bought a TV on behalf of a friend (because it was cheaper at Costco) and when we got it to his house and connected it, it asked him to give up his privacy like 11 times. If he said no, would it still have worked?
Mine had the ability to turn off WiFi in settings. I provided it no real information, didn’t create an account, and didn’t use their app or interface.
It was a Samsung. YMMV with other brands.
They’re more expensive, but check out commercial displays. They’re basically just big “dumb” TVs for businesses to display menus and whatnot, usually with a single HDMI and no sound, but those limitations can easily be bypassed with a stereo receiver.
Only if you use it as a smart tv - I just never signed the user agreements, and now have a big TV with OLED. I switch to the source I want - off I go. Television can still just be television!
The concept confuses and infuriates me. I’m just going to stick a game console or Blu-ray player on it, but you can’t buy a TV these days that doesn’t have a bloated “smart” interface. The solution, for me at least, is a computer monitor. I don’t need or want a very large screen, and a monitor does exactly one thing, and that’s show me what I’ve plugged into it.
A projector is also a good alternative
you can buy business-grade stuff without all the spyware shit, it’s just much more expensive
So I have a contentious one: quantum computers. (I am actually a physicist, and specialised in quantum back in uni days, but now work mainly in medical and nuclear physics.)
Most of the “working” quantum computers are experiments where the outcome has already been decided, and the factoring they do can be performed on 8-bit computers or even a dog.
https://eprint.iacr.org/2025/1237.pdf “Replication of Quantum Factorisation Records with an 8-bit Home Computer, an Abacus, and a Dog”
This paper is a hilarious explanation of the tricks being pulled to get published. But then again, it is a nascent technology, and like fusion, I believe it will one day be a world-changing technology, but in its current state it is a failure on account of the bullshittery being published. Then again, such publications are still useful in the grand scheme of developing the technology, hence why the article I cited is good-humoured but still making the point that we need to improve our standards. Plus, who doesn’t like it when an article includes dogs?
Anyway, my point is, some technologies will be constant failures, but that doesn’t mean we should stop.
A cure for cancer is a perfect example. Research has been going on for a century and cumulatively amassed 100s of billions of dollars of funding. It has failed constantly to find a cure, but our understanding of the disease, treatment, how to conduct research, and prevention have all massively increased.
Cancer != cancer. There are hundreds of types of cancer. Many types meant certain death 50 years ago and can be treated and cured now with high reliability. “The” cure for cancer likely doesn’t exist because “the” cancer is not a singular thing, but a categorization for a type of diseases.
Exactly, a “cure for cancer” is like “stopping accidents”.
There’s still cancer, and there are still accidents. But on both fields it’s much better to be alive in 2026 than in 1926
Thank you for helping educate on this. I live in the best time in history to have the cancer I have. I’ll be able to live a pretty full life with what would have been a steady decline into an immobile death, were this 30 years ago.
Amen. Too few people understand this or fail to make this distinction.
Yes of course. There are also many types of quantum computer and applications, multiple types of fusion, and cancers.
yeah, it is like saying a cure for viruses or a cure for bacteria. It’s like why we don’t have a cold vaccine and flu vaccines have to be redone every year.
They didn’t thank Scribble (the dog) in their acknowledgements section. 1/10 paper, would only look at the contained dog picture
We have also produced treatments that work to some extent for some forms of cancer.
We don’t have a 100% reliable silver bullet that deals with everything with a simple five minute shot, but…
That article made my day!
Probably not top ten of mind, but Carbon Capture and Storage (CCS) has been trotted out by the fossil fuel industry for a generation as a panacea for carbon emissions, in order to prevent any real legislation limiting the combustion of hydrocarbons.
Doesn’t sound like it failed at its purpose in that case
ai
pretty much. We will never make it Cylon-level or Skynet-level intelligent. The former requires a human mind in a convoluted process, which is probably more realistic than Skynet/Kaylon.
Encryption with safe, unexploitable backdoors.
“unexploitable backdoor” is a contradiction.
https://en.wikipedia.org/wiki/One-time_pad
The one-time pad (OTP) is an encryption technique that cannot be cracked in cryptography. It requires the use of a single-use pre-shared key that is larger than or equal to the size of the message being sent.
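For illustration, a one-time pad is just an XOR of the message with a truly random, single-use key of the same length. A minimal Python sketch (function names are my own, not from any library):

```python
import secrets

def otp_encrypt(message: bytes) -> tuple[bytes, bytes]:
    # The key must be truly random, used only once, and as long as the message.
    key = secrets.token_bytes(len(message))
    ciphertext = bytes(m ^ k for m, k in zip(message, key))
    return key, ciphertext

def otp_decrypt(ciphertext: bytes, key: bytes) -> bytes:
    # XOR is its own inverse, so decryption is the same operation with the same key.
    return bytes(c ^ k for c, k in zip(ciphertext, key))

key, ct = otp_encrypt(b"attack at dawn")
assert otp_decrypt(ct, key) == b"attack at dawn"
```

The unbreakability comes entirely from the key: anyone who gets the key gets the plaintext, which is exactly why it has no “backdoor” in any meaningful sense, safe or otherwise.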
OTPs have a safe, unexploitable backdoor feature?
Oh, nice catch, thanks. I read it as “safe, without exploitable backdoors”, but that’s not what he was saying.
I read it the exact same way. Didn’t notice until reading this that that’s not what was said.
I’m going to get downvoted for this
Open source has its place, but the FOSS community needs to wake up to the fact that documentation, UX, ergonomics, and (especially) accessibility aren’t just nice-to-haves. Every year has been “The Year of the Linux Desktop™” but it never takes off, and it never will until more people who aren’t developers get involved.
Not here to downvote. But I will say there have been some good changes over the past five years.
From a personal perspective: there’s a lot of GOOD open-source software that has great user experiences. VLC. Bitwarden. OBS. Joplin. Jitsi.
Even WordPress (the new Blocks editor not the ugly classic stuff) in the past decade has a lot of thought and design for end users.
For all the GIMP/LibreOffice software that just has backwards-ass choices for UX, or those random terminal apps that require understanding the command line – they seem to be the ones everyone complains about and get imprinted as “the face of open source”. Which is a shame.
There are so many good open-source projects that really do focus on the casual, non-technical end user.
While you generally have a point, the year of the Linux desktop is not hindered by that. Distributions like Linux Mint, Ubuntu and the like are just as easy to install as Windows, the desktop environments preinstalled on them work very well, and the software is more than sufficient for like 70% to 80% of people (not counting anything that you cannot install with a single click from the app store/software center of the distribution).
Though Linux is not the default. Windows is paying big time money to be the default. So why would “normal people” switch? Hell, most people will just stop messaging people instead of installing a different messenger on their phone. Installing a different OS on your PC/Notebook is a way bigger step than that.
So we probably won’t get the “Year of the Linux Desktop”, unless someone outpays Microsoft for quite some time, or unless Microsoft and Windows implode by themselves (not likely either).
Funny you make “missing documentation” an argument against open source and for closed source, as if the average Windows user reads any documentation or even the error messages properly.
your comment is a joke.
Linux fan boys mad when regular users exist
I’m not even a “regular user” per se, just not a software dev. I’m a network administrator working in a data center. I think a lot of FOSS devs think their users are like themselves, they love to tinker and don’t mind if their PC is a project. And sometimes I do like to tinker, but sometimes I need a computer to be a tool, not an end in itself, and desktop Linux rarely serves in that capacity.
Weird considering I need my desktop to just be a tool as well, and Mint really does that for me. Just my experience tho.
If you make that argument about the state of software in general, I’d agree to an extent in the sense that it should be more prioritized. But I don’t see how that applies to open source in particular?
In those aspects proprietary software is just as bad, if not even worse. The difference is simply that the default choice of software for most tasks is a proprietary software. They can have a shit ton of unusable and confusing mess, even intentional dark patterns, but users will adapt.
There’s a reason why Apple is the poster child for accessibility. They control the entire stack from hardware to OS, and have an ocean of money to devote to what is effectively a tiny marginalized portion of their user base.
Open source is the exact opposite. Any given open source project (especially any given Linux distro) is standing atop a precarious mound of other open source projects that the distro maintainers themselves have no control over. So when accessibility breaks, the maintainers say “It’s not us, it’s GNOME”. Then GNOME says “It’s not us, it’s Wayland”, and so on.
Imagine I handed you a laptop without a working screen, then when you complain you can’t use it, I said “It’s not my problem” or “We’ll get to it eventually” or “I wouldn’t know how to help you” That’s desktop Linux when you’re blind.
Apologies if this comes across as a rant. I’m just bitter about the fact there’s all this free, privacy-respecting software out there that’s out of my reach, and I’m stuck selling my soul to Microsoft and Apple.
There’s no singular year of the Linux desktop; every year is the year of the Linux desktop, as long as Microsoft keeps shooting itself in the foot and Linux market share rises slowly, bit by bit.
I’m a reasonably new Linux user, at the stage of trying to learn how to improve/optimise my system, and honestly, Google’s Gemini has become my user manual.
If I can’t figure something out, then I could trawl through a bunch of forums where the issue doesn’t really match mine, or the fix has changed since OP had the same problem, or I could just go straight to an LLM. I understand that they have a tendency to make shit up on the fly (this is a great example), but when it comes to troubleshooting setup issues they’re really helpful. And yes, I know that’s because they’ve already ingested the support forums. But it is genuinely so much quicker to sort things out, while learning as you go.
It’s made a world of difference to me in my IT support services business. It’s not always right, but it’s always helpful even when it isn’t. It’s far better at looking at a page of log information and picking out the one bit that explains why the thing I need to work isn’t working. I’ve been emboldened to do a lot of projects that I was previously uncomfortable with. The key is I know enough about nearly anything that I can tell when I’m being led down a garden path.
The quality of the prompt is everything.
It’s far better at looking at a page of log information and picking out the one bit that explains why the thing I need to work isn’t working
Yes. I can post a terminal output into it and it’ll tell me exactly what’s not working and why. And that’s incredibly valuable.
Ironically, I used Gemini to help me build a little app that takes a copied YouTube link and uses yt-dlp to download it to my Jellyfin server in a format that’ll play nicely on my Apple TV. I can’t imagine how I’d approach achieving that if I had to start from scratch.
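For anyone curious, the core of an app like that can be very small: build a yt-dlp command that forces an H.264/AAC MP4 (a combination Apple TV plays natively, so Jellyfin doesn’t need to transcode). This is a rough sketch under assumptions; the function name, format string, and library path are mine, not the commenter’s actual app:

```python
import shlex

def build_ytdlp_command(url: str, library_dir: str) -> list[str]:
    """Build a yt-dlp invocation that downloads a video as H.264/AAC MP4."""
    return [
        "yt-dlp",
        # Prefer H.264 (avc1) video + AAC (mp4a) audio; fall back to any mp4.
        "-f", "bv*[vcodec^=avc1]+ba[acodec^=mp4a]/b[ext=mp4]",
        "--merge-output-format", "mp4",
        # Name the file after the video title inside the Jellyfin library folder.
        "-o", f"{library_dir}/%(title)s.%(ext)s",
        url,
    ]

cmd = build_ytdlp_command("https://youtu.be/example", "/media/youtube")
print(shlex.join(cmd))
```

From there it’s just a matter of running the command (e.g. via `subprocess.run`) and pointing `library_dir` at a folder Jellyfin already indexes.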
I get what you’re aiming at. My perspective is that the regular user typically is forced into a state of learned helplessness.
You learn Windows and don’t look further because you learned Windows = normal computer, MacOS = fancy expensive computer. If you cannot see the problems it is very hard to sell the solution
Regarding UX - that stuff is hard to get good. When you’re that good it’s often more lucrative to get paid for that skill set compared to passionately designing FOSS.
Huge difference between having it and not needing it and needing it and not having it.
I think the person you’re replying to is 100% correct since you’re coming at them so heated
I’ll go against the grain and say literally all of it. Every piece of technology that exists is a compromise between what the designer wants to do and the constraints of what is practical or possible to actually pull off. Therefore, all technology “fails” on at least some metric the designer would like it to achieve.

Technology is all about improvement and working with imperfection. If we don’t keep trying to make things better, then innovation stops. With your example of VR, I’d say that after having seen multiple versions of VR in my lifetime, the one that we have now is way more successful and impactful, especially in commercial uses rather than consumer products. Engineers can now tour facilities before they are built with VR headsets to see design flaws that they might not have seen just with a traditional model review, for example.

Furthermore, what we have now is just an iteration on what we had before. It doesn’t happen in a vacuum; people take what came before, look at what worked and what didn’t, and what could be fixed with other technologies that have developed in the meantime. That’s the iteration process.
Iteration isn’t a claim that the predecessor was a failure though, you iterate on the successes of the prior generation. It used to be that technology advanced so rapidly that the cutting edge became obsolete in a matter of a few years, but for that time it was a success.
I think there’s also an assumption of design philosophy here. One designer might put many generalized requirements into their design; then you get Google Glass, AI, NFTs and so on. This means everything is a failure because it couldn’t achieve the requirements. Others may pick a small set of very specific requirements; then you get the iPhone or a Toyota Hilux. These are massive successes because they had cohesion in the idea and were deliberate about what to compromise.
Since at least 1970, every decade there seems to be a, “The VR take over is here!” fad and it falls flat every time.
Those VR rollercoaster shuttle rides in malls during the 1980s and early 1990s, thinking that is the future, oh boy, we were all so silly.
AI,
Maybe like super-thin phones and foldables/rollable phones. Most people have no need or use for them tbh
I don’t want a phone so thin and slippery I can’t hold it in my hand. I want a phone as thicc as an old gray brick Game Boy. When I drop it on the floor I want to have to replace the floor. I want a battery that will outlast the lifespan of the sun.
The big one would be viable nuclear fusion, we’ve been trying to figure it out and spending money on it for like 80 years now.
That being said, there’s actually a lot of verified progress on it lately by reputable organizations and international teams.
It’s only 30 years away!
Just like it was 30 years ago.
Ah, the so called Fusion Constant.
I’ve seen a roadmap at the start of ITER which was actually longer: 30 years to get a stable exoenergetic plasma, then 30 years to build a demonstrator able to produce electricity, and then 30 years to have an industrialised fusion plant.
Well, that said, IF we succeed, that’s a game-changer in the production of electricity.
As far as I know they can get it working at small scale, in labs.
Essentially yes, https://en.wikipedia.org/wiki/Fusion_power#2020s
https://www.world-nuclear-news.org/articles/helion-begins-work-on-fusion-power-plant One of the commercial entities did start building a plant last year, not particularly large (only 50 MW) with an agreement to power a Microsoft datacenter, and billions in funding from government and private sources.
Hard to tell for real though because the level of secrecy around this is insanity and the US Military is heavily involved in not just this, but pretty much every similar organization.
I would not be surprised if we hear nothing, or see them “failing”, even if some of these designs are fully functional already.
In its defence, that assumed it was properly funded. Its actual funding was very limited.
I believe most of the critical problems have been solved. The only major one left is keeping the reactor walls stable. They have a tendency to transmute, which causes multiple problems.

deleted by creator
AI, mass surveillance, privatization of services people need to live, and national security technology
AI.
How is AI a failure exactly?
The cost to maintain it? The environmental impact? The impact its enormous energy consumption has on everyday people (raising costs immensely)?
It can’t really reliably do any of the stuff which it is marketed as being able to do, and it is a huge security risk. Not to mention the huge climate issues for something with so little gain.
AI is great, LLMs are useless.
They’re massively expensive, yet nobody is willing to pay for it, so it’s a gigantic money burning machine.
They create inconsistent results by their very nature, so you can, definitionally, never rely on them.
It’s an inherent safety nightmare because it can’t, by its nature, distinguish between instructions and data.
None of the companies desperately trying to sell LLMs have even an idea of how to ever make a profit off of these things.
LLMs are AI. ChatGPT alone has over 800 million weekly users. If just one percent of them are paying, that’s 8 million paying customers. That’s not “nobody.”
That sheer volume of weekly users also shows the demand is clearly there, so I don’t get where the “useless” claim comes from. I use one to correct my writing all the time - including this very post - and it does a pretty damn good job at it.
Relying on an LLM for factual answers is a user error, not a failure of the underlying technology. An LLM is a chatbot that generates natural-sounding language. It was never designed to spit out facts. The fact that it often does anyway is honestly kind of amazing - but that’s a happy accident, not an intentional design choice.
ChatGPT alone has over 800 million weekly users. If just one percent of them are paying, that’s 8 million paying customers. That’s not “nobody.”
Yes, it is. A 1% conversion rate is utterly pathetic, and OpenAI should be covering its face in embarrassment if that’s the case. I think WinRAR might have a worse conversion rate, but I can’t think of any legitimate company that bad. 5% would be a reason to cry openly and beg for more people.
Edit: it seems like reality is closer to 2%, or 4% if you include the legacy 1 dollar subscribers.
That sheer volume of weekly users also shows the demand is clearly there,
Demand is based on cost. OpenAI is losing money on even its most expensive subscriptions, including the 230 euro pro subscription. Would you use it if you had to pay 10 bucks per day? Would anyone else?
If they handed out free overcooked rice delivered to your door, there would be a massive demand for overcooked rice. If they charged you a hundred bucks per month, demand would plummet.
Relying on an LLM for factual answers is a user error, not a failure of the underlying technology.
That’s literally what it’s being marketed as. It’s on literally every single page openAI and its competitors publish. It’s the only remotely marketable usecase they have, because these things are insanely expensive to run, and they’re only getting MORE expensive.
It’s quite bad at what we’re told it’s supposed to do (producing reliably correct responses), hallucinating up to 40% of the time.
It’s also quite bad at not doing what it’s not supposed to. Meaning the “guardrails” that are supposed to prevent it from giving harmful information can usually be circumvented by rephrasing the prompt or some form of “social” engineering.
And on top of all that, we don’t actually understand how they work at a fundamental level. We don’t know how LLMs “reason”, and there’s every reason to assume they don’t actually understand what they’re saying. Any attempt to have the LLM explain its reasoning is of course for naught, as the same logic applies. It just makes up something that approximately sounds like a suitable line of reasoning.
Even for comparatively trivial networks, like the ones used for written number recognition, that we can visualise entirely, it’s difficult to tell how the conclusion is reached. Some neurons seem to detect certain patterns, others seem to be just noise.

You seem to be focusing on LLMs specifically, which are just one subcategory of AI. Those terms aren’t synonymous.
The main issue here seems to be mostly a failure to meet user expectations rather than the underlying technology failing at what it’s actually designed for. LLM stands for Large Language Model. It generates natural-sounding responses to prompts - and it does this exceptionally well.
If people treat it like AGI - which it’s not - then of course it’ll let them down. That’s like cursing cruise control for driving you into a ditch. It’s actually kind of amazing that an LLM gets any answers right at all. That’s just a side effect of being trained on a ton of correct information - not what it’s designed to do. So it’s like cruise control that’s also a somewhat decent driver, people forget what it really is, start relying on it for steering, and then complain their “autopilot” failed when all they ever had was cruise control.
I don’t follow AI company claims super closely so I can’t comment much on that. All I know is plenty of them have said reaching AGI is their end goal, but I haven’t heard anyone actually claim their LLM is generally intelligent.
I know they’re not synonymous. But at some point someone left the marketing monkeys in charge of communication.
My point is that our current “AI” is inadequate at what we’re told is its purpose, and should it ever become adequate (which the current architecture shows no sign of being capable of), we’re in a lot of trouble, because then we’ll have no way to control an intelligence vastly superior to our own.

So our current position on that journey is bad and the stated destination is undesirable, so it would be in our best interest to stop walking.
If people treat it like AGI - which it’s not - then of course it’ll let them down.
People treat it like the thing it’s being sold as. The LLM boosters are desperately trying to sell LLMs as coworkers and assistants and problem-solvers.
I don’t personally remember hearing any AI company leader ever claim their LLM is generally intelligent - and even the LLM itself will straight-up tell you it isn’t and shouldn’t be blindly trusted.
I think the main issue is that when a layperson hears “AI,” they instantly picture AGI. We’re just not properly educated on the terminology here.
“GPT-5 is the first time that it really feels like talking to an expert in any topic, like a PhD-level expert.” - Altman
During the launch of Grok’s latest iteration last month, Musk said it was “better than PhD level in everything” and called it the world’s “smartest AI”.
https://www.bbc.com/news/articles/cy5prvgw0r1o.amp
“PhD level expert in any topic” certainly sounds like generally intelligent to me. You may not have heard them saying it, but I feel like I’ve heard a bunch of these statements.
I don’t personally remember hearing any AI company leader ever claim their LLM is generally intelligent
Not directly. They merely claim it’s a coworker that can complete complex tasks, or an assistant that can do anything you ask.
The public isn’t just failing here, they’re actively being lied to by the people attempting to sell the service.
For example, here’s Sammy saying exactly that: https://www.technologyreview.com/2024/05/01/1091979/sam-altman-says-helpful-agents-are-poised-to-become-ais-killer-function/
And here’s him again, recently, trying to push the “our product is super powerful guys” angle with the same claim: https://www.windowscentral.com/artificial-intelligence/openai-chatgpt/sam-altman-ai-agents-hackers-best-friend
But he is not actually claiming that they already have this technology but rather that they’re working towards it. He even calls ChatGPT dumb there.
and ChatGPT (which Altman referred to as “incredibly dumb” compared with what’s coming next)



















