

Maybe a better question is why all the upvotes. Perhaps that already answers one of your questions.


I think the article fails to take several critical factors into consideration.
- The complexity of dealing with such large amounts of information will keep increasing as the amount of information itself grows.
- AI struggles with conflicting information and mistakes, which happen often, especially when humans are involved, so eventually you get lots of “garbage in, garbage out” problems.
- The data one might be able to track will be continuously challenged or removed on legal/compliance grounds over time, reducing its availability.
For example: yes, the NSA might want our chatbot logs, but once enough people realize they might be (or are) getting them, people will stop feeding in as much, or will introduce noise on purpose. It’s not a perfect vacuum of constant, reliable information forever. We are already seeing that AI models learning from web results are getting caught up in their own slop, making themselves dumber. And the sheer volume of information relative to the computing power needed to process it will also become a problem if they keep trying to process every single thing.
If you can still switch to the console, that at least tells you the system itself is not frozen: the kernel still works. From there, check dmesg and/or journalctl -eb for any issues.
I would try restarting your login manager/desktop environment and see if that brings you back to a working desktop. If it does, it sounds like a software bug in your DE; you could try switching to a different one and see if that helps. As a last resort, you could also try a completely different Linux distro.
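If you can reach the console, a sequence like this narrows things down. This is only a sketch, assuming a systemd-based distro; `display-manager` is a common alias for the actual login-manager unit (gdm, sddm, lightdm, etc.), so the exact name may differ on your system:

```shell
# After switching to a text console (Ctrl+Alt+F2) and logging in:
journalctl -eb --no-pager | tail -n 40   # end of the current boot's journal
dmesg | tail -n 40                       # recent kernel messages (may need root)

# Restart the login manager, which also restarts the graphical session:
sudo systemctl restart display-manager   # exact unit name varies by distro
```

If the desktop comes back after the restart, that points at the graphical stack rather than the kernel.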


We will never be able to de-anonymize all Tor users
No, but the implication is that they may be able to do a lot of it, and we can never know.
What came just a few pages later in the presentation you referenced is “Goal: expand number of nodes we have access to”.
That has been their goal for decades at this point.
Is it really some conspiracy-nut level stretch to think they might be operating thousands of nodes today and have much deeper penetration than we think?


CMR is simpler and more reliable/battle-tested IMO


Does it have accounts?


Do you need more than the 32TB CMR disks they have up on Amazon right now?


I wish it were feasible to go back to 5.25" sized disks again.
Have you checked dmesg (or the logs from previous boots) and also run memtest86+ to make sure your RAM isn’t faulty? Even brand-new RAM can be bad. If you have another system nearby (or even just a phone), you could try to SSH into the machine (make sure to enable/start the daemon before it freezes) and see if it’s still responsive.
I had a similar issue where I’d get a full system freeze every few weeks (not even the mouse worked), and it turned out to be a faulty CPU: the infamous Raptor Lake “Vmin shift instability” bug. I got the CPU replaced under warranty, and that fixed the issue.
But since your mouse still works, we know your CPU is still functioning.
Have you tried switching to the console with Ctrl+Alt+F1 (or F2, etc.) when the freeze happens? It could just be a software bug in your graphical environment (either Xorg/Wayland or your particular window manager/desktop environment like KDE/GNOME/etc.), since the kernel itself doesn’t appear to be locked up if the mouse still works.
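Setting up SSH ahead of time makes the next freeze much easier to diagnose. A sketch, assuming openssh-server is installed on a systemd distro (the unit is `sshd` on Fedora/Arch and `ssh` on Debian/Ubuntu; the user and address below are placeholders):

```shell
# On the machine that freezes, before the next incident:
sudo systemctl enable --now sshd               # start now and on every boot

# When it freezes, from another machine or a phone SSH client:
ssh user@192.168.1.50                          # placeholder user/address
# If you can log in, the kernel is alive and you can pull logs remotely:
ssh user@192.168.1.50 'dmesg | tail -n 30'
```

Whether the login succeeds is itself diagnostic: a responsive SSH session during a “frozen” desktop points at the graphical stack, not the kernel.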
If I allocated 16 gigs of RAM to the KVM, shouldn’t my memory usage be over 16 gigs with other Linux programs running?
Normally yes, in my experience.
I open a new tab on a browser and it hangs my system
Hangs the host or the VM guest?
If it’s the host, does it ever happen when the VM isn’t running?
If it’s the guest, are you sure the VM itself isn’t just paused? One thing I have noticed is that the VM will pause either when I run out of disk space or, if using -snapshot, when I run out of RAM (because it’s using RAM as an always-expanding disk image).
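If the guest runs under libvirt (an assumption; with plain QEMU the monitor command `info status` gives the same answer), you can check for the paused state and the usual culprits directly. `myvm` and the image directory below are placeholders:

```shell
virsh domstate --reason myvm     # reports 'paused' plus a reason when QEMU
                                 # suspends the guest, e.g. on a full disk
df -h /var/lib/libvirt/images    # free space where the disk image lives
free -h                          # host RAM and swap headroom
```

A paused state with plenty of disk and RAM left would point elsewhere; a full filesystem or exhausted host memory matches the pauses described above.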


No, it can’t… all this paper talks about is correlating OPSEC failures, which any human can do, and which isn’t related to the name you use.
They don’t even publish their exact methodology, like prompts or other tools “for safety.” So their findings are literally just “trust me bro.”


I swear people will never be happy no matter what.
No AI features? Get with the times man.
AI features? High treason.
Opt-out? Not good enough.
Opt-in? Nobody will use it.
Can’t please everyone I guess.


I wonder if you need to explicitly prompt it to check whether a function really exists before suggesting it. Think about how a human brain works… we are constantly evaluating whether things are really true based on the info in our heads… but we are not telling the models to do the same thing, so instead they just yolo some shit that is confidently wrong (not unlike many humans, admittedly).


So you’re saying people shouldn’t want nice features?


How are they able to auto-play audio even when my browser is set to block that?


Yeah, that’s not how these models work, at all. They don’t act autonomously or take sentient-like control of everything that runs them unless you tell them to, and they probably couldn’t do that anyway without a lot more information than they have.


IMO the problem isn’t so much getting them to know about it; the problem is getting them to care.
I think your own statements here are pretty elitist.
“As a rule, strong feelings about issues do not emerge from deep understanding.”