What do you think, ChatGPT? If it can create almost perfect summaries with a prompt, why wouldn’t it work in reverse? AI built into Windows could flag potentially subversive thoughts typed into Notepad or Word, as well as flag “problematic” clicks and compare them to previously profiled behavior. AI built into your GPU could build a behavioral profile based on your interactions with your hentai Sonic the Hedgehog game.
I think it was during the Cambridge Analytica days, but I read an article that the average person is tracked by over 5,000 data points. So we’re already kinda f’d.
Defeatism plays into their hands; you can always minimize the tracking. E.g. https://www.goodreads.com/book/show/54033555-extreme-privacy
Ah, yeah, sorry, didn’t mean to come off defeatist. I see that now.
As someone who recently ditched Alexa, blocks his smart TVs, and runs everything through PiHole and a VPN, I’m definitely…sorta trying.
If you don’t start limiting your house’s electrical hours, are you even trying?
Don’t need AI for any of this. It already happens with OS and Application telemetry.
Hello, I’m NVIDIA. I send every app you use home as telemetry. But you know, it’s only so we can tell which apps your driver crashes in, of course. I would never send that data when it doesn’t crash. Right?
True, you don’t need AI for security problems…
…but it is introducing tons of them, for little to no benefit.
About a month ago I saw a post for an MSFT-led AI security conference.
None of it, absolutely none of it, was about how to, say, leverage LLMs to aid in heuristic scanning for malware, or anything like that.
Literally every talk and booth at the conference was all about all the security flaws with LLMs and how to mitigate them.
I’ll come back and edit my post with the link to what I’m talking about.
EDIT: Found it.
Unless I am missing something, literally every talk/panel here is about how to mitigate the security risks to your system/db which are introduced by LLM AI.
Sorry, what was that? “BUY BUY BUY”?
And it’s been escalated with AI
Have a few friends over and have them all sit around a table. Have everyone place their smartphones on the table (turned on, of course), and proceed to discuss something like the merits of drills from Harbor Freight versus Ryobi, Milwaukee and DeWalt. Ideally with one person speaking at a time. Wait about a week and ask your friends if any of them noticed an uptick in ads for drills or power tools in general.
Hasn’t this been proven to be false? People have monitored the network traffic and phones don’t listen like this; it’s just not practical.
Instead, they keep track of your browsing, location, contacts, etc., and build a profile well enough that they don’t need to listen to you.
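If you want to try that monitoring yourself, the easy first pass is to watch where the phone phones home from a network you control. Here’s a rough sketch, assuming Python with scapy and a vantage point that can actually see the phone’s traffic (the router, a mirrored port, or a Wi-Fi interface in monitor mode); the phone’s IP address is a placeholder:

```python
# Rough sketch: log every DNS lookup a specific phone makes.
# Assumes scapy is installed and this runs somewhere that sees the phone's traffic.
from scapy.all import sniff, DNSQR

PHONE_IP = "192.168.1.42"  # placeholder: the phone under test

def log_dns(pkt):
    # Print each DNS question, i.e. which servers the phone is reaching out to.
    if pkt.haslayer(DNSQR):
        print(pkt[DNSQR].qname.decode())

# Capture only DNS queries coming from the phone.
sniff(filter=f"udp port 53 and src host {PHONE_IP}", prn=log_dns, store=False)
```

You’ll see plenty of tracker and analytics domains that way, and a constant audio upload would show up as an obvious, sustained stream of traffic on top of that, which is exactly what these tests haven’t turned up.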
It’ll vary by the software you have and the phone you have. Plenty of companies, Google, Meta and Amazon among them, have been caught capturing microphone recordings over the years.
It also depends on the appliances you own, and how you have them configured. TVs, Alexa, hell we even have refrigerators that have live mics on them now.
I have worked in tech my whole life; this is table stakes for these organizations, ethics be damned.
My understanding is the mics aren’t “live” until the activation phrase is said, then they record and send that data for processing. If someone has proven otherwise I’d love to see their methods.
The scary thing isn’t that they’re listening, it’s that they collect so much other data that they don’t have to.
How are they listening for the activation phrase then?
I’m sure you’ll find some good explanations online, but there’s an “activation” circuit on the device “listening” that then engages the rest of the system when it’s triggered. So there’s no recording or sending of data until the activation phrase has been said, and the activation phrase detection is done locally on the device.
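Whether it’s a dedicated low-power chip or plain software, the pattern is the same: a small detector chews on audio frames locally, and nothing is recorded or sent until it fires. Here’s a minimal sketch of that pattern, assuming the Picovoice Porcupine library (pvporcupine) plus PyAudio; the access key is a placeholder and the keyword is one of its built-ins, so take it as an illustration, not how any particular phone vendor actually does it:

```python
# Minimal sketch of on-device wake-word spotting (assumes pvporcupine + pyaudio).
import struct
import pyaudio
import pvporcupine

porcupine = pvporcupine.create(
    access_key="YOUR_ACCESS_KEY",  # placeholder
    keywords=["porcupine"],        # built-in keyword; custom phrases need a trained model
)

pa = pyaudio.PyAudio()
stream = pa.open(
    rate=porcupine.sample_rate,
    channels=1,
    format=pyaudio.paInt16,
    input=True,
    frames_per_buffer=porcupine.frame_length,
)

while True:
    frame = stream.read(porcupine.frame_length)
    pcm = struct.unpack_from("h" * porcupine.frame_length, frame)
    # Everything up to here stays on the device; no audio is stored or transmitted.
    # Only after a detection would a real assistant start streaming to the cloud.
    if porcupine.process(pcm) >= 0:
        print("wake word detected -- hand off to the cloud pipeline here")
```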
This makes sense for devices like Google Home where there is only one activation phrase, but I don’t understand how an IC that can respond to custom activation phrases could exist.
Also, are you saying that cellphones have this circuit too? I’m pretty darn sure that’s all software-based.
The “it doesn’t record you until the software decides so” argument is such bullshit. It doesn’t make any difference. It listens when it wants, and you can’t even verify it.
Run your own experiments. That’s all I am suggesting.
It’d be very easy to take some LLM text about some product, run it through a text-to-speech converter, then quietly expose the phone to it (like putting an earbud up to the mic). This way you could easily create a blind or even a double-blind test: you don’t know what product this setup has been rambling about into the phone for the past twelve hours, and you have to pick it out from the ads you’re served.
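A rough sketch of that rig, assuming Python with pyttsx3 for offline text-to-speech; the product list and the phrasing are placeholders, and in practice you’d have someone else pick the target so you stay blind:

```python
# Rough sketch of the blind test: quietly ramble about one product near the phone's mic.
# Assumes pyttsx3 is installed; products and phrasing are placeholders.
import random
import time
import pyttsx3

PRODUCTS = ["cordless drill", "espresso machine", "hiking boots"]  # hypothetical list
target = random.choice(PRODUCTS)  # ideally chosen by someone other than the tester

engine = pyttsx3.init()

for _ in range(100):
    engine.say(f"I really should buy a new {target}, maybe this weekend.")
    engine.runAndWait()
    time.sleep(60)
```

After a day of that, compare the ads you’re served against the full product list and see whether the target stands out.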
You don’t need to transmit the recording. Maybe not even a transcript. Just the keywords.
I saw this within minutes after a conversation in a car with two people and two phones.
And it was for a subject which was waaaaay out in left field for us both, something neither of us had ever even thought about before.
Ads? You mean those stickers on a bus?
Seriously though, use DNS filtering, a VPN and other means to block ads and telemetry, so thoughts like that don’t even occur to you.
We’ve noticed.
Initiating countermeasures
While it could, and I have no doubt that someone will try to do this, it’s not the reason it’s being shoehorned into everything.
It’s partly because it’s the tech thing that’s ‘so hot right now’, so every tech enthusiast and hustler thinks it can be used everywhere to solve everything. And it’s partly because it’s a legitimately huge advancement in what computers are capable of doing, one with a lot of room for growth and improvement, that can be legitimately useful in places like Notepad.
Yeah, gen AI is the perfect demo tech: it looks amazing if you don’t look too close. Plus it’s the perfect bullshitting machine; no wonder CEOs love it, it talks like they do. AI has its uses, and it’s doing good work in the fields you don’t hear much about. But there are way more pets.com types right now that will go bust soon, and the viable businesses will float to the surface. Hell, we’re going through that already, with web 2.0 companies moving out of the growth phase and into the enshittification phase.
They’re just sending every query home right now. Actual training is still resource-intensive and very expensive. I suspect they’re just grabbing as much data as they can get their hands on from everyone, tagged with unique identifiers, and storing it for later training. Once the data they have is worth more than the cost to train on it, they’ll go ahead and run a giant model of everyone.
At that point they’ll sell query time to corporations. “How many people would pay $400 for trainers with OLED screens on the sides?” “Oh really? Yes, I’d like to buy ads for all of those people.”