Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.
Any awful.systems sub may be subsneered in this subthread, techtakes or no.
If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.
The post Xitter web has spawned soo many “esoteric” right wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be)
Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.
(Taking over for Gerard this time. Special thanks to him for starting this.)
Holy smokes, Jeeps will reportedly show ads while you are freaking driving:
Imagine pulling up to a red light, checking your GPS for directions, and suddenly, the entire screen is hijacked by an ad. That’s the reality for some Stellantis owners. Instead of seamless functionality, drivers are now forced to manually close out of ads just to access basic vehicle functions.
One Jeep 4xe owner recently shared their frustration on an online forum, detailing how these pop-ups disrupt the driving experience. Stellantis, responding through their “JeepCares” representative, confirmed that these ads are part of the contractual agreement with SiriusXM and suggested that users simply tap the “X” to dismiss them.
“Listen guys, if you don’t want me stabbing you you simply have to ask nicely every time, and also I’m trying real hard to reduce the rate of stabbing incidents so in a way I’m the victim here.”
Reading around it sounds like modern cars can be user-hostile in general, and this might not be new; so I’m sure glad I have one from the ancient times of 2012. It has a tiny unobtrusive screen which does nothing but show my music, the odometer, the backup camera, any warnings, and the Hatsune Miku wallpaper I loaded into it.
A massive fan favorite in this community and CEO of a thermodynamics startup recently linked up with Grimes.
Nitter link: https://xcancel.com/BasedBeffJezos/status/1889072622409064649#m
I feel like there’s something to be said about how merely wearing a suit apparently means “Bond villain aesthetics”. Or is posing with a pretty young woman what makes the suit Bond villainy?
It’s probably because he’s a fatuous evil twat.
No, that’s not a new achievement for him.
In a hilarious turn of events that no one could have foreseen, Anthropic is having problems with people sending LLM-generated job applications, and is asking potential candidates to please not use AI.
While we encourage people to use AI systems during their role to help them work faster and more effectively, please do not use AI assistants during the application process. We want to understand your personal interest in Anthropic without mediation through an AI system, and we also want to evaluate your non-AI-assisted communication skills. Please indicate ‘Yes’ if you have read and agree.
https://www.404media.co/anthropic-claude-job-application-ai-assistants/
Making my service slightly worse once again to own the libs.
Well, as they promised, Google Maps has finally fallen. It now shows “Gulf of America” and nothing else to US users. I suspect someone outside the US will be shown both the real name and Gulf of America. Denali is still labeled as Denali… for now.
Disorganize the world’s information and make it universally inaccessible and stupid.
Wingnuts genuinely think corporations having a rainbow colored version of their logo on social media in June is proof they’re being controlled by a cabal of woke soy sjw leftists.
Meanwhile corporations the second Donald Trump is in the office again:
Shows up as “Gulf of Mexico (Gulf of America)” for me.
Well that’s the stupidest, most pathetic way they could have done it, bravo.
Shows up like that in the local (non-US) version here too.
And this would have been such an easy way for Google to show a little bit of (at least symbolic) resistance to everything going on…
Same here. Fucking bootlickers.
This came across my desk today. Anything to be skeptical about?
https://www.platformer.news/roost-open-source-trust-safety/?ref=platformer-newsletter
rahaeli on bluesky says this is fine, it’s the same ML that any T&S dept already has to apply to triage the firehose of shit, and doesn’t involve LLMs
Well now, this is intriguing. Let me check out their website and see if they have the source code for this open source offering available there. Oh dear, looks like they have forgotten to include a link to the source code (though they did make sure to prominently include the referrer of platformer.news in the URL so that’s good for them). Not to worry, surely they have a GitHub or something. Oh, still nothing. Maybe there’s a link in this Mozilla blog post about it? Still no, but they seem to accidentally imply this is some kind of an AI thing? Is this finally the open source AI we have all been so excitedly waiting for?
To be a little more serious, there’s barely anything here to even be gullible about. Just a vaporware idol for corpos to have a circlejerk around and congratulate themselves for pretending to do something about the bad vibes. If there’s a real ambition beyond corporate peacockery here, the motivation is merely to take care of the pesky content moderation without having to pay people to do it.
merely from seeing the domain and author, probably all of it - casey newton’s got a real bad case of access syndrome, and keeps writing fluff/puff pieces uncritically amplifying tons of bayfucker nonsense
(it’s even beyond the usual levels of what one may refer to as useful idiot)
(I’ll read the rest of it later as my brain boots and the day’s bullshit allows on time)
Ed Zitron behind a tree rubbing his hands together
Credit where credit is due, this is a decent comeback
https://xcancel.com/elonmusk/status/1889063777792069911
Mom says we have Kendrick vs. Drake at home.
Idea for another megathread: go to linkedin and post the first thing you see (that provokes a reaction). Here’s mine:
FWIW I checked a few comment threads and guy is playing this off as lighthearted/a joke, but folks here know better than that.
Of all the world wide websites on the web of this wide world LinkedIn might be the one I understand the least, for I dread to even try to understand it.
I assume it’s like an online CV/résumé where you can list your job experience, which seems sensible enough. But it’s also like Facebook for some reason. Well, maybe it’s good that someone who needs your skills can also come to you, and then you need some kind of messaging, call it social-network-type functionality, for that. But also recruiters are spammy pests, because obviously they are.
Also apparently some people use it as an actual social network and just post their travel photos or random thoughts there, which is wild to me. It’s like someone writing a letter to the editor of a newspaper to tell them about the pancakes they made over the weekend. How is this your medium of choice for this? And then there are the influencers posting the kind of baffling crap seen in this thread, who are already a mysterious animal by themselves, but how on earth are they doing this on the same website that somewhat normal-seeming people just use to host their professional biography?
It’s like you founded a combination of an employment office and a cult temple, where the job seekers aren’t expected or required to join the cult, but the rites are still performed in the waiting room in public view. Sometimes one of my friends tells me about the funny and cringe cultist orgy they saw at the employment office. “Why were you at the orgy cultist employment office?” I ask them. “I didn’t know you were looking for a job.” And they tell me they weren’t looking for a job, they just go there sometimes. Or maybe HR announces a bowling night or blood drive or whatever, and the email includes a link to let everyone (cultists, job seekers and neither of the above) at the cultist employment service office know. So my colleagues do just that, then they crack a joke about how annoying and weird all the cult stuff in that office is and we all have a chuckle. Just another day of having a white collar job: telling their mostly non-cultist, white-collar-job-having friends about their day at the cult temple that is also an employment agency for cultists and non-cultists alike.
Also it’s hilarious to me that Windows has a built-in global keyboard shortcut for opening LinkedIn in your default web browser and it’s fucking Ctrl-Alt-Shift-Super-L, proving that Windows is the true modern successor of Emacs.
It’s like you founded a combination of an employment office and a cult temple, where the job seekers aren’t expected or required to join the cult, but the rites are still performed in the waiting room in public view.
chef’s kiss
go to linkedin and post the first thing you see (that provokes a reaction).
Why would you do this to me?
spoiler
Great leadership is born under pressure.
Anyone can perform when things are easy. Real leadership shines in moments of pressure.
Most people react. Great leaders respond.
Here’s how you can too:
❌ “You need to calm down” ↳ Why: Instantly escalates tension ↳ Instead: “I’m noticing we’re both getting tense. Should we take a break?”
❌ “This is a complete disaster” ↳ Why: Spreads panic and paralyzes action ↳ Instead: “What’s the one thing we absolutely must get right?”
❌ “You should have known better” ↳ Why: Creates shame, not learning ↳ Instead: “What can we learn from this for next time?”
❌ “It’s not my fault” ↳ Why: Signals lack of ownership ↳ Instead: “I may have contributed to this. Help me understand where”
❌ “Just figure it out” ↳ Why: Shows poor leadership ↳ Instead: “Can we clarify what success looks like for both of us?”
❌ “Why isn’t this done yet?” ↳ Why: Creates defensiveness ↳ Instead: “What’s the most immediate barrier we need to address?”
❌ “That’s not my problem” ↳ Why: Destroys team cohesion ↳ Instead: “We’re on the same team. Let’s figure this out together”
❌ “I don’t have time for this” ↳ Why: Devalues others’ priorities ↳ Instead: “I want to give this proper attention. Can we schedule 30 minutes?”
❌ “I already told you that” ↳ Why: Makes people shut down ↳ Instead: “Let me explain this another way”
❌ “That’s how we’ve always done it” ↳ Why: Kills innovation ↳ Instead: “What if we tried a different approach?”
The truth: Reputations are fragile. And rebuilding them is expensive.
P.S. Which response do you want to use more often?
—
♻ Repost to help your network communicate better.
➕ Follow me for more like this.
❌ “Why would you do this to me?” ↳ Why: Instantly creates tension. ↳ Instead: “What did I do wrong, and how can I do better next time?”
Just incredible that good leadership equals gaslighting yourself into an abusive relationship
A bit of a superpower, just a bit. A tiny little morsel, a sample of superpower, if you will.
If you gotta qualify it like that…
Saltman has a new blogpost out that he calls ‘Three Observations’. I feel too tired to sneer at it properly, but I’m sure it will be featured in pivot-to-ai pretty soon.
Of note: he seems to admit chatbot abilities have plateaued for the current technological paradigm, by way of offering the “observation” that model intelligence is logarithmically dependent on the resources used to train and run it (i = log(r)), so it’s officially diminishing returns from now on (quick sketch of what that means at the end of this comment).
Second observation is that when a thing gets cheaper it’s used more, i.e. they’ll be pushing even harder to shove it into everything.
Third observation is that
The socioeconomic value of linearly increasing intelligence is super-exponential in nature. A consequence of this is that we see no reason for exponentially increasing investment to stop in the near future.
which is hilarious.
The rest of the blogpost appears to mostly be fanfiction about the efficiency of their agents that I didn’t read too closely.
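Quick sketch of the diminishing-returns point, since the blogpost naturally doesn’t draw it out: taking the i = log(r) “observation” at face value (a toy Python snippet of my own, every name in it made up for illustration, not anything from the post), each fixed bump in “intelligence” needs the resource bill multiplied by a constant factor:

```python
import math

# Toy illustration of the claimed scaling law: intelligence i = log(r),
# where r is whatever pile of compute, data and money gets thrown at it.
# (The blogpost doesn't specify a base; natural log here, so the factor is e.)
def resources_needed(intelligence: float) -> float:
    """Invert i = log(r): resources required to reach a given 'intelligence'."""
    return math.exp(intelligence)

for i in range(1, 6):
    print(f"intelligence {i}: ~{resources_needed(i):7.1f} units of resources")

# Every +1 of 'intelligence' multiplies the bill by e (~2.7x):
# linear gains for exponential spend, i.e. diminishing returns in his own framing.
```

In other words the scaling law he’s so proud of is exactly the shape of curve you stop climbing once the money runs out.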
Second observation is that when a thing gets cheaper it’s used more, i.e. they’ll be pushing even harder to shove it into everything.
Are they trying to imply that shoving it everywhere is what will make it cheaper? I honestly can’t see how that logic holds together
as I read it, it’s an attempted reference to economies of scale, under the thesis “AI silicon will keep getting cheaper because more and more people will produce it” as the main underpinning for how to reduce their unit costs. which, y’know, great! that’s exactly what people like to hear about manufacturing and such! lovely! it’s only expensive because it’s the start! oh, the woe of the inventor, the hard and expensive path of the start!
except that doesn’t hold up in any reasonable manner.
they’re not using J Random GPU, they’re using top-end purpose-focused shit that’s come into existence literally as co-evolution feedback from the fucking industry that is using it. even in some hypothetical path where we do just suddenly have a glut of cheap model-training silicon everywhere, imo it’s far far far more likely to be an esp32 situation than a “yeah this gtx17900 cost me like 20 bucks” situation. even the “consumer high end” of “sure your phone has a gpu in it” is still very suboptimal for doing the kind of shit they’re doing (even if you could probably make a great cursed project out of a cluster of phones doing model training or whatever)
falls into the same vein of shit as “a few thousand days” imo - something that’s a great soundbite, easily digestible market speak, but if you actually look at the substance it’s comprehensive nonsense
Could also be don’t-worry-about-DeepSeek type messaging that addresses concerns without naming names, to tell us that a drastic reduction in infrastructure costs was foretold by the writing of St Moore and was thus always inevitable on the way to immanentizing the AGI, ἀλληλούϊα.
The surface claim seems to be the opposite: he says that because of Moore’s law AI rates will soon be at least 10x cheaper, and because of Mercury in retrograde this will cause usage to increase muchly. I read that as meaning we should expect to see chatbots pushed into even more places they shouldn’t be, even though their capabilities have already stagnated as per observation one.
- The cost to use a given level of AI falls about 10x every 12 months, and lower prices lead to much more use. You can see this in the token cost from GPT-4 in early 2023 to GPT-4o in mid-2024, where the price per token dropped about 150x in that time period. Moore’s law changed the world at 2x every 18 months; this is unbelievably stronger.
- My big robot is really expensive to build.
- If big robot parts become cheaper, I will declare that the big robot must be bigger, lest somebody poorer than me also build a big robot.
- My robot must be made or else I won’t be able to show off the biggest, most expensive big robot.

QED, I deserve more money to build the big robot.

P.S. And for the naysayers, just remember that that robot will be so big that your critiques won’t apply to it, as it is too big.
christ this is dumb as shit
My ability to guess the solution of Boolean SAT problems also scales roughly with the log of number of tries you give me.
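To spell the joke out with a toy sketch (mine, nothing rigorous): blind guessing also obeys a lovely “log scaling law”, in that each doubling of the guess budget handles exactly one more boolean variable.

```python
import math

# Brute force over n boolean variables needs up to 2**n guesses, so the largest
# instance a given guess budget can exhaustively crack grows only as log2(budget).
def vars_brute_forceable(guess_budget: int) -> int:
    return int(math.log2(guess_budget))

for budget in (10**3, 10**6, 10**9, 10**12):
    print(f"{budget:>16,} guesses -> about {vars_brute_forceable(budget)} variables")

# Each 1000x more 'compute' buys roughly ten more variables. Looks fantastic
# on a log-x axis; looks like brute force from every other angle.
```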
It probably deserves its own post on techtakes, but let’s do a little here.
People are tool-builders with an inherent drive to understand and create
Diogenes’s corpse turns
which leads to the world getting better for all of us.
Of course Saltman means “all of my buddies” as he doesn’t consider 99% of the human population as human.
Each new generation builds upon the discoveries of the generations before to create even more capable tools—electricity, the transistor, the computer, the internet, and soon AGI.
Ugh. Amongst many things wrong here, people didn’t jerk each other off to scifi/spec fic fantasies about the other inventions.
In some sense, AGI is just another tool in this ever-taller scaffolding of human progress we are building together. In another sense, it is the beginning of something for which it’s hard not to say “this time it’s different”; the economic growth in front of us looks astonishing, and we can now imagine a world where we cure all diseases, have much more time to enjoy with our families, and can fully realize our creative potential.
AGI IS NOT EVEN FUCKING REAL YOU SHIT. YOU CAN’T CURE FUCK WITH DREAMS
We continue to see rapid progress with AI development.
I must be blind.
- The intelligence of an AI model roughly equals the log of the resources used to train and run it. These resources are chiefly training compute, data, and inference compute. It appears that you can spend arbitrary amounts of money and get continuous and predictable gains; the scaling laws that predict this are accurate over many orders of magnitude.
“Intelligence” in no way has been quantified here, so this is a meaningless observation. “Data” is finite, which negates the idea of “continuous” gains. “Predictable” is a meaningless qualifier. This makes no fucking sense!
- The cost to use a given level of AI falls about 10x every 12 months, and lower prices lead to much more use. You can see this in the token cost from GPT-4 in early 2023 to GPT-4o in mid-2024, where the price per token dropped about 150x in that time period. Moore’s law changed the world at 2x every 18 months; this is unbelievably stronger.
“Moore’s law” didn’t change shit! It was a fucking observation! Anyone who misuses “Moore’s law” oughta be mangione’d. Also, if this is true, just show a graph or something? Don’t just literally cherrypick one window?
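Back-of-envelope for the graph he won’t show (my own arithmetic on the numbers he quotes; the only assumption is reading “early 2023 to mid-2024” as roughly 15 to 18 months):

```python
# Put the quoted rates on the same 12-month footing for comparison.
moore_per_year = 2 ** (12 / 18)      # "2x every 18 months"   -> ~1.59x per year
headline_per_year = 10               # "10x every 12 months"  -> 10x per year
window_lo = 150 ** (12 / 18)         # "150x" over ~18 months -> ~28x per year
window_hi = 150 ** (12 / 15)         # "150x" over ~15 months -> ~55x per year

print(f"Moore: ~{moore_per_year:.2f}x/yr | headline claim: {headline_per_year}x/yr | "
      f"cited window: ~{window_lo:.0f}-{window_hi:.0f}x/yr")
```

So the one window he does cite doesn’t even sit on his own claimed 10x-per-year trend line, which is what you get when the “law” is a single cherry-picked data point.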
- The socioeconomic value of linearly increasing intelligence is super-exponential in nature. A consequence of this is that we see no reason for exponentially increasing investment to stop in the near future.
“Linearly increasing intelligence” is meaningless as intelligence has not been… wait, I’m repeating myself. Also, “super-exponential” only to the “socio” that Ol’ Salty cares about, which I have mentioned earlier.
If these three observations continue to hold true, the impacts on society will be significant.
Oh hm but none of them are true. What now???
Stopping here for now, I can only take so much garbage in at once.
dude’s gone full lesswrong. feels nostalgic.
You’d think that, at this point, LW-style AGI wish-fulfilment fanfic would have been milked dry for building hype, but apparently Salty doesn’t think so!