![](https://awful.systems/pictrs/image/8e33e545-e593-400a-bdcc-594e6ffb4d17.png)
![](https://awful.systems/pictrs/image/8651f454-1f76-42f4-bb27-4c64b332f07a.png)
It’s like you founded a combination of an employment office and a cult temple, where the job seekers aren’t expected or required to join the cult, but the rites are still performed in the waiting room in public view.
chef’s kiss
It’s not always easy to distinguish between existentialism and a bad mood.
The surface claim seems to be the opposite: he says that because of Moore’s law AI rates will soon be at least 10x cheaper, and because of Mercury in retrograde this will cause usage to increase muchly. I read that as meaning we should expect to see chatbots pushed into even more places they shouldn’t be, even though their capabilities have already stagnated as per observation one.
> The cost to use a given level of AI falls about 10x every 12 months, and lower prices lead to much more use. You can see this in the token cost from GPT-4 in early 2023 to GPT-4o in mid-2024, where the price per token dropped about 150x in that time period. Moore’s law changed the world at 2x every 18 months; this is unbelievably stronger.
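Annualizing the quoted rates makes the comparison concrete; a quick back-of-envelope in Python, where the only inputs are the figures from the quote and the GPT-4→4o window is my rough guess:

```python
# Annualizing the quoted cost-decline rates so they're comparable.
months_gpt4_window = 15  # "early 2023 to mid-2024", roughly -- adjust to taste

moore_per_year = 2 ** (12 / 18)                   # 2x every 18 months ~ 1.59x/yr
claimed_per_year = 10                             # 10x every 12 months, as stated
gpt4_per_year = 150 ** (12 / months_gpt4_window)  # ~55x/yr on a 15-month window

print(f"Moore's law:  {moore_per_year:.2f}x per year")
print(f"Claimed rate: {claimed_per_year}x per year")
print(f"GPT-4 -> 4o:  {gpt4_per_year:.0f}x per year")
```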
Saltman has a new blogpost out that he calls ‘Three Observations’, which I feel too tired to sneer at properly, but I’m sure it will be featured in pivot-to-ai pretty soon.
Of note is that he seems to admit chatbot abilities have plateaued for the current technological paradigm, by way of offering the “observation” that model intelligence is logarithmically dependent on the resources used to train and run it (i = log(r)), so it’s officially diminishing returns from now on (toy numbers after this comment).
Second observation is that when a thing gets cheaper it’s used more, i.e. they’ll be pushing even harder to shove it into everything.
Third observation is that
> The socioeconomic value of linearly increasing intelligence is super-exponential in nature. A consequence of this is that we see no reason for exponentially increasing investment to stop in the near future.
which is hilarious.
The rest of the blogpost appears to mostly be fanfiction about the efficiency of their agents that I didn’t read too closely.
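As promised, here are the diminishing returns of the first observation spelled out: toy numbers plugged through i = log(r), purely illustrative and nobody’s actual scaling law:

```python
# Toy numbers for "intelligence is logarithmic in resources" (i = log r):
# each constant bump in i costs a 10x multiplication of r.
import math

for r in (1, 10, 100, 1_000, 10_000):
    print(f"resources {r:>6}x -> 'intelligence' {math.log10(r):.0f}")
# Every extra point costs ten times more than the last one,
# i.e. diminishing returns with extra steps.
```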
> a lot of kids
They had 3 kids last time they came up, which despite their posturing is not really a notable amount, and they’re both nearing their 40s, so it’s unlikely they’ll hit quiverfull numbers.
“Genetic Enhancement: Prediction Markets for Future People” by Jonathan Anomaly
What a completely cursed presentation title. According to the first youtube transcription service that pops up on google, he means that we should use prediction markets to find out which diseases will be curable/treatable in the next however many years, so we can prioritize accordingly when doing family planning based on polygenic embryo screening.
Eugenics enjoyer quotient: Mr Anomaly is an IQ enthusiast who goes on to talk about how genetic screening starts with choosing a suitable partner. Also, we should establish something like a polygenic health index that represents an individual’s genetic health, to better systematize selection. This will be based on the individual’s known genetics as well as family history; I’m assuming because getting tricked into marrying someone with a schizophrenic great uncle or an obese cousin is a serious concern for him.
This presentation came up on the subject of how Cremieux/TP0/Lasker got invited to give a talk at Stanford when he’s only known for his race science bullshit and is otherwise unaffiliated, and the answer is that the school of business faculty who organized the talks were into forecasting markets and almost definitely met him at this event.
So we have the broader rationalist cultic milieu to once again thank for bringing terrible people together, I guess.
Penny Arcade weighs in on deepseek distilling chatgpt (or whatever the deal actually is):
You misunderstand, they escalate to the max to keep themselves (including selves in parallel dimensions or far future simulations) from being blackmailed by future super intelligent beings, not to survive shootouts with border patrol agents.
I am fairly certain Yud has said something very close to that effect in reference to preventing blackmail from the basilisk, even though he tries to no-true-Scotsman the zizians w.r.t. his functional decision ‘theory’ these days.
Distilling is supposed to be a shortcut to creating a quality training dataset by using the output of an established model as labels, i.e. desired answers.
The expected end result, a new model that inherits the reference model’s biases, should still hold, but using the same model you are distilling from as your base model would seem to be completely pointless.
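For the curious, the simplest form of distillation looks roughly like the sketch below, assuming PyTorch; the student is trained to match the teacher’s output distribution instead of ground-truth labels. The model and optimizer names are placeholders, and this is emphatically not DeepSeek’s actual pipeline:

```python
# Minimal sketch of output distillation, assuming PyTorch.
# `student`, `teacher`, `batch`, and `optimizer` are placeholders.
import torch
import torch.nn.functional as F

def distillation_step(student, teacher, batch, optimizer, temperature=2.0):
    """One training step: fit the student to the teacher's output
    distribution (soft labels) instead of ground-truth answers."""
    with torch.no_grad():
        teacher_logits = teacher(batch)   # the "desired answers"
    student_logits = student(batch)
    # KL divergence between temperature-softened distributions,
    # the standard distillation loss.
    loss = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * temperature**2
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```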
The 671B model, although ‘open sourced’, is a 400+ GB download and is definitely not runnable on household hardware.
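The back-of-envelope arithmetic on why: at any common precision, 671B parameters weigh in at hundreds of gigabytes before you even count activations or KV cache:

```python
# Rough weight-only sizes for a 671B-parameter model at common precisions;
# activations, KV cache and runtime overhead come on top of this.
params = 671e9
for name, bits in [("FP16", 16), ("FP8", 8), ("4-bit quant", 4)]:
    print(f"{name:>12}: ~{params * bits / 8 / 1e9:,.0f} GB")
# Even at 4 bits (~336 GB) this is far beyond any household GPU or RAM.
```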
> Taylor said the group believes in timeless decision theory, a Rationalist belief suggesting that human decisions and their effects are mathematically quantifiable.
Seems like they gave up early if they don’t bring up how it was developed specifically for deals with the (acausal, robotic) devil, and also awfully nice of them to keep Yud’s name out of it.
edit: Also, in lieu of an explanation they link to the wikipedia page on rationalism as a philosophical movement, which of course has fuck all to do with the bay area bayes cargo cult; the cult does get a small mention there, but most of the Talk: page is about how it really shouldn’t.
NYT and WaPo are his specific examples. He also wants a connection to “a policy/defense/intelligence/foreign affairs journal/magazine” if possible.
Today on highlighting random rat posts from ACX:
(Current first post on today’s SSC open thread)
In slightly more relevant news, the main post is scoot asking if anyone can put him in contact with someone from a major news publication so he can pitch an op-ed by a notable ex-OpenAI researcher, to be ghost-written by him (meaning siskind), on the subject of how they (the ex-researcher) opened a forecast market that predicts ASI by the end of Trump’s term. Be on the lookout for that when it materializes, I guess.
wrong thread :(
The zizian angle makes this so weird. Like, on top of probably being stopped for driving while trans, they might have instigated the shootout to prove to the basilisk that their parallel universe selves/simulated iterations/eternal souls can’t be acausally blackmailed.
It’s another one of those things that the further you read the worse it gets, isn’t it?
Does anyone know who or what Ziz is in this context? Google says it’s a Jewish mythological beast.
edit: found this:
> The Zizians were a cult that focused on relatively extreme animal welfare, even by EA standards, and used a Timeless/Updateless decision theory, where being aggressive and escalatory was helpful as long as it helped other world branches/acausally traded with other worlds to solve the animal welfare crisis.
>
> They apparently made a new personality called Maia in Pasek, and this resulted in Pasek’s suicide.
>
> They also used violence or the threat of violence a lot to achieve their goal.
>
> This caused many problems for Ziz, and she now is in police custody.
They basically invent a special new type of mental impairment to apply to brown people to sidestep the need to explain why entire swaths of the global south aren’t desolate wastelands of barely functional people trying to converse in grunts, as predicted by Lynn’s research.
That the statistics of the absurdly low claimed mean IQs imply you should be able to fit every current African university graduate in a single room seems pretty valid as a reductio ad absurdum of the whole affair; surprised I didn’t see it mentioned before.
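If anyone wants to run that reductio themselves, the tail arithmetic is a few lines of Python; the claimed mean and cutoff below are hypothetical stand-ins, not Lynn’s actual figures:

```python
# How many people clear a given IQ cutoff under a claimed national mean?
# Mean, cutoff and population are illustrative placeholders, not Lynn's figures.
from statistics import NormalDist

claimed_mean, sd = 65, 15      # hypothetical claimed mean, conventional SD
cutoff, population = 115, 1e9  # modest "university graduate" bar, ~1B people

share = 1 - NormalDist(mu=claimed_mean, sigma=sd).cdf(cutoff)
print(f"share above {cutoff}: {share:.2e}")
print(f"expected headcount: {share * population:,.0f}")
# The lower the claimed mean, the faster this collapses toward "a single room".
```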
That’s a problem in itself, don’t you think? It’s all very “Feminists hate sex and they want to erase the differences between the genders”. Julia gets a taste of freedom and her right place in the world by putting on makeup and girly clothes and having a lot of sex.
It’s been too long for me to be able to tell if that applies to the general context of Orwell’s views (which apparently I’m not sufficiently aware of) or if it’s also a significant issue in 1984 itself. In principle, having the woman character employ cargo-cult femininity in a desperate attempt at self-expression shouldn’t be unsalvageable. Her being the only woman with a speaking part, and also a ditz, less so.
Winston being a self-aggrandizing tit who needs things explained to him a lot so the author can soapbox was the sum of my reaction to the character; that he was also supposed to be relatable beyond the basics of his clash with authoritarianism certainly puts a different spin on things.
Why
Hot take time: I think that when siskind was at the age when he decided there are some things he will never again change his mind about, he happened to be downstream of some flavor of transhumanism that favored gene editing over cybernetic augmentations and brain uploads, and things kind of escalated from there.
Spotlighting eugenics-based IQ-maxing is probably his version of going all in on summoning the acausal robot god to fix everything, and also the substack money is pretty good.
Could also be ‘don’t worry about deepseek’ type messaging that addresses concerns without naming names, to tell us that a drastic reduction in infrastructure costs was foretold by the writings of St Moore and was thus always inevitable on the way to immanentizing the AGI, hallelujah.