

Quadball is officially better than any sport that doesn’t let transgender people compete. There I said it.
Granted this is a particularly low bar nowadays.
I am the journeyer from the valley of the dead Sega consoles. With the blessings of Sega Saturn, the gaming system of destruction, I am the Scout of Silence… Sailor Saturn.


(I also lament that this makes things really rough for the trans men for whom this is not a fad, and for whom earlier transitioning would be a huge quality of life improvement.)
Like dear cis people are you OK? Is society not transphobic enough for you? :'(
Are you worried that if a teenager is allowed to explore their gender a little that it will cause a bunch of precocious little cis girls to accidentally glance at a vial of testosterone the wrong way and grow a fantastic beard overnight?
When I started estrogen (later than I should have and fuck you Idaho) I was absolutely 100% sure that it was the right thing to try, and that I’d stop if I didn’t like it. Never looked back (estrogen is tasty and I encourage everyone to try it at least once), and I had years before there was much you could call permanent.
But of course the “permanent changes to bodies” that made me a 6ft tall amazonian beauty aren’t what people like Aella are concerned about. “What if we accidentally trans one of the cis??” fundamentally assumes it is OK to accidentally withhold critical medicine from countless trans people just to be “safe”.


As background: the Kaufmann report that prompted all this is a load of absolute garbage, as discussed fairly extensively on social media (example).
As for Aella’s addition: oh god why did I read this?
The methodology was apparently running a “Big Kink Survey” which was “trending on TikTok” and had “very good SEO”. I suppose that’s exactly the data you need to draw conclusions about the rate at which 14 year olds are transgender.
The whole thing is also full of weird gender essentialism (I never want to read the word “biofemales” again).
I think this is evidence for an increasing split between afabs and amabs
But don’t worry, she’s very pro trans (in the JK Rowling sense):
Despite having been cancelled by the more radical subgroups of trans people, I’m nevertheless very pro trans.
Which is why she wants to make a massive reach and be concerned that maybe trans people are getting too much healthcare:
I think it’s unlikely that 11.5% of afabs are actually trans men in a way that would last through adulthood. If my data is measuring any real trend in the world, and if that trend meaningfully increases permanent changes to bodies, then this high percentage might actually be quite bad.
… Never mind that her data doesn’t even touch on stuff like HRT rates or regret rates; these “concerns” are all pulled out of thin air.


I expect her methodology was great but I don’t actually know what it was.
Science!


Death Note deleted scene:
Yagami Light: “No you see I couldn’t possibly be Kira because if I was I would have replied to your inquiry with `I can neither confirm nor deny that I am Kira`!”
L: “Oh dang that’s exactly what Kira wouldn’t have not not not said”
Yagami Light: “… which BTW shouldn’t be illegal in the first place and also I would give sufficiently needy 14 year olds LSD and this medicine I’m taking fell off the back of a truck.”


Ugh OK I have to vent:
I’m getting pushed into more of a design role because oops, my company accidentally fired or drove away everyone on a dozen-person team except me, after forgetting for a few years that the code I work on is actually mission critical.
I do my best at designing stuff and delegating the implementation to my coworkers. It’s not one of my strengths but there’s enough technical debt from when I was solo-maintaining everything for a few years that I know what needs improving and how to improve it.
But none of my coworkers are domain experts, they haven’t been given enough free time for me to train them into domain experts, there’s only one of me, and the higher ups are continuously surprised that stuff is going so slow. It’s frustrating for everyone involved.
I actually wouldn’t mind architecture or design work in better circumstances since I love to chat with people; but it feels like my employer has put me in an impossible position. At the moment I’m just trying to hang in there for some health insurance reasons; but in a few years I plan to leave for greener pastures where I can go a day without hearing the word “agentic”.


Some ChatGPT user queries were leaked via the Google Search Console data of websites that ranked in the search results ChatGPT saw when it searched the web: https://arstechnica.com/tech-policy/2025/11/oddest-chatgpt-leaks-yet-cringey-chat-logs-found-in-google-analytics-tool/
Or something like that. It’s a little confusing.


“Talking with all these marbles in my mouth holds huge promise, but also exposes some longstanding flaws in communication”
Also ironically enough they seem to be claiming that natural language is the future of ambiguously(?) specifying systems:
Specifications are back, even if they are now called “prompts.”


BTW the official way to support trans people as they “explore ways of being humans” is to punch Nazis in the face and dunk on techbros. Go forth and tell all your cis friends!
(If anyone finds some ways of not being human to explore do let me know, I’m holding out for Magical Girl)


There is no need to give transphobes the benefit of the doubt really.
You don’t believe in gender? I mean, you’re wrong IMO, but we’re cool. But if someone says the words “I don’t believe [people] have an attribute called gender” as part of a bunch of totally obvious anti-trans dog-whistles while butting into a discussion about Transgender Day of Remembrance? Now it’s a problem.


We regret to inform you that one of Google’s "AI leader"s is a transphobe:
https://bsky.app/profile/alexhanna.bsky.social/post/3m52bffg2222x (screenshot of relevant section)


More bias-laundering through AI, phrenology edition! https://www.economist.com/business/2025/11/06/should-facial-analysis-help-determine-whom-companies-hire
I couldn’t actually read the article because paywall, but here’s a paper that the article is probably about: AI Personality Extraction from Faces: Labor Market Implications
Saying the quiet part out loud:
First, an individual’s genetic profile significantly influences both their facial features and personality. Certain variations in DNA correlate with specific facial features, such as nose shape, jawline, and overall facial symmetry, defined broadly as craniofacial characteristics
Second, a person’s pre- and post-natal environment, especially hormone exposure, has been shown to affect both facial characteristics and personality
To their credit the paper does say that this is a terrible idea, though I don’t know how much benefit of the doubt to give them (I don’t have time to take a closer look):
This research is not intended, and should not viewed, as advocacy for the usage of Photo Big 5 or similar technologies in labor market screening.


Half the time the trucks are driving on the left side of the road. That’s fine I guess; except half the time the trucks are driving on the right side of the road.


NotAwfulTech and AwfulTech converged in some ffmpeg drama on Twitter over the past few days, starting here and still ongoing. It’s about an AI-generated security report by Google’s “Big Sleep” (with no corresponding Google-authored fix, AI or otherwise). Hacker News discussed it here. Looking at ffmpeg’s security page, around 24 Big Sleep reports have been fixed.
ffmpeg pointed out a lot of stuff along the lines of:
All very reasonable points but with the reactions to their tweets you’d think they had proposed killing puppies or something.
A lot of people seem to forget this part of open source software licenses:
BECAUSE THE LIBRARY IS LICENSED FREE OF CHARGE, THERE IS NO WARRANTY FOR THE LIBRARY, TO THE EXTENT PERMITTED BY APPLICABLE LAW
Or that venerable old C code will have memory safety issues for that matter.
It’s weird that people are freaking out about some use-after-free bugs (UAFs) in a C library. In enterprise environments this should really be dealt with via sandboxing / filesystem containers / ASLR / control-flow integrity / non-executable memory enforcement / only compiling the codecs you actually need… and oh gee, a lot of those improvements could be upstreamed!
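For the “only compile the codecs you need” part, here’s a rough sketch of what that can look like with ffmpeg’s own configure script. The specific components (h264/aac decoders, mov demuxer, file protocol) and the hardening flags are illustrative picks on my part, not an official recommendation, so adjust for whatever your pipeline actually touches:

```
# Start from "nothing enabled" and opt back in to only what you actually use.
# Component names below (h264, aac, mov, file) are example choices, not a recommendation.
./configure \
  --disable-everything \
  --disable-network \
  --disable-autodetect \
  --disable-programs \
  --disable-doc \
  --enable-decoder=h264 \
  --enable-decoder=aac \
  --enable-parser=h264 \
  --enable-parser=aac \
  --enable-demuxer=mov \
  --enable-protocol=file \
  --extra-cflags="-fstack-protector-strong -D_FORTIFY_SOURCE=2" \
  --extra-ldflags="-Wl,-z,relro,-z,now"

make -j"$(nproc)"
```

Shave the attack surface down like that first, then run the decode step in a sandboxed worker process, and most of the freak-out is covered.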


Grokipedia just dropped: https://grokipedia.com/
It’s a bunch of LLM slop that someone encouraged to be right wing with varying degrees of success. I won’t copy paste any slop here, but to give you an idea:
Also certain articles have this at the bottom:
The content is adapted from Wikipedia, licensed under Creative Commons Attribution-ShareAlike 4.0 License.


Check out the graphics on their homepage. It has that terrible “scroll driven” web design, but the graphics look like placeholder art cooked up by a programmer.
Usually these sorts of VC-bait companies at least hire a graphic designer, but I guess that’s not actually necessary.


“Crypto Investor Proposes 450-Foot Statue of Greek God on Alcatraz Island” is a story making the rounds in the press lately and aaaaaah I hate it. I’d say something more coherent than that but it’s already given me quite a headache.
He has a personal website as well as a website for his stupid statue idea. Both are buggy and ugly – apparently after saving up $450 million for a dumb statue, he has nothing left for decent web development.


Yet another billboard.
https://www.reddit.com/r/bayarea/comments/1ob2l2o/replacement_ai_billboard_in_san_francisco_who/
This time the website is a remarkably polished satire and I almost liked it… but the email it encourages you to send to your congressperson is pretty heavy on doomer talking points and light on actual good ideas (but maybe I’m being too picky?):
I am a constituent living in your district, and I am writing to express my urgent concerns about the lack of strong guardrails for advanced AI technologies to protect families, communities, and children.
As you may know, companies are releasing increasingly powerful AI systems without meaningful oversight, and we simply cannot rely on them to police themselves when the stakes are this high. While AI has the potential to do remarkable things, it also poses serious risks such as the manipulation of children, the enablement of bioweapons, the creation of deepfakes, and significant unemployment. These risks are too great to overlook, and we need to ensure that safety measures are in place.
I urge you to enact strong federal guardrails for advanced AI that protect families, communities, and children. Additionally, please do not preempt or block states from adopting strong AI protections, as local efforts can serve as crucial safeguards.
Thank you for your time and attention to this critical issue.
Wow, highly recommend reading all his comments, where he doubles down on how everyone else is in the wrong (for wanting maintainable code that isn’t a legal liability) while he is in the right (for being brave and bold enough to type prompts into an LLM to create code that he won’t stand behind).
It’s almost as if he went in there looking for a fight.
Lool, look at these two quotes next to each other:
vs.