

The series is on the sympathetic and charitable side in terms of tone and analysis, but it still gets to most of the major problems, so it’s probably a good resource to refer people to who want a “serious”, “non-sarcastic” dive into the issues with LW and EA.
Edit: Reading this post in particular, it does a good job of not cutting the LWers any slack or granting them too much charity. And it really breaks down the factual details in a clear way, with illustrative direct quotes from LW.
Yeah, a lot of the word choices and the overall tone make me think snake oil (just from the introduction: “They are now on the level of PhDs in many academic domains”… no, actually, LLMs are only PhD level at artificial benchmarks that play to their strengths and cover up their weaknesses).
But it’s useful in the sense of explaining to people why LLM agents aren’t happening anytime soon, if at all (does it even count as an LLM agent if the scaffolding and tooling are extensive enough that the LLM is only providing the slightest nudge to a much more refined system under the hood?). OTOH, if this “benchmark” does become popular, the promptfarmers will probably get their LLMs to pass it with methods that don’t actually generalize, like loads of synthetic data designed around the benchmark and fine-tuning on the benchmark itself.
I came across this paper in a post on the Claude Plays Pokemon subreddit. I don’t know how anyone can watch Claude Plays Pokemon and think AGI or even LLM agents are just around the corner. Even with extensive scaffolding and some tools to handle the trickiest bits (pre-labeling the screenshots so the vision portion of the model has a chance, directly reading the current state of the team and the player’s location from RAM), it still plays far, far worse than a 7-year-old, provided the 7-year-old can read at all (and numerous Pokemon guides and discussions are in the pretraining data, so it has yet another advantage over the 7-year-old).
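To make the “the scaffold is doing the heavy lifting” point concrete, here’s a minimal hypothetical sketch of that kind of harness (not the actual Claude Plays Pokemon setup; every function and data structure below is made up for illustration). The scaffold handles perception and bookkeeping, and the LLM is left to pick one button from a pre-digested menu:

```python
# Hypothetical sketch of an LLM game-playing harness (NOT the real Claude
# Plays Pokemon scaffold). All names and data structures are invented.

from dataclasses import dataclass


@dataclass
class GameState:
    labeled_screen: str      # scaffold pre-labels the screenshot for the vision model
    party: list[str]         # scaffold reads the team straight out of emulator RAM
    location: str            # scaffold reads the current map/coordinates from RAM
    legal_moves: list[str]   # scaffold enumerates the valid button presses


def read_state_from_emulator() -> GameState:
    """Stand-in for the harness: screenshot labeling plus RAM parsing."""
    return GameState(
        labeled_screen="[door at (3,4)] [NPC at (5,5)] [player at (4,6)]",
        party=["PIKACHU L12", "PIDGEY L9"],
        location="Viridian Forest",
        legal_moves=["UP", "DOWN", "LEFT", "RIGHT", "A", "B"],
    )


def ask_llm_for_move(state: GameState) -> str:
    """Stand-in for the model call: given a fully digested state, pick one button."""
    prompt = (
        f"Location: {state.location}\n"
        f"Party: {state.party}\n"
        f"Screen: {state.labeled_screen}\n"
        f"Choose one of {state.legal_moves}."
    )
    # A real harness would send `prompt` to an LLM API here; the point is that
    # by this stage the scaffold has already done the perception and bookkeeping.
    return state.legal_moves[0]


if __name__ == "__main__":
    state = read_state_from_emulator()
    print(ask_llm_for_move(state))
```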