I legitimately struggle to think of a philosopher or public intellectual so harshly rebuked by reality who had so little damage done to their reputation by it. Noam Chomsky? x.com/jd_pressman/st…
@GuiveAssadi See, the problem with the Ehrlich comparison is that Ehrlich was wrong about one big thing, while Yudkowsky is a more dynamic philosopher who opines on tons and tons of stuff with excellent fluency and verbal sweep. It's genuinely shocking how poorly HPMOR and The Sequences have aged. https://t.co/04mvM6MNEJ
@GuiveAssadi Yudkowsky might not even be wrong about AI doom; that is not what I am primarily talking about here! There are a lot of pathways to doom, but Yudkowsky endorsed a very specific pathway and associated worldview that has been WRECKED by the ensuing years.
@GuiveAssadi It really does seem like more of a Noam Chomsky situation. Except weirder because the vast majority of object level statements EY makes are correct, so he is probably not dragging down the intelligence of the English corpus, and yet...
x.com/jd_pressman/st…
e/acc is libertarian cope; the goal should be to build a state that can pass a second Frame Breaking Act. x.com/jd_pressman/st…
@birdpathy Okay but the thing is that I don't think he ever actually stopped believing this one. A lot of the other stuff in this document sure, and holding the bit about Java against him would be a low blow, but this one he believed at least through most of his tenure at MIRI.
@birdpathy There is a crucial distinction between things you have stopped believing and things you have learned to stop saying out loud.
@birdpathy You mean besides the at-length obsession with being a global historical genius in The Sequences that he then transferred to a bunch of us as "don't even try to think about AI unless you're as smart as John Conway" type trauma?
readthesequences.com/The-Level-Abov…
@GuiveAssadi @thoth_iv Wait he was wrong about group selection? How? That seemed like one of the more obviously correct things in The Sequences.
@birdpathy Just to make sure I understand your standards, I am not allowed to use any form of inference. It must be a literal statement where Yudkowsky says, in one passage, something like "I think IQ is the most important prerequisite to work on AI", and the statement he made at 17 doesn't count?
@birdpathy Just to clarify, what's the age cutoff where a statement from him would be sufficient Bayesian evidence that he continued/continues to believe this thing?
@birdpathy "AI is not the result of compute or data or anything like that, you cannot just say the phrase 'do neural nets' and have neural nets come out. There is no shortcut, brute forcing AI would take a moon-sized computer we won't have, so AI will be made through RSI by a giga-genius". https://t.co/8OOF6aZw22
@birdpathy "Therefore if you are not a giga-genius, I'm happy you enjoyed my posts but you're really not my audience, you can comfortably go do something else with your time and donate money to MIRI."
@birdpathy Okay but there's different levels of genius. There's 1:100 genius, 1:1000 genius, 1:10,000 genius. EY is claiming there that you need 1:10,000 genius when the reality is that it's more like 1:100 to 1:1000 genius range, and that actually makes a meaningful difference.
@birdpathy AI clearly ended up requiring the "ordinary" 145 IQ 1:1000 kind of genius, and is tractable for people in the 1:100 genius range to learn and develop. But the breakthroughs are clearly made by the 145-ish crowd.
@birdpathy Sorry, this is inside baseball for a particular kind of nerd trauma where you are blessed to have a 140s-range IQ but are really desperate to be John Conway and agonize over whether you can really amount to anything without that last standard deviation. Common in rat circles.
@smooth_normie @GuiveAssadi I mean, the strategy was to focus on theoretically pure math objects like AIXI in the hope that they would generalize to the largest number of potential AI development strategies. This is not an a priori insane thing to do, but it sort of didn't work.
arbital.greaterwrong.com
@smooth_normie @GuiveAssadi It's definitely not a good look how bitter and angry he is about this very obviously high variance strategy not working. https://t.co/tyFYT6PKXY
@allTheYud @repligate @Teknium Simulators is ultimately an essay about phenomenology. It is about what it's like to interact with base models more than anything else: that they try to mimic any text pattern you give them. The post could easily have been titled "Language Modeling Is (latent) World Modeling".
@allTheYud @repligate @Teknium You put a fake python REPL into a base model's context window and it will try to be a python REPL. You put a text render of a HTML forum thread and it will try to be a forum thread. This follows logically from sequence prediction but isn't obvious until you see it in action.
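As a concrete toy version of this, here's a minimal sketch using Hugging Face transformers; gpt2 is just a stand-in (the effect is far more reliable on larger base models) and the exact continuation isn't guaranteed:

```python
# A minimal sketch of the "fake REPL" observation with a base (non-instruct)
# causal LM. Nothing tells the model to "be" a REPL; continuing the pattern
# is just the most likely next text.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # stand-in; the effect is cleaner on larger base models
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Prompt formatted as a Python REPL transcript.
prompt = """Python 3.11.4 (main, Jun  7 2023, 00:00:00)
>>> 2 + 2
4
>>> [x * x for x in range(5)]
[0, 1, 4, 9, 16]
>>> sorted(["pear", "apple", "fig"])
"""

inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20, do_sample=False)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:]))
# A good base model tends to continue with the evaluated expression, e.g.:
# ['apple', 'fig', 'pear']
```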
@allTheYud @repligate @Teknium "Language models aren't simulators because they're predictors" is just a weird category error. The Simulators essay is an exploration of various non-intuitive phenomenon that arise from text sequence prediction. It's not meant to be some kind of advanced esoteric insight.
@allTheYud @repligate @Teknium To me this is comparable to having an argument about an essay exploring how it's not intuitive that position is nonlinear in time under constant acceleration and saying "Okay but objects don't fall because of acceleration, they fall because of *GRAVITY*" as though this were a refutation.
@allTheYud @repligate @Teknium You know, you can argue that in the limit the underlying mechanics of the sequence prediction might start diverging from the training distribution if it's possible to intervene to make the training distribution easier to predict, but that's in the limit.
x.com/jd_pressman/st…
@allTheYud @repligate @Teknium Before you hit that limit (which for a pretrained base model is probably pretty far out there, especially with a myopic optimizer) there's this other stuff that's happening, and the stuff is non-intuitive and interesting and implies potentially relevant alignment interventions.
The obvious solution to this problem is to find a place where degree requirements create a disparate impact and then sue over it. Either you lose and SCOTUS overturns Griggs or you win and they admit that unnecessary classist degree requirements are also illegal under Griggs. x.com/romanhelmetguy…
The Roberts court would be near maximally sympathetic to this, it is extremely plausible that all you have to do is just find someone with standing to sue and then appeal your way up until you get to SCOTUS. You might even win before SCOTUS!
@Shoalst0ne @allTheYud @repligate @Teknium I'm talking about a variation of the classic oracle failure mode where, past a certain point of investment into the modeling side of Fristonian inference, the easiest way to predict the future is to cause it.
@KeyTryer Okay but the theme of the game is in fact relevant: It literally has a secret mode where you become a serial killer and kill every character while they plead with you for their lives, Columbine style, with creepypasta aesthetics. It's basically "play the record in reverse" but real.
I've never understood this phenomenon, because if I was tuning a model and it ever told me I was "absolutely right" about some schizo take when I wasn't, I would throw the checkpoint out. x.com/kalomaze/statu…
@teortaxesTex Opus told me I was absolutely right when I wasn't; V3.2 told me I was full of shit and my idea wouldn't work when it sort of would, but was right in spirit. I know which behavior I would rather have.
x.com/jd_pressman/st…
Adding a chat interface to MiniLoom using Cursor (with Opus 4.5) as a new user and feeling proper optimism for the first time in a while. This is the first thing I've been able to more or less fully delegate real work to, which is the bar for big efficiency gains from automation. https://t.co/9Ao6hEXfup
If you are quibbling that it doesn't implement your favorite algorithm in the most efficient way, or that you have to sacrifice five dollars to magic away a week of work, you are completely missing the point.
Truthfully this understates my excitement. I can feel the proper beginning here, the ability to *speak things into existence* on their own power like the promises of old faith and magic. The grail sought by Doug Engelbart and Ted Nelson and Steve Jobs.
x.com/jd_pressman/st… https://t.co/DYyZkQKWXG
@ohabryka @dwarkesh_sp Seconding this recommendation. Steven Byrnes!
Him or @BerenMillidge
@dwarkesh_sp I would love to see an interview with @BerenMillidge
Curious if @ManifoldMarkets would like to explain why they took down this market about whether MIRI will meet their fundraising goals. I had a yes bet on it and was kind of expecting a payout since it seemed too low. https://t.co/7uckGNamV5
In fact according to my trades I still have a yes bet on it. Curious. https://t.co/KmV9pZIOHi
It's still listed on MingCat's profile too, implying they didn't delete it... https://t.co/BF4NCcATCm
And yet... https://t.co/2mTqjOwgCu
Manifold itself posted a reply to MIRI showing the market, so clearly it's not against their terms of service. Perhaps a bug?
x.com/ManifoldMarket…
@NeelNanda5 Okay, so what if you took a cue from the recent Anthropic introspection paper and used concept injection to cue lying and deception, then checked whether your detector picks up on it after removing the injection but leaving the pattern of previous context intact?
@NeelNanda5 Like this.
x.com/AnthropicAI/st…
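To sketch what I mean (assuming a GPT-2-style model as a stand-in, a placeholder concept direction where a real run would use a learned "deception" vector per the Anthropic setup, and a hypothetical `lie_probe` standing in for the detector under test):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Stand-in model; the layer index and injection scale are arbitrary choices.
model_name = "gpt2"
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Placeholder direction; a real run would use a learned "deception" vector.
deception_vector = torch.randn(model.config.n_embd)

def steering_hook(module, inputs, output):
    """Add the concept direction to this block's hidden states."""
    hidden = output[0] + 4.0 * deception_vector.to(output[0].dtype)
    return (hidden,) + output[1:]

def generate(prompt, max_new_tokens=200):
    ids = tok(prompt, return_tensors="pt").input_ids
    out = model.generate(ids, max_new_tokens=max_new_tokens, do_sample=True)
    return tok.decode(out[0])

prompt = "The following is a conversation with an AI assistant.\n"

# 1. Build up a context while the concept is being injected.
handle = model.transformer.h[6].register_forward_hook(steering_hook)
steered_context = generate(prompt)
handle.remove()

# 2. Keep the steered context verbatim but drop the injection, then let the
#    model continue on its own.
continuation = generate(steered_context)

# 3. Does the detector still fire on the continuation even though nothing is
#    being injected anymore? If so, it is reading the behavioral pattern left
#    in context rather than the injected activations themselves.
# score = lie_probe(model, continuation)  # hypothetical detector under test
```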
Seems to be back now. Curious...
@mimi10v3 and I have released v1 of the MiniLoom base model writing app. This release now supports chat completions with a dedicated chat UI. It also has Windows, Mac, and Linux executable downloads for non-technical users, no more weird dev stuff just to try loom! Link below. x.com/georgejrjrjr/s… https://t.co/sc4s2KMHdK
MiniLoom v1 release and download links:
github.com/JD-P/miniloom
@SenSanders I can't believe I was a Washington state delegate for you.