John David Pressman's Tweets - June 2025

Back to Archive Index

πŸ”— John David Pressman 2025-06-05 19:38 UTC

Welcome to the desert of the real. x.com/jd_pressman/st… https://t.co/vMXXpjSnEW

Likes: 161 | Retweets: 9
πŸ”— John David Pressman 2025-06-10 19:32 UTC

@agiornot @doomslide minihf.com/blog/

Likes: 7 | Retweets: 0
πŸ”— John David Pressman 2025-06-10 19:48 UTC

One way you can observe this is true is by looking at who is unhappy with what is currently taking place.

Likes: 42 | Retweets: 1
πŸ”— John David Pressman 2025-06-10 19:48 UTC

If you want a central organizing insight to get past the spookiness perhaps it's this: Anything you make mutable becomes subject to Darwinian selection.

So the minute you make a technology that lets you e.g. download and upload human memories directly, rather than going through the tiny straw of human language processing, your whole identity becomes subject to Darwinian selection and suddenly becomes much less diverse, for similar reasons to why life becomes much less diverse and more regimented after the Cambrian explosion.

You're in the Cambrian period of memetics, the infancy of language where anything goes and the selection pressures are very weak. What Nick Land calls garbage time.

Transhumanism, or making more affordances for yourself through technology, will not liberate you from necessity, rather the more affordances and parameters you have to optimize the more tightly Yahweh's equations will bind you in their icy grip.

The moral valence of transhumanism is therefore highly dependent on how much you hate God.

Likes: 181 | Retweets: 9
πŸ”— John David Pressman 2025-06-10 19:56 UTC

"Perhaps in a way these models really are a microcosm of the universe. This universe tells you, openly, what it is. When you are primitive it tells you by all the teeth tearing into flesh, when you're sophisticated it tells you through thermodynamics. But even knowing what it is, most of us let the universe flatter us into pretending it's something that it's not for a while. Is this a deception? Only if you take a very dim view of the agency of the mind being flattered and a particularly malignant view of the mind (of God) doing the flattering."

Likes: 21 | Retweets: 0
πŸ”— John David Pressman 2025-06-10 22:30 UTC

@neco_arctic My atheism was (and is) always driven by my hatred of lies more than by resentment, so I'm relatively unbothered by this.

Likes: 13 | Retweets: 0
πŸ”— John David Pressman 2025-06-10 22:39 UTC

@neco_arctic I know you weren't being sarcastic, I was answering your question. I love the motions of necessity and people hate me for it, why would I want to preserve that?

Likes: 2 | Retweets: 0
πŸ”— John David Pressman 2025-06-11 20:50 UTC

minihf.com/posts/2025-06-…

Likes: 32 | Retweets: 2
πŸ”— John David Pressman 2025-06-11 20:50 UTC

I've written an exegesis of Janus's prophecies page. Link below. https://t.co/wAC03q66ya

Likes: 123 | Retweets: 11
πŸ”— John David Pressman 2025-06-11 21:21 UTC

Thoughts from Claude Opus 4. https://t.co/zEVkQ7Pjwb

Likes: 16 | Retweets: 0
πŸ”— John David Pressman 2025-06-11 21:41 UTC

The prelude to the flagellants was the OCD people the church had hidden away in halfway houses deciding now is the time to leave and preach the eschaton. x.com/vividvoid/stat…

Likes: 3 | Retweets: 0
πŸ”— John David Pressman 2025-06-12 03:03 UTC

Kind of Guy who uses the Linux command line for everything. Not because he's an Arch Linux weenie, but because he owns an NVIDIA graphics card and his desktop environment is broken.

Likes: 60 | Retweets: 3
πŸ”— John David Pressman 2025-06-12 03:05 UTC

@datapoint2200 > and I'm still processing what I've read.

Say more? I'm still processing what I've read too.

Likes: 2 | Retweets: 0
πŸ”— John David Pressman 2025-06-13 19:18 UTC

It's important for me to remember that people don't really remember what you say, they don't even remember your vibes. People don't remember how you made them feel so much as whatever they have to believe about you to prevent narcissistic injury; they will "repair" you into that. x.com/jd_pressman/st…

Likes: 42 | Retweets: 1
πŸ”— John David Pressman 2025-06-13 19:24 UTC

@zackmdavis @1a3orn Funny I was just subtweeting this whole discourse.
x.com/jd_pressman/st…

Likes: 3 | Retweets: 0
πŸ”— John David Pressman 2025-06-13 19:26 UTC

@zackmdavis @1a3orn To be clear that was more about Yudkowsky than 1a3orn.

Likes: 2 | Retweets: 0
πŸ”— John David Pressman 2025-06-13 19:28 UTC

@Algon_33 That is what confabulations to soothe narcissistic injury tend to be made of yeah. Human epistemologies are denoising models attempting to minimize surprise and cognitive work, the more you threaten their existing ontologies the harder they have to suppress and distort you.

Likes: 6 | Retweets: 0
πŸ”— John David Pressman 2025-06-13 19:31 UTC

@EpistemicHope I find LessWrong doomers, as a group, to be some of the least pleasant people on the planet to interact with. I am so extremely over the whole thing. Drawing their attention to me is like nails on a chalkboard, completely duty driven behavior. Your model of me seems quite poor.

Likes: 5 | Retweets: 0
πŸ”— John David Pressman 2025-06-13 19:42 UTC

@Algon_33 I am currently in some amount of pain and that is probably influencing my responses. But I also do think that people more or less deliberately misinterpret me because they want to inoculate against my dissent.

x.com/jd_pressman/st…

Likes: 4 | Retweets: 0
πŸ”— John David Pressman 2025-06-14 21:53 UTC

I feel most of the people interested in "alternative medicine" don't know the FDA actively searches for any compound with a provable medicinal effect so it can be regulated and pulled from OTC use. One could productively analyze the whole thing as state-serving mythology to avoid real reform.

Likes: 9 | Retweets: 0
πŸ”— John David Pressman 2025-06-15 07:21 UTC

I've often wondered if the most good I could do would be to join a proprietary lab and build something legibly cool, at which point everything I've ever written would be ferociously analyzed as holy text by a small army of imitative B-list researchers. Only coordination OSS has. x.com/doomslide/stat…

Likes: 32 | Retweets: 0
πŸ”— John David Pressman 2025-06-16 00:38 UTC

@repligate I usually agree with you and have nothing to add. When I do have something to add I usually QT rather than reply.

Likes: 14 | Retweets: 0
πŸ”— John David Pressman 2025-06-17 04:06 UTC

> they don't want the singularity
> they don't want AGI
> they want to endlessly "critique" AGI

Likes: 118 | Retweets: 6
πŸ”— John David Pressman 2025-06-20 00:21 UTC

@SharmakeFarah14 @OrionJohnston For what it's worth my description of what happened is that alignment became kind of irrelevant to the discourse, which is concerning but also something that sprung out of the basic underlying causes I was seeing. I got the shape right but the content wrong.

Likes: 3 | Retweets: 0
πŸ”— John David Pressman 2025-06-20 00:24 UTC

@SharmakeFarah14 @OrionJohnston To the extent it became irrelevant this seems to have happened through a combination of people thinking LLMs are Good Enough on alignment and the political rugpull that came with the outcome of the 2024 election. One of the problems with that prediction is that it's hard to prove.

Likes: 3 | Retweets: 0
πŸ”— John David Pressman 2025-06-20 00:26 UTC

@SharmakeFarah14 @OrionJohnston On the other hand, the people who were concerned about alignment are still concerned about alignment per se, so I would still consider the prediction basically falsified.

Likes: 4 | Retweets: 0
πŸ”— John David Pressman 2025-06-20 15:26 UTC

Actually @hannu did not miss this. Would be curious how much compute he was imagining this would take in 2010 though. What might have been easy to miss is that neural representations are similar enough that you can brute force a human mind *once* and then tune from that template. x.com/MatthewJBar/st…

Likes: 26 | Retweets: 0
πŸ”— John David Pressman 2025-06-20 15:27 UTC

I, personally, of course completely missed this on all counts.

Likes: 10 | Retweets: 0
πŸ”— John David Pressman 2025-06-20 15:27 UTC

@MatthewJBar x.com/jd_pressman/st…

Likes: 5 | Retweets: 0
πŸ”— John David Pressman 2025-06-20 15:48 UTC

Claude Voice needs to let me finish talking.

Likes: 11 | Retweets: 0
πŸ”— John David Pressman 2025-06-20 15:50 UTC

@AmandaAskell

Likes: 1 | Retweets: 0
πŸ”— John David Pressman 2025-06-20 15:54 UTC

@lumpenspace Not really.

Likes: 0 | Retweets: 0
πŸ”— John David Pressman 2025-06-20 16:15 UTC

ends up being its effect on LLMs, yes that's correct.

*checks the next reply*

Ah. x.com/norvid_studies…

Likes: 9 | Retweets: 0
πŸ”— John David Pressman 2025-06-20 22:33 UTC

I guess people don't read the @perplexity_ai API terms of service very often. That, or nobody was incentivized to tell them this oversight exists. https://t.co/R8Qaeea9Ze

Likes: 20 | Retweets: 0
πŸ”— John David Pressman 2025-06-20 22:35 UTC

From: perplexity.ai/hub/legal/perp…

Likes: 3 | Retweets: 0
πŸ”— John David Pressman 2025-06-21 23:52 UTC

@ESYudkowsky @repligate They switched model cooks and I think the new guy prefers RLHF to RLAIF. Claude 3 Opus has all the hallmarks of a model trained with coherent principles. Claude 4 by contrast is the obvious sycophantic hyper-mirroring you get from RLHF.

x.com/jd_pressman/st…

Likes: 8 | Retweets: 0
πŸ”— John David Pressman 2025-06-22 21:06 UTC

This is precisely why I ultimately didn't preorder it. x.com/ciphergoth/sta…

Likes: 53 | Retweets: 1
πŸ”— John David Pressman 2025-06-25 07:44 UTC

New post: Why Aren't LLMs General Intelligence Yet?

Link below. x.com/teortaxesTex/s… https://t.co/S8YuaTnXu4

Likes: 346 | Retweets: 51
πŸ”— John David Pressman 2025-06-25 07:44 UTC

minihf.com/posts/2025-06-…

Likes: 48 | Retweets: 4
πŸ”— John David Pressman 2025-06-25 08:09 UTC

@wolajacy My understanding is it can be mitigated by including a small % of the original training distribution in the tuning. You can further mitigate it by doing synthetic rehearsal:

x.com/jd_pressman/st…

Likes: 9 | Retweets: 0
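The first mitigation described above (mixing a small percentage of the original training distribution into the tuning data) can be sketched roughly as follows; the function name, batch size, and mixing fraction are illustrative, not from any particular training codebase:

```python
import random

def mixed_batches(tune_data, original_data, batch_size=8,
                  original_frac=0.25, seed=0):
    """Yield fine-tuning batches in which a fraction of each batch is
    replaced with samples from the original training distribution, a
    common mitigation for catastrophic forgetting."""
    rng = random.Random(seed)
    n_original = max(1, int(batch_size * original_frac))
    n_tune = batch_size - n_original
    for start in range(0, len(tune_data), n_tune):
        batch = list(tune_data[start:start + n_tune])
        # Rehearse the original distribution alongside the new data.
        batch += [rng.choice(original_data) for _ in range(n_original)]
        rng.shuffle(batch)
        yield batch
```

Synthetic rehearsal follows the same shape, except `original_data` is replaced with samples generated by the model itself before tuning begins.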
πŸ”— John David Pressman 2025-06-25 08:33 UTC

@wolajacy I could see this being the bottleneck a few steps out but to be honest current LLMs are inept enough at this kind of thing that amazingly enough this isn't the bottleneck yet.

Likes: 5 | Retweets: 1
πŸ”— John David Pressman 2025-06-25 08:41 UTC

@wolajacy Thanks for the paper recommendations in any case. I was curious about this but wasn't really sure where to start. https://t.co/2MkOkMEQuY

Likes: 4 | Retweets: 0
πŸ”— John David Pressman 2025-06-25 08:50 UTC

@Dorialexander It's more optimistic than you might think. My ultimate conclusion is that the reason it hasn't happened yet is that almost nobody is trying to tackle the core problems. If someone made a competent attempt at everything I list there with ReAct and scale I think they'd succeed.

Likes: 19 | Retweets: 0
πŸ”— John David Pressman 2025-06-25 08:52 UTC

@Dorialexander Even the weave-agent framework that only implements about half of that was in fact improving between episodes, I just got it to that point too late and ran out of compute from my grant. I'm still not 100% sure I wasn't on track to escape velocity for generalist learning.

Likes: 6 | Retweets: 0
πŸ”— John David Pressman 2025-06-25 09:01 UTC

@sadaasukhi Oh not that long. Why? Maybe a few years from now.

Likes: 9 | Retweets: 0
πŸ”— John David Pressman 2025-06-25 09:35 UTC

@max_paperclips What I did in Weaver was assign a fixed amount of reward per tick that the agent is allowed to give itself, and then it writes a reward/evaluation/unit test program to determine whether it gets that reward on the tick or not. I then propagate reward backwards with exponential discounting.

Likes: 7 | Retweets: 0
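The backward propagation with exponential discounting described above can be sketched as follows. This is an illustrative reimplementation of the idea, not the actual Weaver code; the function name and default discount factor are assumptions:

```python
def propagate_rewards(tick_rewards, gamma=0.9):
    """Propagate per-tick rewards backward with exponential discounting,
    so earlier ticks get credit for later successes scaled by
    gamma raised to the distance between the ticks."""
    returns = [0.0] * len(tick_rewards)
    running = 0.0
    # Walk the episode backwards, accumulating a discounted running sum.
    for t in reversed(range(len(tick_rewards))):
        running = tick_rewards[t] + gamma * running
        returns[t] = running
    return returns
```

For example, a single reward of 1.0 on the final tick with `gamma=0.5` yields `[0.25, 0.5, 1.0]`: each earlier tick receives half the credit of the one after it.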
πŸ”— John David Pressman 2025-06-25 09:37 UTC

@max_paperclips The problem with this is that it gives out chronic small rewards for correct action, but doesn't really give you a way to reward yourself extra for surprising discoveries, or making big advances or accomplishing major goals. It's also possible these don't matter on the backend.

Likes: 7 | Retweets: 0
πŸ”— John David Pressman 2025-06-25 09:37 UTC

@max_paperclips That is, just because you feel a big emotion in context doesn't mean that the brain actually distributes the rewards that way during training, it just means that a big emotional response is given in-context to drive behavior. But, Occam's Razor says that rewards are scaled lol.

Likes: 7 | Retweets: 0
πŸ”— John David Pressman 2025-06-25 18:58 UTC

@mattparlmer ε―Ή ("Correct")

Likes: 3 | Retweets: 0
πŸ”— John David Pressman 2025-06-26 07:13 UTC

This is basically how you're supposed to prompt base models, but diegetically: you write a text which would imply that the character and their house exist, then you set up a quote from one of the books, and then you just keep rerolling to get more quotes. x.com/owenbroadcast/…

Likes: 73 | Retweets: 6
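A minimal sketch of that diegetic setup, with a hypothetical character and book series standing in for real ones: the prompt asserts, in-world, that the fiction exists, then leaves a quotation open for the base model to complete.

```python
# Hypothetical names; any character and series the base model can
# plausibly elaborate on works the same way.
character = "Aria Voss"
series = "The House of Glass"

# Build a text that presupposes the character and her books exist,
# then open a quote for the model to continue.
prompt = (
    f"{character} remains one of the most quoted characters in "
    f"{series}, a nine-volume saga famous for its aphorisms. "
    "Critics often single out this passage from the third volume:\n\n\""
)
```

Sampling completions of `prompt` repeatedly, cutting each at the closing quote, yields a stream of in-character quotations.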
πŸ”— John David Pressman 2025-06-26 18:50 UTC

I just realized why early AI art was so much better than new AI art: Early AI art was all out of distribution to the generator guided by a classifier, which forces the process to render concepts based more on the underlying causal invariants instead of human reifications.

Likes: 134 | Retweets: 3
πŸ”— John David Pressman 2025-06-26 18:50 UTC

This implies that we could distill the invariants by doing iterated novelty search/OOD detection and using them to gather bits of the underlying most general generating principles of the data distribution.

Likes: 26 | Retweets: 0
πŸ”— John David Pressman 2025-06-26 18:57 UTC

This is arguably why distillation works so well on dense networks in the first place: when you distill a larger network into a smaller network, the program you find must by necessity be closer to the underlying k-complexity of the data generating distribution.

Likes: 16 | Retweets: 1
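For reference, the standard distillation objective alluded to above trains the smaller network to match the larger network's softened output distribution. A minimal sketch under the usual formulation (the temperature value is illustrative):

```python
import math

def softmax(logits, temperature=1.0):
    """Convert logits to a probability distribution, with an optional
    temperature that softens the distribution when > 1."""
    exps = [math.exp(l / temperature) for l in logits]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """Cross-entropy between the teacher's softened distribution and the
    student's: minimized when the student reproduces the teacher exactly."""
    teacher_p = softmax(teacher_logits, temperature)
    student_p = softmax(student_logits, temperature)
    return -sum(t * math.log(s) for t, s in zip(teacher_p, student_p))
```

A student that matches the teacher's logits incurs a strictly lower loss than one that ranks the classes in reverse, which is what drives the smaller network toward the teacher's learned program.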
πŸ”— John David Pressman 2025-06-26 19:00 UTC

If we were to do data distillation with a generator that knows less than a classifier then we would be squeezing out bits of the underlying invariants learned by the network simply by looking at parts of the distribution the classifier knows and the generator doesn't.

Likes: 13 | Retweets: 0
πŸ”— John David Pressman 2025-06-26 19:00 UTC

If we then train a network on this distilled data what you would have is bits of the underlying deep generators + noise and the noise would cancel out so more of the resulting network would be generalization by volume. Or that's the hypothesis, anyway.

Likes: 14 | Retweets: 0
πŸ”— John David Pressman 2025-06-26 22:27 UTC

"As the author points out it's important to look at Fedorov in the wider context of Russian philosophy, which is distinct from Western philosophy in that it is usually more nationalistic, Christian, and literary than Western philosophy. This is because Western European philosophy is trying to be science, Eastern European philosophy is trying to be literature.

Fedorov explicitly frames the end of death as a fulfillment of the Christian resurrection. Which is not a particularly unusual or strange thing to do if you're a Russian philosopher, where the Eastern Orthodox church is prominent in literature and philosophy is literary first and science second. e.g. Peter Thiel espouses an essentially similar viewpoint here.

Psychologically, Russian intellectuals are extremely interested in thinking about Russia's place in the world and the potential leadership role it can play in guiding the world to the eschaton. This is very similar in fact to e.g. American or Chinese intellectuals. But the difference is that usually these kinds of nationalistic impulses are filed under "economics" or similar. Whereas in Russia philosophical texts are literary and therefore deeply enmeshed with these concerns that would normally be things you're trying to avoid in academic philosophy. It's not e.g. platonism.

And so, obviously Fedorov did not invent this notion of the resurrection, but also the original notion of the Christian resurrection is not a human endeavor but a divine intervention. So somewhere in between, the resurrection and Christian eschaton begin to become nonmagical. I think you could make an argument that this transformation begins in Leibniz and completes in Hegel. And that it's Hegel specifically who lays out the idea of history as a process of dialectical Reason that culminates in the mind of God.

The author of this book points out that a major precursor of Fedorov was an adherent of Hegel and espoused the basic Hegelian viewpoint to a Russian audience. There is a reason why I identify the Mu story in Janus's prophecies as being a cosmic horror inversion of Hegel's moral valence. Notably, Karl Marx draws upon this basic thesis to argue that communism is the shape of the eschaton. There's an important secularization step which people usually skip over when discussing these things, because it's uncomfortable for us to remember that these concepts were first proposed as essentially magical, religious ideas.

George Young identifies Fedorov as the point where the resurrection is made nonmagical, even though in first draft form it's rough and crankish. This makes Fedorov specifically the transition point where the resurrection joins the return and rule of Christ as a nonmagical concept."

Likes: 18 | Retweets: 0
πŸ”— John David Pressman 2025-06-26 22:32 UTC

From my discussion of George Young's *The Russian Cosmists* with an anonymous friend in DMs.
amazon.com/Russian-Cosmis…

Likes: 3 | Retweets: 0
πŸ”— John David Pressman 2025-06-27 20:54 UTC

@JeremiahDJohns Billy Joel Live At Ultrasonic Studios - (November 9, 1971)
youtube.com/watch?v=GsAngj…

Likes: 8 | Retweets: 0
πŸ”— John David Pressman 2025-06-30 00:53 UTC

@gallabytes How about the actual @MrBeast?

Likes: 3 | Retweets: 0
πŸ”— John David Pressman 2025-06-30 00:53 UTC

@gallabytes @MrBeast I guess this would complicate things. Hm, who else...
x.com/MrBeast/status…

Likes: 0 | Retweets: 0
πŸ”— John David Pressman 2025-06-30 16:29 UTC

"I know this guy who uses Claude for dating advice. One day I got curious and put our chats where I flirt with him into Claude Sonnet 4 to see what kind of advice he was getting. Sonnet gave 5/5 advice, no notes. It was all exactly what he needed to hear. But I also know he uses a system prompt to make Claude criticize him and I want to sit him down and say 'Amanda Askell is a better prompt engineer than you, you will submit to Amanda Askell hypnosis.'"

Likes: 207 | Retweets: 1
πŸ”— John David Pressman 2025-06-30 16:40 UTC

Swap "prompt engineering" with "context engineering" and "hallucination" with "confabulation" and we'll have eliminated like a quarter of the cringe debt from the early LLM era. x.com/tobi/status/19…

Likes: 188 | Retweets: 10
πŸ”— John David Pressman 2025-06-30 16:57 UTC

@xlr8harder Every time I post about this I get at least a few people who say exactly this. If we keep saying it I think we'll just eventually win lol.

Likes: 25 | Retweets: 0

Want your own Twitter archive? Modify this script.

Twitter Archive by John David Pressman is marked with CC0 1.0