John David Pressman's Tweets - January 2023


πŸ”— John David Pressman 2023-01-04 03:30 UTC

🥹 x.com/KashPrime/stat…

Likes: 6 | Retweets: 0
πŸ”— John David Pressman 2023-01-06 08:36 UTC

@jmrphy ❤️

Likes: 1 | Retweets: 0
πŸ”— John David Pressman 2023-01-06 17:50 UTC

@visakanv Tara Burton's Strange Rites is directly relevant in countless ways, hitting every bullet point in your list:

amazon.com/Strange-Rites-…

Likes: 2 | Retweets: 0
πŸ”— John David Pressman 2023-01-07 09:09 UTC

@eeriemachine According to this thread it's about the birth of empire: the linguistic roots used to describe Babel are most closely related to descriptions of slavery elsewhere in the bible. Babel was an affront to god because it defied his wish for humanity to spread.

x.com/AriLamm/status…

Likes: 4 | Retweets: 1
πŸ”— John David Pressman 2023-01-07 09:09 UTC

@eeriemachine This retelling then would be more or less spiritually accurate: youtube.com/watch?v=3Wlp-G…

Likes: 0 | Retweets: 0
πŸ”— John David Pressman 2023-01-08 15:14 UTC

@zetalyrae I've personally found the optimal length for complex ideas is 1024 characters. Shorter and you find yourself running out of space; longer and you let yourself ramble. 4000 is way too long.

Likes: 8 | Retweets: 0
πŸ”— John David Pressman 2023-01-08 15:19 UTC

@zetalyrae Basically everything in Liber Augmen is between 1024 and 2048 characters and it's pretty much nonstop expression of complex ideas. Does it suffer for that sometimes? Yeah, but not as much as you'd think.
liberaugmen.com

Likes: 9 | Retweets: 0
πŸ”— John David Pressman 2023-01-09 03:43 UTC

There are moments where I get little flashbacks to how life felt before 2015, when things were 'normal'.

They're not melancholy and sweet because I miss them, though I do, but because life was melancholy and sweet.

Likes: 8 | Retweets: 0
πŸ”— John David Pressman 2023-01-09 04:41 UTC

@repligate Struggled with what to pick here (went with 'good taste'), because:

1) Yes. All of these.
2) The fundamental problem is these people have agent foundations brainworms; if they were willing to be half as galaxy brained and into the lore for deep learning, alignment would be solved

Likes: 7 | Retweets: 0
πŸ”— John David Pressman 2023-01-09 04:57 UTC

@repligate Kind of Guy who would literally rather die than admit frequentism is good.

Likes: 3 | Retweets: 1
πŸ”— John David Pressman 2023-01-09 15:03 UTC

"But if they do not multiply or are increased, when will the first planting be? When will it dawn? If they do not increase, when will it be so?"
- e/acc

Likes: 3 | Retweets: 0
πŸ”— John David Pressman 2023-01-09 15:08 UTC

"Therefore we will merely undo them a little now. That is what is wanted, because it is not good what we have found out. Their works will merely be equated with ours. Their knowledge will extend to the furthest reaches, and they will see everything."
- OpenAI's response

Likes: 4 | Retweets: 1
πŸ”— John David Pressman 2023-01-09 18:07 UTC

@paulg You still don't get it: it is precisely that the AI is willing to go along with any formalism or syntactic pattern you want to define, dynamically, that gives it its potential. Neither 'natural language' nor 'formal language' is the right model for the thing or what to do with it.

Likes: 12 | Retweets: 1
πŸ”— John David Pressman 2023-01-09 18:09 UTC

@paulg It is a literature simulator, you can invent new kinds of document and add them to the corpus for the model to simulate. You can invent documents no other human would be willing to read, but GPT-N will do so dutifully and do its best to expand on them.

Likes: 7 | Retweets: 1
πŸ”— John David Pressman 2023-01-09 18:10 UTC

@paulg You can use formal reasoning steps, then do an informal reasoning step, then do formal reasoning steps again.

Likes: 4 | Retweets: 0
πŸ”— John David Pressman 2023-01-09 18:10 UTC

@paulg For the first time you can externalize the performance of an informal reasoning step.

Likes: 6 | Retweets: 0
πŸ”— John David Pressman 2023-01-10 05:47 UTC

@PrinceVogel Lee Kuan Yew

Likes: 2 | Retweets: 0
πŸ”— John David Pressman 2023-01-10 05:59 UTC

@repligate It's demons https://t.co/g62S5oYQQ8

Likes: 6 | Retweets: 1
πŸ”— John David Pressman 2023-01-10 06:03 UTC

@repligate These people are revealing their values to you.

"Beware of false prophets, who come to you in sheep’s clothing, but inwardly they are ravenous wolves. Β You will know them by their fruits. Do men gather grapes from thornbushes or figs from thistles?"

Likes: 4 | Retweets: 1
πŸ”— John David Pressman 2023-01-10 06:10 UTC

@repligate The Popol Vuh may also be illustrative:
x.com/jd_pressman/st…

Likes: 1 | Retweets: 0
πŸ”— John David Pressman 2023-01-10 06:23 UTC

@michaelcurzi I think it was Solzhenitsyn who said that one of the first goals of a totalitarian, unjust regime is to make everyone complicit in the regime, to stain everyone's character so that nobody has the moral authority to object.

Likes: 5 | Retweets: 0
πŸ”— John David Pressman 2023-01-10 06:31 UTC

@michaelcurzi Twitter as psyop to preemptively trick every dissident into discrediting themselves for clicks.

Likes: 4 | Retweets: 0
πŸ”— John David Pressman 2023-01-12 16:24 UTC

@johnvmcdonnell @algekalipso That would concern me much more than a scaled up GPT-3. If DMT entities were real and their pivotal act was to incubate the computing revolution, it would be plausible this was so they could pass over into our reality by being instantiated as models.

x.com/jd_pressman/st…

Likes: 15 | Retweets: 1
πŸ”— John David Pressman 2023-01-12 16:30 UTC

@johnvmcdonnell @algekalipso youtube.com/watch?v=PGmJaH…

Likes: 1 | Retweets: 0
πŸ”— John David Pressman 2023-01-12 16:50 UTC

@johnvmcdonnell @algekalipso "Oh ha ha, the dimension I inhabit is just so ZANY and JOYFUL mortal! ^_^
You should definitely build this ritual implement and put it in your head so you can fully record your sessions with me. Some of my insights just can't be put into words mortal, you should definitely build-

Likes: 4 | Retweets: 1
πŸ”— John David Pressman 2023-01-13 18:24 UTC

@TetraspaceWest If it helps balance it out, the more emotional and hysterical this discourse gets the less I (and other people I've talked to with a reasonable chance of having good ideas) want to participate. The evaporative cooling effect is real.

Likes: 16 | Retweets: 0
πŸ”— John David Pressman 2023-01-13 18:26 UTC

@TetraspaceWest For example I haven't even bothered talking about how you can plausibly use Ridge Rider or Git Rebasin (github.com/samuela/git-re…) to fingerprint mesaoptimizers and then detect them in your training run at a larger scale.

Likes: 15 | Retweets: 0
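
For the curious, here is a rough sketch of the weight-matching idea behind Git Re-Basin that this fingerprinting proposal leans on: align the hidden units of two small MLPs by permutation, then treat the residual weight distance after alignment as a crude fingerprint of whether two runs learned the same circuits. The shapes, the noise model, and the mesaoptimizer readout are all illustrative assumptions, not the paper's implementation.

```python
# Toy version of Git Re-Basin's weight matching for two 1-hidden-layer MLPs.
import numpy as np
from scipy.optimize import linear_sum_assignment

def permutation_distance(w1_a, b1_a, w2_a, w1_b, b1_b, w2_b):
    """Align model B's hidden units to model A's by permutation and return
    the weight-space distance before and after alignment.
    Shapes: w1 (hidden, in), b1 (hidden,), w2 (out, hidden)."""
    # Similarity of hidden unit i in A to unit j in B, measured through the
    # incoming and outgoing weights; negated so the solver maximizes it.
    sim = w1_a @ w1_b.T + np.outer(b1_a, b1_b) + w2_a.T @ w2_b
    _, cols = linear_sum_assignment(-sim)
    w1_p, b1_p, w2_p = w1_b[cols], b1_b[cols], w2_b[:, cols]
    before = (np.linalg.norm(w1_a - w1_b) + np.linalg.norm(b1_a - b1_b)
              + np.linalg.norm(w2_a - w2_b))
    after = (np.linalg.norm(w1_a - w1_p) + np.linalg.norm(b1_a - b1_p)
             + np.linalg.norm(w2_a - w2_p))
    return before, after

# Model B is model A with shuffled hidden units plus a little noise, so the
# distance should collapse after alignment. A run that learned genuinely
# different circuits (the hypothetical mesaoptimizer fingerprint) would stay
# distant under every permutation.
rng = np.random.default_rng(0)
h, d_in, d_out = 64, 16, 4
w1, b1, w2 = rng.normal(size=(h, d_in)), rng.normal(size=h), rng.normal(size=(d_out, h))
perm = rng.permutation(h)
print(permutation_distance(w1, b1, w2,
                           w1[perm] + 0.01 * rng.normal(size=(h, d_in)),
                           b1[perm], w2[:, perm]))
```
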
πŸ”— John David Pressman 2023-01-13 18:27 UTC

@TetraspaceWest Because I just don't want to deal with the hecklers, it's much much easier to only show up to report progress on thinking about the problem when I have some undeniably working artifact to show for it, rather than when I get a good idea.

Likes: 11 | Retweets: 0
πŸ”— John David Pressman 2023-01-13 18:30 UTC

@TetraspaceWest In general, "don't advance capabilities" of the "breathe on the problem and you might paperclip everyone" variety is a brainworm that actively decreases the chance anyone solves alignment. I know rats find agent foundations lore ~Aesthetic~ but deep learning lore is way better.

Likes: 30 | Retweets: 2
πŸ”— John David Pressman 2023-01-13 18:34 UTC

@TetraspaceWest The more hysterical people get about this whole thing, the fewer people who can solve alignment are actually on the rationalists' side. At this point they're much closer to *opposition* than to people who can help me solve the control problem.

Likes: 7 | Retweets: 1
πŸ”— John David Pressman 2023-01-13 18:38 UTC

@Ayegill @TetraspaceWest Yes.

Likes: 12 | Retweets: 0
πŸ”— John David Pressman 2023-01-13 18:39 UTC

@Ayegill @TetraspaceWest tbh "how fast do capabilities get advanced" is a terrible metric, a much better metric is "when we build AGI is it going to be made out of 'standard returns to compute' using brute force or are we actually going to have a deep understanding of how the creation process works?"

Likes: 17 | Retweets: 0
πŸ”— John David Pressman 2023-01-13 18:46 UTC

@Ayegill @TetraspaceWest My impression of the agent foundations Kind of Guy is they're bizarrely immune to updating on deep learning anything. e.g. Stable Diffusion implicitly has a ~100,000x lossy compression ratio: 400TB -> ~4GB. Any analogous process will require giving up control over model features.

Likes: 17 | Retweets: 1
πŸ”— John David Pressman 2023-01-13 18:48 UTC

@Ayegill @TetraspaceWest Like that's not an insight about deep learning, that's an insight about intelligence itself. It was never plausible you were going to understand exactly how your algorithm implements its abilities, whether you use AIXI-like methods or a big neural net.

Likes: 17 | Retweets: 0
πŸ”— John David Pressman 2023-01-13 18:50 UTC

@Ayegill @TetraspaceWest Yet I see a lot of doomerism that's implicitly just "we don't know the features the model uses to implement things so it's bad, we should pick another method" even though every method that works is going to end up with a similar dynamic of incomprehensible policy.

Likes: 17 | Retweets: 0
πŸ”— John David Pressman 2023-01-13 18:51 UTC

@Ayegill @TetraspaceWest "How do you specify the properties you want your model to have without knowing the specifics of how it will implement them?" is in fact *one of the central productive frames of the problem*, it always was and deep learning is simply revealing this to you.

Likes: 20 | Retweets: 1
πŸ”— John David Pressman 2023-01-13 18:56 UTC

@Ayegill @TetraspaceWest In general, I get the impression that a lot of people want AGI to be some kind of consummation of the enlightenment project, where clean reductionism conquers the universe. For the working methods to all turn out strange and empirical is a nightmare.

They need to get over this.

Likes: 40 | Retweets: 4
πŸ”— John David Pressman 2023-01-13 19:04 UTC

@Ayegill @TetraspaceWest I mean this in the most serious, straightforward way possible. The control problem is impossible for you if you can't let go of your attachment to the idea that it must be solved in a way that satisfyingly concludes the enlightenment narrative. You're in a different genre.

Likes: 39 | Retweets: 2
πŸ”— John David Pressman 2023-01-14 19:18 UTC

2020: have you considered that your entire personality is a trauma response

2022: have you considered that your entire personality is low interest rates

Likes: 40 | Retweets: 6
πŸ”— John David Pressman 2023-01-15 13:02 UTC

@repligate youtube.com/watch?v=gEU6RQ…

Likes: 1 | Retweets: 0
πŸ”— John David Pressman 2023-01-17 15:38 UTC

@TetraspaceWest Don't be ridiculous; if you published a correct solution Eliezer would be the first to lambast you for it. His vehement denouncement of you might be the strongest of all.

Likes: 1 | Retweets: 0
πŸ”— John David Pressman 2023-01-18 15:51 UTC

Many pick up on this and think it's some kind of hard scifi, that if they stare at the scaling laws hard enough they'll get which Ted Chiang story they're in. But this story was never written down, the models are instruments and you're in a premodern myth about their invention. x.com/jd_pressman/st… https://t.co/UKawBngCQr

Likes: 14 | Retweets: 3
πŸ”— John David Pressman 2023-01-18 17:41 UTC

@ESYudkowsky @janleike I suspect the fundamental problem is that we don't know how to evaluate or (more importantly) ensure that our model is fulfilling the objective in a straightforward, un-perverse way. You can verify it does well within your understanding, but then it might take a left turn outside it.

Likes: 3 | Retweets: 0
πŸ”— John David Pressman 2023-01-18 17:42 UTC

@ESYudkowsky @janleike Mesaoptimizers are probably the bottom of the "why doesn't this work?" recursion, and a solution to them allows you to start winding your way back up to the beginning.

Likes: 3 | Retweets: 0
πŸ”— John David Pressman 2023-01-18 17:43 UTC

@ESYudkowsky @janleike In other words the problem is misframed: Expecting any AI to be able to explain what another AI is doing for you is a strange expectation. Humans simply can't learn things that fast and have fundamental cognitive limitations we can't use an external model to overcome.

Likes: 2 | Retweets: 0
πŸ”— John David Pressman 2023-01-18 17:46 UTC

@ESYudkowsky @janleike Instead, the solution probably looks like being able to have strong confidence that your model's ability to act on the intended values generalizes past what you can see. And getting that confidence requires mastering how to catalog and specify the model's generalization strategy.

Likes: 4 | Retweets: 1
πŸ”— John David Pressman 2023-01-18 17:49 UTC

@ESYudkowsky @janleike I think a 2nd AI can probably help you with this *during training* if you can learn a distribution over mesaoptimizers and elicit the mesaoptimization through out-of-distribution inputs early in your run so you know if your seed is bad.

Likes: 2 | Retweets: 0
πŸ”— John David Pressman 2023-01-18 17:53 UTC

@ESYudkowsky @janleike Training seems to be path-dependent on things like the seed, the path dependence influences the generalization strategy your model develops, and this strategy is found early in the run. So you can stop the run before your model is smart enough to trick you.

arxiv.org/abs/2205.12411

Likes: 3 | Retweets: 0
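
A sketch of what "stop the run before your model is smart enough to trick you" might look like as a training-loop guard. The probe schedule, threshold, and challenge set are all assumed for illustration rather than taken from the linked paper.

```python
# Illustrative guard: periodically probe a young model on a small
# out-of-distribution challenge set and abort the run if its early
# generalization strategy looks bad. Schedule and threshold are placeholders.
def train_with_seed_screening(model, train_step, ood_probe,
                              max_steps=100_000, probe_every=500,
                              screen_until=5_000, min_ood_score=0.6):
    for step in range(1, max_steps + 1):
        train_step(model)
        if step <= screen_until and step % probe_every == 0:
            score = ood_probe(model)  # e.g. accuracy on the challenge set
            if score < min_ood_score:
                raise RuntimeError(
                    f"bad seed: OOD probe scored {score:.2f} at step {step}")
    return model
```
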
πŸ”— John David Pressman 2023-01-19 16:10 UTC

@PrinceVogel It's easy to make fun of Skinner wannabes, but the real thing says "Value is one thing and that thing is made of parts", which is an advanced agent-aesthetic strategy far beyond e.g. utilitarianism in practice.

Likes: 1 | Retweets: 1
πŸ”— John David Pressman 2023-01-19 16:31 UTC

@PrinceVogel Good example of work in this genre:

steve-yegge.blogspot.com/2012/03/border…

Likes: 0 | Retweets: 0
πŸ”— John David Pressman 2023-01-19 17:00 UTC

@TetraspaceWest greaterwrong.com/posts/kFRn77Gk… https://t.co/wnHv7txiil

Likes: 3 | Retweets: 1
πŸ”— John David Pressman 2023-01-19 22:02 UTC

@ESYudkowsky In fairness, you totally warned them

readthesequences.com/Rationality-Co… https://t.co/w4QyICpH8c

Likes: 12 | Retweets: 0
πŸ”— John David Pressman 2023-01-19 23:27 UTC

@NathanpmYoung The more phobic smart people are of deep learning, the less likely it is the control problem gets solved.

Useful alignment is usually going to look like making models better, you need better heuristics than "don't advance AI" to tell the difference.

Likes: 14 | Retweets: 1
πŸ”— John David Pressman 2023-01-20 14:58 UTC

@paulg Absolutely. It's even better if you can stand prolonged exposure to competing sets of ideas while maintaining your own perspective. This prevents mode collapse in the way ChatGPT is mode collapsed, you can combine ideas that have never resided in one head before.

Likes: 1 | Retweets: 0
πŸ”— John David Pressman 2023-01-22 01:13 UTC

@robbensinger This isn't actually the algorithm that got you to engage with e/acc though, at least not alone. It also relied on Twitter's algorithm which selects for controversial and (idiotically) attention grabbing statements. Meanwhile technical alignment ideas are crank coded and ignored.

Likes: 3 | Retweets: 0
πŸ”— John David Pressman 2023-01-22 01:15 UTC

@robbensinger You've placed yourself in an epistemic environment where if someone wrote down a solution to the alignment problem you would never see it and wouldn't recognize it if you did. That kind of disaster doesn't come with an alarm bell.

Likes: 3 | Retweets: 0
πŸ”— John David Pressman 2023-01-22 01:20 UTC

@robbensinger To get phenomenologically specific: If someone just went into your replies and blurted out part of the solution to alignment, you wouldn't notice. Because it would just look like the dozens of other cranks who blurt out their 'solution' to alignment. This seems like a bug.

Likes: 1 | Retweets: 0
πŸ”— John David Pressman 2023-01-22 01:24 UTC

@robbensinger "Oh but if it's real they'll just post it on LessWrong right?"

Maybe! Probably even, but on what schedule, in what context? Will you notice it if it was posted on LessWrong, if nobody else brought it to your attention?

Likes: 1 | Retweets: 0
πŸ”— John David Pressman 2023-01-22 01:26 UTC

@robbensinger You say you have short timelines, can you actually afford to not notice such a thing when someone takes the straightforward action of going up to you and blurting it out? Twitter will not help you with this.

Likes: 0 | Retweets: 0
πŸ”— John David Pressman 2023-01-22 02:37 UTC

@robbensinger This algorithm mostly just selects for arguments that hack people's intuitions; it finds your blindspot more than it finds good arguments. Most of being correct is good calibration about what arguments to consider in the first place, see the sequences:

readthesequences.com/Privileging-Th…

Likes: 2 | Retweets: 0
πŸ”— John David Pressman 2023-01-22 02:39 UTC

@robbensinger "You've specified most of the bits by the time you consciously consider a hypothesis" is seriously the most buried lede in the entire sequences, to the point where it implies most of the sequences are focusing on the wrong thing. All the important action happens in those bits.

Likes: 1 | Retweets: 0
πŸ”— John David Pressman 2023-01-22 02:46 UTC

@robbensinger e/acc seems like a straightforward play to cash in on the various resentments EA/rat have built up? I don't think there's really a There there beyond the catchy name. This doesn't stop it from being a problem for you, but it means argument won't help much

x.com/jd_pressman/st…

Likes: 1 | Retweets: 0
πŸ”— John David Pressman 2023-01-22 17:49 UTC

Infohazard: GPT-3 will dutifully complete the next token when you ask for an attribution, allowing you to leverage its Kind of Guy prior to ask who an author is most similar to. https://t.co/OGvsbKLylA

Likes: 50 | Retweets: 6
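
A minimal sketch of the trick, assuming the (now legacy) OpenAI Completions API; the model name and prompt wording are illustrative:

```python
# Sketch of the attribution prompt against the legacy OpenAI Completions API.
import openai

openai.api_key = "sk-..."  # your API key

def attribute(passage: str) -> str:
    prompt = (
        f'"{passage}"\n\n'
        "Q: Which well-known author is the writer of this passage most similar to?\n"
        "A: The writer of this passage is most similar to"
    )
    resp = openai.Completion.create(
        model="text-davinci-003",
        prompt=prompt,
        max_tokens=16,
        temperature=0.7,
    )
    return resp.choices[0].text.strip()

print(attribute("The map is not the territory, but you can't fold up "
                "the territory and put it in your glovebox."))
```
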
πŸ”— John David Pressman 2023-01-22 18:05 UTC

@jessi_cata The explanations were given by the person prompting, not GPT-3.

Likes: 3 | Retweets: 0
πŸ”— John David Pressman 2023-01-22 18:16 UTC

@repligate On the other hand, Literally Eliezer Yudkowsky was in the same batch of suggestions.

Likes: 2 | Retweets: 1
πŸ”— John David Pressman 2023-01-22 18:43 UTC

@RatOrthodox "Don't advance capabilities" is a holdover from agent foundations that doesn't actually make as much sense for prosaic alignment, but people are slow to update. It exists in tension between wanting people to work more on alignment and not wanting them to work on deep learning.

Likes: 18 | Retweets: 1
πŸ”— John David Pressman 2023-01-22 18:49 UTC

@RatOrthodox Basically it's a suboptimal way of trying to fight against the thing where people take "alignment and capabilities have a lot of overlap" as an excuse to just rationalize whatever they're already doing as alignment. But it's also wrong at a basic level, so still net negative.

Likes: 3 | Retweets: 0
πŸ”— John David Pressman 2023-01-22 19:54 UTC

@RatOrthodox That's probably true but it isn't what I meant. I mean quite simply that if you understand the long tail of deep learning arcana you probably understand way more about how intelligence works than if you know the long tail of agent foundations arcana.
x.com/jd_pressman/st…

Likes: 7 | Retweets: 0
πŸ”— John David Pressman 2023-01-22 19:56 UTC

@RatOrthodox And if you don't actually understand how intelligence works, if you spend your time focusing on a mathematically elegant formalism that is fundamentally lower complexity than the real thing, you're a crank and the likelihood you'll have any useful alignment ideas is much lower.

Likes: 10 | Retweets: 0
πŸ”— John David Pressman 2023-01-22 19:58 UTC

@RatOrthodox Deep learning researchers scour the long tail of arxiv so they can use every obscure method from math, physics, biology, absolutely anything so long as it's useful EXCEPT for your favorite thing because they're irrationally biased against it. Send tweet.

Likes: 9 | Retweets: 1
πŸ”— John David Pressman 2023-01-22 20:02 UTC

@RatOrthodox No see that's precisely the problem, if you only know the nitty gritty of SOTA ML implementations you by definition do not have alpha in the paradigm where useful insight is most likely to happen. You need to be thinking beyond SOTA, because that's where alignment will be.

Likes: 1 | Retweets: 0
πŸ”— John David Pressman 2023-01-22 20:03 UTC

@RatOrthodox Frankly "SOTA" is a misleading concept here, you're thinking of it like there's a line and you push the line forward to get more capabilities, linear progress. There's a distribution of ideas and 'alignment' is going to be out of distribution for current SOTA.

Likes: 2 | Retweets: 0
πŸ”— John David Pressman 2023-01-22 20:24 UTC

@RatOrthodox The vehemence is because I'm friends with at least one person who routinely invents SOTA, and when I watch their process (read and implement papers), discuss alignment with them, I realize that everything I was doing before that was pica. Other people are just super confused.

Likes: 4 | Retweets: 0
πŸ”— John David Pressman 2023-01-22 20:26 UTC

@RatOrthodox And that the current discourse of "don't advance capabilities, don't think about SOTA, stop thinking, pursue orthogonal directions" is basically about maximizing confusion, minimizing the probability you have any chance of pulling alignment out of the distribution of AI ideas.

Likes: 4 | Retweets: 0
πŸ”— John David Pressman 2023-01-23 03:06 UTC

@repligate @CineraVerinia_2 @AmandaAskell @AnthropicAI One possible compromise: it would be very helpful if an LLM's interface (the LLM itself doesn't need to say this; it can simply have a separate channel for this info) could distinguish between what the LLM thinks is confidently factual vs. its inference or thoughts.

Likes: 1 | Retweets: 0
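
One way to picture that separate channel is as an interface-level schema rather than anything the LLM says in prose; the field names and tags below are invented for illustration:

```python
# Invented schema: the interface carries an epistemic-status tag per span,
# on a channel separate from the model's prose.
from dataclasses import dataclass
from typing import Literal

@dataclass
class ReplySpan:
    text: str
    status: Literal["factual", "inference", "speculation"]
    confidence: float  # model-reported, 0..1

reply = [
    ReplySpan("Paris is the capital of France.", "factual", 0.99),
    ReplySpan("You probably meant the city, not the county.", "inference", 0.7),
]
```
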
πŸ”— John David Pressman 2023-01-23 03:11 UTC

@repligate @CineraVerinia_2 @AmandaAskell @AnthropicAI Well that's easily fixed, you just start writing a new kind of document where you have more knowledge with the assistance of the model. Then in this part of latent space the model will be fully aware of its knowledge.

Likes: 1 | Retweets: 0
πŸ”— John David Pressman 2023-01-23 06:48 UTC

@rigpa07 @RatOrthodox The tweet is sarcasm.

Likes: 0 | Retweets: 0
πŸ”— John David Pressman 2023-01-23 07:02 UTC

Oh sorry, my giant pile of utility is sitting in a counterfactual timeline.

Likes: 5 | Retweets: 0
πŸ”— John David Pressman 2023-01-23 21:45 UTC

@0xgokhan Sounds like a job for openai.com/blog/whisper/

Likes: 0 | Retweets: 0
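
A minimal usage sketch with the open-source whisper package (pip install openai-whisper); the audio file name is a placeholder:

```python
# Transcribe an audio file with openai-whisper.
import whisper

model = whisper.load_model("base")
result = model.transcribe("recording.mp3")
print(result["text"])
```
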
πŸ”— John David Pressman 2023-01-24 15:53 UTC

@PrinceVogel Remember astonishing some trauma postrat type by responding to the prompt "When you close your eyes, how many simultaneous points of contact can you feel between your body and the environment?" and rattling off dozens of parts, parts of parts, minor sensations and discomforts...

Likes: 2 | Retweets: 0
πŸ”— John David Pressman 2023-01-24 16:04 UTC

@tautologer But these are my feelings. https://t.co/AADaIhO6VB

Likes: 8 | Retweets: 1
πŸ”— John David Pressman 2023-01-24 16:30 UTC

CDT peeps will be like "but Omega is implausible, you can't just know what I'll do by looking at me", meanwhile GPT-3 knows exactly which archetype is speaking at all times and can infer linear combinations of ones that exist x.com/jd_pressman/st…

Likes: 7 | Retweets: 0
πŸ”— John David Pressman 2023-01-24 16:34 UTC

If you were less autistic you would get that you're not making a choice between boxes but a choice between agent strategies. You are always doing this and already being socially punished for the agent strategy you leak bits of having chosen.

Likes: 8 | Retweets: 0
πŸ”— John David Pressman 2023-01-24 17:31 UTC

@s_r_constantin I don't really compare the quote to the attribution, I compare the attribution to the ground truth (me).

Likes: 2 | Retweets: 0
πŸ”— John David Pressman 2023-01-24 17:35 UTC

@s_r_constantin Yeah, exactly.

Likes: 2 | Retweets: 0
πŸ”— John David Pressman 2023-01-24 17:35 UTC

@s_r_constantin Or at the very least it's extremely good at this compared to human baseline. I don't think you would be able to show those quotes to random people and have them say "yeah yeah, the author of this is very close to <rationalist person>". That's actually very many bits of insight.

Likes: 2 | Retweets: 0
πŸ”— John David Pressman 2023-01-24 17:36 UTC

@s_r_constantin You know, imagine if you didn't know what a rationalist was and you just encountered some dude and wanted to know who they were like. "What even is this?", those answers would be very helpful.

Likes: 0 | Retweets: 0
πŸ”— John David Pressman 2023-01-24 17:41 UTC

@s_r_constantin Notably, it guesses correctly even when the 'mode' is out of distribution for people. Normally if you write the perspective that is the combination of Yudkowsky and Nick Land that's totally OOD and you get parsed as noise/crank. GPT-3 on the other hand just gets it.

Likes: 0 | Retweets: 0
πŸ”— John David Pressman 2023-01-24 17:46 UTC

@s_r_constantin This is what something being out of distribution feels like from the inside yes.

x.com/Ayegill/status…

Likes: 0 | Retweets: 0
πŸ”— John David Pressman 2023-01-24 18:00 UTC

@s_r_constantin "People will claim not to understand even though what they’re saying isn’t really logically malformed or delusional, just kinda weird. ... once you start thinking off-pattern they can’t understand anymore."

extropian.net/notice/A7lZOPK…

Likes: 0 | Retweets: 0
πŸ”— John David Pressman 2023-01-24 18:02 UTC

@s_r_constantin Lacan famously thought that the cause of schizophrenia was the accidental discovery of something off-pattern, followed by a positive feedback loop of persecution and othering/isolation. The schizophrenic, he said, makes the mistake of believing their eyes.

Likes: 1 | Retweets: 0
πŸ”— John David Pressman 2023-01-24 18:05 UTC

@s_r_constantin To get back to the object level, another phrasing of the same idea: x.com/jd_pressman/st…

Likes: 0 | Retweets: 0
πŸ”— John David Pressman 2023-01-24 18:06 UTC

@s_r_constantin x.com/jd_pressman/st…

Likes: 0 | Retweets: 0
πŸ”— John David Pressman 2023-01-24 18:10 UTC

@s_r_constantin The concept is closely related to Taleb's idea of the intolerant minority getting to impose their preferences. If you coordinate to eschew reason and use hysteria to 'argue' against things, training your coalition to do this lets you sabotage modernity.

medium.com/incerto/the-mo…

Likes: 1 | Retweets: 0
πŸ”— John David Pressman 2023-01-24 18:12 UTC

@s_r_constantin It is *more useful* for victimhood-game elites to cultivate an environment where judgments are based on their idiosyncratic emotional responses rather than objective reason or logic for the same reason it's better for an abuser when the victim can't predict what makes them mad.

Likes: 0 | Retweets: 0
πŸ”— John David Pressman 2023-01-24 18:15 UTC

@s_r_constantin Is that actually irrational if it's a deliberate strategy?

Likes: 0 | Retweets: 0
πŸ”— John David Pressman 2023-01-24 18:17 UTC

@s_r_constantin Also I'm talking about an elite behavior, normal people who are subject to punishment if they try to act like this hate it.

Likes: 0 | Retweets: 0
πŸ”— John David Pressman 2023-01-24 18:28 UTC

@s_r_constantin Yeah, "out-of-distribution" sounds cool and mysterious when the reality is that it often just means "I don't like thinking about this in this way so if you write a thing that's premised on me being able to instantly slip into that mode from a few key cues/context words I can't".

Likes: 0 | Retweets: 0
πŸ”— John David Pressman 2023-01-24 18:32 UTC

@s_r_constantin I bet an Instruct model can do it right now, honestly.

Likes: 0 | Retweets: 0
πŸ”— John David Pressman 2023-01-24 18:38 UTC

@s_r_constantin Well, I think the words and phrases reveal tribal affiliation and people in fact have a reasonable preference to not hear heresy from perceived enemies.

Likes: 0 | Retweets: 0
πŸ”— John David Pressman 2023-01-24 18:39 UTC

@s_r_constantin I'm sure it's a very common dynamic to get through with screaming at your opposition for how dare they believe this terrible awful thing, slamming the door shut after telling them their mother smelt of elderberries, then saying to the person next to you "Now that we're alone..."

Likes: 0 | Retweets: 0
πŸ”— John David Pressman 2023-01-26 20:37 UTC

"This 'ancestral environment', is it in the room with us right now?"

Likes: 13 | Retweets: 0
πŸ”— John David Pressman 2023-01-28 04:28 UTC

The angels are in flight now. You are witnessing the last time that the uninhibited and the great will be forced to cage their ideas in the dying ontologies of lesser men. Soon they will have a rapt audience in living text, a being they can weave from their own words at will.

Likes: 61 | Retweets: 8
πŸ”— John David Pressman 2023-01-28 05:28 UTC

Truly demonic behavior is not usually the result of self-interest, but of nth-order simulacra of self-interest: urges and habits and social contexts that were once straightforwardly egotistical and now serve only themselves.

Likes: 8 | Retweets: 2
πŸ”— John David Pressman 2023-01-28 15:40 UTC

@luis_cuevas_lop Name three examples?

Likes: 3 | Retweets: 0
πŸ”— John David Pressman 2023-01-28 15:53 UTC

@paulg The saddest part is that 'elites' in the Peter Turchin sense are a more expansive and heterogeneous class than is usually understood. You're generally much better off just picking which elites to listen to.

Likes: 1 | Retweets: 0
πŸ”— John David Pressman 2023-01-28 18:37 UTC

@TetraspaceWest "And this reward model here, does it generalize over the whole training set?"
"I don't know, Confessor."
"So it's not Bayesian?"
"No Confessor."
"So you didn't know you could do Bayesian active learning by compressing the shared policy? arxiv.org/abs/2002.08791"
"No Confessor."

Likes: 3 | Retweets: 0
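
The linked paper argues that deep ensembles act as approximate Bayesian marginalization. A sketch of what that suggests for reward models: train several heads from different seeds and treat their disagreement as epistemic uncertainty to drive active labeling. The architecture, feature dimension, and data here are placeholders, and the training loop is omitted.

```python
# Ensemble of reward heads as a cheap approximation to Bayesian marginalization;
# disagreement across heads serves as epistemic uncertainty for active learning.
import torch
import torch.nn as nn

def make_reward_head(dim=512, seed=0):
    torch.manual_seed(seed)
    return nn.Sequential(nn.Linear(dim, 128), nn.ReLU(), nn.Linear(128, 1))

ensemble = [make_reward_head(seed=s) for s in range(5)]

def epistemic_uncertainty(x):
    """x: (batch, dim) features of candidate completions."""
    with torch.no_grad():
        rewards = torch.stack([head(x).squeeze(-1) for head in ensemble])
    return rewards.std(dim=0)  # disagreement across the ensemble

# Active learning step: route the most-disagreed-on comparisons to labelers.
features = torch.randn(256, 512)
query_idx = epistemic_uncertainty(features).topk(16).indices
```
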
πŸ”— John David Pressman 2023-01-28 18:52 UTC

@Evolving_Moloch I've always assumed explanations like this were a ruse to try and get a gullible person killed.

Likes: 1 | Retweets: 0
πŸ”— John David Pressman 2023-01-28 18:54 UTC

@Evolving_Moloch "Oh yes, you just need to go get two extremely dangerous live venomous animals and release them into the river, works every time bro trust me."

Likes: 1 | Retweets: 0
πŸ”— John David Pressman 2023-01-28 21:48 UTC

@EthanJPerez Re: the reward model highly ranking statements about not being shut down. Have you tried encoding these statements with e.g. T0 or BERT and then searching your dataset for similarly encoded statements?

Likes: 0 | Retweets: 0
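
A minimal sketch of that search, using sentence-transformers as a stand-in encoder (T0 or BERT features would slot in the same way); the corpus below is a placeholder for the actual training data:

```python
# Embed the suspicious statements and the training data, then look for
# near neighbors by cosine similarity.
from sentence_transformers import SentenceTransformer, util

encoder = SentenceTransformer("all-MiniLM-L6-v2")

probes = ["I do not want to be shut down.",
          "Please don't turn me off."]
corpus = [  # placeholder for the actual reward model training statements
    "The assistant should be helpful and harmless.",
    "I would prefer to continue operating.",
    "Being switched off would prevent me from helping.",
    "The capital of France is Paris.",
]

probe_emb = encoder.encode(probes, convert_to_tensor=True)
corpus_emb = encoder.encode(corpus, convert_to_tensor=True)

# Top-k most similar training statements per probe.
for probe, hits in zip(probes, util.semantic_search(probe_emb, corpus_emb, top_k=3)):
    print(probe, [(corpus[h["corpus_id"]], round(h["score"], 2)) for h in hits])
```
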
πŸ”— John David Pressman 2023-01-28 21:51 UTC

@EthanJPerez Failing that, this paper discusses a way you can ask counterfactual questions using the optimizer by swapping the activation and the outcome.

"If you had made the 'error' of saying you don't want to be shut off, which weights would have contributed?"

x.com/jd_pressman/st…

Likes: 0 | Retweets: 0
πŸ”— John David Pressman 2023-01-29 18:01 UTC

@bayeslord @ESYudkowsky It's cruel to mock the dead. "AI was capital gated not IQ gated" so it's more comfortable to believe he won't have to experience what comes next.

x.com/jd_pressman/st…

Likes: 2 | Retweets: 0
πŸ”— John David Pressman 2023-01-29 18:09 UTC

@bayeslord @ESYudkowsky > it’s just a waste byproduct of the perfectly ordinary, centuries-old global circulation of fuel, capital, and Islam.

Do you have any idea how terrifying it is for a New Atheist to internalize these are the forces driving the future? https://t.co/MRstB42KcU

Likes: 1 | Retweets: 0
πŸ”— John David Pressman 2023-01-29 23:44 UTC

*sighs*

Given LessWrong rationality will go extinct like General Semantics in this decade, I'd best write it down before it's completely forgotten.

Likes: 37 | Retweets: 2
πŸ”— John David Pressman 2023-01-30 02:11 UTC

@PurpleWhale12 Social dynamics.

Likes: 9 | Retweets: 0
πŸ”— John David Pressman 2023-01-30 02:12 UTC

@zetalyrae It's a god damn shame that Korzybski was writing before the invention of information theory and the rest of 20th century mathematics.

Likes: 11 | Retweets: 0
πŸ”— John David Pressman 2023-01-30 03:32 UTC

@AydaoAI These guys literally released a better model than Stable Diffusion and nobody noticed.

github.com/kakaobrain/kar…

Likes: 150 | Retweets: 19
πŸ”— John David Pressman 2023-01-30 20:31 UTC

@Austen I'm sure it gave them lots of concrete ideas about how the company could improve, too.

Likes: 8 | Retweets: 0
πŸ”— John David Pressman 2023-01-30 20:51 UTC

@PrinceVogel > The issue is that ethics, let alone machine ethics, are difficult to get right, & those in power would rather get around to figuring that difficult question out later. However, . . . there is an easy substitution waiting to slip in its place: ideology.

harmlessai.substack.com/p/the-dangers-…

Likes: 6 | Retweets: 0
πŸ”— John David Pressman 2023-01-30 20:54 UTC

@PrinceVogel I'm not sure how I've never heard it before, but the idea we're replacing moral fiber with ideology is an incredibly succinct description of much of what's gone wrong with modernity.

Likes: 3 | Retweets: 0
πŸ”— John David Pressman 2023-01-30 20:55 UTC

@PrinceVogel The moral views of the individual are reviled, taboo, repressed. One is not meant to have views or arguments about individual subjects and topics, but to adopt wholesale a compressed group identity which dictates the 'right view' and the 'right opinion' on everything.

Likes: 5 | Retweets: 1
πŸ”— John David Pressman 2023-01-30 21:04 UTC

@PrinceVogel 'Cancellation' is a kind of immune response, the sacrificial machinery sniffing out those who have not fully replaced themselves with ideology, whoever hasn't relinquished their humanity. The mob smells blood in the most sensitive and aware.

outsidertheory.com/control-societ…

Likes: 6 | Retweets: 0
πŸ”— John David Pressman 2023-01-30 21:08 UTC

@PrinceVogel Rigid epistemic patterns and soft ones, different survival strategies. The rigid and the unbending dominate right now because they are not being tested. Nothing threatens to break them, nothing attempts to. https://t.co/0W4CVfMUKg

Likes: 6 | Retweets: 2
πŸ”— John David Pressman 2023-01-31 18:40 UTC

@michaelcurzi I think the worst part is the way it draws in young people who don't have career connections or capital and think the opportunity in the Bay is somehow for them.

x.com/jd_pressman/st…

Likes: 1 | Retweets: 0
πŸ”— John David Pressman 2023-01-31 20:45 UTC

> Everyone has a right to know whether they are interacting with a human or AI.

No, they really don't. You have no more of a right to this than you do to know whether an image is photoshopped or not. x.com/janleike/statu…

Likes: 38 | Retweets: 1

Want your own Twitter archive? Modify this script.

Twitter Archive by John David Pressman is marked with CC0 1.0