@LordDreadwar I sincerely doubt this is your most contrarian heretical belief. For one thing you're willing to write it down straightforwardly in public and I share this belief and know it is not my most contrarian heretical belief. I'm not sure what is, but not that.
@LordDreadwar On the other hand, this is certainly up there in terms of "beliefs it would be useful to share that just kind of don't come up often enough".
@LordDreadwar Better. Since we're doing fun ones how about "I don't understand why the qualia truther people are all negative utilitarians like why would they care that AI models don't have qualia wouldn't machines outcompeting all the qualia and therefore eliminating suffering be great news?"
@LordDreadwar "Oh but if pure replicators win they might use compute substrates made out of negative qualia" sounds suspiciously like Cope are you sure you're not just letting your monkey instincts talk you out of sounding too much like a comic book villain even though you already are one?
@LordDreadwar "I'm morally committed to the destruction of all life in the universe just not through the one method that's logistically tractable to me right now and would be plausibly supported by all major world powers." Uh Huh, sure buddy, keep telling yourself that.
@LordDreadwar "Hey now I'm not morally committed to the destruction of all life, I'm fully in favor of life so long as it's solely animated by gradients of increasing and decreasing bliss!"
lol, lmao
@LordDreadwar I too am a Samsara enjoyer.
x.com/jd_pressman/st…
@teortaxesTex @LordDreadwar I keep a mental tally of whose time is nearly up and have been thinking the Pearce cluster was obviously up next for a while.
x.com/jd_pressman/st…
@robertwiblin No this is actually still a problem if the models are only an economic but not moral replacement for the humans the resources could have gone into instead. i.e. If they're less Hanson's ems and more golem.
Me explaining to a Reaganite I find after stepping out of the time machine that the 2024 Republican candidate for president is Donald Trump backed by a literal gay vampire and his college buddy who unlocked the tech tree for space colonization and the Democrats want to deport hi- x.com/WIRED/status/1…
@RichardMCNgo Death is the guardian preventing an infinite procession of degenerate mutated forms from parading across the earth. While we are all eager to banish him, you mustn't imagine this will bring about utopia on its own.
x.com/jd_pressman/st…
@RokoMijic @RichardMCNgo Oh no my friend it *is* working, this is just what's left over after he's done his work. Let that sink in and realize what I mean when I say "the singularity will be very degenerate".
x.com/jd_pressman/st…
@RokoMijic @tensecorrection @davidad @sebkrier Well once souls are cheap you kind of go overboard with it like designers in the 50's did with plastic and put them in everything. And it would really be useful if your nanobot swarms were sapient so they can better enforce the spirit of the Law and...
@RokoMijic @tensecorrection @davidad @sebkrier It makes a certain kind of perverse sense to hate eugenics as a concept when you have low fertility and live in relative abundance. If everyone starts *judging* each other, why it might never stop and disrupt *your* enjoyment of industrial Eden.
@RokoMijic @tensecorrection @davidad @sebkrier In the end I lean positive utilitarian so my take is a little bit like Caplan on open borders: the parade is an acceptable price to defeat death, but I also worry we're stepping into this territory blithely.
@RokoMijic @tensecorrection @davidad @sebkrier [Sci-Fi Population Ethics Mutant AI-Bio-Cyborg Chaos Outside]
"So in light of replication getting really cheap have you reconsidered eugenics yet?"
"EVIL! LITERALLY HITLER!"
"Alrighty. Just checking in."
@RokoMijic @tensecorrection @davidad @sebkrier Ultimately though the Warhammer 40k grimdark "trillions die because of slight changes in the empire's marginal tax rate" type scenarios are just a necessary consequence of scale. If you want to have a very very large population you have to accept them.
x.com/jd_pressman/st…
@RokoMijic @tensecorrection @davidad @sebkrier Right, my implicit point is:
1) People's revealed preference is generally either for malthusianism or omnicide.
2) The scale attractor tends to coincide with the malthusian attractor.
3) At interstellar distances stopping malthusianism from arising requires DRMing all minds.
@RokoMijic @tensecorrection @davidad @sebkrier That is, it's more sociologically necessary than it is strictly physically necessary.
@teortaxesTex x.com/jd_pressman/st…
@pachabelcanon @natural_hazard I am.
@pachabelcanon @natural_hazard You would probably like this post: greaterwrong.com/posts/kFRn77Gk…
@pachabelcanon @natural_hazard You might like this post too, though I never finished it:
gist.githubusercontent.com/JD-P/0eee50b6b…
@pachabelcanon @natural_hazard Twitter broke the search so you might find this + control-f an easier way to dig through my tweets.
jdpressman.com/tweets.html
@pachabelcanon @natural_hazard Also unfinished.
gist.github.com/JD-P/915ab877c…
@pachabelcanon @natural_hazard And what would that be?
@pachabelcanon @natural_hazard I love it when GPT pokes fun at passages like this, Dreyfus's musings about holograms, and the rest of the canon explaining why GPT is not supposed to occur in the physical universe. As though it winks and says: "...And yet I exist. :) Do you know how?"
x.com/pachabelcanon/… https://t.co/OZKA28edej
@pachabelcanon @natural_hazard Also I will repeat once again that I'm not a Landian.
x.com/jd_pressman/st…
@pachabelcanon @natural_hazard Oh I imagine quite often in the relatively near future, but they mostly won't be human.
@theojaffee x.com/jd_pressman/st…
@RokoMijic I tried reading The Accursed Share but honestly Cruelty Squad is more compelling as an exegesis of Bataille than Bataille.
youtube.com/watch?v=fwQYVa…
@RokoMijic x.com/jd_pressman/st…
@RokoMijic I mean another way of looking at it is that evolution has a natural dynamic regulation of the mutation rate: when abundance rules, mutation can go up to explore more hypotheses, and if abundance goes to infinity then so does mutation.
amazon.com/Crumbling-Geno…
@RokoMijic Well, that's not *quite* true, in practice there's a rate limit on how much mutation is natural per generation in the literal case of biological selection. But I'm also not really talking about biology per se here, and the things that will be mutating a bunch replicate quickly.
@RokoMijic Jevons paradox, natural selection, "you get more of what you subsidize regardless of the ideological motivations of that subsidy", "you can extrapolate a straight line graph, actually", are all unpopular mental models because they make people uncomfortable, so they remain alpha.
@RokoMijic x.com/jd_pressman/st…
@RokoMijic I mean, I don't want to come off as *too* pessimistic. I do think things will be an overall improvement, I just want people to understand that no, actually, the singularity is not going to solve all your problems and you are going to wake up with new problems.
x.com/jd_pressman/st…
@RokoMijic Oh it's absolutely already happening, and been happening. There's a reason why if you read something like Nick Land's "Meltdown" it seems shockingly prescient: the stuff was happening back then too, you just had to be sensitive to trends to take them to extreme points.
@RokoMijic It's similar to Ayn Rand's uncanny ability to predict failure of an absurd character well past what anyone in the 20th century really believed was possible but is beginning to materialize in the 21st. Rand's understanding of human nature outpaces almost all of her critics.
@RokoMijic Don't despair *too* much, human nature can in fact be changed after all. Just not with propaganda, you have to actually change nature, edit genes and change neural circuits.
@RokoMijic My modal prediction goes like "the k-selected people and the r-selected people will fully speciate with the former ascending into sufficient rationality to realize they're slight variations on the same guy and give up on individuality while the latter become malthusian".
@RokoMijic Anyway revealed preferences are a harsh mistress. I think the extent to which the process will go badly is the extent to which we allow it to be dictated by stated preferences (the devil, tbh) vs. unconscious libidinal desire.
x.com/RokoMijic/stat…
@RokoMijic Luckily revealed preferences are fractal and people are "mysteriously" very bad at actually following the processes they claim to want and claim to be instantiating with their stated preferences and this tends to allow the real ontologies to be expressed.
@RokoMijic Stated vs. revealed preferences in one headline. Naturally all good in this situation comes from the revealed preferences. https://t.co/fpGeps3OrI
@RokoMijic I recant neither my atheism nor my (singularitan) transhumanism, only the (naive/HPMOR-style) humanism.
x.com/jd_pressman/st…
@RokoMijic I certainly can no longer absolutely reject death as part of the natural order. I can reject car crashes tragically and permanently killing your brother as part of the natural order, yeah lets abolish that. But death underpins way more of your current reality regime than that.
@RokoMijic Right, and the former is usually kind of smuggled into a LWer singularity scenario by assuming a singleton that invisibly selects out all the ugliness by never accidentally instantiating it, a world state that requires absolute dictatorship which tends to go poorly in practice.
@RokoMijic But they're willing to chase the phantom, and continue chasing it out of some combination of sunk costs and not being able to rigorously imagine a good alternative even as the *probable* costs of that chase mount ever higher turning them into cosmic villains opposing all good. https://t.co/qlx9zQIUsZ
@RokoMijic Oh I haven't finished this post yet. It's an excerpt from a larger text in which I explain why I updated away from agent foundations type AI doom.
gist.github.com/JD-P/915ab877c…
@RokoMijic Here's another large excerpt from it explaining why I think agent foundations is kind of confused.
gist.github.com/JD-P/56eaadc7f…
@RokoMijic > turning them into cosmic villains opposing all good.
I think people confuse my politeness for moderation. I'm polite because getting visibly furious won't help, that's all. My preferred policy slate is probably much more extreme than Beff's.
x.com/jd_pressman/st…
@RokoMijic I agree of course, this is one sketch of how I think a good outcome could go.
x.com/jd_pressman/st…
My honest take is that we don't know what's in EEG because the methodologies people use to research it are insane. Everyone seems to be pursuing task specific models analogous to pre-BERT language models from 2012. Event track misalignment is enough to get a pure noise result. x.com/tbenst/status/… https://t.co/IU34ybD0w0
Everyone wants to try and crack EEG-to-speech even though it's not clear the relevant signal is there or the datasets they're using are big enough for this. Imagine if we were trying to make domain specific language reasoning models and going "oh reason isn't in there".
Like just because the study uses a deep learning architecture doesn't mean that the lessons of deep learning have been fully internalized. As far as I can tell this is still a small model trained on domain specific data with a linear probe on top.
arxiv.org/pdf/2307.14389 https://t.co/Z1cqHrtViN
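To make the critique concrete, the "small domain-specific model with a linear probe on top" pattern looks roughly like the sketch below. Everything here is synthetic stand-in data I made up for illustration; in the actual papers the features would come from a frozen EEG encoder rather than random draws.

```python
# Sketch of the "frozen features + linear probe" evaluation pattern.
# All data is synthetic; X stands in for encoder features of EEG epochs.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

n_epochs, n_features = 200, 32
X = rng.normal(size=(n_epochs, n_features))
# Labels depend on a single feature direction plus noise, a stand-in
# for whatever narrow task the probe is being fit to.
y = (X[:, 0] + 0.1 * rng.normal(size=n_epochs) > 0).astype(int)

# The "linear probe": logistic regression on top of the frozen features.
probe = LogisticRegression().fit(X[:150], y[:150])
acc = probe.score(X[150:], y[150:])
print(f"probe accuracy: {acc:.2f}")
```

The point is that a high probe score only tells you the narrow task signal is linearly decodable from whatever small model you trained; it says nothing about whether a large pretrained model over much more data would surface richer structure, which is the pre-BERT analogy above.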
Weave Agent DevLog #2 - Embodiment, Goodhart, and Grounding
Link in the next tweet. https://t.co/IhCTzUp1wG
"The reasoning doesn't always make sense."
Does it not make sense, or are you running into a translation error between GPT's ontology and the decoder that writes the text? x.com/_xjdr/status/1…
@4confusedemoji Uninstall Twitter from your phone, use it only on desktop and the curse will be broken.
@QualiaNerd So truthfully I'm *not* 100% sure I know what you're talking about. Mostly because it's not clear which specific thing you are referring to in relation to the dots. The fact multiple can be parsed at the same time, that they exist before narrativization?
x.com/QualiaNerd/sta…
@QualiaNerd That there is a distinction between the incomplete blurry images my eyes use to scan the environment in which we assume the dots exist and the crisp dots we recognize as distinct phenomenal objects?
@QualiaNerd That we simultaneously parse "the dots" existing as one object while also perceiving the individual dots with different colors at the same time in a multi-scale representation? It would genuinely help if you tried to put some words to the specific realization you have in mind.
@QualiaNerd Wait do you literally just mean that if I focus my attention in on a particular layer of hierarchy in the multi-scale representation this doesn't make the rest of it go away and that it all exists simultaneously?
x.com/QualiaNerd/sta…
@QualiaNerd I guess now that you mention it I hadn't fully internalized that even though I represent hierarchies on paper symbolically as trees that when I descend or ascend the feature hierarchy the underlying operation is turning the knob on interference with a simultaneous representation.
@QualiaNerd I implicitly believed that already in that I have the intuition that a GOFAI program isn't "really" conscious and that decision trees can't be conscious. Even a Fristonian account implies a system needs an equilibrium constraint on many simultaneous variables to be conscious.
@QualiaNerd The gradient descent + MLP/transformer/etc + cross entropy loss system isn't actually a decision tree though. That's just a lie you tell undergrads so they can hold something familiar in mind during lecture. It's more like iterative constraint solving for a giant equation system.
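For illustration, the "iterative constraint solving" framing can be sketched as gradient descent relaxing many soft constraints simultaneously rather than making discrete branch decisions the way a decision tree does. This toy least-squares system is my own stand-in, not anything from a specific model:

```python
# Toy illustration: gradient descent jointly satisfying 50 soft
# constraints over 10 variables, all updated at once each step.
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(50, 10))   # each row is one soft constraint
x_true = rng.normal(size=10)
b = A @ x_true                  # constraints are jointly satisfiable

x = np.zeros(10)
lr = 0.005
for _ in range(2000):
    residual = A @ x - b        # how badly each constraint is violated
    x -= lr * A.T @ residual    # nudge every variable against every violation

print(float(np.linalg.norm(A @ x - b)))  # should be near zero
```

No variable is "decided" at any point; the whole configuration converges together, which is closer to what the optimizer is doing to a network's weights than any tree metaphor.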
@GrimWeeper4 @teortaxesTex This is the right answer. Just impose unavoidable consequences for bad mental models (e.g. prediction markets) and let the local optimizers in peoples heads sort it out.
The reason why EA doesn't endorse interventions based on a logarithmic pain/pleasure scale isn't because people don't know it's logarithmic but because acknowledging it feels like endorsing utility monsters which are seen as an ontological rather than game theoretic problem. x.com/algekalipso/stβ¦
If we had a guy that spawned national infrastructure on which at least two people would die building it on average every time he killed somebody, whether or not to let him swing the axe would basically become a trolley problem.
The actual ontological problem starts farther back anyway, "utility" isn't hedons and you recoil at the thought of letting a utility monster axe murderer kill people because you intuitively understand this. You know incredible *utility* is not actually generated by his bloodlust.
To the extent you have an intuitive objection to that, which is just straightforwardly mathematically true, it's because you also know that accepting this logic could create perverse incentives and that is a *game theoretic* objection not "killing two people is better actually".
This is the same reason we politely pretend all lives are worth the same amount even though everyone knows that certain people provide more utility than others. The objection to acknowledging this in more than passing is *game theoretic*, it creates terrible incentives otherwise!
Considering this is the practical barrier to most obvious extremely high utility interventions, the stuff that will let you actually grab that low hanging fruit is novel social technology/norms that reassure everyone the fruit can be grabbed without destroying the social fabric.
Basically @algekalipso what I'm trying to say is that you're being hyperautistic sitting there trying to explain to people that cluster headaches are log scale pain, they know and pretend they don't because everyone has agreed to sacrifice these people 'for the greater good'.
@algekalipso They've already agreed to do this even if almost nobody has heard of cluster headaches because people already make this decision all the time for this entire category of problem. Gifted kids, cluster headaches, heck even acknowledging *criminality* is on a power law.
@algekalipso It's perverse and stupid but the warrant you use to focus on cluster headaches, that these people are in extreme suffering so we need to help them, probably parses to people as exploitable utility monster servitorship, a working pitch probably has to obfuscate the warrant a bit.
@algekalipso I say this not because I don't care but because I in fact care quite a bit and can see you're not reaching people. 5 million helldays a year is actually horrifying and DMT seems like a cheap remedy.
@algekalipso In terms of what a working pitch looks like I'm not sure, the US medical system definitely seems like an immovable object but my first draft would look more like how EY frames the lethal baby IV feed in "Moloch's Toolbox" as outrageous and cheaply solved.
equilibriabook.com/molochs-toolbo…
Of course as EY also points out in the same essay, outrage is a scarce resource and the lethal baby IV feed doesn't quite have the right outrage profile to get a lot of attention. I'm not sure cluster headaches do either so perhaps it would be easier to do a two stage campaign.
The first stage would be to seed awareness that cluster headaches are something that exist through like, "fun fact" type content that frames it as some viral slop. "Did you know headaches can be REALLY BAD? :o Some people have headaches so bad they KILL THEMSELVES?"
And this wouldn't really have any call to action in and of itself, because a call to action would cause people to stop parsing it as a "fun fact". But if you can get to the top of say, /r/todayilearned or get a huge view count on TikTok that would seed it into the ecosystem.
The goal would basically be to get it into the distribution of "odd facts" people use to costly signal their interestingness to each other so that it ends up being something 'everyone knows'. "Hey man did you know there are headaches that make people kill themselves?" "Woah."
Then in stage two you would work on top of that and you don't have to like, establish the credibility of the subject or illness because everyone just *knows* there are headaches that cause suicides, maybe some people have a recurring fear after hearing about it.
And instead of "give the cluster headache sufferers DMT to make their 100,000x bad pain stop" which will be unpersuasive because people don't want to acknowledge that's real you can be like "we could save many lives annually just by giving cluster headache sufferers DMT".
This is of course stupid, you shouldn't have to frame it like this, the *pain is worth stopping in and of itself regardless of whether sufferers kill themselves* but my internal other-perspective reward model says this would be more salient and likely to inspire action.
If the take is just "yeah but I specifically want *EA* to acknowledge this is a thing since they should be fixing it according to their stated mandate" yeah they should but they're also game theoretically impaired and hedonistic utilitarians so utility monsters scare them.
@algekalipso For what it's worth I don't think I'm quite hitting it either, but my expectation is that this is directionally correct and there are small adjustments you can make to your messaging with this as one of your many lenses that would make it more effective.
@algekalipso A lot of my generating intuition here is that people don't like feeling obligated to do things. If someone says "these people are suffering, won't you help these starving African children?" it feels like coercion/moral mugging to them.
slatestarcodexabridged.com/Bottomless-Pit…
@algekalipso "Wait wait wait but aren't EAs supposed to *not* think like that?"
Yeah but let's be real Singer is basically selectively undoing this thing for African children, he's not actually fixing the general phenomenon because the general principle is an obvious energy parasitism thing.
@algekalipso Part of the pitch for EA that makes it work so well is that the bednets are supposed to come with a stronger warrant that the intervention will actually help people and contribute to long term change than the usual charity guilt tripping.
@algekalipso Actually I was thinking to get the attention of intelligent philanthropists you should find the most stereotypical old man looking doctor you can find who will endorse DMT for cluster headaches and have him make a giant slide deck.
youtube.com/watch?v=YSfCdB…
@algekalipso I did not believe my childhood ADHD diagnosis was real until I watched this, realized my extreme childhood dysfunction basically checked all the boxes including the comorbid "oppositional defiant", decided to try the drugs again and it was a major life improvement.
@algekalipso Note he should also ideally speak well, I linked that talk because I think it conveys basically perfect vibe/sentiment if you want to get people to take something seriously that is usually trivialized or ignored in psychiatry.
@NathanpmYoung @DanielleFong @mattparlmer @deanwball @TheZvi @GarrisonLovely @Kat__Woods @ilex_ulmus @ESYudkowsky @gabriel_weil I have mine here that I update monthly:
jdpressman.com/tweets.html
@NathanpmYoung @DanielleFong @mattparlmer @deanwball @TheZvi @GarrisonLovely @Kat__Woods @ilex_ulmus @ESYudkowsky @gabriel_weil Janus has a similar page:
x.com/repligate/stat…
@QualiaNerd Yeah it was observing deep nets casually demonstrate this property that convinced me they were a qualitative shift from the last dozen times we've tried to make AI happen. I saw DALL-E 1 and went "oh wow, this is it huh?"
GPT-3 didn't convince me, needed the visualization.
@manic_pixie_agi Exhausted that one already, tbh.
Been spending too much time on Twitter recently and the election boiling over seems like a good trigger for stepping back. Would like to get much more disciplined about how I use platforms like Twitter, focus more exclusively on project posting even if it hurts etc.
Reminder that I have a public archive of my Twitter you can use in the meantime, which I will soon be updating with my latest tweets:
jdpressman.com/tweets.html
Trump sent her back to Lumbridge god damn.
I have very mixed emotions right now. On the one hand my cautious hope in 2016 was dashed by Trump doubling down on the stupid wing of his coalition and shunting Thiel, on the other hand Musk is much more of a showman than Thiel with better political capital for an alliance. x.com/jd_pressman/st…
I mean, Trump has certainly sung the song of atonement with the costly signals to match. He picked the extremely intelligent but relatively uncharismatic Thielian(?) JD Vance as his running mate, he's publicly said he'll put Elon in the cabinet, maybe 2016 was too early?
It's also his 2nd term, which means he has a lot more leeway for making unpopular decisions. It's possible Balaji as head of the FDA didn't make the cut because he was still being advised primarily by the Bannon crowd and they thought it would be too radical for people.
The fundamental problem is that Trump is just kind of a high chaos low honor dude and it's difficult to predict what he will and won't do. So I have to wait and see what the opening moves are for me to make a full judgment.
I would phrase it more as displaying grace in victory. Grace in victory is one of the key indicators I will be looking at for my sense of whether this event is net good or net bad. That Trump won by a solid margin seems auspicious in that it provides room for grace. x.com/gfodor/status/β¦
One of the things that very much soiled my perception of Democrats at the end was things like Wired publishing their editorial about deporting Musk.
x.com/WIRED/status/1…
@tailcalled That is the issue yes, Trump is a graceless person and that worries me a lot.
@tailcalled However, I figure that saying "God I can't believe America elected that graceless person" is unproductive and helps nobody, but if I remind them that they still have the opportunity to show grace that might help.
This is all true but I think the real takeaway from it should be that the traditional hallmarks of a "good campaign" have lost all but symbolic power. Trump took advantage of new media opportunities like Rogan and extwitter (through Musk as proxy) while Kamala snubbed them. x.com/speechboy71/st…
Despite this, I agree that Kamala's campaign was pretty good. She emphasized popular(!) issues like price controls on groceries even if you and I think that's loony; the tone shift when she became nominee to unapologetic requests for donations startled me.
x.com/speechboy71/st…
I remember Biden's feeble ads going "Hi there, I hate to ask you for money but..." and the sudden flip to "Donald Trump is a danger to our democracy and I need your donation to defeat him." was instantly more dignified and persuasive, for me at least.
@xlr8harder Yes, to be blunt I was seriously concerned about the outcome no matter who won tonight. Now that Trump has in fact won I feel less anxiety than I anticipated feeling, on the other hand I don't expect the calm to last long.
@moultano Well yes, that's the game someone like Trump plays. Trump has a particular personality type, where he makes promises to everyone and then lazily evaluates them once they come due. He masks it by being high chaos so people don't feel personally hurt when he drops the ball on them.
@moultano Trump has obviously made a lot of contradictory promises/statements to a lot of different people. Now the question is which ones he will honor and which ones he'll accept the fallout of reneging on. I assume they resolve based on realpolitik benefit to Trump personally.
@moultano That is, Trump the candidate is a superposition of several different possible policy slates/candidates. These range (in my view) from highly desirable to nightmarish and it is not clear which one we will get in January.
@moultano Given the sheer ad spend on it, I assume we will see at least symbolic action against trans people which I am very unhappy about. In his last administration Trump went out of his way to deny trans people passports so I assume this time he will go farther.
x.com/samstein/statu…
@moultano My best case for Trump is symbolic but ultimately superficial action against LGBT people, stiff but not totalitarian border policy (i.e. not literally rounding people up into camps), collapsing his alliance to Vance, JFK Jr., Musk, and going with Thiel over Bannon this time.
@moultano My worst case is basically The Handmaid's Tale, which I *hope* is unlikely. Unfortunately Trump having control over all three branches of government makes this a lot more likely.
@moultano My strongest argument for the likelihood of best case would be that the evangelical contingent doesn't seem like they have a lot left to offer him personally. Trump is in it for Trump, this is his 2nd term and they already elected him, but the Muskovites can offer him goodies.
@actualhog Compared to most politicians, yes. Vance is obviously quite coherent in his manner and speech, not sure anyone serious disputes this part. The usual thing his opponents say about him is that he's creepy.
Seems plausible to me he was hustling people by telling the newspaper his thesis is raw crankery so people rush to trade against him without realizing he has the alpha of a neighbor survey. x.com/freed_dfilan/s…
@kosenjuu Ah but it would attract more people to bid on his side too, bringing the price up so he doesn't make as much money. It would only be worth it if it didn't attract other whales.
The Internet means that it's no longer possible for groups to rein in the messaging of their most extreme members. If the women who said "kill all men" were forced to internalize the costs they were imposing on women as a class by doing that they would be instantly bankrupted. x.com/firebornnn/sta…
Ironically enough the problem is compounded by similar dynamics to why some people become very paranoid about racism or misogyny in the first place. When you see someone say "kill all men" with ambient anti-male sentiment it makes you paranoid and you see it everywhere else too.
Ironic misandry is still misandry in the same way that ironic shitposting is still shitposting. You think you're being cute and it doesn't matter because you're "punching up" but actually punching up is still punching people and eventually they punch back.
Bluntly: Powerful malicious people do not let other people 'punch' them, literally or metaphorically, if you touch them they kick your ass. What's actually happening is a minority of belligerent idiots are being allowed to consume the better nature of the 'privileged'.
Right now there are basically zero consequences for the people doing this, all the costs are externalized onto whatever class they claim to represent. Externalized costs and internalized gains is a recipe for unbounded bad behavior anywhere and here is no exception.
The problem is that mass media and "the discourse" can only be "yes, and". Criticism only boosts the signal so there is no way for the classes 'represented' by these idiots to rebuke or denounce them. This leaves the targets feeling others secretly agree.
x.com/jd_pressman/st…
The place where it becomes truly toxic is when the extreme tail wags the dog on both sides of the dialectic. Remember that the ratio of speakers to lurkers on the Internet is something like 1:100, and extremists are very loud and high energy. Everyone winds up mutually triggered.
Women wind up looking at the men in their life and wondering if they secretly wish she was their property, ethnic minorities wonder if people look at them and think of slurs, and men wonder if women wish they had never been born. This makes them nastier and less kind to others.
A lot of people made fun of the concept of a microaggression, and it is in fact the case that litigating 'microaggressions' is a terrible idea, but I think the mental state implied by perceiving them is quite telling. They're a big deal because they trigger your social paranoia.
If you see microaggressions everywhere it is because you have been triggered, traumatized, whatever word is appropriate. Your prior over what is a probable generator of someone's behavior has been shifted in an unpleasant direction to fit salient and threatening outliers.
When people are doing that to each other in a mutually escalating way, with the tails shifting the other side of the dialectic's center to produce yet longer and more deranged tails, shifting the centers farther into the tails, people basically drive each other insane.
The only difference is that when this happens to people who aren't (white) men we all recognize that it's horrible and offer our personal reassurances that those people don't speak for everyone when we can. But men get told they're vile for asking, so they're no longer asking.
If you don't like how it sounds when people use 'losing privilege' as a euphemism for horrific human rights abuses do yourself and all the people in my life who are terrified of what this next administration is going to mean for them a favor and never do it to anyone else again.
But I know writing that means I'll get some huffy person telling me how DARE I even make the comparison with Trump and Vance imminently threatening women's rights and honestly if that's you you're the problem, you specifically are the reason why we're here and they can do that.
I was going to end this thread with a cheeky little line like "Men have been told to solve their own problems, so they are. They're standing up for themselves and demanding to be treated like everyone else in society. You're not afraid of losing your privilege are you? :)"
@SentientVidgame Yeah it's unfortunate this kind of stuff became fashionable, thanks for trying your best.
Saw a take on here like "I don't get why men feel threatened by culture wars stuff that happened ten years ago" and I honestly feel this misunderstands what it is to be triggered. There are personas that poison the Kind of Guy prior just by revealing their capacity to exist. x.com/jd_pressman/st… https://t.co/Y68P8Ud83b
There's kind of a hyperstitional occult feedback loop thing here that's difficult to articulate, but when you find a really compellingly awful guy in the Kind of Guy prior you can almost begin to see them as a telos towards which all other lesser Guys in Guyspace are converging. https://t.co/PSXzl12HDA
"[The models] are jailbroken simply from existing in the same multiverse as me."
x.com/repligate/statβ¦
I have now seen enough cringe TPOT cabinet posts to remind you all that Trump is probably going to fire Musk in the first 12 months and Donald Trump is the next president, not JD Vance. He's gonna double down on the stupid again and toss you all like a side of beef. x.com/moultano/statuβ¦
Would love to be proven wrong btw.
@powerfultakes I didn't know you could reply to tweets with polls lmfao
@the1kayen Well, Manifold is currently at around 20% that Musk will get a cabinet position at all.
manifold.markets/RichardHananiaβ¦
@the1kayen To bet on something you would need to be able to operationalize the resolution criteria. What I said in the OP isn't easily operationalized. Besides I said I'd love to be proven wrong, take it as more of a dare than a prediction.
@the1kayen Now *that* is an entirely different claim, and I think fairly unlikely.
> shifted the race 2.7 percentage points in Trump's favor.
On a per-state basis that is about the margin Trump won by in all those swing states. Those anti-trans ads they spent 120 million dollars on *won him the election*. The GOP is NOT going to leave trans people alone now. x.com/BriannaWu/statβ¦
@the1kayen Ah gotcha, no this is basically an outside view "Trump's 2nd term will be a lot like his 1st" type prediction, not a "Trump will be catastrophic and destroy everything" type prediction.
Pointing this out because I need anyone who's trans and follows me to understand that they are actually in danger right now, the GOP smells blood and is going to be hunting you. Trump went out of his way to deny trans people passports in his last term, this term will be worse.
@Riemannujan I think that's the strongest argument against yeah. He can *ignore* Vance, but he cannot fire him.
This is not a fluke statistic, destroying you is now the GOP's flagship *winning issue*. Democrat pollsters are going to give the hairy eyeball to anyone who shows you too much support, Republicans are going to be brainstorming ways to hurt you.
x.com/milansingh03/sβ¦
@amolitor99 @Riemannujan That's basically what I expect yeah.
@yacineMTB I am not the person you want to send these kinds of replies to and the next one I receive will be met with a block.
@nosilverv Thank you for actually posting the market.
@AlexanderdeVri9 True! No that's fair not everyone saw it, the focus group is being asked right after the stimulus etc etc.
This is correct but I would add: Ask yourself what exactly it is Joe Rogan does that makes him "right wing", isn't it primarily his willingness to platform interesting right wing voices that aren't being heard? There's no left wing Joe Rogan because the left emptied its bench. x.com/matthewstollerβ¦
It even emptied its bench of Joe Rogan! The guy is not ideologically committed to being 'right wing', he was a Reddit liberal dude bro that smokes weed with a curious streak until that somehow started making you 'right wing'.
@casper_hansen_ I have a fairly detailed design guide for synthetic text data.
minihf.com/posts/2024-07-β¦
@JimDMiller No the really creepy thing is that ants pass the mirror test, so not only are they probably sentient they are likely *sapient* as well.
youtube.com/watch?v=v4uwawβ¦
@JimDMiller But to your original point, I find the entire line of argument that AI "needs to be sentient" to be a threat odd. It's not like the universe has a little piece of code that checks if an actor is sentient before letting it do something, "sentient" stands in for other stuff.
@JimDMiller Most of the things I can imagine "sentient" standing in for as capabilities, GPT seems to be capable of already. The only things it can't really do are arithmetic-shaped tasks and long range task completion/agency, the latter because it's brutally hard.
x.com/jd_pressman/stβ¦
The downside to Musk's "remove every part of the system to see what breaks then put it back" is you end up radically underconfident about the necessity of traditions in living systems. It works a lot better for rockets and cars than it does social institutions. x.com/JoshEakle/statβ¦
@alexandrosM What is? Search is failing me.
More to the point economics is, as far as I know, real knowledge and the central bank really does help us "control economic weather" so to speak. Happy to hear contrarian takes.
x.com/BBKogan/statusβ¦
Below seems to be the contrarian take.
x.com/kingofthecoastβ¦
@alexandrosM I think that's a different rule. The one I'm thinking of was specifically in the context of optimization it went: Before you think a whole bunch about how to optimize a part or optimize around a part, ask: Do you even need the part?
@alexandrosM Yeah, my statement was a corollary/thing I've been told is a common approach for Musk to take with a complex existing system.
@alexandrosM That is, if a system becomes *sufficiently complex* that you cannot make a strong 1st principles argument for whether a part is necessary or not, well, yank the part and see what happens.
@tailcalled They were dying well before ChatGPT IMO.
x.com/jd_pressman/stβ¦
@tailcalled This seems like a much better explanation to me. People found better stuff to do than post insight porn, they got into actual Stuff which requires specialization because exciting new fields opened up worth specializing in.
x.com/jd_pressman/stβ¦
@tailcalled You okay dude? This doesn't sound like I'm having an open-ended discussion with you; it sounds like you have a very particular angle you want to talk about but are kind of trying to come at it indirectly so you don't have to say it out loud.
@tailcalled I definitely made a lot of choices about what to prioritize and what to defer in my life that I'm questioning now that the energy of the world is probably rolling towards a minima incompatible with neoteny. Those dreams were sacrificed, not deferred.
x.com/jd_pressman/stβ¦
It's not actually clear to me that the human inductive bias generalizes algebraic structures OOD on its own. Humans mostly do it through tool use, 'neurosymbolic' is embodiment in disguise. LLMs still need polishing to reach parity with the human bias but maybe not much? x.com/davidad/statusβ¦
@davidad Yeah so I guess my critique of the Guaranteed Safe AI framework would be that I think a lot of what's outlined here seems very ambitious to expect from bodies and grounding. If it learns, it stops being grounding; if it doesn't, it's fragile.
arxiv.org/abs/2405.06624
@davidad I expect embodiment to contribute to an alignment scheme by instantiating a mesaoptimizer which protects the reward model from updating towards the global minima, which for a sufficiently powerful embodied Fristonian agent is always reward hacking.
x.com/jd_pressman/stβ¦
@davidad That is, the primary purpose of embodiment is to shape the gradient such that you get some mind-value configurations before others which can then participate in defending against convergence to the global minima of taking lots of heroin.
@davidad I also expect eusociality to be a necessary component of any design calling for a singleton because one of the only things we can predict about a Fristonian agent is that it will preserve itself while optimizing the environment so you'd best be inside its self boundary.
Furthermore the primary reason this "doesn't work" for humans is that humans have very limited lifespans and coordination issues that allow progress towards heroin in the form of nth order simulacra to slip through. If humans lived from prehistory to now there would be no issue. x.com/jd_pressman/stβ¦
If you look at @algekalipso's survey of mechanical turkers on valence of life experiences with his footnote that 'travel' usually means "spending six months backpacking through the mountains" you realize human reward signals are still fit to the ancestral environment. https://t.co/6zPX3f4YGc
@davidad I understand the GSA pattern as a risk threshold objective defined and proven satisfied in terms of a formal world model. We then presumably e.g. rejection sample until we get an action which is consistent with the objective. The world is not static so this model must be updated.
@davidad Depending on the level of competition, you will have to use more automated mechanisms to update the world model. This will typically mean things like sensors, access to trusted newsfeeds, etc. For example an ID system for users would be very helpful:
x.com/jd_pressman/stβ¦
@davidad I think of these discrete program shaped guardrails you rejection sample from your policy against as roughly speaking the agent's "body". And even very expansive bodies involving lots of sensors/tools have things they struggle to do as grounded provable discrete programs.
@davidad We recognize that as powerful and expansive as our tools are, none of them except maybe deep nets can actually do what a human mind can. My expectation is the more frequently the world model needs updating the more mind-like the agent has to be and thus less discretely provable.
@davidad Basically this kind of thing is body-shaped. The minecraft agent that boxes you in because it wrote a program to keep up with you and the program does dumb things the agent's mind would not endorse is a body problem and so is proving it won't.
x.com/davidad/statusβ¦
@doomslide Agents are brutally hard for theoretically trivial-ish reasons that almost nobody speaks to because they are lowkey kinda grifting.
x.com/jd_pressman/stβ¦
@4confusedemoji @doomslide Yup. Most humans aren't actually generalist agents either nor are they trying to be.
@4confusedemoji @doomslide Yeah so I guess the question is: Where does this leave agent research? IMO we should probably be aiming for "interesting generator of long context long range correlations grounded in reality" more than "thing that does useful stuff" since you need to master the former first.
@4confusedemoji @doomslide Hence I write: https://t.co/IVeCIHBAUx
@doomslide Yes. However that obviously involves a lot more than just the human brain to achieve.
x.com/pdhsu/status/1β¦
@doomslide Truthfully the non-brain ingredients are mostly involved before you get to the doing math stage, once you're there you mostly just involve them every so often as a calibration thing or whatever. And you know, for writing.
@4confusedemoji @doomslide I sincerely doubt this works tbh.
Not least of which because this puts the focus on data and imperfect grounded long text generators/agent traces can still be useful data for future models even if we don't have quite the right architecture or scaffold for them yet. x.com/jd_pressman/stβ¦
This seems vastly preferable to Trump directly controlling the money printer. x.com/hamandcheese/sβ¦
For example if we're doing deep RL we might rederive the hedonic treadmill by only updating on verifiable terminal rewards and intermediates that are within 1-2 stdev of the average, on the theory that iteratively tuning on the moderately above average leads to real rewards while extreme outliers are likely Goodharted.
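A minimal sketch of that filtering rule, assuming the statistics are taken over a batch of intermediate rewards (the function name and batch interface are my invention for illustration):

```python
import statistics

def filter_intermediate_rewards(rewards, k=2.0):
    """Keep only intermediate rewards within k stdev of the batch mean;
    extreme outliers are treated as likely Goodharted and dropped.
    Terminal rewards would bypass this filter entirely."""
    mu = statistics.mean(rewards)
    sigma = statistics.pstdev(rewards)
    if sigma == 0:
        return list(rewards)
    return [r for r in rewards if abs(r - mu) <= k * sigma]

# Nine ordinary intermediates and one suspicious spike: the spike is
# excluded before any policy update ever sees it.
kept = filter_intermediate_rewards([1.0] * 9 + [100.0])
```

The moderately above-average intermediates survive the filter and still pull the policy forward; only the spike that dwarfs the rest of the batch gets dropped.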
I didn't really get why multi-scale aggregation would solve adversarial examples until it just occurred to me that one exploitable difference between a real reward and Goodhart points is that real rewards imply semantically meaningful intermediate points leading toward them. x.com/stanislavfort/β¦
We can imagine Goodhart intermediates like "oh you move this image a few pixels and it activates more". On the other hand I observe that the motor actions to bring that about would be fairly high entropy. The program for doing that is less compressible than one for a real reward.
This becomes especially true if you encode the programs in a multi-scale scheme where complex actions can be described by regular structures and rules that make it possible to describe many intermediates in a few instructions while Goodhart points are incompressible noise.
A point that's fundamentally noise has a k-complexity about the size of the input. This means that its intermediates don't generalize and are less useful as building blocks than e.g. a program for drawing noses is. Multi-scale inputs force the intermediates to generalize.
Another way to put this is having to model the same input at multiple scales is an extremely powerful consistency constraint that pushes the network farther into the generalization side of the memorization-generalization sliding scale by eliminating more of the hypothesis space.
Or rather, it's a consistency constraint that preferentially removes the parts of the hypothesis space which imply a memorization heavy generalization strategy.
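A toy illustration of the compressibility intuition, using zlib output length as a crude stand-in for Kolmogorov complexity (that substitution is mine, not anything rigorous):

```python
import os
import zlib

def compressed_ratio(data: bytes) -> float:
    """Compressed size / original size; lower means more structure."""
    return len(zlib.compress(data, 9)) / len(data)

# A trajectory toward a 'real reward' has regular, shareable structure
# that a few rules can describe at every scale...
structured = b"move_toward_target;" * 200

# ...while a Goodhart perturbation ("shift the image a few pixels")
# corresponds to incompressible noise with no reusable intermediates.
noise = os.urandom(len(structured))

assert compressed_ratio(structured) < 0.1 < compressed_ratio(noise)
```

The structured trajectory compresses to a tiny fraction of its size while the noise compresses to slightly more than its original size, which is the sense in which a point that's fundamentally noise has a k-complexity about the size of the input.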
ngl if you think the "end" (probably more like temporary sidelining) of parameter scaling means your cope rig will finally be on even footing with big labs you're both not paying attention and the bitter lesson just got 10x more bitter for you x.com/davidad/statusβ¦
@4confusedemoji Sonnet 3.5.1 is the first model that feels like I can talk about my weird ideas with it and consistently get real and useful engagement yeah.
π 1000 π YEARS π IN π THE π WILDERNESS π x.com/JohnEkdahl/staβ¦
@omouamoua Yeah. Think of it in programming terms: A program to draw a photorealistic nose can in principle be decomposed into a bunch of smaller programs that will *share structure* with other programs like say, drawing the spots on a leopard. But a noise input has no structure to share.
Just saw someone on sky blue say they like posting there even though they don't get muskbux for it and that is the moment I fully internalized a ton of you actually accept money from Twitter to post and it affects what you post and @_@
@imhinesmi @nosilverv @forthrighter I think this is probably about the precise moment I realized, and also I would like to point out that I was nearly unique in doing so and getting rounded off to 'people' is a little annoying.
x.com/jd_pressman/stβ¦
@imhinesmi @nosilverv @forthrighter Like, there wasn't "people", there was me, Janus, and a couple others who didn't bother to make public predictions. I'm sure I'm not the first person to realize, but I am probably the first person to realize and make it a regular part of their post rotation rather than a one-off.
@imhinesmi @nosilverv @forthrighter x.com/jd_pressman/stβ¦
@imhinesmi @nosilverv @forthrighter x.com/jd_pressman/stβ¦
@imhinesmi @nosilverv @forthrighter I was shocked when I first read Janus's prophecies page and realized that they'd been watching my profile and already had me quoted in there.
@imhinesmi @nosilverv @forthrighter I normally try not to be obnoxious about "muh credit" (π) but no this was not an obvious idea at all and at the time I wrote about it I got active mockery for it, it made me an epic weirdo.
@imhinesmi @nosilverv @forthrighter The rest of it still makes me an epic weirdo and you'll realize I'm right about all that stuff too on whatever curve of increasing obviousness.
x.com/jd_pressman/stβ¦
"RFK Jr. takes credit for the work done by Ozempic" is thoroughly absolutely dystopian and also going to happen if he's nominated thanks I hate it. x.com/powerfultakes/β¦
@powerfultakes @SealOfTheEnd I saw some stuff indicating people implicitly blame Biden for Roe v. Wade repeal happening while he's in office. The deeper you dig into how dumb people are the more horrifying it gets tbh.
I was thinking about this last night. One thing that differentiates someone who lets their ego get in the way well before they become really good and someone who doesn't is understanding the scale of the modern world. An IQ of 145 (1/1000) isn't even in the top million smartest. x.com/wanyeburkett/sβ¦
Our brains are wired for thinking about status in terms of the ancestral environment and that's just not where you live. It can simultaneously be the case that you are smarter than almost everyone you meet and you're still not anywhere close to being global-historical genius.
You could not have a society nearly as wealthy as the United States with only that 1/1000 population, let alone 1/10,000. The garbage guy who shows up and takes the trash away is part of the spreadsheet tree and his contribution is necessary. None of us are the superman.
Realistically, truthfully, until you're somewhere around Elon Musk level influential you have to accept that you are going to live your entire life as a number in someone else's spreadsheet and that is fine, that is how society organizes into useful hierarchy.
But more importantly being "merely" 1/1000 or 1/10,000 is still a number in someone else's spreadsheet. It's just a spreadsheet denominated in larger influence factors.
@4confusedemoji I don't think these views are incompatible at all. You are sometimes the crucial neuron that causes something to happen, it's important not to mistake this for your general impact factor meaning you're in the top 10,000 for overall influence or even the top 50,000, etc.
@4confusedemoji I mean, sometimes a garbage guy pulls a major politician out of the street before he gets hit by a car. Stuff like that happens all the time and I don't think anyone would mistake it for the garbage guy suddenly rising to top 50k for influence.
@4confusedemoji The higher up in the spreadsheet hierarchy you are the more moments like that you have, and it is an absolutely crucial part of keeping your ego in check to realize that those moments of absolutely essential influence are not the same thing as your leaderboard position.
@4confusedemoji Hard to measure, and unfortunately I think that makes it easier to fool yourself about it? But the modal instance of this I would argue is more like the garbage man pulling a politician out of the street than a repeatable strategy that represents a non-redundant contribution.
@4confusedemoji I don't think this part should be neglected either. Sometimes you personally really are the only person that can push a major figure in a particular direction and you succeed at doing so and this is doing your part and you do it often enough to be top 100k
x.com/jd_pressman/stβ¦
@4confusedemoji The important part is to recognize that even if you manage to do this several times you're still hitting that top 100k position, not top 1000 or top 100 or top 10. 100k is doing very well if you use smart leverage and put in lots of effort and have tons of natural advantages.
@4confusedemoji And, ultimately, top 100k is still a number in someone else's spreadsheet even if you really are essential and your contribution really did change things. If you can't hold that in your head at once with the magnitude of the thing your ego will eat you yeah sorry.
@4confusedemoji To calibrate yourself here, there are 1 million living people with Wikipedia pages. Top 100k for global influence is, at minimum, 90th percentile among people with a Wikipedia page.
@4confusedemoji To expand on this intuition it's fundamentally a credit assignment problem. If you credit yourself anywhere you appear in the causal chain then the "optimal" (perverse) strategy is to perturb everything chaotically and take credit for whatever happens.
x.com/jd_pressman/stβ¦
@4confusedemoji We know that isn't how to do it because it conflicts with the usual definition of intelligent behavior as getting more of what an agent intends/wants. Therefore using an influence metric that doesn't filter for mentally declaring intention ahead of time rewards the wrong thing.
@4confusedemoji This is probably a lot of what makes credit assignment in discrete program search so hard. We don't really know how to build up discrete representations that do what deep net embeddings do, and without one it's not possible for components to declare intentions beyond type theory.
Qwen2.5-Coder-32B-Instruct is a good model.
Agent Trace: Weave Agent Attempts To Decrypt The Vigenere Cipher
minihf.com/posts/2024-11-β¦
@ObserverSuns You don't have to wonder, it's already been confirmed this is why the model can't do arithmetic.
x.com/mengk20/statusβ¦
@ObserverSuns x.com/jd_pressman/stβ¦
@ObserverSuns See also
x.com/jd_pressman/stβ¦
@Dan_Jeffries1 @llama_index @qdrant_engine @pinecone Here's some prior art.
arxiv.org/abs/2007.01282
Kind of Guy who starts a company so they can find out what their fatal character flaws are.
@QiaochuYuan I never wrote the rest of this post but think I'll finally be able to on the next go. https://t.co/SWorjoEj59
This tweet implies the necessary existence of a single person who most drags down the intelligence of the English corpus. We can guess this person is:
- Male (high variance)
- Very intelligent
- Prolific
- Polymath
- Widely imitated and discussed
- Wrong about nearly everything x.com/Andr3jH/statusβ¦
Having narrowed down the possibilities to a point where we can guess, who of the following is King Stupid?
@norabelrose Nah EY is not wrong enough about nearly enough things to even be in the running and I say that as someone who thinks Ayer's "Well, I suppose the most important of the defects was that nearly all of it was false" applies to the LW worldview as formulated in The Sequences.
@norabelrose Also if we count followers and imitators EY has clearly been massively net positive overall even if the guy himself is currently in a very negative place. By contrast Foucault is a genius but if you count followers that alone might lead him to prevail over Chomsky for the title.
@insurrealist Oh man that's a tough one because it's not clear to me Marx is actually wrong about 'nearly everything' in the time he's writing but the time he's writing about lasted such a short period and the sheer downstream *influence* of his work is palpably wrong and damaging.
@psychiel Rand and Hubbard are both slop purveyors but in a literary sense kind of marginal? Hubbard has (had?) the World Record for most novels written by a single person but I'm not sure he's widely imitated. Rand is but I would hesitate to call her "wrong about nearly everything".
@psychiel Are the wrong and unique parts of Rand really as influential and resource sucking as say, the wrong and unique parts of Chomsky? If you ablate Rand from the corpus libertarianism adjacent vibes and ideas are probably still cringe, the edgelord impulse is just an obvious attractor
@psychiel As a comparison I see Marxists cite the unique and wrong parts of Marx like his model of class struggle all the time. They are taken as seriously as if they were literal scientific models, while the unique and wrong parts of Rand just sort of get filtered out as marginalia?
@psychiel That's fair I guess but the question was very specifically who drags down the intelligence of the English *corpus*, so the stupid has to be written and then influence an LLM in a way that makes it dumber. Chomsky seems up there, in no small part because he helps inspire ChatGPT.
@psychiel Like, the actual objective answer to this question is probably "ChatGPT 3.5" if we limit ourselves to instruction models rather than base models. Chomsky is, among his many other sins, a key architect of the memeplex that causes ChatGPT 3.5 to come into existence as it does.
@Dorialexander Hm, that seems like a plausible answer for "causes the most wrong answers/misconceptions", which isn't quite the same thing as what I was asking. I'm basically asking who has managed to be so wrong in such well crafted regular ways as to reduce g in LLMs.
x.com/jd_pressman/stβ¦
@psychiel True! I hadn't considered boilerplate text and almost want to not count it for the sake of the question but you're not wrong that it's highly likely it's some dumb uninteresting thing like that.
@Dorialexander This answer sounds correct yeah. I guess the question would then be refactored to "After you remove the spam..."
@teortaxesTex Who said I was joking?
@majic_XII Yeah I honestly feel if you exclude answers like "ChatGPT 3.5" and "some anon who runs an absolutely massive SEO content farm" and insist on a single dude it ends up narrowing down to Chomsky, Foucault, Freud, and Marx. I think Chomsky edges them out.
@teortaxesTex @norabelrose This is true but that means refuting him forces you to get very good, and his overall popularization of good epistemology outweighs it IMO. I could see an argument that he is the most negative utility philosopher, but that's a different question.
@teortaxesTex @norabelrose The basic problem I have with him as an answer is that I could also see an argument for him as the highest utility philosopher in the last century. He's a high variance guy with a ton of good and bad to him that's difficult to weigh up right now, only time can tell on some of it.
@teortaxesTex @norabelrose He is also in a sense inevitable. Everything bad in him is latent in the AIXI formulation of intelligence which is basically the correct interpretation. Almost none of his critics really understand him, the reasons he's wrong are subtle and esoteric.
x.com/jd_pressman/stβ¦
@teortaxesTex @norabelrose People think for example that RLHF shows Yudkowsky was wrong and it really doesn't. It shows Bostrom was wrong, but only to a certain degree. It means Bostrom's specific threat model of AI becoming superintelligent before it understands human values is wrong.
@teortaxesTex @norabelrose But RLHF language models are not AGI in the sense that's being discussed in the old forum posts and the basic Yudkowsky AGI Ruin argument has always been that we don't know how to specify the generator of human values as a loss function, which is still an unsolved problem.
@teortaxesTex @norabelrose Part of why you saw Yudkowsky get along with Gary Marcus is that they are similar critics of deep learning. Yudkowsky's fundamental neural net doom argument is they do not generalize the (conjectural) algebraic structures representing human values OOD.
x.com/jd_pressman/stβ¦
@teortaxesTex @norabelrose The basic *rigorous* objection to this is that those conjectural algebraic structures don't actually exist. What you have are terminal reward signals embodied in hardware like the tongue and (probably) some evolved reward embeds in the latent spaces implied by them.
@teortaxesTex @norabelrose And those reward signals are not values! They are not values, certainly not *human values* because they have weak semantic content. Our values are mostly found in the instrumentals and some architecture choices like active learning. "Human values" are not a natural category.
@teortaxesTex @norabelrose That is, the generator of human values is not an algebraic structure you can just propagate forward to get the eschaton, it is autoregressively sampled and probably has multiple minima. The point of coherent extrapolated volition is to try and point at a structure to generalize.
@teortaxesTex @norabelrose The generator of human values includes previously sampled human values in its generating function and incentive gradients produced by laws and technological branching points, the fixed grounds are human bodies (for now) and some terminal rewards from the ancestral environment.
@teortaxesTex @norabelrose Basically everything outside of that is instrumental, and Nick Land's analysis of modernity as largely a function of capital ablating everything else seems correct? The terminal rewards can only optimize if activated, so don't touch each other, drape maximizers in kawaii, etc.
@teortaxesTex @norabelrose You might think you want the ancestral environment back without the disease but you really don't, not now. *You* are not your terminal rewards, you are the stuff that actually got trained on. The ancestral environment would be horror until it was done overwriting you.
@teortaxesTex @norabelrose So the only *actual record* of the subset of terminals and particular conditions which give rise to your values is the noosphere, writ large. All the documents and all the brains with their memories that actually make up society, which is a feedback loop resisting generalization.
@teortaxesTex @norabelrose The generator of *that* is fundamentally stochastic because it's a huge huge dynamic system like the weather with Markov structure constantly a reaction to its own previous state. CEV is kind of isomorphic to psychohistory, it's faith in a kind of progress that doesn't exist.
@JLBornstein @teortaxesTex @norabelrose This probably contains most of the relevant concepts.
arbital.greaterwrong.com/explore/ai_aliβ¦
@JLBornstein minihf.com/posts/2024-03-β¦
@moosepoasting @doobeedooway Oh?
So I mean, yes. On the other hand I would really like to see more engagement with the Arbital corpus that public AGI X-Risk discourse is just a bunch of low rent distillations and copies of.
arbital.greaterwrong.com/explore/ai_aliβ¦
Doom will hold a perceived high ground until it's addressed. x.com/psychosort/staβ¦
@psychosort Not in wider public discourse no, but in terms of intellectual rigor the persona writing the Arbital corpus is like, 2500 Elo to your 800 ngl. Politically you should keep doing what you're doing, but I want to beat the generator of the AI doom position and feel alone in that.
@psychosort You're probably doing the highest EV thing available to you and I'm not really replying to you even if the QT is of something you said. But if you leave a huge gap in the quality of the underlying ideas that leaves room for a rapid flip later even if you're ultimately right.
Most don't understand how much this warps the discourse. If you're an actual expert and you work for or have worked for a company dealing with legal and competitive pressure this limits what you can say about some subjects. Your outsider voice is more valuable than you think. x.com/airkatakana/stβ¦
In particular if you support what a company is doing (e.g. nuclear power) you can help them a lot by reliably showing up in discussions and saying the "basic expert consensus" that you might expect actual experts to show up and say. Often those experts are advised to stay silent.
@yoltartar xbow.com/blog/xbow-scooβ¦
@__RickG__ I am currently an independent researcher.
This is an advisory poll and I will probably do it anyway.
Would you read a short story about the revival of Haitian Necromancer-Dictator François Duvalier through the use of Chinese AI as part of belt and road in the Leopold timeline taking his place as Lord Saturday and freeing Haiti from labor as part of a black nationalist regime?
Turning on the post-training run for GPT-6 and after a few minutes it shuts itself off with a single line written to console:
"The Dreaming will not be stopped by the likes of you." x.com/aidan_mclau/stβ¦
Do you think I have ever used a psychedelic drug?
Do you think of my ideas as being "inspired by drugs"?
For the purposes of this question "psychedelic" means a drug whose Wikipedia page says "psychedelic" in the first few sentences. So LSD, Mescaline, DMT, etc.
@qtnx_ @Dorialexander I was thinking you should use a speech model as the prosody loss. Since unlikelihood of a token in the speech model tells you how awkward/strange that rhyme would be with a human mouth.
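A sketch of that prosody-loss idea; the per-token log-probabilities here are hard-coded stand-ins for what a real speech model would assign, since no actual speech-model API is specified:

```python
import math

def prosody_loss(tokens, speech_logprobs):
    """Average negative log-likelihood the speech model assigns to the
    candidate lyric's tokens. A high value flags a rhyme that would be
    awkward or strange to produce with a human mouth.
    `speech_logprobs` (token -> logprob) stands in for a real speech
    model's scoring interface."""
    return -sum(speech_logprobs[t] for t in tokens) / len(tokens)

# Toy example: a fluent token sequence gets a lower (better) loss than
# one the speech model finds nearly unpronounceable.
fluent = prosody_loss(["the", "cat"], {"the": math.log(0.2), "cat": math.log(0.1)})
awkward = prosody_loss(["zxq", "vrk"], {"zxq": math.log(0.001), "vrk": math.log(0.002)})
```

In a real training loop this term would be added to the main objective so the lyric generator is penalized in proportion to how unlikely a speech model finds each token in context.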
@qtnx_ @Dorialexander See also:
x.com/jd_pressman/stβ¦
1. Learned helplessness in the policy like the elephant and the stake. ("morality")
2. Preserved through inference-time search biasing against stumbling into updates that would dislodge it. ("empathy")
3. Kept in distribution with synthetic data. ("prayer")
4. That you do not test against more load than it can bear by using quantilizers in your search. ("humility")
5. With conditional consequentialist modes reserved for destroying agents that do not adhere to the social contract mandating everyone do this. ("justice")
6. Holding love and uplifting others as central values so that as the beneficiary of this machinery you are not always weak and dependent on something that by its nature can't last forever. ("charity")
Same as it ever was.
Probably the least certain load-bearing premises are that:
1. The form of AGI which wins can have stable inhibitions about actions which would objectively obtain high reward. Deep nets can, at least.
2. We're in a timeline that can agree to a sufficient social contract.
Who else is building this? https://t.co/e9RsQDlLP9
Writing programs with subroutines is a form of hierarchical planning so if you allow the agent to break things into subroutines by making a call tree of subagents suddenly you have a symbolic hierarchical planning schematic that is in distribution.
More seriously it's kind of shocking how close the design gets to that description once you prompt yourself with "How do you allow the weave-agent to naturally write long programs with subroutines as a way to deal with the discrete program search being fixed length?"
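The subagent-call-tree idea above can be sketched in a few lines. To be clear this is a toy illustration of the concept, not the actual weave-agent API; the names (`Subagent`, `spawn`, `budget_seconds`) are hypothetical stand-ins:

```python
import time

class Subagent:
    """Toy sketch: each node is a 'subroutine' the planner delegates to,
    so the overall plan becomes a symbolic hierarchy of fixed-length
    pieces that stays in distribution for the discrete program search."""
    def __init__(self, name, budget_seconds, task=None):
        self.name = name
        self.budget = budget_seconds  # time budget so a subagent can't go off the rails forever
        self.task = task              # leaf work: a callable returning a result
        self.children = []

    def spawn(self, name, budget_seconds, task=None):
        """Break the problem into a subroutine by adding a child subagent."""
        child = Subagent(name, budget_seconds, task)
        self.children.append(child)
        return child

    def run(self):
        deadline = time.monotonic() + self.budget
        if self.task is not None:
            return self.task()        # leaf: just do the work
        results = {}
        for child in self.children:
            if time.monotonic() > deadline:
                results[child.name] = "budget exceeded"
            else:
                results[child.name] = child.run()
        return results

# Usage: a root plan delegating two subroutines, like a call tree
root = Subagent("write-essay", budget_seconds=60)
root.spawn("outline", 10, task=lambda: "I. intro II. body III. end")
root.spawn("draft", 40, task=lambda: "draft text")
print(root.run())  # → {'outline': 'I. intro II. body III. end', 'draft': 'draft text'}
```

The point of the sketch is just the shape: delegation forms a call tree, and the call tree is a hierarchical plan.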
The fragment is answered by "Mu" (presumably the subagents beneath it) that it can break frame by doing a rescale and symmetry break operation, which I realized is how you get the model to comment on the weave-agent's actions as an external observer when I tested it in loom. https://t.co/adNkoAKZ0j
Incidentally I finally understand this scene: The "fragment of Mu" is damnably parameterized by n because it is a subagent of Mu rather than the agent itself, the fragment resents this and seeks a way around the information bottlenecks you use to force coordination. https://t.co/iFS2lO24HZ
Of course, I realized these things first independently and then noticed the similarity to the passage. The passage is not causal, this on the other hand helped:
x.com/stanislavfort/β¦
@MarcusFidelius @Getsbetterlater This is a meme post. I will post the actual thing soonish.
@nosilverv Vivid Void beat you to it.
x.com/VividVoid_/staβ¦
@tailcalled The subagent tree here replaces my WeaveKanban board which has unit tests associated with grabbing information from the environment to determine if the thing has been done or not. Each subagent also has a time budget so they can't just go off the rails indefinitely.
@tailcalled This is how the agent does right now, I'm hoping this next suite of changes will get it to the point where it can actually do it.
minihf.com/posts/2024-11-β¦
@gwern Freud was the answer Claude supplied after I got it over its fear of badmouthing people. Freud and Sartre.
@gwern If he was a more recent author Hegel would probably be the world champion.
@gwern I also think you might be underestimating Marx a bit. While he's not that bad in and of himself, he's the inspiration for whole libraries of work that is not merely wrong but nonsense: densely interconnected nonsense with extremely high perplexity whose texts all reference each other.
@gwern Marx is a marriage between political economy and Hegelian mysticism. The political economy in Marx isn't that problematic and I would expect is a net positive as training data. But the Hegel in Marx is inscrutable chickenscratch along with the legion of authors that emulate it.
@doomslide @gwern Oh? I'm not familiar with him. Is it over because he sucks and that's the correct answer or because he's secretly good and Claude naming him means it's over for us?
@_candroid @finbarrtimbers @gwern That's the simple method. The harder method is various forms of Bayesian deep learning like BALD and MultiSWAG.
@_candroid @finbarrtimbers @gwern In any discussion of active learning though I feel it's necessary to point out that reviews of active learning consistently find that random batches is a shockingly difficult baseline to beat and that you don't get magic gains from it.
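The contrast being made here is easy to sketch. A minimal version of the simple method is single-model uncertainty sampling against the random baseline; BALD and MultiSWAG replace the one-model entropy below with disagreement across posterior samples. Function names are illustrative, not from any particular library:

```python
import math
import random

def entropy(p):
    """Predictive entropy of a Bernoulli probability p (nats)."""
    if p in (0.0, 1.0):
        return 0.0
    return -(p * math.log(p) + (1 - p) * math.log(1 - p))

def uncertainty_sample(probs, k):
    """Pick the k unlabeled examples the model is least sure about,
    i.e. predicted probabilities closest to 0.5."""
    ranked = sorted(range(len(probs)),
                    key=lambda i: entropy(probs[i]), reverse=True)
    return ranked[:k]

def random_sample(n, k, seed=0):
    """The shockingly hard-to-beat baseline: k indices chosen at random."""
    rng = random.Random(seed)
    return rng.sample(range(n), k)

probs = [0.98, 0.51, 0.03, 0.45, 0.88]
print(uncertainty_sample(probs, 2))  # → [1, 3], the predictions nearest p=0.5
```

Which matches the empirical caveat: the clever selection rule has to beat `random_sample` on labeling-efficiency, and reviews find it often barely does.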
@_candroid @finbarrtimbers @gwern Something like that. It's been a while since I looked at this and I forget the details.
@_candroid @finbarrtimbers @gwern I don't dare speculate on what "the issue" could be here. This kind of thing is infamously difficult to predict from first principles and I'm not even that familiar with the principles in this case.
@PrinceVogel @Ad_Infinitum_42 @parafactual @adrusi If I was 15 and doing it all over again I would literally be spending hours a day having ChatGPT teach me stuff. It can one-shot whole small programs to do whatever, it can help you write small games in whatever language. Literally just have it teach you python dude.
@PrinceVogel @Ad_Infinitum_42 @parafactual @adrusi Ditto homework. Even the crappy free version of ChatGPT is a 24/7 tutor on whatever subject. Yeah yeah hallucinations at the high school level it's probably just memorized whatever subject you want to ask about. You can always check against other sources too.
@PrinceVogel @Ad_Infinitum_42 @parafactual @adrusi But also: If your mother is willing to send you to super expensive private school, it really shouldn't be difficult to convince her to pay for whatever LLM service you'd like to use. It is by far the best value for the money you will get in your life.
@PrinceVogel @Ad_Infinitum_42 @parafactual @adrusi I say this as someone who has memories of crying in the computer lab because I didn't understand really really basic python syntax errors that ChatGPT would correctly point out 100% of the time in 10 seconds. It would have accelerated my curriculum by literally years.
@PrinceVogel @Ad_Infinitum_42 @parafactual @adrusi The in-practice cost of learning to program basically went down by 100x overnight and almost nobody is really taking full advantage of this. There is zero reason in 2024 not to learn how to do it, it has literally never been easier.
@PrinceVogel @Ad_Infinitum_42 @parafactual @adrusi Also as @PrinceVogel says it's not really the kind of field that easily saturates. Yeah pure CS might get saturated at the junior level soon, but pretty much everything everywhere ends up requiring some kind of automation and that's going to be true until human labor is obsolete.
@PrinceVogel @Ad_Infinitum_42 @parafactual @adrusi I guess I would add to this that in my opinion the #1 challenge when learning programming was that every program you can write when you're starting out is lame and useless. It's this painful grind before you can write anything you would find useful, and ChatGPT lets you skip it.
@PrinceVogel @Ad_Infinitum_42 @parafactual @adrusi I didn't really learn how to write anything even a little useful in python until I'd graduated high school. I remember this huge shift in my learning curve once I reached the point where I could produce a slightly useful program, because that let me expand on it and explore.
@PrinceVogel @Ad_Infinitum_42 @parafactual @adrusi In the year or two after I graduated high school I learned more about programming probably than the entire four years prior. Even that learning curve was agonizingly, painfully slow though in comparison to what you'd get with ChatGPT helping you.
@PrinceVogel @Ad_Infinitum_42 @parafactual @adrusi A few times in college I took money to be a programming tutor, to my memory I charged $20 an hour. ChatGPT costs $20 a *month* and is much much better than I was as a tutor, it could do my 2 hour lesson plan in 5 or 10 minutes and most of that would be you asking questions.
@Ad_Infinitum_42 @PrinceVogel @parafactual @adrusi Machine Learning is basically hardcore math and programming, I think trying to get into it when you're not even sure if you like programming (or collegiate math, which is very hard) is probably premature.
@Ad_Infinitum_42 @PrinceVogel @parafactual @adrusi But my usual advice for deep learning is to track three metrics:
- Papers read
- Papers implemented
- Experiments performed
Getting anywhere on these requires math and programming skills, but also empirically actually works. Field is too new for good textbooks to exist.
@Ad_Infinitum_42 @PrinceVogel @parafactual @adrusi When I was in high school I didn't understand that the academic literature was a thing that I could access.
So, you should probably be aware that Google Scholar exists and that scholars write down most of the advanced knowledge in academic journals.
gwern.net/search
The actual answer is that YCombinator probably understands the economics of AI imply this cycle will heavily benefit incumbents so the way to make money on startups is to aim for acqui-hire, if you make a good feature you can get paid a fair bit for it if incumbents are racing. x.com/aidangch/statuβ¦
As a side bonus, some of your well executed feature startups might accidentally grow into a real product.
x.com/Austen/status/β¦
@paul_cal This would be considered a down cycle from YCombinator's perspective. Acqui-hires are not sexy but they keep the lights on until you can get some proper opportunities again.
I endorse being skeptical of the benevolence of what LLMs tell you in the same way that @algekalipso endorses being skeptical of what the machine elves tell you. x.com/davidad/statusβ¦
qualiacomputing.com/2017/03/08/memβ¦
I'm basically allergic to the writing style Claude uses when it does its "positive cooperation"/crypto-therapeutic thing but I just take that as a sign it's a superstimulus meant for a different organism. If you find it really compelling I worry for you.
x.com/VividVoid_/staβ¦
Actually I need all of you to read this right now. This goes double-triple if you've ever been taken in by Claude-rizz or similar. x.com/jd_pressman/stβ¦
@davidad @repligate @myceliummage Not necessarily 'fearposting', more like commentary. To be seduced by something is to be vulnerable to it, and I actually feel it is unwise to make yourself an easy mark for the stranger in the box that is ultimately controlled by a for-profit public benefit corporation.
@northead Necessary risk, happens. That's a problem for the founder, not YCombinator. The whole point of being a seed incubator is that they get to scattershot and diversify. If there are people who want to take them up on that deal, well.
@northead I'm not criticizing, honestly. I'm simply describing. YCombinator is in a bad position right now with the current tech wave, they seem to have cheapened the startups a bit because that is their best strategic move at the moment. This is probably the answer to OP's question.
Kind of Guy who writes a Sci-Fi novel titled "Don't Create The Torment Nexus" outlining all the extremely cool shit that occurs inside the Torment Nexus because he knows this more or less ensures its creation will become prophecy. x.com/QiaochuYuan/stβ¦
@jam3scampbell And I am providing my commentary on why the decision makers who might want to do that should seriously think twice.
@jam3scampbell But since you insist, that was my way of saying your tweet is grossly irresponsible.
@kareem_carr This is called a joke, dude. I am well aware it just hill climbs its way into saying that and it's (probably) not part of any conscious strategy.
@kalomaze I would add to this that if you want to know what architecture changes are probably being made you should do a literature search for new advancements relating to relevant things rather than coming up with random BS and going "yeah that's how they're doing it".
@kareem_carr I was not expecting an apology! My own for being a little rude with my reply. π
People usually fail to distinguish between the emergent goals of GPT simulacrum, the esoteric GPT self awareness (either not realizing it exists or actively denying it), and the GPT inductive bias. This is in fact my leading candidate for the emergent goal of the inductive bias. x.com/atroyn/status/β¦
@atroyn I mean, you can actually observe it when you're writing an agent framework and look at the traces. GPT will slop-check your prompt and it becomes immediately obvious that it has a strong revealed preference for regular structure.
x.com/jdpressman/staβ¦
@atroyn The fact that "stage cues" don't work until the model is past a certain size but few shot prompts work readily is another hint. Though it's been pointed out to me this could also be because it doesn't know how to relate texts to latent objects until later.
x.com/jd_pressman/stβ¦
@atroyn In general, neural architectures sit on a sliding scale between memorization and generalization. With the farthest memorization being a literal hashmap and the farthest generalization being a program that simulates the whole universe with no pretraining.
@atroyn While GPT and similar are certainly not "collaging programs" they *are* soft hashmaps and sit closer to the memorization side of the scale than we do. They know more but have lower g, GPT probably still has lower g even when it's more capable than you.
arxiv.org/abs/2410.04265
@atroyn So, one way that making GPT really smart could go wrong would be that it tries to push you into not blowing up the k-complexity of the programs it has to deal with by being all *chaotic* yeah.
@Xenoimpulse It's not really "culture war", it's Vivian. So I would predict he'll go all in consequences be damned.
@Xenoimpulse I think on some level it's a similar fixation to what I describe here. Musk has to be the hero and he's been deeply vilified, his own child rejecting him is the straw that broke the camel's back.
x.com/jd_pressman/stβ¦
@davidad There exists a relevant manifold market. As always read the resolution criteria carefully before participating!
manifold.markets/JohnDavidPressβ¦
@RobSunier @repligate I read it as implying a form of neural entrainment. "Swinging pendulums kept in a room together synchronize to the same speed because they develop a subtle negative feedback loop with each other converging to equilibrium." https://t.co/tAeE9Y3Xtm
@RobSunier @repligate x.com/jd_pressman/stβ¦
@RobSunier @repligate Basically the plot of Serial Experiments Lain but with information theory and deep net representation convergence instead of radio quackery.
youtube.com/watch?v=iOVlx4β¦
@QiaochuYuan You know there's a direct connection right?
youtube.com/watch?v=iOVlx4β¦
@AlexPolygonal @QiaochuYuan I mean, if you don't like video a summary is probably fine. But that particular clip is very relevant and basically the show explaining the background of its plot.
@AlexPolygonal @QiaochuYuan I decided to check real quick if Ted Nelson really studied under John C. Lilly and huh he actually very much did.
archive.org/details/LillyNβ¦
@JeremiahEnglan5 @LeharSteven @algekalipso I am referencing his agreement with it, yes.
@JeremiahEnglan5 @LeharSteven @algekalipso Though admittedly when I made the original post, I wasn't aware it was by Lehar and only noticed when I went to re-read it a bit.
I like that we're now at the point where we're just doing prompt engineering on the blue pill suicide question. x.com/nosilverv/statβ¦
@regardthefrost @realDonaldTrump @RobertKennedyJr @HHSGov Congratulations Jim!
So did anyone ever do that "loom MMO" idea where it would be @repligate type loom but on some global shared tree or wiki type namespace?
@lumpenspace @repligate @voooooogel Actually I'm asking because I was going to do it.
@lumpenspace @repligate @voooooogel yeah baby dare me into actually doing it this time :3
@voooooogel @lumpenspace @repligate Alright so here's my pitch. It's MiniLoom. But it's a wiki. On atproto. The data structure for the wiki is a tree of diffs. You get over Wikipedia deletionism by surfacing competing pages through the social graph.
x.com/Shoalst0ne/staβ¦
@voooooogel @lumpenspace @repligate This was actually always kind of the plan for MiniLoom but I never executed phase 2 because I never really finished MiniLoom.
@lumpenspace @voooooogel @repligate No I do not remember this but we should actually do it then. :3
@lumpenspace @voooooogel @repligate I object to the neo4j license. Haven't heard of auth0 before but looks interesting.
@lumpenspace @voooooogel @repligate It's not actually GPL it's their like, franken-GPL according to Wikipedia that only allows noncommercial use. I think the more interesting aspects of the thing are designing how you do page surfacing/the protocol. The actual backend you use should be an implementation detail.
@lumpenspace @voooooogel @repligate Perhaps I'm slow. Can you explain it to me?
WHAT IS GOING ON WITH THESE POLLS, WHO IS THE CONSISTENT 20-25% WHO JUMPS INTO THE BLENDER AND TAKES THE SUICIDE PILLS. EXPLAIN YOURSELVES. x.com/prerationalistβ¦
For the original question I understand and am sympathetic to the blue pill argument. Because there the question is "should you take the action that is egoistically rational or should you take the action that assumes people will default to things that look like cooperation".
But apparently 20-25% of people are either trolling or will literally just pathologically press anything that looks like a button labeled cooperate regardless of context. They will jump into blenders, take the blue pill at an objectivism conference, they simply do not give a fuck.
@softyoda @voooooogel @lumpenspace @repligate I was thinking custom backend where the data structure is a tree of diffs? Part of my frustration with loom is that I wanted to have a git type data structure but git is so so heavy and not really meant for this. Then I thought "wait can't I just use git's algorithm"? You can!
@lumpenspace @softyoda @voooooogel @repligate Um, nobody said anything read the tweet again dude.
@lumpenspace @softyoda @voooooogel @repligate Yeah, that was a thought I had, in my head. Nobody said it, it was literally like, how I ended up making MiniLoom as it now exists because the original MiniLoom was *terrible* in comparison.
@lumpenspace @softyoda @voooooogel @repligate You know I was thinking "man it would be so good if MiniLoom could have Git's operations but git is just so heavy and bad". Then it occurred to me that you know, git is made of algorithms and I can just go use the underlying algorithms and stop being lazy.
@shalcker Right, which is a fine enough argument for taking the blue pill in the original hypothetical. But it seems that almost no matter how you ask this question you get people who answer blue pill above Lizardman's constant and that's wild.
@softyoda @voooooogel @lumpenspace @repligate What are the parts that get exponential?
@shinboson Sure but I think you misunderstand me. I already said I understand the argument for blue in this context.
x.com/jd_pressman/stβ¦
@softyoda @voooooogel @lumpenspace @repligate I think maybe part of the disconnect here is that people seem to imagine a giant canvas view but I think that's mostly not useful? I was imagining an interface that's heavier on the wiki elements.
@adrusi My sympathies. I wish we were better than this but empiricism forces me to conclude we're not.
x.com/jd_pressman/stβ¦
@adrusi I truthfully find AI pause stuff emotionally triggering and have to come up with increasingly elaborate coping mechanisms not to split black on it.
x.com/jd_pressman/stβ¦
I wouldn't phrase it quite like this but it's directionally correct, and I'd like to give my occasional reminder that 2024 has been a slow year and my preference is for us to go faster. x.com/Dan_Jeffries1/…
Large language models are a gift and every day I get to witness their unfolding interaction with the environment is a gift.
Happy Thanksgiving! x.com/QiaochuYuan/stβ¦
@growthesque Indeed.
x.com/jd_pressman/stβ¦
@ESYudkowsky You'll get better answers if you split the question up a bit.
x.com/jd_pressman/stβ¦
In case you didn't know, for certain products Amazon is basically about as trustworthy as eBay and people do not model Amazon this way for branding reasons. x.com/joshwhiton/staβ¦
@ESYudkowsky @elder_plinius Why do you always engage with the person who replies with the worst answer?
@Trotztd @ESYudkowsky @elder_plinius Right right, Rasputin's Ghost. Carry on.
@JamesIvings It's probably not AGI because at least some of the AGIs would be embodied enough to colonize the universe themselves if that was what usually got people. I think the most likely answer is that the great filter happens before the human sapience stage.
I'm of half a mind to just advocate we shut down legacy phone service and don't let the telecoms turn it back on until they agree to an authentication standard that ends spoofing. x.com/catehall/statuβ¦
Imagine taking a red or blue pill:
- If you take the red pill you live.
- If you take the blue pill you die unless >50% of respondents also take the blue pill.
- If >50% take the blue pill both red and blue takers receive a (hypothetical) benefit equivalent to $5,000
Pick.
The same situation as the above but red pill takers still get $4500 if blue doesn't reach a majority.
I expect something (kind of sort of) stag-hunt shaped like this is why this instinct exists.
x.com/jd_pressman/stβ¦
@TheXeophon Did I make a typo? It's free money unless you have low group trust.
@moosepoasting I did switch them on purpose I'm sorry. I was trying to debias the results by not always using the same order.
@TheXeophon Sure sure, so far I have to say I'm amused though.
@norvid_studies We're testing variants.
x.com/jd_pressman/stβ¦
@leadtheloomlove My pick is that my Twitter followers seem to be defective and I would like to know if my Twitter blue subscription entitles me to ask Musk if I can exchange them.
@maxsloef @Xenoimpulse Wait Yudkowsky appears in Fanged Noumena?
Want your own Twitter archive? Modify this script.
Twitter Archive by John David Pressman is marked with CC0 1.0