AGI Ruin And The Road To Iconoclasm
John David Pressman
When I was younger and concluded LessWrong was a religion, I spent a lot of time trying to figure out what kind of religion it was so I could make it better. I eventually gravitated towards Calvinism as a model because of the grim rigor in Yudkowsky’s cosmology.
But I think to historians of philosophy and religion[1] LessWrong rationality will primarily be compared to Calvinism for its iconoclasm. Right now this isn’t obvious because they think they’re fighting an alien agency: when they say they hate intelligence, capability, and agency they mean they hate them in machines based on ‘inscrutable matrices’, not people. But as the shoggoth thesis becomes increasingly untenable with the next generation of architectures based on radically more interpretable methods like latent diffusion and retrieval, it will become obvious they despise these traits in all beings. What is not yet commonly understood is that data is just distilled compute from the environment. The mutual information between minds is high because they’re inferring latent variables of the same computable environment, even across modalities. These are not shoggoths; as usual, the ‘monster’ is us. When computing power (in humans or silicon) is used to create artifacts it becomes data, and good data can be read back in and its compute reclaimed. The amount of distilled intelligence in the environment goes up over time; our world drips with congealed genius. Therefore, to prevent the creation of AGI (a process that started perhaps when man first began to do math on paper, or when Leibniz invested some of his own money to accelerate the invention of calculating machines), they will converge towards attempting to destroy public genius in all its forms. They intend to oversee, whether they are currently consciously aware of it or not, the greatest cultural destruction in centuries.
To give the why a concrete sketch, we can observe that data is a necessary input for building large language models. For text models the data fits on a couple of hard drives. However, it seems that most of the time spent processing this data is wasted. Various data pruning schemes are a leading angle of attack on making model training cheaper. The recent Textbooks Are All You Need paper by Gunasekar et al. claims to beat much larger models with a fraction of the data using data pruning. Retrieval models like RETRO claim to reduce the parameters needed in VRAM by a full order of magnitude if you embed the training set and store it on disk. In general I expect the preprocessing step of model training to get increasingly elaborate as representations are merged, summarized, and extrapolated from to decrease training time and improve model quality. Elaborate preprocessing also has the advantage that it’s generally a task that can be done in a highly parallel fashion, so distributed training architectures will make heavy use of it. To the extent that we’re now discussing computer chips as ‘fissile material’ that threatens world security, then obviously congealed genius, or culture, with its ability to bring training costs down by potentially orders of magnitude, must itself become a controlled substance. It has to be locked away from the commons or destroyed lest we risk a computer seeing it.
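To make the data pruning idea concrete, here is a minimal sketch. The names (heuristic_quality_score, prune_corpus) and the toy scorer are mine for illustration, not from Gunasekar et al. or any particular library; a real pipeline would use something like a learned quality classifier or reference-model perplexity, but the shape is the same: score every document, then keep only the top fraction for training.

```python
# Minimal sketch of score-based data pruning. The scorer below is a toy
# heuristic stand-in; in practice it might be a small quality classifier
# or the perplexity of a reference model.

from typing import Callable, List


def heuristic_quality_score(doc: str) -> float:
    """Toy quality proxy: reward longer documents with diverse vocabulary.
    (Hypothetical stand-in for a learned quality scorer.)"""
    words = doc.split()
    if not words:
        return 0.0
    type_token_ratio = len(set(words)) / len(words)
    return type_token_ratio * min(len(words), 512)


def prune_corpus(docs: List[str],
                 keep_fraction: float = 0.3,
                 score_fn: Callable[[str], float] = heuristic_quality_score) -> List[str]:
    """Keep only the top `keep_fraction` of documents by quality score.
    Each document is scored independently, so in a real pipeline the
    scoring pass can be sharded across many machines."""
    scored = sorted(docs, key=score_fn, reverse=True)
    cutoff = max(1, int(len(scored) * keep_fraction))
    return scored[:cutoff]


if __name__ == "__main__":
    corpus = [
        "the the the the the",
        "A proof is a finite sequence of statements, each following from axioms or prior statements.",
        "buy cheap buy cheap buy cheap",
    ]
    for doc in prune_corpus(corpus, keep_fraction=0.34):
        print(doc)
```

The point of the sketch is that the expensive part, scoring, is embarrassingly parallel per document, which is why preprocessing like this fits naturally into distributed training setups and why curated data itself becomes such a large share of the value.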
I can predict two basic reactions to what I’ve just written. The first goes something like “The development of artificial general intelligence is an extinction risk to humanity, if that’s all we have to do to survive shouldn’t we do it?”. First of all, you know damn well that’s not “all we have to do”; you know this chain of thought goes to very dark places very quickly. But I won’t pretend I can cross the inferential chasm that motivates such a question, at least not without making this post 50 pages and completely changing its subject, which I do not want to do. For the purposes of this post I’m agnostic about doom, foom, whatever you want to call it. Rather I echo Lacan’s judgment that the behavior of the paranoid husband is pathology even if his suspicions are true. We both believe in undignified ways to die; I just go a little further than you and happen to also believe there are undignified ways to live. If your survival plan calls for sucking the color out of the world until we exit an (indefinite, with no incentive to ever end) “acute risk period” into a period where everything is puppies and rainbows, then I recall Yudkowsky’s own advice toward a similarly misguided hypothetical reader: that they should never try to think with expected utilities again. At a bare minimum we should ask of our ‘survival planners’ that their ideas not be so degrading to human dignity that they start to make replacement by machines look morally superior.
The other reaction I expect goes something like “That sounds totally insane and implausible, there’s no way real people are ever going to advocate that”. Unfortunately, reader, the kool-aid has already been prepared and the tickets for the crazy train already purchased for the congregation. It’s obvious now that the rationalists have crossed their Rubicon. When Eliezer Yudkowsky writes in Time magazine that we should escalate to nuclear warfare if necessary to prevent the creation of AGI, and doubles down on Twitter that so long as a breeding population of humans survives he prefers nuclear holocaust to continued AI development[2], we must recognize that the bottom line has been written. It doesn’t really matter what developments occur in AI past this point. People do not ‘update’ out of beliefs they’ve publicly committed to this hard; they’re dragged away from them kicking and screaming by forced circumstances. Money and power that accrue to the rationalists from now on are going to be premised on this belief system being true, so insiders will never abandon it unless the evidence becomes so overwhelming that they’re encircled by other societal actors who don’t have a vested interest in being wrong.
What takes this beyond the risk of benign grift is that the bottom line they’ve committed to is a blank check. Nuclear holocaust means more or less the destruction of all current value in the hope that there can be some future value later. Fig leaves about dignity aside, your rational expectation under blank check moral reasoning is that when push comes to shove all value is going to be thrown out the window if it seems like doing so will meaningfully reduce the probability AGI is invented. You do not need to speculate about where the bottom of the hill is; you have been told where it is in no uncertain terms in the most high profile, public forum possible. Everyone and everything is expendable up to the limit of not killing so many people and destroying so much that humanity literally goes extinct. We should be no more reassured by talk of ‘dignity’ under these circumstances than we would be by our drunken host’s insistence that the gun is not loaded as he racks the slide, puts it to his mouth, and pulls the trigger. If you are still giving the benefit of the doubt when nothing is in doubt, you should stop.
1. If any such things still exist in the ouroboros of paranoia that threatens to overtake Western Civilization.

2. I apologize for not using a direct link, but Eliezer seems to have deleted the original. I definitely remember seeing it and am quite sure the screenshotted tweet at least captures the essence of a real statement.