@JimDMiller On the other hand, GPT-N seems to have a superhuman ability to recognize persona and character in others. This is unsurprising since it frequently needs to determine who is speaking archetypally from a single fragment of speech in the context window.
x.com/jd_pressman/st…
@JimDMiller In the interest of building a higher trust society, here's the prompt:
gist.github.com/JD-P/632164a4a…
Note that's not code to interface with a library, but the prompt itself. You make a fake Python shell with "AGIpy" loaded and ask it to infer how the function would work.
@JimDMiller This sort of ability might mean higher levels of AI-human cooperation or even human-human cooperation than we'd naively predict, because you can get more insight into what kind of person someone is from their displayed cognitive patterns. People are bad at pretending all the time.
@RomeoStevens76 @parafactual Why would it be? It's a successful costly signal of intellect. Make competing ones I guess.
@Willyintheworld 1 is probably the hard part, most EEGs have terrible ergonomics
@AntDX316 @ESYudkowsky x.com/jd_pressman/st…
Another entry in this genre: A model trained on raw sensory data will kill everyone because only the most immediate latent z is learned, but also supervised learning is busted because models learn an additive latent z incorporating all causal structure that produces the latent. x.com/jd_pressman/st… https://t.co/LbNZIe6SoS
@alexandrosM Maybe. I think the approach they're taking there of direct prediction using GPT-4 forward passes is probably not very helpful. They'd be better off designing an LLM-agent which uses external tools to perform a more rigorous experiment based on e.g. causal scrubbing many times.
@alexandrosM This would have the benefit that we can more directly verify the validity of the results because they would be encoded into an external environment, chain of thought reasoning, etc. Even if it worked, the more of the computation that happens in the forward pass, the less we understand.
@JimDMiller @alexandrosM @mormo_music As I pointed out the last time you made this point, people are currently sleeping on the extent to which LLMs have already plausibly advanced SOTA for interpretability of people. So we might also benefit from that kind of coordination gain.
x.com/jd_pressman/st…
@JimDMiller @alexandrosM @mormo_music x.com/alexaidaily/st…
@alexandrosM @JimDMiller @mormo_music No because the fundamental problem is either getting a much higher dimensional brain latent or a much faster one, in quantity. MRI machines are too limited in availability to give us quantity, so any method requiring a large MRI is ruled out immediately.
@alexandrosM @JimDMiller @mormo_music I continue to think text is our best bet for brain latents. We have a ton of them, humans produce them naturally, they seem complete in expression of human thoughts & desires, and you have a sentence encoder in the superior temporal gyrus, so they're a direct brain activation.
@alexandrosM @JimDMiller @mormo_music They probably can be. But I'm not sure anyone has demonstrated working methods based on that yet. EEG brain latents are very low dimensional; at their best they involve 256 sensors. Considering the BERT sentence encoder has 768 dimensions, sentences probably have more information.
@alexandrosM @JimDMiller @mormo_music In fairness, if we take a high bandwidth EEG reading every second and say that's 1/3 of a sentence (spitballing), you're getting a sentence worth of information every three seconds, which is faster than most people can write.
@alexandrosM @JimDMiller @mormo_music Actually let's say you write 120 wpm (99th+ percentile according to Google) and write 10 words per sentence on average. You would write 12 sentences a minute, or one every five seconds. So a sentence worth of information every three seconds is equivalent to roughly a 1-in-1000 typing speed(?)
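A back-of-envelope sketch of that arithmetic in Python, using the assumed figures from the tweets above (120 wpm, 10 words per sentence, one EEG "sentence" every three seconds); the constants are assumptions, not measurements:

```python
# Back-of-envelope check of the typing-speed comparison above.
WORDS_PER_MINUTE = 120       # assumed 99th+ percentile typing speed
WORDS_PER_SENTENCE = 10      # assumed average sentence length

sentences_per_minute = WORDS_PER_MINUTE / WORDS_PER_SENTENCE      # 12.0
seconds_per_typed_sentence = 60 / sentences_per_minute            # 5.0

# Hypothetical EEG channel: one sentence worth of information every 3 seconds.
seconds_per_eeg_sentence = 3
equivalent_wpm = WORDS_PER_SENTENCE * (60 / seconds_per_eeg_sentence)  # 200.0

print(seconds_per_typed_sentence, equivalent_wpm)
```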
@alexandrosM @JimDMiller @mormo_music However, high bandwidth EEG is not remotely ergonomic. It takes a lot of time to fill the sensors with gel and put them on your head, and the encoding is probably less good than the one you get out of your sentence encoder until you throw a lot of deep learning at preprocessing.
@alexandrosM @JimDMiller @mormo_music This is fair, I just think the problem is bulk data shaped and we should be thinking about how to get functionally infinite preference data on tap to the models.
@zackmdavis @alexandrosM I continue to trust the open scientific process and discourse overwhelmingly more than I trust institutions. Especially after witnessing the response to the COVID-19 pandemic.
@zackmdavis @alexandrosM I kind of doubt they can. There's a lot of disingenuous AI safety discourse conflating (civilizational) inconveniences (e.g. mass spear phishing, "AI harms") with actual X-Risk. As I've written about before, LLMs rely on a huge cache of irreducible lore.
x.com/jd_pressman/st…
@zackmdavis @alexandrosM Thanks to recent research into quantization we now have a decent estimate on the fundamental Shannon entropy of the lore in these neural nets. It seems to be something like a factor of 8x between 32-bit floats and the real limit at 4 bits.
arxiv.org/abs/2212.09720
@zackmdavis @alexandrosM So if you put in a lot of work and research, maybe you'll eventually be able to run something like GPT-4 on the best consumer cards currently available. Keep in mind that we're starting to hit the physical limits of computing, so that hardware is only going to get so much better.
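A rough sketch of what that 8x factor means for memory footprint. The parameter count below is purely illustrative, not a claim about any particular model:

```python
# Memory footprint at 32-bit vs 4-bit weights. The parameter count is a
# placeholder chosen only to make the arithmetic concrete.
params = 1_000_000_000_000            # hypothetical 1T-parameter model

gb_fp32 = params * 4 / 1e9            # 4 bytes per weight
gb_int4 = params * 0.5 / 1e9          # 0.5 bytes per weight

print(f"fp32: {gb_fp32:.0f} GB, int4: {gb_int4:.0f} GB, ratio: {gb_fp32 / gb_int4:.0f}x")
```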
@zackmdavis @alexandrosM My model here is there's something like three major known inefficiencies that a highly capable AI agent could exploit to gain total power:
- Computer security
- Nanotech/assembling biotech
- Virology
Intermediate AI models can help us patch these vulnerabilities.
@zackmdavis @alexandrosM Computer security is the simplest. As I've written about before, the trend is already toward highly secure computer systems whose major vulnerabilities are legacy software stacks and lower level embedded software.
extropian.net/notice/AS7MrB7…
@zackmdavis @alexandrosM GPT-4 can already write decent computer code and identify a vulnerability in assembly. We're moving towards a world of zero-marginal-cost exploitation of bugs. The equilibrium of that isn't red team hacking everything forever; it's blue team never shipping a SQL injection again.
@zackmdavis @alexandrosM But that's just step one. Right now everyone using C and C++ has been relying on a kind of collective security-through-obscurity to keep code secure. Sure I'm statistically writing a memory exploit every N lines, but who would ever find it? Instant exploits will force an update.
@zackmdavis @alexandrosM LLMs are going to open up new coding paradigms that weren't possible before because they were too labor intensive. Formal software has always been held back by the much higher cost of code proofs. GPT-N will be able to make proving your code cheap and seamless.
@zackmdavis @alexandrosM People literally can't imagine an end to our computer security woes, but they will end. And they'll end without onerous licensing paradigms or stifling new liability laws. We will simply make it so cheap to rewrite legacy software and prove no bugs that both go extinct.
@zackmdavis @alexandrosM I was just writing that but honestly the tl;dr is global absolute technologically enforced property rights. We will no longer have a barrier between a 'natural world' and human civilization. Every corner will be monitored by armies of intelligent machines at all scales.
@zackmdavis @alexandrosM If this is dependent on LLM-like technology, you can rest assured that big labs will get to it first. Large accumulations of capital will continue to have an advantage due to the high speed connections between nodes, scale of data, etc. They have much more architectural freedom.
@diviacaroline @zackmdavis @alexandrosM If it managed. The entire point is that it does not manage.
@diviacaroline @zackmdavis @alexandrosM No it's fine. That's precisely what I'm saying. You basically institute omnipresent surveillance at so many scales that there isn't a lot of room to build bombs or competing assembler swarms in secret.
@diviacaroline @zackmdavis @alexandrosM To do this in a way that isn't dystopian is going to require us to fully solve the principal agent problem (and alignment by extension). One of the things I'm most excited about with this technology is that it allows you to instantiate a subjective perspective that can be audited.
@diviacaroline @zackmdavis @alexandrosM Sure. Or a broad mandate with sufficiently strong alignment and societal buy-in.
@diviacaroline @zackmdavis @alexandrosM I mean the thing Yudkowsky means: An objective function (with sufficient assurance that it will actually be internalized by the model in high fidelity) which if maximized doesn't result in Fristonian collapse (i.e. paperclipping) of all sentient beings.
@zackmdavis @alexandrosM The other way in which defense eventually wins in virology is more speculative, but the existence of the Kind of Guy prior in GPT-3 implies to me that we'll have functional immortality very soon.
x.com/jd_pressman/st…
@zackmdavis @alexandrosM Combine strong compression of mind patterns with Drexler assemblers and it's plausible that you can simply store the mind of every citizen in your country in various high-capital backup datacenters and reinstantiate them on death.
@zackmdavis @alexandrosM In order for this not to be dystopian, it *also* relies on us having solved the principal-agent problem to a strong enough degree that you don't need to worry about the idea of storing your mind pattern with your government. I in fact expect us to get there.
@zackmdavis @alexandrosM @robbensinger Honestly I just envy Bensinger's position? You know, it must be fucking great to have an argument structure where any time someone wants to challenge your premises or go into your world model you can just say "sorry my models are an exfohazard and I don't need to justify shit".
@zackmdavis @alexandrosM @robbensinger The world system? Game theory? Capabilities? You don't need to engage with any of that shit. You just say "think about it for five minutes dude, if you have a superintelligent AI that can destroy everything that should be one guy", assume the conclusion, masterful rhetoric.
@zackmdavis @alexandrosM @robbensinger "If you have a superintelligent AI maximizing a simple objective everyone will die" is not a difficult premise, it wasn't a difficult premise when I.J. Good wrote about it in 1965. Everything is in the circumstances and technical details, which we're not supposed to think about.
@zackmdavis @alexandrosM @robbensinger I wonder how much damage has been done to our timeline by "alignment" being pushed by a dude whose primary talent seems to be writing high-cognition fanfiction? You can tell that most of these 'flunkies' as you put it are fandom pushing linguistic patterns not gears-level models.
@zackmdavis @alexandrosM @robbensinger "Alignment" as suggestively named lisp token. I know how that psychology feels from the inside because I've been that dude. They *should* be held in contempt by elite academics who are frankly better than them.
@zackmdavis @alexandrosM @robbensinger I'm not really aiming at EY with that statement tbh, but the fandom people. I have specific person(s) in mind but I'm not going to tag them because that'd be rude.
Re: EY, it was definitely helpful early on but I worry about whether we traded early gains for long term problems.
@zackmdavis @alexandrosM @robbensinger The fandom thing combined with the Manhattan Project LARP is a really toxic combo. These people need to read less about Oppenheimer and more about Groves. Manhattan occurred by accident under very specific cultural circumstances we can't replicate.
amazon.com/Racing-Bomb-Ge…
@zackmdavis @alexandrosM @robbensinger His first person account is shockingly good, kind of better than the biography which mostly provides context tbh:
amazon.com/Now-Can-Be-Tol…
@zackmdavis @alexandrosM @robbensinger But the tl;dr is that the Manhattan Project occurred in a time period of existential war and centralized mass communication, and was momentum-funded by accident out of loss aversion ("already started, can't stop now"),
@zackmdavis @alexandrosM @robbensinger using resources only available because of the US government's sterling reputation, headed by one of the most faithful public servants to ever live. This guy was allowed to buy up the world's uranium supply with briefcases full of black money,
@zackmdavis @alexandrosM @robbensinger trusted with the US treasury's silver reserves as building material, given complete personal authority over his own army, state department, and secret treasury.
@zackmdavis @alexandrosM @robbensinger DuPont did their contractor work for the Manhattan Project at cost, knowing failure would destroy the company, because the US government's reputation was platinum and the sense of duty was great.
@zackmdavis @alexandrosM @robbensinger Groves went to extensive paranoid measures to keep information from getting out. Tailing his high ranking officers and scientists with men in black, working two jobs so he continued to have a mundane cover, the guy invented 'national security' as we now know it.
@zackmdavis @alexandrosM @robbensinger And in the end the Soviets wound up with all the important secrets anyway. The truth is, there are very few true nuclear secrets beyond specific engineering details. Nuclear weapons are mostly just high-capital (human and otherwise) projects that make you a target for large states.
@zackmdavis @alexandrosM @robbensinger The only reason the Manhattan Project was able to keep cover during the war was that the Axis Powers weren't looking particularly hard for it and communications were centralized. If this happened in 2023 it'd be around the world in an hour:
finance.yahoo.com/news/time-manh…
First draft training strategy for CEV: https://t.co/m7XhLTYmFF
@robertbmcdougal Have you tried GPT-4 yet? It's surprisingly good for this.
Paperclipping is posterior collapse of the computable environment.
Daniel Dennett (basically) calling for the death of AI researchers in his latest op-ed feels like a hysteria high water mark. x.com/robinhanson/st… https://t.co/G1jvy7MKIi
@HenriLemoine13 "I'm not in favor of killing these guys but I think we should do whatever the closest thing is that's ethically permissible."
"See he didn't call for their death, just whatever the overton window will let him say!"
"Uh..."
@HenriLemoine13 If someone opens their 'poorly worded' statement with a fairly clearly worded deathwish and you decide to fuzzy match it to something more charitable on their behalf you're in obvious bad faith. Take the L and stop following me on twitter dot com.
@HenriLemoine13 You were and I removed you.
@JMannhart @HenriLemoine13 I think it's at the very least irresponsible to open any paragraph with "I don't want these people executed but...", however GPT-4 seems to agree with you so I guess I'll concede the point. https://t.co/TKJEGIWr03
@zackmdavis @HenriLemoine13 I'm honestly not sure how much it matters? I have to imagine that someone angry with me for "misrepresenting" his view (which I quote right beneath, you can evaluate for yourself) more or less agrees with it if they're more upset at 'misrepresentation' than what is stated.
@zackmdavis @HenriLemoine13 You know, if I had written the tweet "Calling for life in prison for releasing models a hysteria high water mark" and just ignored the weird bit about capital punishment at the start, presumably these people would not be coming in to reply that's a monstrous take from him.
@zackmdavis @HenriLemoine13 If you are so deep into AI Risk freakout that you think him calling for life in prison for development of AI as it exists now isn't at least like 90% as bad as what you think I misrepresented him as saying, so you're angry at the claim but not his take, well I mean
@zackmdavis @HenriLemoine13 My model of statements like this is something like there's a direction and magnitude in latent space you're pushing the reader towards, and Dennett bringing up capital punishment unprompted is a tacit admission the direction and magnitude of what he's saying implies it.
@zackmdavis @HenriLemoine13 If I wrote an op-ed about how the kulaks are awful people who are undermining the revolution and then said "Now I want to be clear that I don't support dipping the kulaks in vats of acid, but I would be reassured if" you would raise your eyebrows and go "What in the fuck?"
@zackmdavis @HenriLemoine13 Especially if the thing I would be hypothetically reassured by is "the kulaks should be executed in a humane and efficient manner".
If someone then quoted me and said "Holy shit this guy wants to dip the kulaks in acid!" and someone went "no no he just wants them killed" well
@zackmdavis @HenriLemoine13 It would then be *even more bizarre* if the conversation evolved into "It's very unprincipled of you to accuse Mr. Pressman of wanting to dip the kulaks in acid when he clearly said he didn't want that. He simply supports the humane liquidation of the kulaks."
@zackmdavis @HenriLemoine13 Love you buddy. <3
Note for later: Unfortunately there's no "retract tweet" button, only deletion. Once there was discourse underneath deleting it felt like it'd be worse than leaving it up. If I had to do it over I'd write something like "may as well have called for capital punishment" instead.
The phrase "calling for the death of" clearly has a very specific cultural context I underestimated when writing. It doesn't really mean "implying you thought seriously about killing someone and think other people should too" but "you didn't even use a euphemism". Fair enough.
I think the thing I was intending to point at remains a valid observation (and if you disagree I invite you to explain what purpose the comment on capital punishment is serving in that paragraph, make sure you read the original op-ed for context), but the phrasing was insensitive.
Locking replies because apparently the Twitter algorithm is so hungry for controversial content it'll send my irritated comments to the moon but leave real content untouched. Disappointing.
@NPCollapse We should, but I feel like this kind of dunking anti-incentivizes it overall. Not sure how we should treat failed predictions in a graceful way tbh.
@NPCollapse I also feel like our current prediction frameworks are too discrete/fragile to really hold up the discourse. I can imagine many interesting and worthwhile thinkers whose prediction record in a boolean-outcome forecasting tournament would be garbage.
@NPCollapse For example, the timing of predictions is crucial in this kind of forecasting and empirically very hard to get right. Huge survivorship bias going on with who looks like a prophet (Engelbart) and who looks like a dork (Ted Nelson).
gwern.net/timing
@NPCollapse I'd naively nominate Steve Jobs for true prophet status, in that he gambled big several times on understanding when a technology was ripe to be invented and got it right over and over. The guy clearly was well calibrated on timing.
@NPCollapse > Ted Nelson
Seems like a pristine example of the sort of guy who is very much worth reading and whose performance in a Brier-score-on-boolean-outcomes-to-precise-questions forecasting tournament would probably suck.
@NPCollapse One form of forecasting I would like to see uses latent representations of multiple paragraphs of text rather than boolean yes-no questions with extremely precise wording/conditions. Ask participants to imagine how the future looks in some area at X date in their mind's eye.
@NPCollapse Then have some neutral 3rd party write down how things actually look later (perhaps an AI language model with a standard template/prompt to remove concerns about the odds being stacked?) and score similarity of the latents.
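A minimal sketch of that scoring step, assuming an off-the-shelf sentence encoder; the sentence-transformers model name and file paths are illustrative stand-ins, not part of the proposal:

```python
# Sketch: score a forecast by embedding the forecaster's written vision and the
# neutral third party's later description, then comparing the two latents.
import numpy as np
from sentence_transformers import SentenceTransformer

encoder = SentenceTransformer("all-MiniLM-L6-v2")  # illustrative choice of encoder

def vision_score(forecast_text: str, outcome_text: str) -> float:
    """Cosine similarity between the predicted and actual descriptions."""
    a, b = encoder.encode([forecast_text, outcome_text])
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical usage:
# score = vision_score(open("my_2030_vision.txt").read(),
#                      open("neutral_2030_writeup.txt").read())
```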
@NPCollapse Suspect this would be more useful from a discourse standpoint too. Take for example the question "Will PRC forces have an armed confrontation with IAF before 202X?" and you get it wrong because a soldier crossing the border got shot.
@NPCollapse If I wrote a 2-3 paragraph description of a big Indo-China war, or even a description of frequent border skirmishes heating up, it would be obvious to observers that my overall vision is incorrect in a way the question resolving "YES" wouldn't tell observers.
@NPCollapse Visions of the future are also much more narratively compelling in a way that minutia and disputes over the resolution conditions for a question are not, so the public could get more into the stakes than for traditional boolean questions.
@NPCollapse I don't see this as *replacing* the boolean question setup, because having precise answers to precise questions is a valuable thing and we shouldn't throw that away. But there are a lot of elements of discourse that aren't well served by it.
@NPCollapse Or more to the point: We're not even tracking people's vibes right now. I would nominate Gary Marcus for the most astonishing transition of the last six months, from "deep learning is a dead end" to "deep learning is going to destroy human civilization".
Prediction that will seem wacky now but retrospectively obvious:
When we solve grounding and GPT-6 is to all appearances more honest, more virtuous, and makes better decisions than the people we currently elect to office, we'll have trouble convincing them it's not ready yet. x.com/jd_pressman/st…
This will be the case regardless of whether it is or is not actually ready yet (by which I mean we have alignment solved to a sufficient degree that we can trust merely human or somewhat-above-human systems with powerful roles in society).
For this reason I suspect we are overrating the superhuman takeover scenario, not because it's impossible, but because in the alternative the "trusting the machines too fast" scenario is probably the default.
@michael_nielsen @adamdangelo Western economists are in the business of refusing to believe in singletons and monopolies. It's natural to refuse to believe in something that would invalidate your life's work.
(This is true regardless of whether they are correct or not)
> we'll have trouble convincing them it's not ready yet.
Clarification: "Them" here means the larger citizenry. People will demand to know why they have to accept subpar public service. We had best start formulating our standards now so that we're not seduced in the moment.
@Plinz @alexandrosM @k18j6 This is true, but I really wish he was a bit more potent. Right now he seems to be the standard strawman to beat up on because he's easy points. The Gary Marcus of "it's fine".
@ESYudkowsky @pirpirikos @Simeon_Cps This is because your ontologies are still too weak. You haven't reached both theoretical rigor and economy of expression. e.g. the ELK writeup conflates
1. figuring out who the LLM thinks is speaking
with
2. supervised learning implying the human observer becomes your latent z.
Thus begins one of the great games that determines how this all plays out: Adversarial pressure applied to LLM-like models under real world conditions. A similar evolutionary arms race is how we got *our* general intelligence, so don't discount this one as a side show. x.com/wunderwuzzi23/…
@4confusedemoji @jessi_cata Cis people totally have anxiety disorders and stuff tbh.
@T0N1 @ESYudkowsky No. That's actually really not what I said at all. https://t.co/eV3vi2f7ag
@Ted_Underwood @ImaginingLaw While I agree in principle, I think there are some nuances to this discussion that will have to be ironed out. For example it's not going to be acceptable for PACs and corporations to try and influence your vote/opinion/etc at scale with fake people.
@Ted_Underwood @ImaginingLaw "Didn't you say people don't have a right to-"
Yes, I said they don't have a right to this *in the sense that it is not a fundamental human right*. I don't think you have a (fundamental) right not to receive spam either, but anti-spam laws should exist.
x.com/jd_pressman/st…
@Ted_Underwood @ImaginingLaw A sketch of a proposal I might endorse would be "Corporate entities and political committees using robots to canvass must disclose that the canvasser is nonhuman. Failure to adhere is met with large fines and whistleblower payouts. Aggravated offenses are criminal."
@Ted_Underwood @ImaginingLaw I'm not even sure you need new laws for the commercial case, the FTC believes in an advertising context this is already illegal (and I would imagine it is):
ftc.gov/business-guida…
@tszzl It's a matter of the thing it is. We should stop using metaphors and start trying to get at the cruxes of our conjectures. https://t.co/pu7PEL7uWj
@vlad3ciobanu @bitcloud Epistemic Status: 80% serious, 20% rhetoric/humor https://t.co/1CWCEwftj5
Honestly think we're pretty close to solving alignment. Biggest remaining problems are verifying what we can reasonably conjecture, and subtleties like reliably binding to the unsupervised values model during RL.
(Epistemic Status of attached: 80% serious, 20% rhetoric/humor) https://t.co/2vhPgNW35l
@TheZvi I wrote it. It's a parody of:
greaterwrong.com/posts/kNcieQnK…
@DeltaTeePrime @deepfates ELK arises because a loss function that mimics human text will eventually learn "mimic human evaluation" as a generalization strategy rather than do the thing we want which is "write your own observations down in English". Text-only models are supervised learning in the limit.
@DeltaTeePrime @deepfates I'm not sure there's any one solution to "ELK" because ELK isn't really one problem but a variety of different problems including:
1. What if human senses aren't enough to detect a difference between two things?
2. What if the model learns to mimic a wrong human label for a thing?
@DeltaTeePrime @deepfates For example, many ELK problems I can imagine are solved by multimodality. Humans frequently have the misconception that a thing is an X, but in the model's video latent space it is correctly encoded as a Y, and we can find this out by clustering the latents.
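A minimal sketch of what "clustering the latents" could look like, assuming you already have multimodal embeddings and human labels for the same items; the embeddings, cluster count, and majority-vote heuristic are stand-ins, not a worked-out method:

```python
# Sketch: flag items whose human label disagrees with the majority label of
# their latent-space cluster; those are candidates for "the human label is the
# misconception". `embeddings` is an (n, d) array of multimodal latents and
# `human_labels` a parallel list of strings; both are assumed to already exist.
from collections import Counter
import numpy as np
from sklearn.cluster import KMeans

def suspicious_labels(embeddings: np.ndarray, human_labels: list, k: int = 50):
    clusters = KMeans(n_clusters=k, n_init=10).fit_predict(embeddings)
    flagged = []
    for c in range(k):
        idx = np.where(clusters == c)[0]
        if len(idx) == 0:
            continue
        majority, _ = Counter(human_labels[i] for i in idx).most_common(1)[0]
        flagged.extend(int(i) for i in idx if human_labels[i] != majority)
    return flagged
```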
@DeltaTeePrime @deepfates The famous "diamond in the vault" problem is not solved by multimodality because it relies on things no human sense could detect as problematic. (e.g. Advanced physics wizardry has caused the diamond to look like it is in the vault when it isn't)
@DeltaTeePrime @deepfates One simple(ish) solution is to change your architecture to an encoder-decoder model. But the usual objection to that is "well how do you know the encoder uses the same ontology as the decoder?", and we can kind of conjecture it would because they're trained together but.
@DeltaTeePrime @deepfates I didn't say it only happens in LLMs, I said that it happens any time you do supervised learning and LLMs are always a form of supervised learning. EY describes it in his Arbital posts before it was called ELK:
arbital.greaterwrong.com/p/pointing_fin… https://t.co/r1kh2xA6OR
@DeltaTeePrime @deepfates The thing about that EY excerpt on the left is that *this is called overfitting*. We already have a term for when your model infers too much from an outlier datapoint: it's called overfitting. So what you want there is some kind of causal regularization for the causal overfitting.
@DeltaTeePrime @deepfates And the thing about the excerpt from me on the right is that most forms of causal z noise don't matter that much. Like it doesn't matter if your model learns the webcam, the webcam is a general photoreceptor which a reconstruction loss will empirically converge to truth on.
@DeltaTeePrime @deepfates The reason adding humans to z is so problematic is they are a very complex transformation on the underlying data, and that transformation does not provide an empirical sensory basis for truth. So one *obvious mitigation* for ELK is to try and remove humans from the causality.
@DeltaTeePrime @deepfates In other words: An angle of attack for most of the ELK-shaped problems that matter is to let the model correct its LLM component using grounding from other sensory modalities learned without human assistance. If it needs to e.g. chain of thought with the LM it will be fixed.
@DeltaTeePrime @deepfates To my memory the usual setup in the ELK problem is we have some fully functioning model that works without language, and then we try to bolt language onto it. Obviously that doesn't work because the model has no incentive for the LM outputs to line up with its internal cognition.
@DeltaTeePrime @deepfates And if you say "ah but what if it's not possible to discern the diamond isn't there with any human sense you could ground on?" I would ask, *how does the model know this then?*
Like the entire point of *latent knowledge* is we assume the model has the knowledge somewhere!
@DeltaTeePrime @deepfates You know if I'm a superintelligent guard at the museum and I look at the Mona Lisa, which seems to be right there, and then I say "I know it is possible to construct a bizzbizz and beam the painting out of the museum so I can conclude it's gone." I have violated Occam's Razor.
@DeltaTeePrime @deepfates There has to be *some sensory evidence* available to the model that the diamond is in fact really gone, and it should be possible to ground the language model after pretraining using this knowledge in an unsupervised way if the language model has to be used to solve problems.
@DeltaTeePrime @deepfates Anyway all this aside the purpose of that list is not to give my complete take on each problem, it is to transmit a gestalt vibe/perspective which is useful that can be summarized as "maximize unsupervised learning". It's mimicking the tone/style of:
greaterwrong.com/posts/kNcieQnK…
@DeltaTeePrime @deepfates The vice of this style is that it speaks plainly 'absurd' things, the virtue is that by taking an extremely opinionated perspective you can predict the future.
> Why have separate modalities? That's really dumb. Info is info, just give it all at once.
x.com/izzyz/status/1…
Twitter Archive by John David Pressman is marked with CC0 1.0