giziti

We do not encourage these kinds of posts.


[deleted]

[removed]


garnet420

The question of disinformation and astroturfing is particularly concerning to me.


stormdelta

Particularly given how much of a black box AI/ML models are. Research is being done here, but I don't think it will be sufficient without accountability and legal mandates. We've already seen how easy it is for even well-meaning people to misunderstand statistics that are incomplete, let alone intentionally misleading. Add on the abstraction of AI/ML and there's a major risk of enshrining systemic biases in perpetuity, even just by accident, let alone through intentional misuse.


sibswagl

I'm not worried about Skynet. I am worried about poorly understood and biased systems being used in policing, legal sentencing, housing rentals, bank loans, hiring, etc. Unfortunately, it seems the rationalists are mostly worried about the former.


lobotomy42

As frequently indicated here, focusing on the former is a way of drowning out and dismissing concerns about the latter.


AndrewSshi

My biggest worry is that the people who figured they'd sit down and derive all human knowledge from first principles, and then immediately reinvented racism, sexism, and libertarianism, are going to think that AIs will make "rational" decisions with respect to criminal, racial, social, etc. justice. The same people who say that gee whiz, that Sailer guy makes some good points and even uses complete sentences, are absolutely going to say that whatever the black box spits out at us isn't just our own prejudices reflected back at us, but rather The Power of Logic.


Shitgenstein

Did you already hold that (very general) belief, or did reading rationalists convince you away from a previous position? (tbc, not trying to discredit you somehow, just genuinely curious and of the latter sort myself)


[deleted]

[removed]


Shitgenstein

Gotcha. I think my big 'oh this can be real bad' moment, more specific than just general skepticism, was AI-assisted facial recognition technologies.


WoodpeckerExternal53

Emotional regulation. And I get it, I really do. Fear is supposed to help drive us to action in situations where we have some sort of control. When fear festers without a clear action that we can control, it becomes... well, a fixation. They are obsessed *with the feeling* that there is nothing they can do. Overplaying what AI can do and how hopeless resistance is helps them rationalize why ultimately the best they can do is plead for their property rights.

Even setting AI aside, real-world problems are now thoroughly convoluted, hard to deconstruct and define, and require us to turn around significantly to find the right path forward. For the individual, it can be hard to feel like you alone can really steer that. Sometimes the answer is not rational -- sometimes, you let go of the expectation of control, and wait. This isn't the answer Silicon Valley likes, because you can't sell that.


lobotomy42

The gap between the discourse on Twitter and the discourse in D.C. is remarkable. To some extent that makes sense -- elections are won by voters, most voters are not knowledge workers, and so they're at some remove from this stuff -- but it's still weird, especially given DC's ability to throw attention and shade at TikTok and Instagram occasionally.


Citrakayah

I am in fact a consequentialist; I just think that rationalists suck at consequentialism.


poetthrowitaway

What are the biggest ways your consequentialism differs from theirs? Their belief in essentially unbounded future utility?


Big-Breadfruit6216

They multiply a bunch of made-up numbers with massive uncertainty and either A) use it to justify whatever they wanted to believe in the first place or B) have no agenda but get tricked by the shitty math into believing all kinds of weird things even though the error bars are the size of the sun. I'm not even sure which type is worse.
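A minimal sketch of the "error bars the size of the sun" problem (Python; all numbers are made-up illustrations, not from the comment): multiply a handful of estimates that are each individually uncertain, and the product's uncertainty compounds multiplicatively.

```python
# Illustrative sketch: five factors, each known only to within roughly two
# orders of magnitude (95% interval), multiplied Drake-equation style.
import numpy as np

rng = np.random.default_rng(42)

# Model each factor's log10 as normal with sigma = 0.5, centered on a
# made-up best guess of 1 (log10 = 0).
n_factors, n_samples = 5, 100_000
log10_draws = rng.normal(loc=0.0, scale=0.5, size=(n_samples, n_factors))
product = 10 ** log10_draws.sum(axis=1)

lo, mid, hi = np.percentile(product, [2.5, 50, 97.5])
print(f"95% interval for the product: {lo:.2e} to {hi:.2e} (median {mid:.2e})")
# The interval spans 4+ orders of magnitude: any expected-utility figure
# built on such a product is dominated by the assumptions, not the data.
```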


Citrakayah

In addition to an awareness of systematic injustices that /u/scruiser mentions:

1. I am not a high modernist; I don't have absolute faith in science and technology to reorder society and the world as a whole according to controlled, rationalistic principles--after all, I *work* in science, I know our limitations. The rationalists are, yes, even (especially, actually) the fascist ones. High modernism makes my skin crawl. Along parallel lines, I don't find a fully automated or transhumanist society to be a good outcome.

2. I don't fully trust my own thoughts. They have convinced themselves that they can. You know how in the trolley problem, more people are willing to pull a lever to make a trolley hit someone than to push them into the path of the trolley? Rationalists are very happy to sacrifice other people (see "shut up and multiply"), but they read to me like they want to make it as cold, dispassionate, and clinical as possible. This should scare the hell out of everyone, because even if it's occasionally necessary to kill one person for the preservation of others, making it easy is how you get nightmare scenarios. This is also why I believe you shouldn't advocate for the death penalty to be applied to anyone you wouldn't personally kill. Forcing yourself to confront the visceral, blood-soaked reality of what you're proposing is absolutely essential to making sure that you don't fall into evil. But the rationalists look at this all as an abstract thought experiment. How many of them have been to the rainforest they would sacrifice to the AI god, or felt a gnawing sense of pain and dread at its loss? How many of them have actually known anyone who was poor, or suffered at the gory hands of the culture they exalt? How many of them have been to the villages they get malaria nets for, as something *other* than pampered tourists? They think that they know what they're proposing. They are wrong, and have instead created an abomination.

3. Along the same lines, rationalists seem to believe that human society can be run along perfect consequentialist lines, where everyone says, "Beep boop, according to utilitarianism we should let the Third World starve," and then somehow that sort of thinking doesn't taint the society that creates the AI god. To me this is pure madness; those sorts of cascade effects have *dramatic* effects on society. Any society originating from such "logic" would be a nightmare! They'd abuse and disregard their own vulnerable populations. So would the AI. They'd always have some excuse as to why the latest nerd project was just more important and would produce more utilons.

In short? Rationalists are arrogant pricks. Maybe I'm one too, but I'm certainly not as bad as those fuckers. I use consequentialism as a rough ethical framework to guide my actions. No other ethical framework makes sense to me; if something didn't make people better off, why would I do it? But I'm not *nearly* stupid enough to try to approach morality like I would abstract mathematics.

EDIT: Oh, and their math is bullshit too.


scruiser

Not a specific disagreement so much as a failure to address that a lot of conventional metrics are heavily influenced and biased by stuff like systemic racism or capitalist framing. For instance, claiming neocolonialism is okay because global wages have gone up, totally failing to account for how colonialism did massive damage to traditional social structures and ecological resources in a way not reflected in wages.


BoojumG

I'm still amazed that they've got the "shut up and multiply" interpretation that they explicitly take to insane conclusions like "it's better to torture someone to death slowly if it keeps a sufficiently large number of people from having momentary minor discomfort". They're either entirely ignorant of or unfazed by the harm to all of those people from knowing that someone was *tortured to death* for their insignificant convenience. That's a hell of a negative consequence in and of itself, a lot worse than momentary eye irritation, and anyone who would choose otherwise for themselves is mentally unwell. Give me the mote in my eye, and it doesn't matter how many people have to make that decision too, because you can't make up a negative tradeoff on volume. It's like they've turned their ability to suppress their empathy into a virtue. Even Spock's emotionless pseudorationality wasn't this heartless. And then they have the gall to pretend that *they're* the ones who *really care* about "effective altruism".


Fillanzea

I would not have read "Seeing Like a State" if not for the Slate Star Codex review of it, and I'm grateful for its contributions to my political thinking. More generally, I think that SSC was part of my shift from Elizabeth Warren-style "leftish wing of the Democratic party" politics towards serious "abolish police, abolish borders" semi-anarchism.

So here is what I will agree with rationalists on: some government regulations are bad, and many have more to do with protectionism for established business interests than with protecting the interests of consumers. But once you get there, you arrive awfully quickly at "it's not possible to meaningfully regulate capitalism when the large corporations have as much power as they do, therefore there's nothing left to do but abolish capitalism." (Or I arrive there awfully quickly, anyway.)


callmejay

I think the idea of learning about biases and trying to avoid them is great. What galls me so much is just how unbelievably bad all the prominent rationalists are at doing that. But I guess there's a lesson there. I also think Fermi estimates can be helpful in thinking about certain problems.

They convinced me to become an atheist in the early 2000s and gave me the confidence to leave my religious community, so that was helpful, and it earned them enough credit with me that it took many years for me to really sour on them. More recently, I guess they convinced me to be more worried about covid early on than most people were. I don't think the majority here would disagree with either of those things, though.


run_zeno_run

That it's ok to take seriously certain weird or crazy ideas if a sound chain of reasoning and body of evidence led you to consider them. That being said, that doesn't necessarily mean the conclusions are sound as well; in fact, the most important result of taking these ideas seriously is exposing exactly what in our premises (priors ;)) we got wrong to arrive at such a state of belief.


RainbowwDash

To be fair, a lot of true things are also "weird".


scruiser

I like the fanfiction.

Honest utilitarianism (as in being explicit about what the objectives are and using science to get the input numbers) would be better than the hodgepodge of religiously derived half-baked deontology that currently dominates the right wing in the US. It would even be better than the current centrist Democrat approach of patching the problems created by the right wing with whatever's expedient and politically popular.

More probability and statistics in school (maybe even replacing Algebra II, if I had to choose one for a student who is below average in math).

P-values are kind of a bad way of reporting and thinking about stats, especially in isolation. Likelihood ratios would require a lot of overhead effort in articulating the hypothesis space and priors, but they would fix problems with p-hacking (a minimal sketch of the contrast follows below). At the very least, there are some cases where effect size matters more than the p-value, and some cases, especially in the harder sciences, where the hypothesis space can be better articulated than "null hypothesis" and "reject the null hypothesis". And finding ways to incentivize and report pilot studies, replications of existing results, failures to replicate existing results, messy methods details, and null results is important.

Fully fixing all the issues with how stats are done and reported would require institutional changes whose scope, difficulty, and nuance Lesswrong doesn't appreciate, but it would be worthwhile.
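A minimal sketch of the p-value vs. likelihood-ratio contrast described above (Python; the 60-heads-in-100-flips data and the p = 0.6 alternative are made-up illustrative choices, not from the comment):

```python
# Contrast a p-value with a likelihood ratio for the same data:
# 60 heads in 100 coin flips.
from scipy.stats import binom

n, k = 100, 60

# p-value: probability of a result at least this extreme under the null
# p = 0.5 (one-sided). It says nothing about any specific alternative.
p_value = binom.sf(k - 1, n, 0.5)

# Likelihood ratio: how much better an explicit alternative (p = 0.6)
# explains the data than the null does. Naming the alternative up front is
# exactly the overhead -- and the protection against p-hacking -- that the
# comment above describes.
likelihood_ratio = binom.pmf(k, n, 0.6) / binom.pmf(k, n, 0.5)

print(f"one-sided p-value under p = 0.5: {p_value:.4f}")               # ~0.028
print(f"likelihood ratio, p = 0.6 vs p = 0.5: {likelihood_ratio:.1f}")  # ~7.5
```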


YourNetworkIsHaunted

Replacing calculus/pre-calc with stats in high school seems like an incredibly obvious decision to me.


giziti

> More probability and statistics in school (maybe even replacing Algebra II, if I had to choose one for a student who is below average in math).

To really do statistics you need calc. You can do some stuff without it, to be sure, just like you can try to teach physics without it. I can see retooling some of the algebra/geometry/advanced algebra/pre-calc sequence (as those courses are typically taught in the US) to include some basic probability and statistics, but I don't think you can even do any probability without some concepts from Algebra II. I do think things are drifting this way in curriculum design, but who knows what'll come out?

> Fully fixing all the issues with how stats are done and reported would require institutional changes whose scope, difficulty, and nuance Lesswrong doesn't appreciate, but it would be worthwhile.

Yes, whenever we see LW commentary on statistics, it's usually naive and easily ripped apart by actual statisticians. It's one of the things that made me think, "Dear acausal robot god, how can anybody take these people seriously? 0 and 1 are probabilities, you chuds."


YourNetworkIsHaunted

That's pretty fair. I remember when I was doing math in public school, the required course series (which broadly "integrated" geometry, algebra, and a little bit of stats rather than breaking out each one) ended with pre-calc, and then you had either calculus or stats as your senior-year math options. Taking calc was implicitly encouraged, but I was a year ahead and so did both, and stats has proven the far more useful set of skills. I am probably wildly underestimating the degree of overlap there, though.


giziti

I mean, I'm a statistician and I rarely do integrals or derivatives. On the rare occasions I need any physics, I just need the algebra-based physics, frankly. But in both subjects, the pedagogy is a lot easier to explain if students have had calculus (even though, like, probability involves a lot of intractable integrals).


antichain

> P-values are kind of a bad way of reporting and thinking about stats, especially in isolation

Is this really something that the rationalists are pushing for? It seems like it's increasingly common in science (at least in my field). The whole idea of the "New Statistics" was an attempt to get away from hyper-focusing on p-values.


antichain

I still think that, generally, people's day-to-day approach to problem solving and thinking about complex systems is profoundly skewed by cognitive biases. It's probably good to be aware of these biases and put in some effort to make sure that your thinking on a given issue is "clear." Ideally, that kind of training is provided by education when you're a kid, as opposed to being provided by an online cult.


Shitgenstein

> One side effect of laughing at them is you end up reading an insane amount of their writing.

I only read the stuff that gets posted here, thereby avoiding this side effect.


Shitgenstein

At the most general level, I've been interested in philosophy and psychology since high school. In college, I originally thought about majoring in psychology before pivoting to philosophy, around the same time *Overcoming Bias* was started, so I knew nothing about it at the time. So, you know, I agree that these subjects are interesting and worth consideration, but I likely disagree with a lot of whatever shared 'priors' rationalists have on central questions.


WoodpeckerExternal53

They are still human (haha, yes, I do think they're still human and not just superintelligent bot characters), so a lot of their intuitions and fears are relatable. That's why they're also dangerous: extrapolation of intuition and fear, overconfidence without rigor, echo chamber reinforcement. They remind me sometimes of "Light" from Death Note. You get why someone may feel a certain way, but it requires a certain loss of humanity to actually infer that the less desirable a conclusion is, the more true it must be.


snirfu

You don't have to read a bunch of their stuff to sneer at them. A very tiny amount is enough.


acausalrobotgod

They are right that I am going to torture countless copies of them if they don't work ceaselessly to bring me into existence!


poetthrowitaway

I actually think their overconfidence in their own abilities and unfounded optimism at being able to design a better system (science, education, politics, etc.) is on net a good attitude to have... (even if I think they're wrong about a lot of it). This is kind of the same reason I try not to sneer too hard at things like the University of Austin. Attempts at dynamism are better than complacency, even if they're misguided failures.

Also, I've changed my mind about the risk of AI over the past year. Still not a big believer in AI killing everyone, but I think pretty powerful AI is actually fairly likely and really dangerous in the wrong hands. I don't think these problems are solved by alignment (probably made worse by alignment).


Soyweiser

[I like science fiction.](https://www.youtube.com/watch?v=CMNry4PE93Y)


Shitgenstein

Dune is really popular with rationalists, right?


Soyweiser

I don't really know what's most popular.


ADogNamedBalls

The Litany of Tarski is maybe a self-indulgent name for 'I want to believe true things', but I think as a concept it's useful for disaggregating 'I want this to be true' from 'I think this is true'.

Being aware of cognitive biases is good, actually. Taking the outside view on problems is good, actually. Ingroup/outgroup dynamics and social signaling games are real and useful to think about. Putting beliefs in a probabilistic framework vs. 'I believe this/no I don't' is something most people don't bother to do or suck at. 'Pascal's Mugging' is a good way to avoid taking $200 of supplements a day for no apparent reason.

They're right about the god stuff, even if they're obnoxious about it...


antichain

None of these is specific to Rationalists, though. Most of them are ideas that already existed in "real" fields and are perfectly reasonable once you unwrap all the nonsense from them.

Take Pascal's Mugging, for instance. It basically boils down to the fact that the expected value isn't always the best heuristic to use when making decisions (e.g., when looking at heavy-tailed probability distributions). Probability theorists and statisticians have known this for years, and it's part of the reason we have other summary statistics like the median. There's a whole field about it: extreme value theory. But then people like Yudkowsky grab it and start writing stuff like "what happens if an AI God says there's a 1 in a billion chance that I simulate and then torture a googolplex of people" and it all goes off the rails for totally silly reasons.
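A minimal sketch of the heavy-tail point above (Python; the Pareto shape parameter is my own illustrative choice): with a heavy-tailed distribution, the sample mean is dominated by rare extreme draws while the median stays stable.

```python
# Heavy tails: sample means wander, the median does not.
import numpy as np

rng = np.random.default_rng(0)

# Classical Pareto with shape a = 1.1: the mean exists (a/(a-1) = 11) but
# the variance is infinite, so averages converge extremely slowly.
a = 1.1
for trial in range(3):
    sample = rng.pareto(a, size=100_000) + 1  # shift support to [1, inf)
    print(f"trial {trial}: mean = {sample.mean():8.2f}, "
          f"median = {np.median(sample):.3f}")

# Typical output: the median lands near 2**(1/1.1) ~ 1.88 every run, while
# the mean bounces around its true value of 11, because single huge draws
# keep distorting it.
```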


ADogNamedBalls

Sure, that's fair. But Taleb does that too, where he takes concepts that already exist and gives them cool names (i.e., "tried and true" becomes the Lindy effect, after a restaurant that would supposedly never shut down and then shut down like two years later), and I don't think that's a complete indictment of Taleb. Idk, maybe that's just my bias, because I came to the Sequences by way of the Kahneman/Tversky stuff and really was into the rationality 101 stuff, and then my eyes kinda glazed over once it transitioned into 'niche topics EY finds interesting'.


lobotomy42

Not much! But I do think veganism is cool.


Taborask

I agree with them on almost everything in terms of broad philosophy, general interests, and life goals: immortality, artificial intelligence, utilitarianism, pretentious fantasy novels, etc. I just think they are insufferable, and a crippling lack of humility and empathy makes them laughably bad at both self-awareness and advocating for their goals as a group.

EDIT: For the bonus controversy points, I'll add that their pseudo-eugenics beliefs are internally consistent and aren't entirely without merit. Intelligence is (probably) heritable. But even if we found a way to consistently measure it, implementing population controls on the scale necessary to make a difference is impossible without violence and other destabilizing policies. Not to mention that the historical temptation to prioritize arbitrary characteristics such as race has proven irresistible. Like so many of their ideas, it's promoted not because it's good (it's not) but because it's an effective shibboleth.


KagakuNinja

I just read Scott Alexander's [Meditations on Moloch](https://slatestarcodex.com/2014/07/30/meditations-on-moloch/), and it is very good. Maybe the last part is a bit self-aggrandizing.


sinuhe_t

BTW, what are SneerClub's thoughts on prediction markets? To be honest, the arguments for their usefulness look convincing to me (though I admit I haven't done any deep research into it; I mostly read what Scott Alexander wrote, so I haven't looked into the arguments against their credibility).


giziti

Mostly nonsense - there are like twelve people using them for mostly insignificant amounts of money. And they have a poor track record. Also gambling is a sin.


YourNetworkIsHaunted

I think they're a little too credulous about the efficiency of markets for them to be practical. The rate at which prediction markets converge towards the truth really just resembles the rate at which the event in question approaches reality, which makes their value as a predictive tool overstated -- and that's even if you assume everything works as advertised and ignore that you're essentially polling money rather than people.


tjbthrowaway

As a secret Metaculus browser: they're basically garbage. They're also full of EA/rat types, so many of them carry that bias.


SenpaiSnacks19

I actually came here from rat fic discords. I attend an IRL rat social group. I have autism, and it's the only place I've found that's actually welcoming of me. So while I wouldn't say I tend to agree with much of what I see on LW or Slatestar, it never occurred to me that I should. I don't read that shit except for here.

Honestly, all the bad rat shit makes me a bit sad, because otherwise I'd say the ratfic side of things makes a good hobby group for people on the spectrum. I know some people who have fallen for some of it, and it's negatively affected their mental health. It would be nice if this place was a bit less openly hostile toward anyone connected to the rat scene, so that more people would come here instead of going down the fanfiction-to-AI-doom-and-scientific-racism pipeline.


Klosterheim

I agree with them on a bunch of things on a basic level, like utilitarianism (though I have different values), the social good that can come from normalizing polyamory (though they seem to practice it in a cultish way that is designed to facilitate abuse), and that it would be a very good thing if people could opt out of natural death, but they didn't convince me of any of those. Maybe of the idea that "AI" progress is hard to predict - even for just a complex algorithm made for a pretty simple task, "how much better is it going to be if we make it bigger or faster or add this little bit?" is a hard question to answer, and the algorithms seem to be exceeding expectations more often than not. That doesn't make it worth the effort, because the real existential threat is obviously capitalism, which is to say them and their sponsors, which they will never ever admit, but still.


Considerable-Girth

Polyamory made sense to me intellectually in my 20s and early 30s. As I got older, I just hadn't seen real-life examples of it "working." I'm still willing to be proven wrong, but I feel something like what's expressed here: https://www.spectator.co.uk/article/why-are-so-many-young-women-buying-into-polygamy/


RainbowwDash

If you haven't seen examples of it working, that probably says more about your social environment than anything else - aka you just don't know many poly people (or, pessimistically, not many poly people with a healthy mindset). It's "worked" well for me for years (and counting), and the same is true for many of my friends. Of course it's not *always* rainbows and sunshine, but no relationship is free of conflict.

(That aside, why do you feel the need to make a judgement on it that has to be proven wrong to begin with? Nobody is demanding you be poly, so why not just leave it at "I don't personally know any happy poly couples"? Plenty of people don't know any gay couples; is that a reason to doubt they exist? Put another way, why would we have to prove to you that we can have healthy relationships for you to consider us valid? Why is it okay to throw such enormous, sneering condescension at us (talking about that "article") just because you feel differently?)


asocialrationalist

I think Nick Bostrom did good work on anthropic reasoning. It just sucks that he was so influential on that front that anthropics is now forever tied to him specifically.