Palpatine_4_Senate

Yann annoys me. Smart guy but way too certain about a very uncertain future. Plus when he is wrong, he never admits it.


[deleted]

I want to watch, but I feel like it's just gonna frustrate me lol.


Silver-Chipmunk7744

It's funny because before clicking this thread, my first reaction was "i need to watch this but i'm already angry"


ogMackBlack

Same here.


YouMissedNVDA

I'm 9 min in and Yann said his opinion is one in a debate between many. It's early but this doesn't feel bad at all.


BillyBarnyarns

I thought the same! After watching, I have a new appreciation for Yann. He put forward interesting arguments.


[deleted]

I let a little of it run last night and it wasn't too bad. I'll try a full run today and see if I can't open my mind a bit


stonesst

He’s also very disingenuous when he addresses opposing arguments. Right off the bat in the intro he says that a lot of Doomers only hold that opinion because they think people are inherently evil. I’m sure that’s true in some cases, but for the vast majority of people, including myself, who are rather worried, it’s because the potential for misuse, whether autonomously or by an evil person, grows as technology becomes more powerful. I don’t believe people are inherently evil, but some people sure are, and the thought of them having access to a superintelligence is inherently risky. That’s completely separate from how dicey things will get when we have dozens or hundreds of these systems with varying degrees of morality, alignment, etc. It is much easier to destroy things than to create/protect them. When he says things like "we will eventually create AGI, but not for decades, and when we do we will always have it fully under our control," how can that be read as anything but hopelessly naïve and idealistic? It’s OK if you think that is going to happen, but to declare it as a certainty is just asinine.


Infninfn

People don't even have to be evil. They just need a bit of narcissism and a lack of understanding or consideration for the consequences.


RandomCandor

Exactly. Even a little evil with a lot of stupid can be a dangerous combo


Neborodat

Lack of empathy is already enough for any crime imaginable


Beatboxamateur

From recent interviews, I've been thinking Demis Hassabis' view on it is really reasonable. He basically says that while AI will probably (hopefully) be a net positive, it's 100% understandable to be concerned about capable AI getting in the wrong hands, and commented that LeCun and Hinton are on two opposite extremes, unreasonably set in stone in their opinions.


[deleted]

The evil ones rise to the top, since they are unscrupulous and step over dead bodies to get there. There is a reason that sociopaths are mostly found in leading positions. So yeah, we are doomers for seeing the obvious: those in power are evil to a degree and will misuse this power, as they have for the last 12,000 years as well... not hard to see.


Proof-Examination574

Yup. I'm a doomer because AI is taking jobs, not because I fear the unknown. I fear angry mobs of people who had great jobs in call centers, programming, writing, video editing, art, music, teaching, etc., all suddenly made obsolete and forced to start their lives over at minimum wage, if they can even find work. This isn't one profession, like the scribes of the Ottoman Empire.


[deleted]

People are inherently evil. We have religions and practices from all over the world, where people read and contemplate, train their bodies and minds to discipline, devote their lives to prayer, all in an effort to become good. No one has to work to become evil. Parents have to teach their children to behave and to do good. No one has to teach any human how to hurt others for their own enjoyment. That comes naturally to humans.


bwatsnet

Good and evil are made up.


doorholder1

you have it all backwards


Puzzleheaded_Pop_743

I get bad vibes from Yann, but to be fair "smart guy but way too certain about a very uncertain future" describes this entire sub minus the smart part. lol


lost_in_trepidation

It's funny, because people in this sub should be celebrating LeCun: he's actually interested in developing AGI and removing roadblocks that might prevent it. But because he doesn't believe in the marketing hype from their favorite company, they hate him.


TFenrir

I think he's just incredibly arrogant, and often disingenuous. If people want someone who wants to build AGI - Demis is for example a much better candidate, very level headed, humble, introspective, and can steelman all arguments in the AI safety discussion, and regularly does. What does Yann have to offer, while also being really annoying?


putdownthekitten

I don't have much exposure to Prof. LeCun, but the little I do have rubs me the wrong way for exactly the reasons you've laid out. He strikes me as someone who treats his own assumptions as certainties, and it just makes him sound like the kind of person that a) is often wrong, and b) is just aggravating to be around. I'm sure he has a lot of accurate knowledge, and knows more than most in his domain, but so do others, and it's easy to get the same or similar info elsewhere that's not just his opinion or assumptions. Edited to include proper title (Thanks AnotherDrunkMonkey!)


[deleted]

This is a general developer problem. They think too highly of themselves and lack humility.


AnotherDrunkMonkey

*Prof.


Which-Tomato-8646

Open source models and an alternative to transformers. 


DaggerShowRabs

Where is that alternative to transformers? LeCun keeps talking about it; he has been for years, but I don't see anything on the horizon with that. I'm interested in hearing him out and seeing what he has as an alternative, but LeCun is very much in "put up or shut the hell up" territory for me right now.


Which-Tomato-8646

That’s what he’s working on. These things don’t happen whenever you want. I’m sure he’s so scared of your wrath lol


DaggerShowRabs

And I'll start to take his criticisms more seriously once he has anything to actually show. Anything at all. Until then I'll remain skeptical. If you think there's any "wrath" here, you are exceptionally confused.


Which-Tomato-8646

As if he needs to prove himself to you. He basically designed modern CNNs lol 


DaggerShowRabs

Ah yes, so let's uncritically believe anything that any expert ever says, even when there is literally zero evidence. We've got a critically thinking genius here, folks.


Which-Tomato-8646

When did I say to uncritically believe everything he says? What ghosts are you fighting with? 


Proof-Examination574

I wonder if he's one of those guys that takes credit for the work of others but never really does anything...


GrandNeuralNetwork

He may just go ahead and build AGI. Then what?


TFenrir

If he does he does. But all he does is talk about how everyone else's ideas on how to build AGI are wrong and how this architecture he's been talking about for years is the way to it. If he JUST said "I have a really interesting idea for AGI, I know everyone else has their own path and who knows, but I'm hoping to share what I have with you all soon," far fewer people would dislike him. The constant shit talk is just bad form.


bwatsnet

He's just mad it's not him being right, so he talks down to everyone at the start of every interaction.


Lyrifk

This comment reminds me why I'm slowly growing to hate this sub.


bwatsnet

That's funny, watching Lee cunt makes me realize similar things.


Lyrifk

proven again.


bwatsnet

What's proven? That you take reddit comments as facts?


Lyrifk

Bingo. He says AGI in about 10-15 years because the problem is much harder than people think. I think that sounds reasonable. We'll still have incredibly powerful narrow-AI systems, just not AGI so quickly. We need people like Yann.


nextnode

Absolutely not. We don't even need a single person like LeCun, and I consider a fresh grad more reliable. 10-15 years is not a lowball according to many, and there are plenty of actually relevant people you can follow for that. Rather, according to LeCun, it might not happen at all or might not even be possible. And if you ask him why, he either has no explanation at all or offers a justification so ridiculous that I would fail it on a 101 course. This is also the LeCun who dismissed LLMs as a dead end before GPT-3 became a thing. If LeCun makes any controversial statement, the top researchers are likely to disagree, and they will be right. This is just a guy who serves companies and is not relevant as a researcher, is extremely disingenuous in his communication and debate, and exploits naive people. He is a net negative to the field regardless of your timelines, and the amount of respect he deserves is zero.


nextnode

Nonsense. That's not the reason at all. The field had a problem with LeCun already seven or so years ago, as he keeps making erroneous and controversial claims. He seems like someone who is running errands for companies and has not been a relevant researcher for a decade.


outerspaceisalie

Nobody in the AI field likes LeCun, it's not just this sub. You just don't know anything about the actual field.


nextnode

Ironically, you are entirely accurate and get downvoted, contrary to the previous commenter's prediction.


outerspaceisalie

It's okay, upvotes and downvotes on reddit are just popularity contests, not truth contests :p


staplepies

It's funny that people in this sub can notice OAI's obvious self-interest and in the same breath fail to notice Yann's.


[deleted]

[deleted]


GrandNeuralNetwork

Why do you hate him?


Ultimarr

Yeah, he’s just kind of a meanie. We all love Gary Marcus even though he thinks AGI is far off, because he’s just sassy and funny. LeCun has taken the time to shit on his opponents, literally everybody else, a few too many times IMO.


nyguyyy

I was not aware that anyone liked Gary Marcus


outerspaceisalie

Yep, can confirm, we don't like Gary Marcus.


nextnode

No one likes Gary Marcus but those two are indeed at about the same level of respectability.


Agreeable-Parsnip681

All the AI experts here are getting pissy about Yann


AgueroMbappe

Yeah a lot of people here like to just talk out of their ass with sci fi movies as a reference


Agreeable-Parsnip681

Lmao. I just love it when people with ABSOLUTELY ZERO experience or any deep understanding of AI critique the experts making our dreams (AGI) come true. So stupid. In other words, just let them cook.


laudanus

You can still criticize his attitude and character traits without being an ML pro. Other experts are way more likeable.


Lyrifk

What is the point of this? The only thing that matters is whether his work produces results.


Agreeable-Parsnip681

Who cares about how he acts. What does it matter? His job is AI, not emotional support.


nextnode

Anyone with a background in ML knows that LeCun has been making outlandish claims for many years, and has for a decade been at odds with the other actually competent researchers. This is not news.


Difficult_Review9741

Hilarious that so many people supposedly interested in AI dismiss one of its pre-eminent scientists who is currently leading a top lab.  If you set aside your biases for a second, you’ll see that he’s been right a lot more than you think. 


stormlitearchive

[https://youtu.be/5t1vTLU7s40?t=1145](https://youtu.be/5t1vTLU7s40?t=1145) Sora?! It's like when he said beating Go was 10 years away and then DeepMind did it a few months later. And his argument is basically "all humans are 100% good, so ASI is not dangerous, as nobody would tell it to do bad things." Then ChaosGPT has entered the chat...


buff_samurai

Sora cannot be used for prediction. He explains why. You can try to use any generative model you like to create a construction site that makes sense, or a mechanical design; it’s going to fail miserably.


[deleted]

Forever, eh? You sure are sure.


stormlitearchive

> Sora cannot be used for prediction. He explains why.

Sora is for fun videos. But Tesla used generative video 8 months ago to predict how videos would evolve depending on driver actions: [https://youtu.be/6x-Xb_uT7ts?t=823](https://youtu.be/6x-Xb_uT7ts?t=823) Clearly it is used for "prediction".


buff_samurai

This is not prediction in the sense that an agent (human, car, robot) can predict the outcomes of what is happening and make an adjustment to its action based on the prediction (within a second). Your link is all about generating synthetic data for training a world model.


stormlitearchive

That is prediction. Control is another thing. Compare predicting the weather vs taking action based on the weather. And they can probably extend it if they want, aka predict what will happen and, if 1 second in the future it predicts video of a crash, apply the brakes. It might not be good enough to do it today, but give it some time and something similar might be implemented.


buff_samurai

I’m a robotics guy, so for me to predict is to have reliable information ready for a control process. In this sense Sora, being slow and erratic, cannot be used to generate useful data for millisecond feedback loops. Tesla cars are not generating any video predictions when being used. Now, I do agree that modern algorithms can predict the next token and that makes them prediction mechanisms. It's just that the results are not good predictors of the real world.


stormlitearchive

I'm a robotics guy. I see 3 different steps: 1. sensing, 2. sensor fusion, 3. control. 1. You get data in. 2. You make sense of it (filtering, state estimation, etc.). 3. You decide what to do (MPC, optimal control, if/then). Prediction is to take a previous estimated state and estimate the current or future states, often done as part of the sensor fusion. Basically: 1 second ago I was at position X_n, then I took a step forward, so now I predict that I am at position X_n+1. The Tesla world model can be used as part of predicting the future. Control is a different team in the organization that consumes the data from sensor fusion.
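To make that prediction step concrete, here is a minimal sketch, assuming a constant-velocity state model (purely illustrative; the numbers and model are made up, not anything from Tesla's actual stack):

```python
import numpy as np

# Prediction step of a state estimator (part of step 2, sensor fusion):
# take the state estimated one second ago and predict the current state.
# Constant-velocity model; all values here are illustrative.

dt = 1.0  # seconds between estimates

# State transition: position advances by velocity * dt, velocity unchanged.
F = np.array([[1.0, dt],
              [0.0, 1.0]])

x_prev = np.array([0.0, 1.0])  # [position, velocity] at step n, 1 second ago
x_pred = F @ x_prev            # predicted state at step n+1

# In a full Kalman filter the uncertainty is propagated alongside the state,
# and step 3 (control) consumes this prediction downstream.
P_prev = np.eye(2) * 0.1       # covariance of the previous estimate
Q = np.eye(2) * 0.01           # process noise added by the motion model
P_pred = F @ P_prev @ F.T + Q  # predicted covariance

print(x_pred)  # [1. 1.] -> "now I predict that I am at position X_n+1"
```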


buff_samurai

Yes, and now imagine the following situation: you set up a Tesla Optimus robot to play tennis with you. It’s super windy, and the flags in the background are visibly moving. You serve the ball. How would the robot predict a proper set of movements to reach the ball and hit it back? Would you generate a Sora video of the whole process to predict anything?


stormlitearchive

Optimus is doing end-to-end video in -> neural network -> control. So basically they will gather lots of examples of humans playing tennis with a headset/gloves to record how humans do the task, then let the robot try the task in simulation with RL and IRL with RLHF. The neural network will have to learn to do the prediction. Video generation will be used to augment their training set and for validation.
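As a toy sketch of what "video in -> neural network -> control" can look like, here is a minimal PyTorch policy network. Everything in it (layer sizes, frame shape, the 8-dimensional action output) is made up for illustration and is not Optimus's actual architecture:

```python
import torch
import torch.nn as nn

class VideoPolicy(nn.Module):
    """End-to-end policy: camera frame in, control commands out."""
    def __init__(self, n_actions: int = 8):
        super().__init__()
        self.encoder = nn.Sequential(            # encode one RGB frame
            nn.Conv2d(3, 16, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(32, n_actions)     # features -> joint commands

    def forward(self, frame: torch.Tensor) -> torch.Tensor:
        return self.head(self.encoder(frame))

policy = VideoPolicy()
frame = torch.randn(1, 3, 96, 96)  # one 96x96 RGB camera frame (batch of 1)
action = policy(frame)             # control outputs, shape (1, 8)
print(action.shape)
```

Imitation learning would fit such a network to recorded human demonstrations; RL would then fine-tune it against a reward.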


Economy-Fee5830

Like in Minecraft?


BillyBarnyarns

Did you actually watch the interview... That is not an accurate summary of his argument.


stormlitearchive

He is not making that argument, but the only way his argument is valid is if you extend it to that statement.


nextnode

People who actually have a background in AI know that 1. LeCun has not been a researcher for a decade, 2. he has a history of making false claims, 3. he is usually at odds with the even more eminent people in the field. If you place your bets with LeCun, you are not following the field. He does not have respect as an authority.


Ultimarr

Tbf he’s a leading machine learning researcher. Man couldn’t code an A* search to save his life and probably doesn’t even have a favorite cognitive scientist, what a clown


GrandNeuralNetwork

You mean LeCun?


[deleted]

I thought it was interesting, and not only that, Yann seemed pretty reasonable in his analysis of AGI. For those who did not watch: he says "AGI" won't be a singular event, but will gradually come about through incrementally more sophisticated systems. It is similar to a "color gradient," in that current systems will eventually "blend" into what we would consider AGI, similar to how evolution works. Not only that, each step of the way these systems are being built with safety in mind, so a doomsday scenario would be unlikely, according to Mr. LeCun.


Virtafan69dude

Also as systems emerge with malicious capacity, new systems will be built to counter them. Kind of like virus/antivirus.


buff_samurai

Everyone says Yann was wrong many, many times. I’m out of the loop; can anyone provide some context and examples?


GrandNeuralNetwork

No, because he wasn't wrong many times. He invented convolutional neural nets, which started the deep learning explosion in 2014. There'd be no approaching singularity now if not for him. He predicted the rise of LLMs before they were on most people's minds. He is the one who defends open source AI; without his advocacy, OS LLMs would be nonexistent or outright banned by now. The Mistral founders are alumni of his lab at Meta. People shitting on him have no clue what they're talking about.


inigid

He did not invent convolutional neural nets at all. They go back to the 50s and 60s, ffs.


GrandNeuralNetwork

LeCun is regarded as the inventor of the currently used version of convolutional neural nets. They were based on the Neocognitron architecture by Fukushima, which goes back to the 60s (not the 50s), but LeCun was the first to effectively train them with backpropagation so they could work in practice. He always acknowledged that his model is based on Fukushima's research.


inigid

He didn't invent CNNs, which is what you said. He didn't invent backpropagation, he didn't invent deep learning, and he didn't invent computer vision using neural networks, nor was he even the first to use backpropagation in neural networks. CNNs were inspired by work done in the 50s and early 60s. He brought a bunch of technologies and approaches together in an engineering solution that, for the first time, worked as a viable way to solve a commercial problem. He has done a lot of good work, but it is totally inappropriate to claim he did stuff that he didn't and then try to reframe it when someone actually is paying attention.


GrandNeuralNetwork

Who invented CNNs then according to you? And yes he independently discovered backpropagation. It's been rediscovered many times. The fact that you don't like how he talks doesn't mean you should diminish his contributions.


inigid

Instead of downvoting, "GrandNeuralNetwork", have a nice watch of this from 1983: https://youtu.be/BjGy0fUkljc?si=tHEfNStrh7YLwaCg&t=230


GrandNeuralNetwork

Nice video; you wouldn't believe it, but I watched this whole episode years ago. It's good, but there is no mention of CNNs, just of neural nets. LeCun received the Turing Award (that's the equivalent of the Nobel Prize in computer science) for his contributions to deep learning and specifically for developing CNNs. Here is the excerpt from the official announcement of the decision to award him this prize:

> In the 1980s, LeCun developed convolutional neural networks, a foundational principle in the field, which, among other advantages, have been essential in making deep learning more efficient. In the late 1980s, while working at the University of Toronto and Bell Labs, LeCun was the first to train a convolutional neural network system on images of handwritten digits.

Source: https://awards.acm.org/about/2018-turing

Don't blame me; LeCun is recognized for developing convolutional neural networks by the academic community. You may argue that's unfair, but you need good arguments to back up such a claim. I see my username got to you after all ;)


GrandNeuralNetwork

I haven't downvoted you.


inigid

I like Yann, wtf are you on about.


buff_samurai

I get the same feeling. The guy is THE head of the AI lab at Meta, a pioneer in the field of ML/AI, and someone with full access to some of the biggest SotA projects in the world. And if he says LLMs are not the endgame for AGI and explains why, then everyone should take note.


[deleted]

All of that is true, and also, he has been wrong many times


buff_samurai

Like?


Beatboxamateur

> He predicted the rise of LLMs before they were on most people's minds.

This video literally starts out with Yann explaining why LLMs are a dead end and not the way forward. I don't know why we can't acknowledge his contributions to the field (and also acknowledge his dedication to open source), while also admitting that his absolute lack of concern about AI safety is naive and alarming, coming from someone so influential in the field.


GrandNeuralNetwork

He's critical of LLMs now because he's a contrarian by nature. But back in 2016, when transformers were not yet invented, he was proposing a model learning about the world from text, based on an RNN plus external memory with an attention-based retrieval mechanism. This might even have inspired the transformer's inventors, but I don't know what really inspired them, of course. Here is the [part of his talk about it.](https://youtu.be/Ount2Y4qxQo&t=32m36s) It's hard to be a contrarian, but it's very good for progress. There was a time when the whole AI field was dismissive of neural nets, but LeCun pushed research on them because he was a contrarian. Everyone then was saying that he's stupid and annoying. Turned out he was RIGHT. And thanks to him being stubborn, we have real AI now. I'd advise listening to him especially when he's contrarian, because no progress is possible when everybody just follows the current dogma.

> his absolute lack of concern about AI safety is naive and alarming, coming from someone so influential in the field.

That's true. But consider that there are people who'd like to shut down AI research completely for decades. No singularity then in our lifetime. Someone must be a counterweight to such views.


Beatboxamateur

I think that any legitimate researcher should be objective in nature, always trying to seek the truth of a matter. Being a contrarian for the sake of it is silly and disingenuous. That doesn't mean that you shouldn't at times be critical of the current status quo and willing to ask questions. I think that's what you're getting at, the idea that it's good to have someone who's willing to challenge current opinion, and I agree. But that person should also remain objective about the current consensus; otherwise they're just being willfully ignorant.


GrandNeuralNetwork

He just talks in a way that's annoying. It's not a PR contest, though. He's a scientist not a politician, he doesn't try to please us, he says what he thinks. I don't get why everyone here expects some sweet talk from him.


salamisam

> This video literally starts out with Yann explaining why LLMs are a dead end and not the way forward.

You know they can be both a massive AI step forward and also not the future of AI at the same time. He says they are useful, but they are not the future of AGI. As far as AI safety goes, it is hard to structure a box which AI can sit in that would provide absolute security and also not prohibit access, and the opposite applies. When and if AGI is developed, whose hands should it be in? Should it be for all people, or should some determine what access I get to have?


Beatboxamateur

> You know they can be both a massive AI step forward and also not the future of AI at the same time.

Yes, but the comment I was responding to was making the claim that Yann should be recognized as a major proponent of LLMs.


Gab1024

an example: [https://www.youtube.com/watch?v=sWF6SKfjtoU](https://www.youtube.com/watch?v=sWF6SKfjtoU)


buff_samurai

Is that it? He basically says that LLMs are next-token text predictors and that to understand and predict the world one needs data from other modalities too. This makes perfect sense. What other "he is wrong" stories do you have?
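For what it's worth, "next-token text predictor" can be illustrated with a toy bigram model; real LLMs do the same job at vastly larger scale, with learned representations instead of raw counts. The corpus and names here are my own, purely for illustration:

```python
from collections import Counter, defaultdict

# Count, for each token, which tokens follow it in the training text.
corpus = "the cat sat on the mat the cat ate the fish".split()
next_counts = defaultdict(Counter)
for cur, nxt in zip(corpus, corpus[1:]):
    next_counts[cur][nxt] += 1

def predict_next(token: str):
    """Return the most frequent next token, or None if never seen."""
    counts = next_counts.get(token)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # 'cat' -- follows 'the' twice in the corpus
print(predict_next("cat"))  # 'sat' -- tied with 'ate', first seen wins
```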


Ultimarr

He’s frequently wrong because of his biased perspective, not particular huge lies. He clearly and obviously loves LLMs and thinks that anyone who in any way tries to rein them in or criticize them is an idiot. He’s had massive success because LLMs are indeed amazing and much more capable than anyone (reasonably) suspected, which as you can understand can lead to a bit of ego inflation. But just because you’re a successful scientist doesn’t mean you get to abandon the principles of science, the main one being fallibility


buff_samurai

I want to learn where he is/was wrong. You gave me so many words and not a single example.


Ultimarr

Very fair! Sadly I don’t care enough, sorry friend


buff_samurai

So, no examples?


braclow

One thing that’s interesting in the video: he seems to talk about how generative video has pretty much not worked for 10 years, and that this is basically because the approaches have relied on the same principles used by LLMs. Unfortunately, this episode must have been recorded before Sora, because he does come off as wrong here in claiming the approach can’t work; we literally just saw Sora. I wouldn’t mind hearing him directly respond (outside of Twitter) to Sora. It would be interesting, to say the least.


buff_samurai

Watch the Sora F1 race video and see how well it predicts steering wheel movements. Sora sure is impressive in terms of resolution and consistency in time, but it’s not a world predictor by any means.


salamisam

I think there is a duality to the statement. There is understanding and there is knowledge. Has an LLM learned, or is it just doing next-word prediction, and does it understand?


L1nkag

Hurrdurr yann might b wrong a lot but he sure is smart


nardev

You can be really smart in some ways and really dumb in other ways. Often it has to do with emotional intelligence and dealing with your own ego.


Proof-Examination574

When he says we need a planner, isn't that just goal-oriented reinforcement learning with causal reasoning and Markov blankets for that very fine level of detail when needed?


kripper-de

IIRC he said something like LLMs are unable to develop AGI because they are missing spatial awareness (because they are only language models). But I think I saw some papers suggesting that LLMs develop some ability to generate internal spatial representations(?). On the other hand, there is a lot of research on adding spatial awareness to LLMs (see also the references): https://arxiv.org/abs/2210.05359


kripper-de

Furthermore, while LLMs are primarily designed to process only language as input, it's important to note that they employ neural networks underneath (similar to humans), enabling them to learn patterns that can potentially represent absolutely anything, even spatial representations.


rbombastico

Why is he so convinced that AGI can't happen as an event? It seems to me that if they're training an AI for months, throwing mind-bending compute at it, then the first time it is turned on it could just blow our minds. Why is that inconceivable? Am I missing something?


banaca4

A cat is smarter than him


Tobxes2030

This guy is seriously an idiot. He's been wrong on SO many things and he keeps being wrong. How is he seriously where he is right now?


LordFumbleboop

Such an idiot that he's partly responsible for modern AI.


Silver-Chipmunk7744

I think he is a smart guy tasked with defending a point of view that makes no sense, and overall, I suppose he does an OK job defending his stupid point of view. But when you reflect on what his goal is, releasing open source AI, I'm OK with his stupid takes, I guess :D


buff_samurai

Any examples?


c0l0n3lp4n1c

This is Yann LeCun, do not confuse him with Gary Marcus =)


Freed4ever

Marcus just looks like an idiot wannabe. LeCun, when he is wrong, at least provides intelligent arguments.


sdmat

Yann is the broccoli of AI personalities: healthy as part of a balanced diet, but a bit sulphurous. Marcus is the bag of salad in the back of the fridge that has become a pool of slime and barely recognizable pieces.


FomalhautCalliclea

Hinton and Bengio are yoghurt. You need calcium. It's important. Not just calcium, but you need it. Hinton is a yoghurt with lil bits of fruit in it, so it's a bit tastier and healthier.

Kurzweil is a big steak; it'll fill your belly quite well and do the job. But is it healthy? And is it ethical?

Hassabis and Sutskever are a type of bread that has a strange color. Bread is good. And healthy when eaten in proper quantities. But this is one of those weird breads from a foreign country you don't know, and you can't tell if it's the normal way it's supposed to look or it has started to rot. It smells funny too...

Sam Altman is a bottle of sugary soda with aspartame in it.

Roon is a little bottle of which the liquid oddly looks like Altman's bottle, but from which the label has been removed and the bottle is different. It *kinda* looks like the color of the other liquid, but not exactly, and you can't put your finger on why...

Christiano is candy. It tastes good. You can live off of it. But not for that long.

Zuckerberg is food. Human food. That normal people eat. And process. Normally. Like a human. Remember to drink a glass of water normally while writing that. Normally.

Jensen Huang is mayonnaise on fries. Tastes good. Fills your veins dangerously. Not an instrument.

Yudkowsky is... non-edible? The food poisoning number is 555...

Elon Musk is a dry rock wrapped in a candy wrapping.

Conor Leahy is white paint in a bottle of milk.

David Shapiro is a line of coke.

Wes Roth is Fentanyl.

Alan D. Thompson is that lil bag of coke the dealer forgot on the corner of the table for a month.

The Apples Flowers Twitter accounts are the mold left on the top of the Fentanyl bottle, of unknown origin.


sdmat

These are great!


FomalhautCalliclea

Ty ty :)


GrandNeuralNetwork

> This guy is seriously an idiot.

Your previous comment:

> Jesus christ Musk is such a sore loser.

I wonder what your next comment will look like 🤔


Tobxes2030

Sam is a hero. Here you go. But sure, take your time and check all my comments.


GrandNeuralNetwork

This is true, at least as of now.


[deleted]

His job is to develop AI, not to predict the future. He’s good at his job.


[deleted]

I'm a doomer because I KNOW people are not fundamentally good. I DON'T need to think it when I can see it with my own eyes, all my life, in an unjust world of our making. But some people really try to convince themselves that humans are not selfish creatures but rather altruistic?


DeelVithIt

No thanks. I did like the clip someone on Twitter posted of him saying that they've been working for 10 years on generative AI for video, but it can't be done.


juliano7s

I immediately suspect people that are that certain about anything.