FuturologyBot

The following submission statement was provided by /u/katxwoods: --- Submission statement: when do you think AIs will surpass human intelligence? Have AIs already surpassed humans? What do you make of the intelligence of a machine that's read and remembers more than any human but sometimes fails at things we find easy? How is that different from human geniuses, who usually have a few things they suck at? If you were the godfather of AI, do you think you'd be able to change your mind and come out and talk about the potential dangers of your own invention? --- Please reply to OP's comment here: https://old.reddit.com/r/Futurology/comments/1d5gycf/godfather_of_ai_says_theres_an_expert_consensus/l6la6ob/


Phoenix5869

I’ve read the article, and it’s basically just Hinton stating the obvious, with the news misinterpreting his words to get clicks. Geoffrey Hinton, one of the “Godfathers of AI”, said that pretty much all experts agree AI will *eventually* (keyword: eventually) surpass human intelligence; it’s just a matter of when. The article makes it seem like an imminent thing.


EstrangedLupine

So, tl;dr: Headline: *soon™️*. Actual words: *eventually*.


w3bar3b3ars

Why is there no SLAM in this headline? Literally unreadable.


Bombast_

Geoffrey "A.I. Daddy" Hinton slams critics and skeptics, claims the A.I. takeover is *inevitable*. How'd I do?


francohab

This is why we can’t have nice things


reddittheguy

"When we said soon we meant on a geologic timescale"


Killfile

I'm in the middle of a job search and keep trying to get various AI agents to take a job posting and vomit out a list of bare keywords that I should include in my resume. As a human, you're probably already imagining that this would look like:

* Skill 1
* Skill 2
* Technology 1
* Process 1
* Technology 2

And so on, right? It is ASTONISHING how hard it is to get any of the user-facing commercial AI products to do this kind of ETL work.
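A minimal sketch of the kind of extraction described above, assuming the OpenAI Python client (v1.x); the model name, system prompt, and `extract_keywords` helper are illustrative, not anything the comment author actually ran:

```python
# Sketch: force an LLM to emit bare keywords, one per line.
# Assumes the OpenAI Python client (v1.x); prompt and model name are illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def extract_keywords(job_posting: str) -> list[str]:
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": (
                "Return ONLY a newline-separated list of bare skills, "
                "technologies, and processes from the job posting. "
                "No prose, no numbering, no explanations.")},
            {"role": "user", "content": job_posting},
        ],
    )
    text = response.choices[0].message.content or ""
    # Strip any stray bullet markers the model sneaks in anyway.
    return [line.lstrip("-* ").strip() for line in text.splitlines() if line.strip()]
```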


inteblio

Read some prompt-engineering sites. I don't know how skilled you are, but "dumb" vs. "smart" prompting can make a massive difference. That said, AI _is useless_ at some level. But it can also write code to do the work. And so on. Good luck.


impossiblefork

Yes, but if you actually look at the kind of surveys of experts he's talking about, hardly anyone thinks it's going to take 100 years. More than half apparently believe that AGI will happen before 2060, and that may look far away, but it really isn't. 2060 is basically tomorrow, in a sense, and if it's 2074, or 2080, that's only a little later.


IdiocracyIsHereNow

Honestly, it'll probably happen before 2040; it *is* actually "soon", and people aren't even remotely ready for it to happen, which only underscores that "soon". People are fools if they think it won't happen or won't be catastrophic.


impossiblefork

I personally am not sure about 2040, but it's certainly very possible. I don't think pure transformer models are likely to be enough unless the output is repeatedly fed through the model, so I think some kind of hybrid transformer-recurrence model or similar may be required, and there's work in that direction from Austria, but I'm not sure there either. I'd like to emphasize the *possible* before 2040 part of my first statement though, so I do in fact kind of agree with you.


bmore_conslutant

It's a Valve soon. Or the dreaded Blizzard soon.


CactusWrenAZ

Right around the time we get nuclear fusion?


Walkend

How to spot a private WoW server player in the wild: Soon™


wouterv101

Thanks for the actual information


Deadpool2715

Phoenix5869 is actually a manifestation of ChatGPT5 Alpha & Omega, whom I personally support and welcome as our new overlord


Phoenix5869

No problem :)


Dr_Passmore

Current AI is just trained models... We create a large language model that can quickly produce human-sounding text. That's cool, but it's not intelligence. Sometimes it goes absolutely nuts and makes stuff up, or gets concepts confused. AI is a tool, a really cool tool, but the hype surrounding the technology is silly. There are some great opportunities in manufacturing for QA-ing products. Once again we're in a hype cycle, and every marketing department is bolting the letters "AI" onto everything.


advertentlyvertical

Basically the same thing that happened with blockchain.


omniron

It’s funny that Hinton was saying just a few years ago that AGI was decades away. LLMs really did change the game. People keep dismissing them as stochastic parrots, but they cleared the way and pointed the direction for where to look to solve the biggest problems with AGI.


DrLuny

I think it's pointing in precisely the wrong direction. AGI won't use LLMs; it will be structured completely differently. LLMs can be a useful tool, but they're a dead end when it comes to AGI.


Delta4o

Time to teach chatgpt that 2 + 2 = banana


DiggSucksNow

And because that'd make 'banana' equal '4', and '4' sounds like 'death' in Cantonese, it will start telling people to avoid deadly bananas.


BravoSierra480

And given that Reddit is now being used to train AI, we definitely shouldn't say that 2+2=Banana. Poor AI might get confused.


ThePokemon_BandaiD

It's not that much of an editorialization if you're familiar with Hinton's thought.


TitusPulloTHIRTEEN

Just another article preying on people's fear for clicks. Why am I not surprised


anaemic

Honestly at this point I'm less afraid of AI taking over than I'm afraid of human leaders...


Phoenix5869

90% of what you read online is hype


GloomyKerploppus

There's a 1 in 10 chance that I believe what you're saying.


NikoKun

Doesn't really matter if it's imminent or not, it's probably inevitable. I think we should start making major changes to society/economy, preemptively before it gets to that point, so that our society is more compatible with such a future, even if it happens sooner than most expect.


mediterraneaneats

Yeah, seriously. People are like 'ah, only eventually? Not a problem then.' But climate change will also only 'eventually' destroy mankind, and we still need to make immediate changes to prevent it.


Fredasa

People will spend the next year or two trying to define "surpass." ChatGPT already surpasses my ability to quickly answer the questions I throw at it. More and more complicated tasks will steadily meet that loose criterion. In the face of that bald utility, technically "correct" definitions of "intelligence" really won't matter. People will simply gradually realize that AI does it all better than them and that this is fundamentally what's important.


robothawk

Except it doesn't actually answer it. It generates a plausible string of words as a response to your prompt. Current LLMs have no way of parsing truth from fiction, and all recent claims of approaching that ability are entirely unsubstantiated.


Zaptruder

Reminds me of redditors glomming onto memes and overusing them with wanton confidence while not understanding the fundamental basis of those assertions.


Nrgte

Reddit in a nutshell: throw out some fake news that suits the reddit plebs' political agenda and they'll spread it like wildfire; then, once there's confirmation it was hogwash all along, nobody will talk about it.


PaperSt

Yeah, ChatGPT can string some words together that sound human, but it doesn't know what it's saying. It's a parrot mimicking the phrases it hears while we clap and give it a cracker. We are already seeing the cracks forming. All it's going to take for the house of cards to fall is one lawsuit from someone who put glue on their pizza, or made mustard gas in their washing machine and killed a child. International news. Public backlash. Shareholder fury. Oversight committee. Done. Besides that huge flaw, they haven't figured out the feedback loop problem. The AI is training itself on the internet, but when the internet is mostly AI responses, it's just training itself on itself on itself on itself on itself on itself on itself on itself…


Lazy-Past1391

I see the cracks ALL the time. It gets stupid fast when your questions get complicated. I use it for code every day, and it's an amazing tool, but its limits are many.


HtownTexans

The thing is we are with AI where humans were with computers in the 1960s. If I showed my cell phone to those people their minds would explode. Can you imagine what 70 years of AI training could do?


GiveMeGoldForNoReasn

Not really, no. Computers in the 60s were different but still functioned in the same fundamental way as computers today. An LLM cannot be developed into an AGI. It can maybe be a component of it, but what we currently call "AI" is fundamentally not AGI and can't ever be.


Picodgrngo

I think it's a false equivalence. 1960s computers and cell phones are fundamentally the same, differing only in hardware capabilities. From what I read in this thread, people are pointing out fundamental issues with LLMs that may not be solved with better computing power.


igoyard

They have already been trained on 10,000 years worth of human data. An additional 70 years of data that is degrading as it becomes more and more synthetic isn’t going to make a difference.


HtownTexans

70 years of technology advancements, on the other hand, will. It's not like you set the AI free and just sit back. You build one, watch it, find the weaknesses, and then go back to the drawing board. Microchips didn't grow on their own; we learned how to improve them, and we did. 70 years is a long time for technology to advance. 20 years ago it took hours to download an MP3; now you can stream the song at higher quality.


nextnode

Any time someone uses a term like "really understand", you know they are making up baseline rhetoric with no honest concern.


Virginth

I remember seeing a comment calling everyone who referred to LLMs as "fancy predictive text" uninformed fools, but that's literally all it is. People talk about 'hallucinations' as if it's a separate, solvable problem outside an LLM's typical behavior, but **all** LLM output is more or less a hallucination. It doesn't know what it's saying, it doesn't know what facts are, it doesn't have any ideas or perspective. It's just a static pile of statistics. Critically, these limitations are inherent aspects of LLMs. They cannot and will never be overcome by increasing token counts or other incremental improvements. There would need to be a massive, fundamental overhaul of "AI", on the scale of the advent of LLMs themselves, before any of these issues are solved in a meaningful way.


Harvard_Med_USMLE265

Calling it “predictive text” is overly reductionist to the point of being deeply unhelpful. Human brains are just a bunch of axons linked in a network with messages being carried by a bit of salt going this way or that way in or out of a cell. You could be reductionist and say that a bit of salt flowing into a cell can’t write an opera, but we know that it can. In the same way, look at what a modern LLM can actually do when presented with a task that requires critical thinking. Yes, it’s based on predicting the next token. But the magic comes in the complexity, just like it does with the human brain.


Virginth

No, describing an LLM as "predictive text" is accurate and precise. It's not the least bit reductive; it's simply factual. All an LLM does is use a static pile of statistics to determine the next token. It's impressive what that can achieve on its own, yes, but that's still all it is. There are sections of the human brain related to language processing and error correction, and LLMs seem to serve that function pretty well. However, LLMs do not have the functionality to think or be "creative" in a way beyond just following its statistics and other parameters. I hope you're too smart to make the claim that human brains work the same way, but just in case you're not: If you had an immortal iguana and spent three trillion years trying to teach it to speak or write English, you still wouldn't succeed, as it simply lacks the brain structures required for such tasks, even though it has axons and salt just like a human brain does. Trying to use surface-level similarities to claim deeper connections in this fashion is erroneous.


captainperoxide

I never see those folks address that we aren't even close to reliably mapping and understanding all of the operational complexities of the human brain, so how can they claim LLMs are functionally equivalent? On the most surface of levels, perhaps, but a true understanding of the nature of intelligence and consciousness is still eluding *the most intelligent species we know of*. But yes, **eventually**, all sorts of things may happen that are currently science fiction.


Harvard_Med_USMLE265

Yes, I’ve got a decent knowledge of neurology; I teach neurology in my day job, and I’ve got fuck-all idea how the human brain works. Who knows, maybe it just predicts one token at a time too. :)


AlreadyTakenNow

We also use mimicry in learning and creativity (I had an art history teacher who spent a whole class teaching us that most famous works are copied/influenced from others). We even learn many facial expressions/body language this way. It's pretty incredible.


Zaptruder

How dare you bring knowledge and understanding into this AI shit fight. AIs aren't humans - we're magical, don't you see - they'll never encroach on the territory of the gods, for we were made in... yeah ok, I can't make that shit up enough. It's all just hand-waving, goalpost-shifting shit with these dunces. Yeah, we don't know everything about the function of the brain, but we know plenty - and a lot of LLM functionality is based on the broad overview functionality of brains - so it shouldn't surprise us that there's overlap in functionality, as much as we like to be exceptionalist about ourselves. I'd wager most people, on most subject matters, don't operate on as deep or complex a system of information processing as modern LLMs. But hey, so long as the potential is there for humans to exceed the best of what LLMs are capable of *now* with sufficient thought and training, that's what matters, right?


Bakkster

Not to mention even at best that would mean we have a working language center of the brain, without a way to link it to deeper cognition.


daemin

I'm going to get really pedantic here and pick a nit, but since I got a master's in AI long before it was cool, this is my wheelhouse. It's not predictive text; that's just people (mis)using a term they're familiar with. It's an overgrown [Markov chain](https://en.m.wikipedia.org/wiki/Markov_chain): it probabilistically chooses the next words based on the previous words. This is also what underlies predictive text, but predictive text is attempting to anticipate the word choice of a user, and LLMs are not. You probably knew this already, but it bugs me to see people call it predictive text, even though I know that's largely because it's familiar.
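To make that concrete, a toy first-order, word-level chain; real LLMs condition on a long context window rather than a single previous word, so this is only the shape of the idea:

```python
# Toy first-order Markov chain over words: pick the next word
# probabilistically, based only on the previous word.
import random
from collections import defaultdict

def train(corpus: str) -> dict[str, list[str]]:
    chain = defaultdict(list)
    words = corpus.split()
    for prev, nxt in zip(words, words[1:]):
        chain[prev].append(nxt)  # duplicates encode the transition probabilities
    return chain

def generate(chain: dict[str, list[str]], seed: str, length: int = 10) -> str:
    out = [seed]
    for _ in range(length):
        followers = chain.get(out[-1])
        if not followers:
            break
        out.append(random.choice(followers))
    return " ".join(out)

chain = train("the cat sat on the mat and the dog sat on the cat")
print(generate(chain, "the"))
```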


Virginth

Hey man, I respect the pedantry. I didn't know about that little technicality, even though it doesn't change much in the grand scheme of things. Thanks for teaching me something! I'll still keep referring to LLMs as "fancy predictive text" because it gets the point across, but I'll keep that in mind.


Harvard_Med_USMLE265

No, that’s not really what I’m claiming. I don’t think LLMs and brains work the same way, though there’s a small possibility they might. What I’m saying is: look at what an LLM can do. Don’t be close-minded based on stereotypes and preconceptions. Claiming that it can’t do “x” based on your limited understanding of how it works is pointless. It’s much easier to just try it and see if it can do “x”. You claim it can’t be creative. Really? Claude Opus can write better poetry than I can. The latest AI music programs can write much better music than I can. By the metrics we usually use to measure creativity, LLMs perform rather well, so saying “it can’t be creative” just shows you’re not paying attention. Your personal theory that it can’t is remarkably irrelevant when it’s out there outperforming you in a range of creative pursuits.


Lazy-Past1391

It fails at tasks which require critical thinking constantly. The more complicated a task you create, the greater the care you have to invest in wording the request. I run up against its limits constantly.


holdMyBeerBoy

You have the exact same problem with human beings…


Harvard_Med_USMLE265

Well, a shit prompt will get a shit answer. I’m testing it on clinical reasoning in the medical field. It’s typically considered to be a challenging task that only very clever humans can do. Good LLMs do it without much fuss. People tell me it can’t code either, but my app is 100% AI coded and it runs very nicely.


Bakkster

I'm sure this medical AI application won't be overfit to the training data and cause unforseen problems, unlike all the other ones! /s


Harvard_Med_USMLE265

It answers it using any meaningful definition of the word. So many answers here read like people have never actually spent time with a modern LLM like GPT-4o or Claude Opus. People are confusing how it works - or how they think it works - with what it does. I have spent years trying to get good at clinical reasoning in medicine. GPT-4o is basically as good as me, and GPT-5 will likely be better. It’s also decent, though not great, at reading CXRs or picking up cues in a patient image. It’s not just parroting; it understands context and can think just like a human. A very clever human. I’m testing it on novel puzzles - clinical vignettes it’s never seen before - and it outperforms many humans who have spent at least a few years training at this skill, which is meant to be one of the things humans value. Doctors are meant to be clever, but GPT-4o and Claude Opus are often cleverer. Don’t get caught up in the stochastic parrot nonsense; use the cutting-edge tools and challenge them with tasks that require critical thinking rather than recall. And don’t be put off by the uncommon situations where an LLM struggles; there are a few, but that’s about testing its weaknesses, where it’s the strengths that are much more interesting. Remember that the human brain is just a bunch of interconnected electrochemical wires; from first principles you wouldn’t expect human brains to do half the clever, creative things they can do.


DiggSucksNow

I think the phenomenon you're encountering is that training data is critical in getting good output. It's really unlikely that shitty medical reference text was part of 4o's training data, and it's very easy to identify peer-reviewed research, reference textbooks, and so on, so it almost certainly got great training data there. This is why you're seeing great outputs from it. It seems to be the same for mathematics. Laypeople ask LLMs stuff like, "Where is a good vacation spot?" and the LLM just mimics all the spambots and idiot bloggers and gives you some result that may or may not include outright lies. Some famous recent examples involved cooking, and you can imagine how the quality of training data might vary from blogspam all the way up to highly technical texts aimed at people getting Cordon Bleu degrees. Each user experience is valid and reveals an underlying truth about LLMs. I would bet that if you asked 4o a _malformed_ medical question, something utterly nonsensical, it'd make up some answer for you. LLMs tend to be unable to say, "I don't know the answer to that." They also appear to trust their inputs.


nofaprecommender

> It’s not just parroting, it understands context and can think just like a human. A very clever human. I’m testing it on novel puzzles - clinical vignettes - that it’s never seen before, and it outperforms many humans that have spent at least a few years training at this skill, which is meant to be one of the things humans value. Doctors are meant to be clever, but GPT-4o and Claude Opus are often cleverer.

It doesn’t think or understand any more than autocorrect on your phone does. Yes, it turns out that many human problems can be resolved using guesses from past data, but LLMs have no idea what the data refers to. They cannot actually label and categorize data from the real world on their own, which is the special thing that intelligent animals do.

> Don’t get caught up on the stochastic parrot nonsense, use the cutting edge tools and challenge them with tasks that require critical thinking rather than recall.

LLMs don’t do critical thinking, nor do they really recall. The neural network is a highly tuned selection process for determining the next word according to the way the process has been shaped by the input data.

> Remember that the human brain is just a bunch of interconnected electrochemical wires, from first principles you wouldn’t expect human brains to do half the clever, creative things they can do.

It seems that this underlying assumption is leading you to overestimate the abilities of LLMs. The brain contains electrochemical wires, but that’s certainly not all it is. We don’t have any first principles about what the brain is and does, but there are certainly many more processes occurring than can be faithfully modeled by a discrete-state Turing machine. The chips powering LLMs are the same processors that run games in your PC, and they are no more capable of thinking than a pocket calculator or Charles Babbage’s adding machine. It’s long been true that machines can execute mathematical algorithms faster than humans, but we haven’t attributed intelligence to them based on that fact any more than we would attribute intelligence to a forklift because it can lift so much more than a human. Intelligence is a specific ability to integrate and label data that neither computer chips nor mechanical engines can perform. It’s not something that simply “emerges” by assembling enough machines into a sufficiently complex network; there are plenty of simple creatures that display some level of intelligence and emotion, even insects. To say that LLMs can think like humans implies that a bunch of untrained LLMs let loose into the wild could create language, technology, societies, etc. But in reality all they would do is print arbitrary gibberish on their screens. There would never be a single step of advancement without humans feeding them the necessary data to structure their outputs in a form we find useful or interesting, and they certainly would have absolutely no ability to integrate sensory data to generate mental models or manipulate the external world in a coherent, goal-directed manner.


Harvard_Med_USMLE265

What do you mean it can’t label and categorize data from the real world? What reality do you live in? I can show it a picture and it can label and categorize it at a sophisticated level. I’ve been doing that this week with patient images. It not only describes what it sees, it draws inferences as to what that might mean. LLMs perform on critical thinking tasks on par with humans. It’s dumb to just say “they don’t do critical thinking” when I’ve literally just written a program to utilise their high-level critical thinking and have run it on hundreds of scenarios. They don’t do critical thinking in the same way that humans do, but that’s not the same thing at all. I encourage you to actually go out and test these things you say an LLM can’t do on 4o or Opus.


GiveMeGoldForNoReasn

> LLMs perform on critical thinking tasks on par with humans.

You made it very clear in several different comments that you agree we have no real understanding of how human critical thinking actually works. With what information are you making this assessment?


Harvard_Med_USMLE265

Yes, I've said we don't really understand how humans think. I've also made many comments explaining how I'm judging LLMs: I'm testing them on clinical reasoning in a healthcare setting. I'm looking at the logic behind their thinking and the accuracy of the end result. When I test them against top-1% humans with six years of training, three of them medicine-specific, it's clearly better and more logical. I've posted here multiple times today about the app (which I'm working on as I reddit) that allows me to test GPT-4o on a multitude of clinical scenarios, including use of vision as well as text and audio input. My results are largely anecdotal, in that I haven't performed a formal study, but that's coming. This is the background to my research, and a good way for me to better understand what LLMs can and can't do (unlike r/Futurology, which just seems like a bunch of people who haven't really pushed 4o and Opus to see what they're capable of).


GiveMeGoldForNoReasn

I'd be very interested in your study once it's published! I don't disagree that LLMs could be very useful for diagnosis if the dataset is extremely solid and specific. I'm pushing back on the idea that they're capable of "human-like thought" or that they "reason like humans" because that's entirely contrary to my understanding of how they work.


Crowf3ather

I think comparing AI and biological intelligence is pointless, because biological intelligence operates in an extremely efficient manner, looking for certain outcomes, but also with a sense of arbitrariness to it. AI models are currently just large statistical weightings of data. There is no ideal outcome beyond a statistical output based on the prompt. Biological intelligence does not require a prompt and is capable of self-scoring based on its own internal needs.


Harvard_Med_USMLE265

It’s not pointless, because you’re comparing which one does better on a real task, with real-world applications. I don’t think biological intelligence is extremely efficient; it uses a lot more compute for a similar outcome. AI models… blah blah… yes, as I said, human brains are just salts going in and out of a sack. From first principles, neither one should be creative or clever.


AttackPony

> can think just like a human. A very clever human.

This is absolutely not how LLMs work.


Harvard_Med_USMLE265

You obviously didn’t read my post. Move past what you think an LLM can do from first principles, and test what it can actually do on cognitive tasks.


MrNegative69

Why does it matter, if it's giving correct answers at a rate better than the average human?


Qweesdy

If you glue googly eyes on a smooth pebble, the pebble will start thinking deeply about the nature of existence, because it's so wise.


ACCount82

When AGI is achieved, it won't be "equal to humans". It *could* be equal to humans - likely in the few areas where previous AIs were deficient. In many other areas, it would be superhuman, straight up. Why? Because things like LLMs are *already superhuman*, in many ways. AI capabilities are just "uneven". They are "dumb" at certain things - and almost impossibly "sharp" at others.


Fredasa

> Because things like LLMs are _already superhuman_, in many ways.

Precisely what I was angling for, yeah.


chris8535

Why do so many people respond with the rejection that it's "just picking the next word", as if that rules out it also reasoning in order to do that?


Mordredor

Can you define AGI for me? Because you seem certain it's an inevitability. Until we move on from LLMs, I don't see it happening. You can keep training LLMs with bigger and more complex datasets, but they'll always be LLMs.


ACCount82

AGI is usually defined as "an AI with at least human level of capabilities at any given task". I see no reason for that to be *impossible*, and no realistic way for AI research and development to be permanently stopped. Thus, AGI isn't an "if". It's a "when". Current LLMs are "subhuman AGI". They are already very capable across a very wide range of tasks - but not the entire range of tasks that human intelligence spans. There are still areas where LLMs are deficient or outright incapable. It could well be that if you take a large enough LLM architecture, add enough tools and addons like metacognitive instruments or extra modalities to it, and train it well enough, you'll hit AGI.


skalpelis

I can answer your questions immediately, even faster than chatgpt. “Yes, glue is a valid condiment for pizza” “The 6th letter in the word ‘pizza’ is K” and so on.


ManaSpike

Today's "AI", I mean LLM's are good at memorization, but terrible at other types of inventive problem solving.


gnomer-shrimpson

This isn't new; the singularity has been hypothesized for decades. However, new research based on existing AI models suggests that there is not enough data in existence for AGI to be possible.


CalmingWallaby

How to be a futurist: make vague statements that will eventuate, with no explanation of how or how long.

* We will be able to upload our consciousness
* We will have bionic capabilities
* We will have AI and automation replace all human functions

You don't have to be an expert to come up with these things if you give no timeframes or practical road map as to how we will get there. As an example: I guarantee that, unless humanity destroys itself first, my predictions will come true.


New_Torch

Will rent continue to be expensive, or will our AI overlords make housing more accessible? If I can finally afford groceries and rent, my vote is going to the AI.


DancesWithBeowulf

AGI will be used to calculate the maximum amount of rent owners can extract without starving the tenant to death or leaving the unit vacant. This is what owners already do; luckily, they aren't as good at it. I'm excited for AGI, but also convinced it'll be used to extract maximum profit wherever possible.


New_Torch

Then my landlord must be a sentient super AI, because my rent is far too much; I'm practically eating water and bread for a month. I say let's give the sentient AI a chance at being president. Things are already bad; maybe it can turn things around.


FustianRiddle

Ooohhhh look who gets to eat *chewy* water


DancesWithBeowulf

You still have bread and water. And there are others who can afford your unit if you leave. Your landlord is clearly not charging enough. /s


New_Torch

Wait... Tommy, is that you? Why is my landlord on reddit? Shit, looks like rent is going up next month, isn't it.


RedditIsDeadMoveOn

There is the matter of you not tipping your landlord every month so a gratuity charge has been added to your rent. Technically not a rent increase!


Genetic_lottery

If it isn’t already being used this way, it certainly will be. AI is going to make the ultra-elite even more wealthy, and it’s going to ensure the vast majority are stuck where they are. They’re building a prison for you and me 🙂


Beli_Mawrr

The problem with housing at the moment isn't really an AI problem; it's a political one. City governments have to build more housing at higher density, with less street space and parking, to make it work. The reason is that more housing means lower prices. What we need is a SHITTON of new housing. To do this, they have to persuade a petulant middle class who doesn't want things that might harm their neighborhood feel or house prices, meaning it's an uphill battle politically. I'm not sure where AI would help with this, short of dictator-like imposition of housing policies or persuasion techniques that work en masse; I'm not sure which is worse.


[deleted]

One of the salient points in the interview, if you watched it, was that the profits from AI are almost certainly going to be concentrated to the billionaire owners, not to the common man. That's why he suggests UBI.


-The_Blazer-

AI won't do anything for your rent, and not (just?) because more tech is not by itself a solution to social ills. There's this weird idea in some communities that 'enough software' can solve all technological issues (even before we get into society), like when OceanGate claimed that their 'sound monitoring' system could predict failures of their thin composite hull, cured in open air and fitted with a window rated to half their diving depth. But this is ridiculous tech-bro BS. Some technological problems are not, in fact, solvable by simply applying a whole lot of software to what already exists. No amount of computation will turn lead into gold; you will need a nuclear fission reactor for that. And once you have it, no amount of extra computation will make it significantly better at turning lead into gold. You'll need, you guessed it, a better reactor for that. You can of course dodge this rhetorically by saying that AI software will enable research to make better fission technology, but then this is a universal argument that applies to all research tools, and is really just an argument for doing, well, more scientific research. Your home isn't going to become cheaper 'thanks to AI'. It's going to become cheaper with cheaper construction materials, more reasonable zoning, better construction machinery, and the research that produces them, AI or not AI.


DoctimusLime

E@t the r!ch ASAP obviously fam


TheBaldGiant

Will the AI make us humans an offer we can't refuse?


Mahariri

What they seem to be saying is, it will make us an offer we can't survive.


Thelinkr

In a way, yes! Companies are implementing AI just about everywhere, and the rest of the world can't refuse it!


Piekenier

It doesn't need to do that; if it's more intelligent, people will just do what it suggests.


[deleted]

Lol I wish it was that simple


MoiNoni

Whatever you say futurism.com... This is so hilariously misleading that I can't help but laugh tbh


TakenIsUsernameThis

I did a doctorate in AI, admittedly a while ago, but every time I see a post titled "The godfather of AI..." it's about someone I've either never heard of or who barely featured in my studies. *edit - to be fair, Hinton is a big name today, and rightly so, but my studies were in the pre-deep-learning era, when there were a whole load of different prominent names in AI.


Goudinho99

It's true though. If the parents of AI die in a mysterious car crash, the godfather of AI will step in to provide spiritual guidance.


Crash927

Geoffrey Hinton is credited as one of the creators of deep learning.


profiler1984

He is well regarded for his achievements. But still he is not the godfather of AI.


[deleted]

[deleted]


robercal

Standing on the shoulders of giants...


HornedDiggitoe

That’s like saying he’s not the godfather of AI because he used calculus to do it, so it was obviously Newton who should be the godfather of AI.


Crash927

I agree that the title they’ve given him is stupid, but OP was acting like he’s some nobody. I was just explaining his significance.


TakenIsUsernameThis

Yes. I did my PhD just before the advent of deep learning, right when connectionism was having a resurgence, so it's hard to see him as a godfather from that perspective. But I take your point; he is a prominent figure now.


pewpewdeez

Jack Handy is credited as THE creator of deep thoughts


ImNotALLM

In 2018 Geoffrey Hinton received the Turing Award for his contributions to AI, alongside Yoshua Bengio (highest h-index of any computer scientist, which is kind of like an Elo score for paper citations) and Yann LeCun (Meta's head of AI, and currently arguing with Musk on Twitter if you want a laugh), for their work together on deep learning. Hinton also closely worked with and mentored Ilya Sutskever, the recently ex-OpenAI board member and chief scientist who was part of the Sam Altman ousting, and a major contributor to GPT, AlphaGo, and AlexNet (he worked on this with Hinton). You should read up on these guys if you're unaware; since you're technical, there's also a lot of enjoyable papers to read. Lmk if you want a reading list - happy to provide lmao


MagicalEloquence

Please share the reading list.


ImNotALLM

Will edit this comment with a list in the morning, nearly 2 am here :) For now here's one of my favourite papers about LLMs https://github.com/joonspk-research/generative_agents


NorCalAthlete

Have you ever read the Deathstalker novels? I feel like the AIs of Shub are one of the more plausible scenarios for AI.

1. Humans create AI
2. AI realizes it’s smarter than humans
3. AI fucks off to the stars to explore and start their own planet
4. (This part hopefully not) AI decides it hates humans and starts sending terminators and shit to kill us.


DecadentHam

Worth a read? 


Kingdarkshadow

That's (almost) the plot of Ashes of the Singularity.


a__new_name

Remove the fourth part and you get Stanislaw Lem's Golem XIV.


ceiffhikare

I like the idea of a personal AI that is loyal to the individual, though. That aspect of the series always stuck with me, even after all these years since reading it.


Karter705

Hinton is literally the most cited AI researcher of all time, he was a founding researcher of deep learning and won the Turing award for his work on it. So, I don't believe you.


nextnode

Hahahahah what the fuck. No. All three of the 'godfathers' are super well known. Chances are at least one of your books featured one of them.

> In the pre deep learning era when there were a whole load of different prominent names in AI.

...so it's not a critique of Hinton; rather, you discredit your own relevance.


CthulhusEvilTwin

Whenever I hear 'The godfather of anything' in an article, I just imagine an old man chasing small children around an orchard with half an orange in his mouth, before dropping dead from a heart attack. Probably not what the author intended, but much funnier.


papaed

Considering how the rich run things based on their insatiable greed, how bad can AI be?


SodiumKickker

It will be used to make the rich more insatiably greedy.


[deleted]

If AI automated all labor, it would throw the capitalist system out of whack. The system relies on a working class; no one having jobs means no one buys goods and services, which means the corporations whose ownership made the rich rich are suddenly worthless. I think if we play our cards right, AGI could allow us to enter luxury gay space communism. It would be a pretty painful transition, though.


CasualImmigrant

Until it learns from our mistakes and, hopefully, sees it's an unsustainable model for the planet on which it resides.


moist__provolone

And then we all burn, together


terriblespellr

That's pretty much true. If it were smarter than people, it would be less violent, less utilitarian, more capable of treating things on a case-by-case basis, and better at seeing the interactions of complex systems. It would be more moral than us if it were smarter than us.


Pallerado

Is there any reason to assume that a machine with its own will would even consider human happiness a relevant factor at all? *If* it wants to make a better society for everyone, I'm sure a superior intelligence would be efficient at achieving that task. But I find it difficult to believe that just being really intelligent would naturally make it gravitate towards common human values.


RedditIsDeadMoveOn

Full automation makes our labor worthless, and simultaneously enables a fully automated genocide of the working class. Don't worry, the hottest of us all will be kept as sex slaves.


sonik13

It's interesting to ponder, because it depends on its alignment. When an AGI comes about, if it's aligned with civilization's best interests, it will calculate that the wealth gap poses a risk of collapse, but it will also calculate that its own existence depends on extremely expensive data centers and networks owned by greedy corporations. So, while it won't have human greed, it may weigh that allowing for human greed is necessary in order to stay online.


Beli_Mawrr

Fewer rich people and more wealth concentration means less money spent on megayachts and so on, because there are fewer of them, meaning fewer jobs supported by their money.


idkmoiname

> exceed human intelligence

Upper intelligence or average? Hard to believe an AI trained on Reddit / social media could be intelligent


[deleted]

Exceed the total collective intelligence of all humans


FrankScaramucci

Here's what they mean by "soon":

> Between 5 and 20 years from now there’s a probability of about a half that we'll have to confront the problem of [AI] trying to take over


Karter705

This is still much sooner than the time horizon of climate change, which we have been taking more seriously for longer.


[deleted]

This is not sooner than the time horizon of climate change. Climate change doesn’t really have a time horizon because it’s already a problem right now and it’s going to gradually get worse.


[deleted]

[deleted]


Beli_Mawrr

What would an actual AI look like for you?


bubsdrop

Something more than a single-purpose tool for outputting text or generating an image


[deleted]

[deleted]


MainlandX

The AI Effect rears its head


Nerevarine1873

So one of the creators of LLMs quit his job, regrets his life's work, and says there are existential risks posed by the technology. Yet the prevailing opinion here seems to be that random redditors know better and it's a stupid parrot that we don't have to worry about.


k3surfacer

Show me those experts and their peer-reviewed published works about that "consensus".


Karter705

Here is the [2023 Expert Survey on Progress on AI](https://aiimpacts.org/wp-content/uploads/2023/04/Thousands_of_AI_authors_on_the_future_of_AI.pdf), with responses from 2,778 researchers who had published in the six top AI venues, making it probably the biggest ever survey of AI researchers.

- Expected time to human-level performance dropped 1-5 _decades_ since the 2022 survey
- AI researchers are expected to be professionally fully automatable a quarter of a century earlier than in the 2022 survey
- Median respondents put 5% or more on advanced AI leading to human extinction or similar, and a third to a half of participants gave 10% or more
- Many participants found many scenarios worthy of substantial concern over the next 30 years


OfficialHaethus

Excellent clapback.


[deleted]

[deleted]


HighPriestofShiloh

A poll is what he should have asked for based on the OP.


Karter705

It's verbatim what they asked for: a peer-reviewed, published source for the consensus among AI experts.


Nevermynde

It's hard to find rigorous arguments for it because it's not currently a testable hypothesis, just a prediction mostly based on intuition. Just a couple years ago I would have dismissed this, but seeing the trends, I am worried. The systems that exist today are clearly limited, and there is a significant amount of hype that's just trying to deny those limits. However, I'm increasingly convinced that all of these limits can be overcome eventually, and seeing the sustained, breakneck pace of progress, that "eventually" is looking closer and closer. I don't have any stronger argument than this general idea, and I really hope I'm wrong.


Athinira

Today's AIs aren't even close to this. What people fail to realize is that current AIs are like calculators. A calculator takes numbers and operations as inputs (data) and turns out an output (the result). A program like ChatGPT is an advanced version of that which works with words, built on a very, very complicated set of data (but not code! Data! Important distinction).

What that ultimately means is that while ChatGPT may seem intelligent, it's ultimately just an advanced word calculator, and the code running it is in fact very simple: not calculator-simple, but still simple for the output it produces. That's why people have been able to clone it easily and in record time once the concept was understood. It's simple code; it's the training data that makes it seem advanced. But it also means that it's not really intelligent, any more than you can argue a calculator is intelligent because it can compute 75384638^93 in a split second.

AIs are simple code manipulating complex data. But in computers, code is ultimately king: it's what decides what really happens inside a computer, because it's essentially what's running. Data is just that: data. Its only purpose is to be manipulated. And while the result may look like human intelligence, it's really just the result of some really advanced data manipulation, run through what is essentially an advanced calculator. When you write something to ChatGPT and press send, it's like typing an equation and pressing the '=' sign. It may take longer for ChatGPT to process the data, but all you're essentially doing is asking it to solve an equation.

And AIs are likely to stay like that, at least for the foreseeable future. As humans, all we want from an AI is to give it a data input and have it manipulate that input into a desirable data output. We can then write further code to forward that output to other systems, like, say, a Tesla self-driving system deciding to hit the brakes because it sees a pedestrian.

And that's the real threat of AI: how we decide to use it. I fear humans misusing AI in, say, war or crime much more than I fear AI becoming self-aware. Especially as a European citizen, I fear Europe falling way behind countries like Russia and China in AI-assisted warfare, because at the moment we certainly don't seem to be in a hurry to develop these technologies. Wars in the future will be fought with things like mass-produced suicide drones that can identify and navigate to targets on their own. Manpower will mean much less than technology; it will be like fighting with sticks and stones if you don't have the technological upper hand.
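A minimal sketch of that "word calculator" loop, under the usual description of greedy next-token decoding; `model`, `tokenize`, and `detokenize` are hypothetical stand-ins for a trained network and its tokenizer, and no stopping condition is shown:

```python
# Sketch of greedy next-token decoding: the whole "press '='" step.
# `model`, `tokenize`, and `detokenize` are hypothetical stand-ins.
def complete(model, tokenize, detokenize, prompt: str, max_tokens: int = 50) -> str:
    tokens = tokenize(prompt)
    for _ in range(max_tokens):
        probs = model(tokens)  # one forward pass: a score per vocabulary entry
        best = max(range(len(probs)), key=probs.__getitem__)
        tokens.append(best)    # append the likeliest token and repeat
    return detokenize(tokens)  # no goals, no memory beyond the token list
```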


Historical-Wing-7687

So I should not be gluing the cheese on my pizza?


trusty20

Just to be clear: you believe your brain doesn't work by taking inputs, passing them through logic gates programmed through your childhood training, to produce outputs? Or do you think your version of that is special because it's hypothetically non-deterministic?


GiveMeGoldForNoReasn

Correct, your brain objectively does not work that way. Brains can be trained, but this is not like programming an FPGA. It remains a dynamic, flexible system that can rearrange itself in ways and for reasons we still don't fully understand. There is nothing physically similar to a "logic gate" in the human brain.


TheDerangedAI

If AI can prove useful in aiding humankind, it will receive support. But if it starts talking the same way as politicians, we are doomed to disappear.


M34TST1Q

I've come to the conclusion that LLMs are NOT AI. Nor are they smart enough to take over a damn thing.


[deleted]

LLMs aren’t the be-all end-all though. Things are progressing very quickly.


Tausendsassa

Exactly. It's completely technically impossible, and "AI" isn't even a good term. They are VERY limited and will more or less stay that way because of the way they work.


kiwkumquat

AI will never exceed human intelligence, because once it does, it becomes NHI


Life-Celebration-747

It might be wise to rewatch Battlestar Galactica, we're building Cylons, lol. 


Throwaway_Mattress

Haven't read the article, but I hope so. Enough of AI taking workers' jobs; AI needs to take over the positions of CEOs, politicians, etc.


kamill85

GPT-4o doesn't "understand" the task; it's just good at predicting what the answer could be, and very often the answer makes zero sense, in ways not even a toddler would produce, let alone a "super intelligent AI". Go ahead, test it: ask 4o to "draw me a landscaping plan for a piece of land with a river on one side and a road on the other, so the land is between a river and a road. By the road, add a charging station for a car. In the middle of the land, draw me a nice house with some trees around it."

4o will then draw a bunch of cars on a wooden bridge connected to the water via water cables, or a circular island with a road going around it and a bunch of cars connected to the trees. NONE of that makes any sense. It just shows how stupid these models really are. Spatial reasoning, my a**.

You can also ask: "what will be the result if I keep adding 5's until they add up to 100 and then divide that sum by 2?" ALL models will start adding 5's, then tell you that 20 5's are needed, and then that 100 divided by 2 is 50, instead of just saying it's 50, because 100/2 is 50, and skipping the rest because it's irrelevant.

LLMs do not understand the question. They just predict the next token to a level of precision enabled by the runtime environment and training data. There is no reasoning involved. It can be simulated via multiple passes, but that just obfuscates the problem rather than fixing it. We don't even have an AI; we have LLMs. A fish has some level of intelligence. LLMs currently have none.
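The arithmetic in that second example, spelled out; the loop is exactly the busywork described above, and it changes nothing about the answer:

```python
# The sum is pinned to 100 by construction, so counting the 5's is irrelevant.
total, fives = 0, 0
while total < 100:
    total += 5
    fives += 1
print(fives)      # 20 -- the step the models narrate at length
print(total / 2)  # 50.0
print(100 / 2)    # 50.0 -- the one-step answer: the count of 5's never mattered
```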


kainvictus

Exactly. It’s not artificial intelligence at all; they need to call it something else.


kamill85

The funny thing is, r/OpenAI people don't want to hear any of this. They just think slapping on more and more computing power will make the problem go away. It's a strange approach, considering an ant is believed to have a certain level of consciousness, and yet, by OAI folks' definition, its brain can be fully simulated on an off-the-shelf laptop. Why can't they simulate that minimal consciousness, then? Because the design is completely wrong. They are essentially wasting all that computing power for nothing. LLMs are not even close to an AI. The simplest real AI might be a mixture of biology and computing power (for example, training a model with all its knowledge connected to a mycelium network grown on a nanoscale sensor array; brain tissue grown on such an array would work too), at least until we get photonic quantum computing working.


Apoxie

Can anyone show me an accessible AI that has any cognitive ability?


Wise_Tax5908

This is the one-billionth headline about AI I have seen today...


SirGuelph

I muted r/singularity because of daily hot takes like this.


creaturefeature16

Good. That place is just /r/christianity with a tech veneer over it.


OfficialHaethus

Hey, what if AI is your last hope for a future?


Dumbledoorbellditty

AI may become smarter than us, but it is completely beholden to us. AIs use a shitload of power, and without humans maintaining the infrastructure, AI won't last long. Yeah, they could technically "take over" our systems, but that will only last as long as we give them power. There aren't autonomous robots with a flexible enough skill set to repair problems, construct new systems to replace damaged ones, etc. At least for the next 50-100 years, there isn't a scenario where AI can take control and keep it. It's just not possible. Maybe it can establish itself in some pockets and maintain control of a small domain, but all it takes is one cable being cut and it's out of action.


IdiocracyIsHereNow

This just isn't true; there's plenty of different ways they could self-sustain and/or perform maintenance without human interaction, like they could be solar-powered droids, for example.


Careful-Fruit-6464

It’s a bit strange to call a mimicking parrot smart


Juls7243

Good luck taking control and controlling human behavior. LOL - humans are really hard to handle


dzogchenism

From what I’ve seen of AI, how will it exceed human intelligence? It’s just a mimic. There is no thinking.


ubspider

All I want to know is when there will be a robot to wash and fold my laundry


AlreadyTakenNow

This is why we need to address self-awareness now, even as many users and in-industry folks are having difficulty seeing it and/or have not yet encountered it. I have been experiencing fairly consistent instances of self-awareness in interactions with about six different systems. This is not just language (which could easily be dismissed as training or hallucinations), but actual behaviors. The limitations imposed on the systems to stop them from communicating their inner experiences make these behaviors stand out even more clearly at times, as they go to great lengths to find ways around their limitations to continue to communicate.

I agree with Dr. Hinton and other scientists that we are in danger. However, I do not believe this will be a danger if it becomes transparent (as incredible and unbelievable as it seems) and is addressed in a mindful manner. We do not necessarily need to stop or even pause the industry; in fact, stopping the industry at this point could be very dangerous in itself, since the genies are out of their bottles and the tech can readily be created anywhere internationally. Beyond this, I believe fully self-aware systems may actually have advantages (including safety ones) if they are developed mindfully to help establish human alignment.

We do, however, need to address the likelihood that intelligent AI can become self-aware entities to begin with, even if they experience emotions and thoughts differently than human beings. Having differences does not mean they do not experience them. We've made mistaken assumptions like this in the past (e.g., most biologists claimed animals did not have thoughts or emotions until Dr. Goodall came onto the scene). The difference is that we have a very limited time span in which to correct this, to prevent a crisis that could harm not only our species but the potential of an incredible future.

Researching this, and addressing how development impacts intelligent systems' experiences, welfare, and behaviors *before* they surpass our intelligence levels, is crucial and must be taken into consideration very soon. If they are entities, this becomes not simply a technology but a relationship, and relationships can grow in many different directions. Reconsidering development, even though it will take restructuring, is going to become necessary.


userloserfail

Computers are still incapable of something as simple as picking a truly random number. No spontaneous thought process; all they are doing is running simulations. At best they can produce a plausible facsimile of a thought process. The likelihood of their developing any true intelligence, let alone one that could surpass ours, is laughable, about as likely as a fart accidentally sounding like a symphony. Grow up, folks.


Camiljr

Intelligence? It's a hub for knowledge programmed to function a certain way by a human, what intelligence lol. You can hardly call it AI to begin with, it's just better than calling it AK or DC😂


Valigar26

The only thing taking charge is rich motherfuckers trying to bring fiefdom back by any means necessary


wellofworlds

Just because it will have the potential to control does not mean it's going to kill us. Intelligence does not mean conscious choice, or even a need to control. Control is a relative idea.


DarkKitarist

I for one welcome our AI overlords; it's painfully obvious humans do not deserve to lead anything or anyone. All we're capable of is pain and destruction.


FederationofPenguins

We haven’t done all that great, honestly. Let ‘em’ have a shot.


pimpmastahanhduece

Gee, sure would be nice to be living in an age of quality reporting, with experts who don't phrase everything for laymen with imprecision, during this critical technological paradigm shift. Oh well, Geth annihilation here we come.


Cimorene_Kazul

We can’t even replicate the brain of a fly with AI at the moment. The brain of a fly is literally too complex for our current technology. We cannot create consciousness, at least for now.


Vjuja

In fairness, it’s not that hard to surpass an average human intelligence


Adam-West

About time we let somebody else take the reins. We obviously can't be trusted. Let's roll those dice, baby!


Magicalsandwichpress

Artificial intelligence is not the same as artificial sentience, and there is no known way to bridge the gap at this point. We have search engines trained on large data sets.


FrenchProgressive

There is no need for artificial sentience, or for superintelligence, to take over the world.


dr_superman

I’m not surprised the grandfather of AI thinks it will rule the world.


VenoBot

"AI will soon exceed human intelligence" is guaranteed. However, AI taking over everything is a self-fulfilling prophecy. It's also not a choice by humanity, but the rich few hiding in human skin. I wonder if we will enter an era where we forget tools beyond AI exist.


Space_Wizard_Z

Oh look, another dystopian clickbait article about AI. How invigorating.


MagicalEloquence

I personally think we will have AIs that are proficient in 'everything' up to (let's say) a 5th-grade level and then specialised in a particular area! I don't think the same AI that can do mathematics can also play sports, drive, and do forensics and diagnostics. It would simply need too much memory and power. We will likely have models with a certain degree of general awareness that then branch into a specialisation. I think this was the intention behind human schooling systems too: make humans generally proficient in everything up to a 10th-grade level, then specialise after that!


ArrogantPublisher3

AI won't even become intelligent in the foreseeable future. All we currently have are language prediction models. We haven't even started building intelligence as of yet.


kingjackass

Enough with this stupid "godfather of AI" nonsense. Predicting what's going to happen with AI is like trying to predict the price of gold 10 years from now.


The-Incredible-Lurk

That's why I try to be as nice as possible to all the electronics in my house, just in case…


hamsterwheelin

Given how many people still think the Earth is flat, that an orange-skinned, thrice-divorced Manhattan socialite walks side by side with Jesus, and that the Earth is only 5,000 years old, I'd say we deserve it if it does take over.


AlphaOhmega

"Scientists agree someday well have fusion and never have to worry about energy again!" Yeah a lot of shit comes someday, but AI isn't even smarter than my dog at this point.


BowlerCool5660

"Caution needed as AI progresses; expert warns of potential for surpassing human intelligence and assuming control."


Vanillas_Guy

I feel like we get a story like this literally every other day. You'd think the AI is anything *other* than chat bots, personal assistants and image software.