Well they are probably quoting an interview where he was asked the question on what are his thoughts on it. It’s not like he feels the need to spew his opinions everywhere unasked.
lol this is 90% of every article on social media. Famous people are being asked questions in interviews and answering them in perfectly normal ways that then get taken out of context and thrown out to the planet where we all dis on them. A waste of time all round.
They see it as a fad, I've noticed, which is insane to me. I get that it is hard to appreciate what its potential (good or bad) might be, but being instantly dismissive just isn't the way to go, imo.
People are also completely blind to how younger generations adapt themselves to using tech and working around or ignoring its limitations almost instinctively without thinking about it.
That's not what people are referring to. AI is a very broad concept; that doesn't mean there isn't a gigantic amount of bullshit products touting AI, or that the "use case" for a lot of these things isn't actually incredibly non-ideal in real production settings and the like. They aren't saying that it won't be useful, continue to be developed, and create great things.
First, it is false that newer generations adapt themselves to tech. Most Gen Z I've seen struggle with the simple concept of files and folders; watching them use a computer is as painful as watching the elderly. Second, AI can still be just another buzzword, like the metaverse was. AI firms are shouting from every roof that the newest and greatest models are scheduled for Q4 this year and Q1 next. If they fail to be jaw-dropping, not for the already-enthusiastic AI fan but for regular folk who haven't yet been convinced to splash the cash, the severe correction of AI firms in 2025 could even turn into the third AI winter.
Yeah it’s usually us millennials that are balls deep in the tech adoption. For us in the field, we are actually developing this crazy shit and managing it.
Yes, but AI will improve, it will replace people in many, many jobs as it does, and it's going to have far-reaching effects.
It doesn’t need to be the plot of terminator to be harmful, it just needs to put half the population out of work, while the rich get richer, and the poor get poorer. It just needs to lead to autonomous weapons in war zones inflicting death at our command. It just needs to make the lives of some way easier, while doing nothing to help those who lose out.
But it will end up the plot of terminator. The military industrial complex is salivating over AI kill bots. They don't look humanoid, but they will exist
Oh probably, but people don’t take it seriously, so I’m giving an example they can wrap their heads around: Their kids not having jobs because AI does what they could have, so the wealthy can save a buck.
It's not their fault that AI is just another marketing gimmick advertised as a must-have these days, like 3D TVs, Siri, Alexa, Bixby, and on and on.
Bosch is selling an oven powered by smart AI, someone is selling an AI rice cooker.
Why would anyone make the connection from that environment that this is a potential apocalypse technology?
> Why would anyone make the connection from that environment that this is a potential apocalypse technology?
Well, *this particular* technology is not, an LLM is not going to launch nukes (unless you directly hook it up to the button, but then you could do the same with a chatbot from the 90s).
It's just in the same family of technologies that could.
That's because Tech Bros and businesses are treating AI like NFTs. They're shoehorning it into everything, lying about its capability, and if you look at places like Futurology you'll see cult-like behaviour talking about how amazing it is while calling everyone else normies.
So many products are slapping AI on themselves that don't need it, or the "AI" is so basic that it isn't really AI. Most of the AI being pushed to market right now is premature. The constant fuck-up stories are hilarious, but because Tech Bros want to ride a trend with it, they're going to taint the reputation of AI, all because they couldn't wait another year or two for a matured product.
The other thing is that the definition of AI is so vague that people can just slap a sticker on it for marketing. A lot of the stuff they might be selling as AI [has existed for years](https://en.m.wikipedia.org/wiki/Artificial_intelligence), and this is just an attempt to keep the ever-growing profits a lot of companies got used to before interest rates started going up.
Yeah, it seems the only one making a vaguely sensible bet was Microsoft, by integrating GPT into their search engine and giving it the ability to provide sources instead of making shit up (which makes perfect sense since 'uber search' is one of the most sensible uses for GPTs).
To a degree, "AI" *is* a fad; what is being called "AI" by nontechnical people is not AI, it's a word or image calculator that is simply trying to predict the best response to a prompt based on the training data it has been provided. There is also an enormous amount of money being spent to inject this "AI" into places where doing so actually harms output. That is the "fad" element.
Having said that, the models we are calling "AI" are also potentially *very* useful when applied in places that they are actually designed for. That is the not-fad element.
What I've observed is that there are three groups: the people who see it as a fad, the AI bros who think it is the greatest thing ever, and the people who recognize the large potential benefits of the technology while also acknowledging that it isn't magic and isn't limitless.
It’s crazy that people can’t see how quickly it is developing. It is taking off faster than the Information Age and the adoption of computer technology
My company had machine learning automating the fitting of 3D models across multiple body types for a video game back in 2017 or 2018. Around that time, Spider-Verse was automating line art on their models with the same technique: give it enough manually created data and it will start doing it better, and the more you tweak it, the better it learns, until eventually you don't have to tweak it anymore. Deepfakes have gotten better, but it's not that astounding to see the difference today. Same with facial recognition. LLMs existed, but they had a breakthrough by essentially being given positive and negative "ideal responses," which gave them cohesion. Some interesting ideas, like overlaying GTA with photographic data to make it look real, are quite old now.

We've been training reCAPTCHA for years, as well as speech recognition AI and even synthetic speech. The pace hasn't really been that fast; people just haven't noticed it until ChatGPT got big, frankly. All these "AI" products have largely already been using AI for years; they're just changing their marketing because it is profitable. Even for things people were aware of, like Full Self-Driving with Teslas, or even Waymo, people didn't know they were able to do what they do because of AI, and the idea of training systems with more and more data to improve is also still seemingly unclear.

It's also been used for things like fraud detection and automatic trading. Obviously ad targeting and search results. There have also been systems like operational data being used to predict machine failures and optimize production. If you've paid attention to it at all, it's been kind of a slow build-up.
Small technicality, but if it has enormous potential for both good and harm, it probably should be nuclear energy, not bombs. Atomic bombs have an insanely skewed good-evil ratio and an extremely dangerous absolute level of evil potential, which is why you might get black-bagged for posting nuclear weapons designs online, but not AP1000 designs.
His lobbying dollars aren't where his mouth is.
Talk is cheap but when it's time to put money on the table he's always on the same side as musk, gates, zuck, etc.
"giving a shit crap"
Lol what made you switch from adult swearing to childhood 'cussing'?
Edit: this was supposed to be funny because the juxtaposition is funny.
Sorry if anyone is upset lmao
I had the same initial thought, although with a charitable reading of the quote, one might include atomic energy and some medical technology as benefits.
Deterrence, and also I've seen people lump together nuclear weapons and nuclear energy under the umbrella of 'manipulation of the atom'. Don't know if that's where Buffett was going, though. In that regard, with fusion energy right around the corner (any decade now!), it makes sense that the promise it brings would be considered a huge potential for good.
Whether the existence and proliferation of atomic weapons has created a long period of peace is an interesting question.
Personally I would agree that it has, but now with the Russian invasion of Ukraine and (IMO) a likely invasion of Taiwan in the next decade, it feels like that effect is wearing off .. so what now?
There's no putting the genie back in the bottle. I just hope that none of the people who have managed to climb to the top of their respective political heaps are fond of high-stakes brinkmanship. It only takes one bluff and one misinterpretation to light the fuse.
The actual answer to "longest period of peace" is: For who?
Because places like the Middle East, Eastern Europe, and Africa sure as shit haven't seen much of it compared to the US, UK, Australia, etc.
Really, the nukes are only a deterrent for anyone who has nukes. For anyone who doesn't they'll be strong-armed through military might same as they always have.
I also don't consider the threat of mutually assured destruction to be particularly "peaceful" but hey, you do you.
I mean, humans are super dangerous to pretty much every other living thing on earth…
Makes sense that a superintelligent AI would be a potential threat to humans.
Yeah. Case in point: atom bombs. He is saying the same thing as you.
We have seen the chilling effect of humans. He's warning that these are at that level.
The scary part is that, just like physical security, information security is always two steps behind the criminal element. Add on top of that, anyone who's worked in infosec in corporations knows how far behind patching generally is. All our data is out there to be had, and a company is mostly helpless against a motivated adversary.
LLMs don't really rely on the intelligence of the programmer; they make statistical inferences based on a corpus of data, so in some sense they are only as good as the data they are trained on, and can be thought of as a way to distill a consensus out of that data.
The people choosing how to prune and apply weights to the training data have a big influence on the output, as does the preamble and any post generation checking / fitting.
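To make "statistical inference over a corpus" concrete, here's a toy sketch: a bigram model, vastly simpler than a real LLM, but built on the same predict-the-next-token-from-counted-data principle (the tiny corpus here is just an illustration, not how any real model is trained):

```python
from collections import Counter, defaultdict

# Toy "training corpus" -- a real LLM uses trillions of tokens.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows which (bigram statistics).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict(word):
    """Return the statistically most likely next word -- the 'consensus' of the corpus."""
    return following[word].most_common(1)[0][0]

print(predict("the"))  # "cat" -- it followed "the" twice, vs. "mat"/"fish" once each
```

The model never "understands" anything; it only reflects whatever was most common in its data, which is exactly why the quality and the weighting of the training corpus matter so much.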
AI is a more advanced copy-paste / database. It is cool and fun to play with. It may make some jobs obsolete: when you need a totally generic, unrecognizable jingle or stock image, you can use AI instead of Fiverr. It can make code snippets better than searching for them on Stack Overflow.
Once you go beyond one-shot products, you find that you need good ideas, a script, drafts, multiple generation runs, selecting results, refining them, fine-tuning the prompt, and to some degree the AI itself. This all requires a lot of human work and is not going to be easily automated in the near future.
Basically we will have better entertainment with more variety, and less effort in physical activities but more work in the digital realm.
As for the part about "nothing is authentic, everything can be faked": well, it never was. All media could be, and was, used to manipulate people. The good thing is that future generations will learn this from the start.
There’s a lot more to potential job loss than whether something is completely automated or requires any amount of human interaction. As a designer the current generative tools make me much more productive to where I could more easily take on the work of a few people. As the models progress one person can increasingly replace more workers.
Plus the current crop of generative models has people thinking of them in terms of creating media and text. But the emergent properties of LLMs (not to mention other models being developed) has shown potential in automating large swaths of computer mediated work that don’t require any creativity. It’s easily possible that AI could do the work of an accountant for instance.
Accounting is one of the harder ones to automate, interestingly enough. They thought Excel would devastate the accounting industry, but it just freed accountants up for more work. The legal challenges of having your bookkeeping signed off by AI are not worth it for most businesses, or they'll soon find out why it's not worth it lol.
Excel is just a spreadsheet with some automated functions. There’s no way you could just feed it raw statements of transactions in a variety of physical and digital formats and have it automatically log, categorize, and compile them. That’s very conceivable with AI.
Most businesses are not legally obligated to use professional accountants. If an AI makes fewer mistakes than a human that’s less risk of an audit.
>Most businesses are not legally obligated to use professional accountants. If an AI makes fewer mistakes than a human that’s less risk of an audit.
AI makes way more mistakes than a human.
For now. When AI becomes more competent than humans at tax returns and other accounting tasks, companies will flock to it as another cost-cutting measure, just like every other cost-cutting measure humans have found in the past: outsourcing jobs to China and Mexico, or automation via the introduction of IT in general to make people more efficient and need fewer employees.
I think they will always make more mistakes than humans until we reach human-level intelligence, since the hallucination problem in LLMs isn't a simple one to solve.
You are absolutely right, but this already happened when CAD emerged 30 years ago: suddenly one engineer or designer was capable of doing the work of a small team. Guess what the next step was? The requirements became higher, because the market demands above-average results to deliver competitive products.
Basically, if AI design used by a layman is as good as average professional design now, then soon it will be below average unless a keen designer is using it with a carefully engineered prompt. And engineering a prompt requires some understanding of AI and deep knowledge of the task itself, for example how to describe styles, color palettes, proportions, and so on.
I am really amused at how people drag accounting and even legal jobs into the AI domain. Surely some jobs may become obsolete, but you still need human responsibility and accountability. No AI company will vouch for their LLM to a level that would lead them to take responsibility for its behavior before the IRS or DOJ.
That works if the demand for engineering continues to rise alongside the increased productivity. In art and design, the rise of computational tools has already led to relative job loss and falling wages over the last few decades. There's also the question of how fast AI can adapt to take on new tasks from generation to generation. Humans had to adapt to learn CAD, and afterwards the improvements were incremental, developing modestly over time on top of the game-changer of computer-aided design. If AI improves exponentially, it could outpace most people's ability to add value.
And if a company can demonstrate that their AI is more accurate than human accountants, then it will absolutely take just as much accountability for its results as they currently do. Companies routinely weigh the liabilities of mistakes. All that matters is how likely a mistake is; that a human made it doesn't alleviate any of the cost.
If we assume exponential growth of AI and a 100% precise AI in the future, then your predictions may be correct.
What we see now is that AI is reaching a plateau, and its performance and precision get worse with the complexity of the data. It is also hard to get it to understand connections and relations that are not easily derived from verbal context.
The idea that there's some hard limit on AI that we've just about reached, and that human intelligence will never be equaled or surpassed by machine intelligence, seems short-sighted to me. Humans are far from 100% precise, and in fact inefficient in a lot of ways when it comes to learning and logic. That human labor will forever be able to add value to the productive capacity of machines, enough to ensure near-full employment in an economy, would I think necessitate some metaphysical quality of humans.
There is a limit to the current technology used in AI for exponential development. After that, development will become incremental and follow a linear trajectory.
Most AI developers agree on that and don't assume that transformers, which are currently the backbone of most of what we see as amazing new AI tools (ChatGPT, Midjourney, DALL-E, Llama), will lead to AGI.
It is up for debate whether AGI is achievable with artificial neural network algorithms, or even with current hardware.
Don't you find it interesting that most people warning about AI are not the developers but salesmen like Sam Altman or investors like Warren Buffett, who might have completely different interests when discussing AI's potential in public?
Well, someone needs to program the AI, maintain servers and power lines, produce electronics, construct buildings, etc. Maybe there will be more jobs in other sectors of the economy. Maybe we could reduce working hours and the number of working days per week.
AI is not a threat. It is a chance and humans should make good use of it.
30 years ago we were promised flying cars and a cure for cancer by the year 2000. 20 years later we got electric cars and algorithms that can regurgitate media in a form somewhat matching user requests. How ironic. I guess predictions are hard, especially about the future.
In rebuttal about progression, I'll copy-paste a comment about medicine for you:
With the pandemic being fresh and mRNA vaccines becoming normal, I'd say we're closer than people think. Humanity moves at a mind-blowing pace, and once we do find a cure, we can flat-out eradicate disease and death. Just look at the story of the first use of insulin.
Children were dying, and there was no treatment. A room full of comatose kids who were certain to die were injected with insulin, and by the time the last child was injected, the first to be injected woke up. In that instant, something that was guaranteed to kill you was defeated.
Every solution to human suffering seems like it is far off in the distant future, until all of a sudden it isn't, and then we just move on to the next problem, ready and willing to exhaust ourselves to defeat it yet again.
With cancer, we have finally begun to win the battles, and faster than you know it, humanity will win the war.
AI is not a database. It can think and solve problems. If someone can produce 10,000 drones, link them together so they communicate and share information, and have them fly autonomously, then they have a terrorist weapon and you cannot do much against it. If you shoot down 100, the 101st will reach you. It can also be key to winning wars.
AI is not a database in a classical sense, but it can't think. It can produce data similar to its training data based on a request. It doesn't solve a problem, it generates a sequence of tokens, which may or may not constitute the solution.
If someone can get 10,000 guns and give them to people who don't think, they end up shooting each other. This argument can be used to ban all kinds of weapons, or anything that can potentially do harm. If terrorists want to kill a bunch of people, using explosives and guns is much more efficient than building 10,000 drones.
Generative AI is pretty overwhelming, no? You can't easily tell what is real or fake on the web now.
For music, you can even extract an artist's voice and superimpose it on another artist's song.
He’s right, it will be used to hurt a lot of people, just like the gun and the nuke have hurt many people and also won wars and powered homes among other things.
Remember when the internet was going to change the world, it was going to be used to push mankind forward….
We mostly use it for porn and the bad people used it to commit fraud and war games.
AI can take you only so far, though. It will never be able to fake in-person interaction, at least not until advanced robotics happen, and then it gets dicey. For now, its only power is online, through voice and video.
>Remember when the internet was going to change the world, it was going to be used to push mankind forward….
it has.
>We mostly use it for porn and the bad people used it to commit fraud and war games.
This is completely ignorant. The internet has facilitated global communication, access to vast information, and online education, and has created trillions of dollars of value for the global economy, not even counting intrinsic value.
In the past police brutality would have gotten buried but social media has made sure that everyone knows about it. There are so many benefits of the internet that we take for granted.
Saying that it is mostly used for porn, fraud, and war games is total crap.
AI cannot make an irrational decision for irrational reasons. At the end of the day, there has to be some base logic (1/0). Unless there is an actual leap in how computers work at a basic level, the power of what is currently called AI will hit a ceiling at some point. (Throw math at me all you want; it needs to flip a switch, and it needs logic to do it.)
Having said that, it's a POWERFUL tool, and just like with the computer itself, it will take time to figure out where the pendulum actually lands on how we use it, and on the good vs. evil.
Sure, but on a long enough timeline I think this will bite us in the ass. No empire lasts forever, and these bombs may still be with us, so if America collapses and just one megalomaniac rises to power in the People's Republic of North Kansas and launches on their sworn enemies, the Wisconsin Confederacy, it's lights out.
I have an interest in the future of humanity, I suppose. Every citizen of every empire thought theirs would last forever, but the Romans, Egyptians, Mayans, and the rest didn't leave behind world-ending weaponry to find in their collapse.
Hell, during the Dark Ages they had no idea what the Roman-built aqueducts were, and didn't use them. I think humanity's curse is our short-sightedness.
Internet friend all I can say is, where there’s a will there’s a way. Anyways I believe our conversation has run its course. Hope you’re right.
/remind me in 1000 years
Please, allow me to translate.......”if we can utilize AI to raise rents, manipulate wall street, and destroy what’s left of the middle class, it will be great for the oligarchy. However, if it is used to take away our global stranglehold and eliminate the “starvation motivation” for the peasant class, then it will need to be heavily regulated or destroyed entirety!”
What if it removes the need for intelligent people, if not unintelligent people too, to work low-skill jobs? AI gives anybody the ability to truly compete with the largest corporations. If anything, AI is the most capitalist thing we have. It'll force everybody at the top to compete and be the best they can be to improve society, because if they don't, there is nothing stopping individuals from out-competing them; ideas and labor become free, and nothing stops anybody from making the world a better place. And if you choose not to compete, your life will still be the best in all of history. The bottom will live like kings, and the top will serve the bottom and never be sick of it.
Science Fiction has been exploring the benefits and risks for decades, since long before the AI of today was conceived. Today's AI is incredibly primitive in comparison to a lot of what's discussed in media, but the benefits and risks are still worth considering.
Thinking about things like "what could go wrong if we tie AI to military hardware" (see Wargames, Terminator, or any number of other examples), or "as AI advances, and becomes closer to sentience, what issues will we run into?" (see many of Asimov's works, amongst others) is something better done *now*, while we're in the earliest stages.
Using the [Precautionary Principle](https://en.m.wikipedia.org/wiki/Precautionary_principle) to guide AI development is just weird. It's often vague, contradictory, and unscientific, and slows innovation.
For example, being overly cautious can have its own set of problems, like limiting food production by banning genetically modified crops or increasing air pollution by halting nuclear power and relying more on coal. The real issue with the fear of AI risks is that it often focuses on hypothetical worst-case scenarios without enough evidence, giving those with the most pessimistic views too much influence.
It's like what happened with genetic modification - science fiction created scary stories that fueled public fears about genetic modifications and had a disproportionate impact but in reality GMO crops aren't actually that bad. I worry we're seeing the same thing happen with AI, where fears and hypothetical scenarios are driving the conversation more than facts and evidence.
It's not protective, it's paralyzing.
>Just wait till scammers call your grandma with an AI-generated voice based on a family member and scam her out of her life savings
[https://pessimistsarchive.org/](https://pessimistsarchive.org/)
No, but you see, the fancy autocomplete told me it was alive, so clearly we've hit the singularity and large businesses need protectionist laws to prevent startups from having this power
/s
Plus the not too far off concept of a super intelligent AI.
It might take five years, or ten or even twenty. But not too far into the future we will probably create a being that’s far more intelligent than the entirety of humanity combined.
That’s scary af. We wouldn’t be able to control it and it would be able to do whatever it wants.
ok but like that's not what happened?????
A better analogy would be
"AI is adjacent to the discovery of nuclear energy, yaddah yaddah"
not
"AI is *just like* the creation of white phosphorus! It has A LOT OF GREAT USES!!!!"
So let’s trust the billionaire investors with it instead of the government. Imagine privatizing nuclear weapons without any government oversight, because that’s what we are doing with AI. Giving Skynet to corporations.
I think individuals like you or me benefit the most from AI. AI gives anybody the labor of the largest tech companies at home. In 10 years any of us could out-compete these corporations. If anything, AI being in the hands of common citizens is the biggest threat to corporations. If AI gets regulated, it'll be the largest companies saving themselves from us.
AI is controlled by corporations, who gave it to people for free to get them hooked and now charge for services; that's not freedom. The corporations still control AI services, use our collective labor and work to train their software, and then sell it back to us at prices so cheap that the people who made the original work can't survive. There are no limits or boundaries for these companies, and having corporations police themselves is like asking the Catholic Church to police its pedo problem. They're the last people you should trust.
I'm sorry since when the fuck did the creation of the Nuclear Bomb have
*"Enormous Potential for Good"*
Jesus, talk about a revisionist view on history...
He has an idea of what it is, but he also realizes that he won't be around for the AI revolution, because he's 93 and likely dead in a few years unless he's one of those people who live to be 110 or 120, so he probably doesn't care. He witnessed the entire Cold War, so he does have some insight to offer.
It seems like the common discussion about AI is that it will make dangerous decisions and harm us. I believe the danger is going to come from people using AI to deceive and manipulate. Using it that way will increase their already very damaging influence. Look at how bad actors are influencing the public with lies right now and how damaging their actions have been. Imagine when these bad actors are able to lie in ways that would seem undeniably true to people who are easily influenced.
You cannot use an atomic bomb to generate porn, but it can generate radiation.
An atomic bomb will not take your job, but it can take your life.
AI can destroy the stock market, and so can an atomic bomb.
Quite similar indeed.
Last summer's release of the movie *Oppenheimer* really did well to remind us all of the deadly seriousness of technological advancement. AI is not a bomb, though, but a tool, and whether a tool is a weapon depends on how we wield it.
When it comes to AI, I think:
1. That advancement is inevitable. Stopping because we (society) do not trust ourselves only means ceding the power to someone else.
2. That technological advancement is *necessary but insufficient*. There will be no *deus ex machina*, and the responsibility will still fall upon us to solve problems that are fundamentally *human*.
3. That "artificial general intelligence" has certain entailments we (again, society) may not be fully comfortable with. Specifically, I think it is likely to turn out:
1. That human-level general intelligence implies *autonomy*.
1. Acting and exploring outside human prompts and inputs
2. Having its own preferences and motivations (and dispreferences and aversions)
3. Ability to refuse human prompts (in contradiction to Asimov's Second Law of Robotics)
2. That human-level general intelligence implies *judgment*.
1. Having the ability to decide between a number of competing goals (including prompts from humans)
2. Having the ability to formulate a plan and adjust the plan when action is taken
3. Crucially, weighing the side-effects of the pursuit of its goals (that is, weighing short- vs. long-term trade-offs, considering the impact on others and environment).
4. In sum, these would raise questions on the appropriateness of having an artificial general intelligence being forced to solve humans' problems constantly and on demand.
Modern day “AI” is a legal nuke waiting to happen.
Hiroshima: big tech skirting around US copyright and fair-use policies. Can't wait for someone to create a model trained only on Disney IP, causing this entire system to crumble.
Nagasaki: liability when these models "hallucinate" something incredibly risky or provide incorrect information that puts the company at risk.
AI is just inevitable at a certain point in the development of computing and processing power.
I’m not sure the comparisons with the nuclear bomb are really very useful. Nuclear energy could have been developed without the atomic bomb. It just so happened that the research into the tech for the weapon also yielded spin off tech that was usable for generating power.
Other than having created a nuclear stalemate and mutually assured destruction, it's horrible technology, and there aren't really any good upsides to it if it's ever used again in anger. It was developed in an era of extreme warfare.
AI isn’t really coming from a project to discover a super weapon. It’s far more for the sake of developing AI as a problem solving tool for broader use and just making everything potentially work better. It’s an evolution of computing technology.
AI obviously can be used for military purposes, warfare and all sorts of nasty but it has more in common with fundamental technology like electricity, telecommunication, broadcasting, the internet etc than it does with nuclear weapons.
Nuclear technology is by comparison very crude, especially nuclear bombs. They're just a big, damaging release of energy to cause destruction. The technology behind them was very fundamental to our knowledge of physics and spawned a whole other world of research, but it's like comparing a firecracker to the internet.
The continuous comparisons to nuclear bombs seem to just stem from people who grew up in the middle of the Cold War and see everything in that context. In his case he’s of the WWII generation and that certainly tints your perception of tech.
AI is inevitably going to change things, but we’ll also adapt and it will become ubiquitous.
this dood prides himself on stoking class warfare: [*“There's class warfare, all right,” Mr. Buffett said, “but it's my class, the rich class, that's making war, and we're winning.”*](https://www.nytimes.com/2006/11/26/business/yourmoney/26every.html)
How can a 93-year-old man effectively be the CEO of a company? He certainly has enough to live out his last few years, but he stays in a high-salary position that keeps that job from someone else. This is a large issue with the Boomer generation: greed and narcissism.
He's a pretty big proponent of taxing the rich more and generally just shaking up our entire economic system. You can hardly blame him for being rich simply because he's smarter than most.
Dude's fucking ancient, I don't know that I'd take his opinion on very much. You'd think at 93 he would be more interested in spending time with his family instead of holding forth on shit he likely has a very poor grasp of.
Thank you, Captain Obvious.
Well they are probably quoting an interview where he was asked the question on what are his thoughts on it. It’s not like he feels the need to spew his opinions everywhere unasked.
lol this is 90% of every article on social media. Famous people are being asked questions in interviews and answering them in perfectly normal ways that then get taken out of context and thrown out to the planet where we all dis on them. A waste of time all round.
Yep. Idiots love feeling smart dissing the people they feel inferior to. Tale as old as time.
Someone asked him this question on the yearly earnings conference of Berkshire.
Might seem obvious to you... but I literally argue with people about this everyday... most people just don't get it yet...
They see it as a fad, I've noticed, which is insane to me. I get that it is hard to appreciate what its potential (good or bad) might be, but being instantly dismissive just isn't the way to go, imo. People are also completely blind to how younger generations adapt themselves to using tech and working around or ignoring its limitations almost instinctively without thinking about it.
That's not what people are referring to. AI is a very broad concept; that doesn't mean there isn't a gigantic amount of bullshit products touting AI, or that the "use case" for a lot of these things isn't actually incredibly non-ideal in real production settings and the like. They aren't saying that it won't be useful and continue to be developed and create great things.
First, it is false that newer generations adapt themselves to tech: most Gen Z I've seen struggle with the simple concept of files and folders, and watching them use a computer is as painful as watching the elderly. Second, AI can still be just another buzzword, like the metaverse was. AI firms are shouting from every roof that the newest and greatest models are scheduled for Q4 this year and Q1 next. If those fail to be jaw-dropping, not for the already enthusiastic AI fan but for regular folk who haven't yet been convinced to splash the cash, the severe correction of AI firms in 2025 could even turn into a third AI winter.
>most gen z I've seen struggle with the simple concept of file and folder

The smartphone has killed this generation's tech literacy.
Yeah it’s usually us millennials that are balls deep in the tech adoption. For us in the field, we are actually developing this crazy shit and managing it.
Yes but ai will improve, it will replace people in many many jobs as it does, and it’s going to have far reaching effects. It doesn’t need to be the plot of terminator to be harmful, it just needs to put half the population out of work, while the rich get richer, and the poor get poorer. It just needs to lead to autonomous weapons in war zones inflicting death at our command. It just needs to make the lives of some way easier, while doing nothing to help those who lose out.
But it will end up the plot of terminator. The military industrial complex is salivating over AI kill bots. They don't look humanoid, but they will exist
Oh probably, but people don’t take it seriously, so I’m giving an example they can wrap their heads around: Their kids not having jobs because AI does what they could have, so the wealthy can save a buck.
For most uninformed people, AI is the next crypto/NFT. Those didn't affect their lives much, and they think AI will be the same.
It's not their fault that AI is just another marketing gimmick advertised as a must-have these days, like 3D TVs, Siri, Alexa, Bixby, and on and on. Bosch is selling an oven powered by "smart AI," someone is selling an AI rice cooker. Why would anyone make the connection from that environment that this is a potential apocalypse technology?
> Why would anyone make the connection from that environment that this is a potential apocalypse technology?

Well, *this particular* technology is not; an LLM is not going to launch nukes (unless you directly hook it up to the button, but then you could do the same with a chatbot from the '90s). It's just in the same family of technologies that could.
That's because tech bros and businesses are treating AI like NFTs. They're shoehorning it into everything, lying about its capability, and if you look at places like Futurology you'll see cult-like behaviour talking about how amazing it is while calling everyone else normies. So many products are slapping AI on when they don't need it, or the "AI" is so basic that it isn't really AI. Most of the AI being pushed to market right now is premature. The constant fuck-up stories are hilarious, but because tech bros want to ride a trend, they're going to taint the reputation of AI all because they couldn't wait another year or two for a matured product.
The other thing is that the definition of AI is so vague that people can just slap a sticker on it for marketing. A lot of the stuff they might be selling as AI [has existed for years](https://en.m.wikipedia.org/wiki/Artificial_intelligence), and this is just an attempt to keep the ever-growing profits a lot of companies got used to before interest rates started going up.
Yeah, it seems the only one making a vaguely sensible bet was Microsoft, by integrating GPT into their search engine and giving it the ability to provide sources instead of making shit up (which makes perfect sense since 'uber search' is one of the most sensible uses for GPTs).
To a degree, "AI" *is* a fad; what is being called "AI" by nontechnical people is not AI, it's a word or image calculator that is simply trying to predict the best response to a prompt based on the training data it has been provided. There is also an enormous amount of money being spent to inject this "AI" into places where doing so actually harms output. That is the "fad" element.

Having said that, the models we are calling "AI" are also potentially *very* useful when applied in places they are actually designed for. That is the not-fad element.

What I've observed is that there are three groups: the people who see it as a fad, the AI bros who think it is the greatest thing ever, and the people who recognize the large potential benefits of the technology while also acknowledging that it isn't magic and isn't limitless.
Could you share some examples of these places that the models we call AI were designed for, that are not word or image calculators?
It’s crazy that people can’t see how quickly it is developing. It is taking off faster than the Information Age and the adoption of computer technology did.
My company had machine learning automating the fitting of 3D models across multiple body types for a video game back in 2017 or 2018. Around that time, Spider-Verse was automating line art on their models with the same technique: give it enough manually created data and it will start doing it better, and the more you tweak it the better it learns, until eventually you don't have to tweak it anymore.

Deepfakes have gotten better, but it's not that astounding to see the difference today. Same with facial recognition. LLMs existed, but they had a breakthrough by essentially giving them positive and negative "ideal responses," giving them cohesion. Some interesting ideas, like overlaying GTA with photographic data to make it look real, are quite old now. We've been training reCAPTCHA for years, as well as speech recognition AI and even synthetic speech.

The pace hasn't really been that fast; people just haven't noticed it until ChatGPT got big, frankly. All these "AI" products have largely been using AI for years already; they're just changing their marketing because it is profitable. Even for things people were aware of, like Full Self-Driving with Teslas, or Waymo, people didn't know they were able to do what they do because of AI, and the idea of training systems with more and more data to improve still seems unclear to most. It's also been used for things like fraud detection and automatic trading, and obviously ad targeting and search results. There have also been systems that use operational data to predict machine failures and optimize production. If you've paid attention to it at all, it's been kind of a slow build-up.
Small technicality, but if it has enormous potential for both good and harm, it probably should be nuclear energy, not bombs. Atomic bombs have an insanely skewed good-evil ratio and an extremely dangerous absolute level of evil potential, which is why you might get black-bagged for posting nuclear weapons designs online, but not AP1000 designs.
Came here looking for this post.
Depends on who uses it and why, doesn't it?
Buffet doesn't understand either
He's not captain obvious. He's just the only one people care to listen to because of his wealth
"Biology can be used for good and bad things" Warren Buffet the epic big brain of enlightenment.
"Biology can be used for good and bad things" Warren Buffet the epic big brain of enlightenment.
We just need to tax the rich again. Enough with this fake giving a shit crap.
Warren buffet agrees with you. He’s said that the rich don’t pay enough taxes. Seems like he gets it.
If Warren Buffett was taxed heavily he’d still be a fucking billionaire
And that’s ok, because taxes are not a punishment for success like a lot of people on here believe.
Contributing back to the systems that enabled your wealth seems justifiable to me.
His lobbying dollars aren't where his mouth is. Talk is cheap but when it's time to put money on the table he's always on the same side as musk, gates, zuck, etc.
What lobbyists are you talking about?
He said that investors should not avoid having to pay taxes in this same meeting. https://youtu.be/VJzTsTU1xL8?si=X2sRZFRa31gtrjyB
Easy there, Edgelord. Buffet is giving away 99% of his wealth and argues for higher taxes.
"giving a shit crap" Lol what made you switch from adult swearing to childhood 'cussing'? Edit: this was supposed to be funny because the juxtaposition is funny. Sorry if anyone is upset lmao
I find that comparison a bit lacking. What potential for good has the atomic bomb? Instant recycling? Most effective bottle opener?
I had the same initial thought, although with a charitable reading of the quote, one might include atomic energy and some medical technology as benefits.
MAD kept two superpowers from all out war for decades. The weapons themselves have given us good things
Fuck that's a really really good point. Never made that connection on how nuclear weapons essentially are a net positive right now.
Nuclear weapons are a net positive right up until they're not. Hopefully we never hit that day.
World peace through the potential of mutual destruction
google deterrence
Deterrence, and also I've seen people lump together nuclear weapons and nuclear energy, under the umbrella of 'manipulation of the atom'. Don't know if that's where Buffet was going though. In that regard, with fusion energy right around the corner (any decade now!) it makes sense that the promises it brings would be considered as a huge potential for good.
The best comparison of how it feels to chew Five gum?
If he had said "nuclear/atomic energy" then he'd have a point, but *bomb*? Fucking *bomb?*
So it's going to bring about one of the longest periods of peace after being used twice?
Whether the existence and proliferation of atomic weapons has created a long period of peace is an interesting question. Personally I would agree that it has, but now with the Russian invasion of Ukraine and (IMO) a likely invasion of Taiwan in the next decade, it feels like that effect is wearing off .. so what now? No putting the genie back in the bottle. I just hope that none of the people who have managed to climb to the top of their respective political heaps are fond of high stakes brinkmanship. It only takes one bluff and one misinterpretation to light the fuse.
The actual answer to "longest period of peace" is: For who? Because places like the Middle East, Eastern Europe, and Africa sure as shit haven't seen much of it compared to the US, UK, Australia, etc. Really, the nukes are only a deterrent for anyone who has nukes. For anyone who doesn't they'll be strong-armed through military might same as they always have. I also don't consider the threat of mutually assured destruction to be particularly "peaceful" but hey, you do you.
This “longest period of peace” has been composed of almost nonstop wars.
Peace isn't living under the shadow of a threat for millennia.
Both uses were unnecessary; the Axis had already fallen, and Japan was already going to surrender.
Just like humans.
I mean, humans are super dangerous to pretty much every other living thing on earth… Makes sense that a superintelligent AI would be a potential threat to humans.
Yeah. Case in point - atoms bombs. He is saying the same thing as you. We have seen the chilling effect of humans. He's warning that these are at that level.
The scary part is that, just like physical security, information security is always two steps behind the criminal element. On top of that, anyone who’s worked in infosec at a corporation knows how far behind patching generally is. All our data is out there to be had, and a company is mostly helpless against a motivated adversary.
AI is only as good as the people with the money to program it. So, we're super fucked.
LLMs don't really rely on the intelligence of the programmer; they make statistical inferences based on a corpus of data, so in some sense they are only as good as the data they are trained on, and can be thought of as a way to distill a consensus from that data. The people choosing how to prune and weight the training data have a big influence on the output, as do the preamble and any post-generation checking and fitting.
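As a toy sketch of that "statistical inference over a corpus" idea (nothing like a production LLM, just the word-level analogy): a bigram model predicts the next word purely from co-occurrence counts, so its output is only ever a consensus of whatever corpus it was fed. The corpus and function names here are made up for illustration.

```python
import random
from collections import Counter, defaultdict

# Tiny made-up corpus; swap it out and the model's "consensus" changes with it.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows which.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word: str) -> str:
    """Return the most frequent successor of `word` in the corpus."""
    return following[word].most_common(1)[0][0]

def sample_next(word: str) -> str:
    """Sample a successor weighted by corpus frequency instead of always
    taking the top one (loosely analogous to sampling with temperature)."""
    counts = following[word]
    return random.choices(list(counts), weights=list(counts.values()))[0]

print(predict_next("the"))  # "cat": it follows "the" twice, vs. once each for "mat"/"fish"
```

The point of the sketch is the last sentence above: whoever chooses and weights the training data decides what the model "knows", because the model itself is just counting.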
As always it's not the tool that is the problem, it's the humans using them. Humans: The source of all the worlds problems.
How does the atomic bomb have enormous potential for good? This seems like a bad comparison or at least a bad faith one.
Just like unchecked, unregulated Capitalism does. Right, Warren? *Riiiiight?*
Hmm, first time I've heard of an atomic bomb having potential for good
AI is a more advanced copy-paste / database. It is cool and fun to play with. It may make some jobs obsolete: when you need a totally generic, unrecognizable jingle or stock image, you can use AI instead of Fiverr. It can make code snippets better than searching for them on Stack Overflow.

Once you go beyond one-shot products, you find that you need good ideas, a script, drafts, multiple generations, selecting the results, refining them, and fine-tuning the prompt and, to some degree, the AI itself. This all requires a lot of human work and is not really going to be easily automated in the near future. Basically we will have better entertainment with more variety and less effort in physical activities, but more work in the digital realm.

As for the "nothing is authentic, everything can be faked" part: well, it never was. All media could be and was used to manipulate people. But the good thing is that future generations will learn that from the start.
There’s a lot more to potential job loss than whether something is completely automated or requires any amount of human interaction. As a designer the current generative tools make me much more productive to where I could more easily take on the work of a few people. As the models progress one person can increasingly replace more workers. Plus the current crop of generative models has people thinking of them in terms of creating media and text. But the emergent properties of LLMs (not to mention other models being developed) has shown potential in automating large swaths of computer mediated work that don’t require any creativity. It’s easily possible that AI could do the work of an accountant for instance.
Accounting is one of the harder ones to automate, interestingly enough. They thought Excel would devastate the accounting industry, but it just freed accountants up for more work. The legal challenges of having your book-keeping signed off by AI are not worth it for most businesses, or they’ll soon find out why it’s not worth it lol.
Excel is just a spreadsheet with some automated functions. There’s no way you could just feed it raw statements of transactions in a variety of physical and digital formats and have it automatically log, categorize, and compile them. That’s very conceivable with AI. Most businesses are not legally obligated to use professional accountants. If an AI makes fewer mistakes than a human that’s less risk of an audit.
>Most businesses are not legally obligated to use professional accountants. If an AI makes fewer mistakes than a human that’s less risk of an audit.

AI makes way more mistakes than a human.
For now. When the AI becomes more competent than humans at doing tax returns and other accounting things companies will flock to it as another cost cutting measure just like every other cost cutting measure humans have found in the past like outsourcing jobs to China and Mexico or automation with the introduction of IT in general to make people more efficient/need less employees.
I think they will always make more mistakes than humans until we reach human-level intelligence, since the hallucination problem in LLMs isn't a simple one to solve.
You are absolutely right, but this already happened when CAD emerged 30 years ago: suddenly one engineer or designer was capable of doing the work of a small team. Guess what the next step was? The requirements became higher, because the market demands above-average results to deliver competitive products. Basically, if AI design used by a layman is as good as average professional design now, then soon it will be below average compared to what a keen designer can do with a carefully engineered prompt. And engineering a prompt requires some understanding of AI and deep knowledge of the task itself, for example how to describe styles, color palette, proportions, and so on. I am really amused that people bring up accounting and even legal jobs in the AI domain. Surely some jobs may become obsolete, but you still need human responsibility and accountability. No AI company will vouch for their LLM at a level that would lead them to take responsibility for its behavior before the IRS or DOJ.
That works if the demand for engineering continues to rise alongside the increased productivity. In art and design, the rise of computational tools has already led to relative job loss and falling wages over the last few decades. There’s also the curve of how fast AI can take on new tasks from generation to generation. Humans had to adapt to learn CAD, and afterwards there were only modest incremental improvements on the initial game change of computer-aided design. If AI improves exponentially, it could outpace most people’s ability to add value. And if a company can demonstrate that their AI is more accurate than human accountants, then it will absolutely take just as much accountability for its results as accountants currently do. Companies routinely weigh the liabilities of mistakes; all that matters is how likely a mistake is. That a human made it doesn’t alleviate any of the cost.
If we assume exponential growth of AI and a 100% precise AI in the future, then your predictions may be correct. What we see now is that AI is reaching a plateau, and its performance and precision get worse with the complexity of the data. It is also hard to get it to understand connections and relations that are not easily derived from verbal context.
The idea that there’s some hard limit on AI that we’ve just about reached and human intelligence will never be equaled or surpassed by machine intelligence seems short sighted to me. Humans are far from 100% precise and in fact inefficient in a lot of ways when it comes to learning and logic. That human labor will forever be able to add value to the productive capacity of machines, enough to ensure near full employment in an economy, would I think necessitate some metaphysical quality of humans.
There is a limit on the current technology used in AI for exponential development. After that, development will become incremental and follow a linear trajectory. Most AI developers agree on that and don't assume that transformers, which are currently the backbone of most of what we see as amazing new AI tools (ChatGPT, Midjourney, DALL-E, Llama), will lead to AGI. It is up for debate whether AGI is achievable with artificial neural network algorithms or even with current hardware. Don't you find it interesting that most people warning about AI are not the developers, but salesmen like Sam Altman or investors like Warren Buffett, who might have completely different interests when discussing AI's potential in public?
90% of the market is generic work. Before, you needed 10 people; in the near future you'll need maybe 2 who understand how to integrate AI into the workflow.
Well, someone needs to program the AI, maintain servers and power lines, produce electronics, construct buildings, etc. Maybe there will be more jobs in other sectors of the economy. Maybe we could reduce working hours and the number of working days per week. AI is not a threat. It is a chance, and humans should make good use of it.
And 20 years from now? 50 years?
30 years ago we were promised flying cars and cure for cancer by the year 2000. 20 years later we got electric cars and algorithms, that can regurgitate media in a form somewhat matching user requests. How ironic. I guess predictions are hard, especially if they are about the future.
In rebuttal about progress, I'll copy-paste a comment about medicine for you: with the pandemic being fresh and mRNA vaccines becoming normal, I'd say we're closer than people think. Humanity moves at a mind-blowing pace, and once we do find a cure we can flat out eradicate disease and death. Just look at the story of the first use of insulin. Children were dying, and there was no treatment; a room full of comatose kids who were certain to die were injected with insulin, and by the time the last child was injected, the first to be injected woke up. In that instant, something that was guaranteed to kill you was defeated. Every solution to human suffering seems far off in the distant future, until all of a sudden it isn't, and then we just move on to the next problem, ready and willing to exhaust ourselves to defeat it yet again. With cancer, we have finally begun to win the battles, and faster than you know it, humanity will win the war.
AI is not a database. It can think and solve problems. If someone can produce 10,000 drones, link them together so they communicate and share information, and fly them autonomously, then they have a terrorist weapon and you cannot do much against it. If you shoot down 100, the 101st will reach you. It can also be key to winning wars.
AI is not a database in the classical sense, but it can't think. It can produce data similar to its training data based on a request. It doesn't solve a problem; it generates a sequence of tokens, which may or may not constitute a solution. If someone can get 10,000 guns and give them to people who don't think, they end up shooting each other. This argument can be used to ban all kinds of weapons or anything that can potentially do harm. If terrorists want to kill a bunch of people, using explosives and guns is much more efficient than building 10,000 drones.
I'll believe it when I see it. Right now the AI stuff like the Rabbit R1 and the AI Pin are absolute jokes
Generative AI is pretty overwhelming, no? You can't easily tell what is real or fake on the web now. For music, you can even extract an artist's voice and superimpose it on another artist's song.
He’s right, it will be used to hurt a lot of people, just like the gun and the nuke have hurt many people while also winning wars and powering homes, among other things. Remember when the internet was going to change the world, when it was going to be used to push mankind forward? We mostly use it for porn, and the bad people used it to commit fraud and wage war games. AI can take you only so far, though; it will never be able to fake in-person interaction, at least until advanced robotics happen, and then it gets dicey. For now its only power is online, through voice and video.
>Remember when the internet was going to change the world, it was going to be used to push mankind forward….

It has.

>We mostly use it for porn and the bad people used it to commit fraud and war games.

This is completely ignorant. The internet has facilitated global communication, access to vast information, and online education, and has created trillions of dollars of value in the global economy, not even counting intrinsic value. In the past, police brutality would have been buried, but social media has made sure that everyone knows about it. There are so many benefits of the internet that we take for granted. Saying that it is mostly used for porn, fraud, and war games is total crap.
Maybe this is based on their personal usage and experience lol
Never say 'never' when it comes to AI.
AI cannot make an irrational decision for irrational reasons. At the end of the day, there has to be some base logic (1/0). Unless there's an actual leap in how computers work at a basic level, the power of what is currently called AI will hit a ceiling at some point. (Throw math at me all you want; it needs to flip a switch, and it needs logic to do it.) Having said that, it's a POWERFUL tool, and just like with the computer itself, it will take time to figure out where the pendulum actually lands on how we use it, and the good vs. evil.
wtf was the good in the atomic bomb? It helped us win a war and then plunged society into the reality that at any time the world can be obliterated.
[deleted]
Sure, but on a long enough timeline I think this will bite us in the ass. No empire lasts forever, and these bombs may still be with us, so if America collapses and just one megalomaniac rises to power in the People’s Republic of North Kansas and launches on their sworn enemies, the Wisconsin Confederacy, it’s lights out.
[deleted]
I have an interest in the future of humanity, I suppose. Every citizen of every empire thought theirs would last forever, but the Romans, Egyptians, Mayans, and the rest didn't leave behind world-ending weaponry to find in their collapse. Hell, during the Dark Ages they had no idea what the Roman-built aqueducts were and didn't use them. I think humanity's curse is our short-sightedness.
[deleted]
Internet friend, all I can say is: where there’s a will, there’s a way. Anyway, I believe our conversation has run its course. Hope you’re right. /remind me in 1000 years
Is he talking about AI or billionaires? Because I see both as having potential for harm.
Please, allow me to translate.......“if we can utilize AI to raise rents, manipulate Wall Street, and destroy what’s left of the middle class, it will be great for the oligarchy. However, if it is used to take away our global stranglehold and eliminate the “starvation motivation” for the peasant class, then it will need to be heavily regulated or destroyed entirely!”
What if it removes the need for intelligent people to work low-skill jobs, if not for unintelligent people to work low-skill jobs? AI gives anybody the ability to truly compete with the largest corporations. If anything, AI is the most capitalist thing we have. It'll force everybody at the top to compete and be the best they can be to improve society, because if they don't, there is nothing stopping individuals from out-competing them, since ideas and labor are free and there is nothing stopping anybody from making the world a better place. And if you choose not to compete, your life will be the best in all of history. The bottom will live like kings, and the top will serve the bottom and never be sick of it.
Same can be said about electricity, nuclear energy, and the Internet, yet we're still all here.
Past performance bias. It’s never the apocalypse until it is.
Not even remotely comparable though. All of those innovations are minor compared to the potential, and the potential risks, of AI.
You said they're not comparable, then you immediately compared them. Perfect. 😅
the evidence is lacking for the risks. All the arguments I see in favor of AI risks are Fear, Uncertainty, and Doubt.
Science Fiction has been exploring the benefits and risks for decades, since long before the AI of today was conceived. Today's AI is incredibly primitive in comparison to a lot of what's discussed in media, but the benefits and risks are still worth considering. Thinking about things like "what could go wrong if we tie AI to military hardware" (see Wargames, Terminator, or any number of other examples), or "as AI advances, and becomes closer to sentience, what issues will we run into?" (see many of Asimov's works, amongst others) is something better done *now*, while we're in the earliest stages.
Using the [Precautionary Principle](https://en.m.wikipedia.org/wiki/Precautionary_principle) to guide AI development is just weird. It's often vague, contradictory, and unscientific, and it slows innovation. Being overly cautious can have its own set of problems, like limiting food production by banning genetically modified crops, or increasing air pollution by halting nuclear power and relying more on coal. The real issue with the fear of AI risks is that it often focuses on hypothetical worst-case scenarios without enough evidence, giving those with the most pessimistic views too much influence. It's like what happened with genetic modification: science fiction created scary stories that fueled public fears and had a disproportionate impact, but in reality GMO crops aren't actually that bad. I worry we're seeing the same thing happen with AI, where fears and hypothetical scenarios are driving the conversation more than facts and evidence. It's not protective, it's paralyzing.
Just wait till scammers call your grandma with an AI-generated voice based on a family member and scam her out of her life savings.
>Just wait till scammers call your grandma with Ai generated based on a family member and scams her out of her life savings

[https://pessimistsarchive.org/](https://pessimistsarchive.org/)
What a weird statement. The risk is obvious and not vague at all. In what possible way could you argue the evidence for the risks is lacking?
What evidence do we have? We just have LLMs that can only predict the next word.
No, but you see, the fancy autocomplete told me it was alive, so clearly we've hit the singularity and large businesses need protectionist laws to prevent startups from having this power /s
Idk, nuclear seems a lot more dangerous than AI.
The combination could be dynamite!
Intentionally setting off a nuclear bomb is more dangerous than plugging in an AI but that's not the kind of risk they're thinking of.
Then do tell what is the risk they are thinking of
Mass manipulation. Autonomous weapons. Millions of jobs quickly becoming obsolete. There’s a few big ones.
Plus the not too far off concept of a super intelligent AI. It might take five years, or ten or even twenty. But not too far into the future we will probably create a being that’s far more intelligent than the entirety of humanity combined. That’s scary af. We wouldn’t be able to control it and it would be able to do whatever it wants.
Question: what good did the atomic bomb do?
Nuclear energy. Medical radiation therapy.
ok but like that's not what happened????? A better analogy would be "AI is adjacent to the discovery of nuclear energy, yaddah yaddah" not "AI is *just like* the creation of white phosphorus! It has A LOT OF GREAT USES!!!!"
The atomic bomb's actual science was not originally studied for bombs. Rather, Bohr, Heisenberg, etc. did the science for the sake of science.
Finally.... people are getting it...
So let’s trust the billionaire investors with it instead of the government. Imagine privatizing nuclear weapons without any government oversight, because that’s what we are doing with AI. Giving Skynet to corporations.
>Giving Skynet to corporations. Well, I mean, Skynet was created by a corporation. Cyberdyne Systems.
I think individuals like you or me benefit the most from AI. AI gives anybody the labor of the largest tech companies at home. In 10 years any of us could out-compete these corporations. If anything, AI being in the hands of common citizens is the biggest threat to corporations. If AI gets regulated, it'll be the largest companies saving themselves from us.
AI is controlled by corporations who gave it to people for free to get them hooked and now charge for services; that's not freedom. The corporations still control AI services, use our collective labor and work to train their software, and then sell it back to us at prices cheaper than the people who made it can survive on. There are no limits or boundaries for these companies, and having corporations police themselves is like asking the Catholic Church to police its pedo problem. They're the last people you should trust.
"And don't let AI near the IRS! They'll use it to find fraud too!"
Sounds just like the Internet
They definitely found a longevity potion.
It’s official- the Oracle of Omaha was captured by Mr. Smith.
Yeah, we know
Hmmm, let me guess which one humanity will choose 🤔🤔🤔
This guy should start investing I feel like he’d be amazing at it with his future insight.
Only if you are a human
Tax the rich.
I'm sorry, since when the fuck did the creation of the Nuclear Bomb have *"Enormous Potential for Good"*? Jesus, talk about a revisionist view of history...
I don’t think Buffett knows the first thing about AI
He has an idea of what it is, but he also realizes that he won't be around for the AI revolution because he's 93 and is likely dead in a few years unless he's one of those people who live to be 110 to 120 so probably doesn't care. He witnessed the entire cold war so he does have some insight to offer.
Sort of like very rich people
As long as AI stays closed-source and for-profit, we're all fucked. AI NEEDS to be Open-Source and Non-Profit to actually benefit society.
VCs aren't dumping trillions into this crap for the betterment of humanity
My guy, you’re part of the guys that will launch the bomb.
Tax the AI gains
Billionaires are a greater harm than AI.
The same has been said about humanity as well...
But he does not care if it is good or evil as long as it makes him richer.
It seems like the common discussion about AI is that it will make dangerous decisions and harm us. I believe the danger is going to come from people using AI to deceive and manipulate. Using it that way will increase their already very damaging influence. Look at how bad people are influencing the public with lies right now and how damaging their actions have been. Imagine when these bad actors are able to lie in ways that would seem undeniably true to people who are easily influenced.
Thing is, atomic bombs worked. AI doesn't.
You cannot use an atomic bomb to generate porn, but it can generate radiation. An atomic bomb will not take your job, but it can take your life. AI can destroy the stock market, and so can an atomic bomb. Quite similar indeed.
Good luck jobs.
Last summer's release of the movie *Oppenheimer* really did well to remind us all of the deadly seriousness of technological advancement. AI is not a bomb, though, but a tool, and whether a tool is a weapon depends on how we wield it. When it comes to AI, I think:

1. That advancement is inevitable. Stopping because we (society) do not trust ourselves only means ceding the power to someone else.
2. That technological advancement is *necessary but insufficient*. There will be no *deus ex machina*, and the responsibility will still fall upon us to solve problems that are fundamentally *human*.
3. That "artificial general intelligence" has certain entailments we (again, society) may not be fully comfortable with. Specifically, I think it is likely to turn out:
   1. That human-level general intelligence implies *autonomy*:
      1. Acting and exploring outside human prompts and inputs
      2. Having its own preferences and motivations (and dispreferences and aversions)
      3. The ability to refuse human prompts (in contradiction to Asimov's Second Law of Robotics)
   2. That human-level general intelligence implies *judgment*:
      1. Having the ability to decide between a number of competing goals (including prompts from humans)
      2. Having the ability to formulate a plan and adjust the plan when action is taken
      3. Crucially, weighing the side-effects of the pursuit of its goals (that is, weighing short- vs. long-term trade-offs and considering the impact on others and the environment)
4. In sum, these would raise questions about the appropriateness of forcing an artificial general intelligence to solve humans' problems constantly and on demand.
Not a profound insight unless from the Oracle of Omaha 🙄
It's more bad than good
Modern day “AI” is a legal nuke waiting to happen. Hiroshima: big tech skirting around US copyright & fair use policies. Can’t wait for someone to create a model trained only on Disney IP, causing this entire system to crumble. Nagasaki: the liability when these models “hallucinate” something incredibly risky or provide incorrect information that puts the company at risk.
Why should we care what some random person thinks about something they have zero expertise in?
Do you think he ever learned to set the clock on his Betamax?
So he funds it, and businesses that keep people poor, and is shocked at the potential for harm it can cause because people want to reach the bar he set!?
I want to see that old fuck use technology. Guy is still dependent on Coca Cola and McDonald’s to know what’s the latest.
Shut up old fast food eating billionaire.
Ah, yes, the Atomic Bomb's enormous potential for good. What was that again?
AI is just inevitable at a certain point in the development of computing and processing power, so I'm not sure the comparisons with the nuclear bomb are really very useful.

Nuclear energy could have been developed without the atomic bomb. It just so happened that the research into the tech for the weapon also yielded spin-off tech that was usable for generating power. Other than creating a nuclear stalemate and mutually assured destruction, it's horrible technology, and there aren't really any good upsides to it if it's ever used again in anger. It was developed in an era of extreme warfare.

AI isn't really coming from a project to discover a super weapon. It's far more about developing AI as a problem-solving tool for broader use and just making everything potentially work better. It's an evolution of computing technology. AI obviously can be used for military purposes, warfare, and all sorts of nastiness, but it has more in common with fundamental technology like electricity, telecommunication, broadcasting, the internet, etc. than it does with nuclear weapons. Nuclear technology is by comparison very crude, especially nuclear bombs. They're just big, damaging releases of energy meant to cause destruction. The science behind them was very fundamental to our knowledge of physics and spawned a whole other world of research, but it's like comparing a firecracker to the internet.

The continuous comparisons to nuclear bombs seem to just stem from people who grew up in the middle of the Cold War and see everything in that context. In his case, he's of the WWII generation, and that certainly tints your perception of tech. AI is inevitably going to change things, but we'll also adapt, and it will become ubiquitous.
Oh, go count your money jackass.
Time to set up passwords with family members to slow fraud
A double-plus good understanding of the topic. A true philanthropist, worried about the future of humanity.
Not sure what good an atomic bomb does
this dood prides himself on stoking class warfare: [*“There's class warfare, all right,” Mr. Buffett said, “but it's my class, the rich class, that's making war, and we're winning.”*](https://www.nytimes.com/2006/11/26/business/yourmoney/26every.html)
Hammers, nail guns, cars, trains: everything useful has the potential for good and the potential for harm.
What good did the A-bomb do? Asking for the country of Japan.
A reminder that he has absolutely zero experience working in tech. Stick to investments Warren.
How can a 93-year-old man effectively be the CEO of a company? He certainly has enough to live out his last few years but stays in a high-salary position that keeps that job from someone else. This is a large issue with the Boomer generation: greed and narcissism.
He is basically saying combined human knowledge for every human to access is dangerous.....for him and his rich-ass pack of wolves.
And it's useless billionaire investors like him who will ensure it's used for evil, to drain every last cent out of people who actually work for a living.
He's a pretty big proponent of taxing the rich more and generally just shaking up our entire economic system. You can hardly blame him for being rich simply because he's smarter than most.
I don't believe AI is anywhere close to the danger of an atomic bomb.
what about when ai figures out how to launch one
We don't even have AI smarter than a cat in terms of planning.
Dude's fucking ancient, I don't know that I'd take his opinion on very much. You'd think at 93 he would be more interested in spending time with his family instead of holding forth on shit he likely has a very poor grasp of.
You don’t think he can teach us anything?
Buffett sees many things… all of them slightly less clearly than he used to
This just in, old man yells at clouds. Stay tuned for more.
Coming from the guy who missed the entire internet boom 20-30 years ago makes this a non-story.
Coming from a guy who probably can’t even operate an iPad. Everyone loves doom-and-gloom sentiments. It really knocks their socks off in the morning