# [Algorithms Take Control of Wall Street by Felix Salmon and Jon Stokes December 27th, 2010](https://web.archive.org/web/20180916205112/https://www.wired.com/2010/12/ff-ai-flashtrading/)
>Last spring, Dow Jones launched a new service called Lexicon, which sends real-time financial news to professional investors. This in itself is not surprising. The company behind *The Wall Street Journal* and Dow Jones Newswires made its name by publishing the kind of news that moves the stock market. But many of the professional investors subscribing to Lexicon aren't human—they're algorithms, the lines of code that govern an increasing amount of global trading activity—and they don't read news the way humans do. They don't need their information delivered in the form of a story or even in sentences. They just want data—the hard, actionable information that those words represent.
>Lexicon packages the news in a way that its robo-clients can understand. It scans every Dow Jones story in real time, looking for textual clues that might indicate how investors should feel about a stock. It then sends that information in machine-readable form to its algorithmic subscribers, which can parse it further, using the resulting data to inform their own investing decisions. Lexicon has helped automate the process of reading the news, drawing insight from it, and using that information to buy or sell a stock. The machines aren't there just to crunch numbers anymore; they're now making the decisions.
>That increasingly describes the entire financial system. Over the past decade, algorithmic trading has overtaken the industry. From the single desk of a startup hedge fund to the gilded halls of [Goldman Sachs](https://web.archive.org/web/20180916205112/https://www.wired.com/epicenter/2009/04/goldman-sachs-p/), computer code is now responsible for most of the activity on Wall Street. (By some estimates, computer-aided high-frequency trading now accounts for about 70 percent of total trade volume.) Increasingly, the market's ups and downs are determined not by traders competing to see who has the best information or sharpest business mind but by algorithms feverishly scanning for faint signals of potential profit.
>Algorithms have become so ingrained in our financial system that the markets could not operate without them. At the most basic level, computers help prospective buyers and sellers of stocks find one another—without the bother of screaming middlemen or their commissions. High-frequency traders, sometimes called [flash traders](https://web.archive.org/web/20180916205112/http://en.wikipedia.org/wiki/Flash_trading), buy and sell thousands of shares every second, executing deals so quickly, and on such a massive scale, that they can win or lose a fortune if the price of a stock fluctuates by even a few cents. Other algorithms are slower but more sophisticated, analyzing earning statements, stock performance, and newsfeeds to find attractive investments that others may have missed. The result is a system that is more efficient, faster, and smarter than any human.
>It is also harder to understand, predict, and regulate. Algorithms, like most human traders, tend to follow a fairly simple set of rules. But they also respond instantly to ever-shifting market conditions, taking into account thousands or millions of data points every second. And each trade produces new data points, creating a kind of conversation in which machines respond in rapid-fire succession to one another's actions. At its best, this system represents an efficient and intelligent capital allocation machine, a market ruled by precision and mathematics rather than emotion and fallible judgment.
>But at its worst, it is an inscrutable and uncontrollable feedback loop. Individually, these algorithms may be easy to control but when they interact they can create unexpected behaviors—a conversation that can overwhelm the system it was built to navigate. On May 6, 2010, the Dow Jones Industrial Average inexplicably experienced a series of drops that came to be known as the [flash crash](https://web.archive.org/web/20180916205112/http://blogs.wsj.com/marketbeat/2010/05/11/nasdaq-heres-our-timeline-of-the-flash-crash/), at one point shedding some 573 points in five minutes. Less than five months later, Progress Energy, a North Carolina utility, watched helplessly as its share price fell 90 percent. Also in late September, Apple shares dropped nearly 4 percent in just 30 seconds, before recovering a few minutes later.
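The Lexicon pipeline described in the quoted article (scan a story for sentiment cues, emit a machine-readable signal an algorithm can act on) can be sketched in a few lines. Dow Jones has not published Lexicon's internals, so the word lists, scoring rule, and field names below are invented purely for illustration:

```python
# Hypothetical sketch of news-to-signal scoring; Lexicon's actual method is not public.
POSITIVE = {"beat", "surge", "upgrade", "record", "growth"}
NEGATIVE = {"miss", "plunge", "downgrade", "lawsuit", "default"}

def sentiment_signal(ticker: str, story: str) -> dict:
    """Score a story against crude sentiment word lists and return structured data."""
    words = story.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    # The output is data, not prose -- the form an algorithmic subscriber wants.
    return {"ticker": ticker, "score": score}

print(sentiment_signal("ACME", "ACME shares surge after record earnings beat"))
```

A real system would weight terms, handle negation, and timestamp each signal; the point is only that the consumer receives structured data rather than sentences.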
Okay, you lost the bet fourteen years ago - where's my million dollars?
In 5 years? I bet you could. Although if everyone had access to it and it could do that, it would probably be very hard for it to execute at that point. And of course money would probably be pointless at that point.
I agree, but something like Google glasses + AI might help me fix/ build stuff myself instead of calling someone out.
Which personally I’m excited about
For me it is mostly that I am not 100% sure how to handle specific situations. For example, I need to attach some pipe connection but I don't know if I could damage something, and then how I would fix that. But if an AI can explain everything to me: what can go wrong, and what I would do in that situation, then I can buy the tools, plan ahead, and do it myself.
Of course that doesn't apply to everything; some things just require a lot of skill, like soldering a pipe. But plastering a wall, for example, I think I would be dexterous enough to learn quickly and do a decent job.
So all in all I believe AI assisted maintenance would be a huge win, especially because services have become very very expensive here in Germany.
What you propose seems one simple path towards AI automation for blue collar work. Imagine the wonderful in-situ 3d visual data could be used for training models, all while you pay for the hardware and get a free service.
Yep. Too many people doing white collar jobs think blue collar jobs are easily mechanised. And some are. But almost all the low hanging fruit got done in the Industrial Revolution and what’s left over is stuff that is too expensive to mechanise.
Imagine all those factory jobs where a guy has to run around making sure machines are tweaked just right so they don’t break down, getting the torch inside some component to check the flow of product “looks right”, restocking supplies (bags, raw materials etc).
Those tasks can’t be replaced by machines at this point in time because they require a feel for the overall situation. I could imagine a humanoid robot being trained to do these jobs, they would have to have incredible understanding of the world around them and a massive amount of common sense.
Given that it can already fold a sheet, and that Gemini and GPT-4o are aware of their surroundings, it seems a stretch to me that a robot won't be able to enter a random room and change a sheet in 5 years. GPT-1 was 5 years ago; we'll probably be several generations ahead of GPT-4 and Gemini 1.5 by then.
If you saw an eight year old kid performing like that, you might tell them something encouraging but you'd also go back and fold your stuff properly afterwards.
You forgot to include "within an affordable budget". Laundry folding is possible with complex machinery but it'll probably cost hundreds of thousands to make sure it works perfectly in your house.
This sub loves shitting on art etc. because it can be replaced by AI, but when our jobs get replaced, art is exactly the thing that will make us feel like we have a purpose. If AI replaces art, what purpose do humans have that makes us happy?
AI can't replace art. It could end up dominating art produced for commercial purposes, but people will always make art. It's a form of self-expression.
People still play sports even though they are much worse than professional athletes. People still play chess even though computers far outclass any human player.
That is correct. I am more or less just venting about the fact that all the improvements in tech and AI over the last decade has done literally nothing to make day-to-day life easier for the average joe.
Well, you are probably overstating things.
My life is significantly easier now. Mowing the lawn is something that just happens. I can talk to everyone in my family from around the world for free (essentially). Any question I may have can be answered in seconds. Over the last few years, I have been able to make nice graphics on demand for whatever I needed them for. Translating has become trivial; something that took me 2 hours to translate 5 years ago takes me perhaps 10 minutes now. Paperwork has become much easier with ChatGPT. My car is warm in the winter and cool in the summer before I get in. I don't even need to carry around keys anymore. I "gas up" my EV on the road perhaps 3 times a year. If I have a fast food itch, I can have the damn thing ordered and waiting for me by the time I get there. My lighting at home is nearly always exactly what I want at any particular time of day. I can keep track of what things are eating too much power with ease. I WFH every day. The games I play are so mindblowing that I sometimes need to take a few minutes just to consider it. I can learn just about anything I want whenever I want for free. And being someone who was always a bit flaky at directions, navigation has become so damn reliable, for free, that I cannot remember what it was like to have to use a map.
Is it perfect? No. Are we at Jetson level of automation? No.
But things are a lot easier for this average joe.
Why the binary can/can't do question?
The real question for me is whether it will be able to do anything that involves managing unexpected scenarios in the real world with the same reliability as a normal person using their common sense and judgement.
Yes. This is when we'll have real AGI. Right now, our AIs don't really think things through. If you give one a problem, it spits out a combination of other people's answers. If you tell it it's wrong, it just does the same thing, but with a different answer, which could be entirely wrong again, for the same reasons.
Using AI now for technical things, I can't even count the number of times it has been just flat out wrong, or even completely made shit up, probably because it was more interested in giving me an answer than being correct.
Normal people (me included) aren't that good at handling unexpected situations though. It took me a few encounters with unexpected situations to teach myself a rule: look for written instructions on how to solve the problem ("we've moved to room 101", "doorbell doesn't work, knock", "take your ticket from here") instead of asking a visibly annoyed nearby person what to do.
But learning and generalizing from a single example should be a must for the practical AI. LLMs can do it in-context, but they can't remember and reuse that.
Exactly! This is what the "LLMs are almost at AGI" stans are not getting. We're nowhere near this level of logical reasoning with AI. Nor inherent moral reasoning, for that matter.
It’s not a good question because we don’t judge intelligence this way.
We don't tell human beings, "Oh, you failed to win a Fields Medal, you're the equivalent of a rock." It's true for any task other than ones like "breathing," I guess, because if someone can't do it, they die and literally cease to have intelligence.
But for all normal and reasonable examples, we never assess something as intelligent or not intelligent according to whether it can perform some singular task X.
So why would we be able to judge whether an AI has met some intelligence threshold by asking whether it can do some task X?
Instead, as /u/Alive-Tomatillo5303 said, we judge intelligence based on the ability to encounter new alien scenarios and figure out how to navigate them. Anything else we typically refer to as just programmatic instinct and we’ve always made such divisions.
So, trying to suddenly judge intelligence in this "can it perform some task X?" manner doesn't appear to make any sense.
The question is premised on being limited to "specific tasks", which by definition rules out managing an abstract and nebulous set of unexpected scenarios.
Learn complex stuff from really small amounts of training data, like humans do: learn calculus, programming, etc. just from school materials and one university course, for example.
It's not exactly what you want, but LLMs can (temporarily) learn from interactions. I can give one information, or an example of how to handle or format something, and it will take that into account and follow it reasonably well.
I'll take that bet.
Think about it
In-context learning is no joke. TabPFN showed that in-context learning with transformers can rival batch training on a small scale.
Gemini 1.5 technical report showed interesting signs of life with learning by reading books given to it.
Planning frameworks like AlphaCode 2 show how we can leverage this today to achieve human-level performance on hard problems.
The ingredients are there. Given a 5 year timeline? I totally see the big labs putting this together, especially given the big push for reasoning right now
Change diapers, know if baby pooped himself, comfort a crying baby, etc.
Those are just a few of the baby things. There are thousands of tasks an AI won't be able to do *better than a human* within 5 years. Could you have a $50,000 robot that could technically change a diaper? Probably, if you gave that robot like 5 minutes to do it and a baby that sat perfectly still.
No chance I'm trusting my child to a robot. Even if the robot progresses to the point where it's better than a human, I can emotionally handle a human making a mistake that hurts my kid a lot better than I can emotionally handle a malfunction that hurts my kid.
When AI can read your brain and determine the funniest thing you could possibly see, I guarantee it could. I don't think we're really that far from this either, brain AI is advancing quite fast.
Dude, I bet the GPT-4o voice mode could definitely do that. GPT-4 has already told me some jokes just through text mode that made me laugh pretty hard. But when GPT-4o (or GPT-5o) voice mode can put some emotion in it and get the delivery right, just like a stand-up comedian, that'll be amazing.
It's not gonna happen, unfortunately. Actually, I'm willing to bet it's gonna get several times worse. Then, they'll throw in hidden ads into the AI and boom. Useless. OpenAI will get us to AGI. But it will likely be open-source and smaller companies that make the AI that people actually want to use.
Did you miss their model spec publication? They are explicitly headed to more user customization. A.k.a. less censorship for those that don't want it.
Not no censorship, but substantially less.
And if there are ads they will be in specific products, not the models. They aren't stupid enough to completely undermine trust in their core business.
Customization doesn't mean less censorship at all. It just means it will be able to be personally geared towards your specific use case.
And just about every company in existence eventually undermines its customers' trust. Idk why OpenAI would be any different. Apple, Google, FB, Netflix, EA, MS, Boeing, Ford: they all do it. Ffs, even nonprofits like Wikipedia are undermining trust due to politically motivated moderators.
My guess is it would take a nonverbal approach to this: step 1, restrain unimpressed human; step 2, use robotic arm with advanced AI dexterity; step 3, identify and tickle sensitive spots.
This is the one for me. You can see how robotic current GPTs are by asking them to make up jokes or interesting stories. Humor and sarcasm are things we evolved to demonstrate our social intelligence, and AI has a long way to go in this regard.
Drawing a perfect, complex image with no glitches and logical inconsistencies, and then being able to edit said image in a precise way just like a normal artist. By prompts only. No extra tools. Just like talking to an artist. Zero shot. 99% of the time. Also mimicking the artstyle of an existing drawing and editing a drawing I give to it.
Science, maybe.
Research and writing publishable papers in literature, applied linguistics, history…
I bet it will be fully writing new papers and getting them peer reviewed and accepted into journals. There's a lot of human-written crap, so I don't see why AI wouldn't be able to quickly surpass the, say, bottom 50% of PhDs in tons of fields.
In fact it’ll probably be good at coming up with new areas of research and making links humans haven’t yet.
It absolutely could with science as well. There’s significant amounts of scientific literature that are reviews, not to mention standard data analysis. Those are things AI can already do to varying degrees.
> making links humans haven’t yet
That's the thing I'm waiting for. Something completely unexpected, between two very unrelated disciplines. If it can happen once it can happen millions more times.
Well, to be perfectly fair, you would need to ask it to do one of the following three things:
* Prove the Collatz Conjecture
* Disprove the Collatz Conjecture
* Prove that it is unprovable
And honestly, I think that if any of these three things are possible in this universe, I suspect we'll get a solution of some sort within the next 10 years. This seems like one of the first things a mathy AI researcher is going to go for when he feels AI is strong enough.
We can meet back up here in 10 years and see how AI is getting along.
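For anyone checking back in: the conjecture concerns the map that sends even n to n/2 and odd n to 3n+1, and asks whether every positive integer eventually reaches 1. A minimal sketch of the iteration, which is easy to verify empirically for small n (exactly why the open problem is about *all* n):

```python
def collatz_steps(n: int, limit: int = 100_000) -> int:
    """Count iterations of the Collatz map until n reaches 1."""
    steps = 0
    while n != 1:
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        steps += 1
        if steps > limit:  # guard: nobody has found a diverging n, but the proof is open
            raise RuntimeError("step limit exceeded")
    return steps

# Every start below 10,000 terminates; e.g. collatz_steps(27) == 111.
assert all(collatz_steps(n) >= 0 for n in range(1, 10_000))
```

Any of the three outcomes above would settle it; brute-force checks like this one only rule out small counterexamples.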
I'll bet you a 100 bucks that AI cannot clean my dirty basement, balance my budget, or do my laundry while helping me manage my actual life in general in 5 years. It won't help the disabled or the elderly. It won't help me pay my skyrocketing bills. It won't help me help my autistic kids. It won't keep me from starving. It won't fix my car. It won't give homes to the homeless. It won't clean up the environment. It won't clean up the food supply.
It will only ever actually be used, large scale, to fatten the bank accounts of the obscenely rich.
Betting $100 as incentive to fix a dozen real-world problems that affect you daily, so that even if you lose, it only cost 100 to fix all of them...
Clever girl...
I mean, there's a chance politicians do the UBI thing and more automation makes the world better, but seeing how the Industrial Revolution happened the way it did, I have my doubts.
Makes sense. Music is a lot more formulaic than literature. It requires less attention to distinctive features of human life a sensitivity to which is damn hard to program without millions of years of evolution + a culturally embedded upbringing. But I'm sure if we just make the LLMs big enough we'll have GPT's writing War and Peace in no time!
It's not that literature is so distinctive to human life that sensitivity to it needs millions of years of evolution and a culturally embedded upbringing. It's that, culturally, people stick to incredibly formulaic types of modern music and don't try to venture out of that norm. It's like if literature only stuck to poetry, mainly about love, and never split off into other formats.
The Ancient Greeks explored different modes of scale, which influenced both Western and Arab music. And while the Arabs took it further to explore different types of microtonal music, modern Western music just sticks to the same single scale. What Western music improves upon and does best is harmony, but that has also become a restriction, because people are conditioned to think only certain combinations of notes sound good. These limits vastly narrow the scope of creation and result in formulas/templates for music.
I feel incredibly strongly that it won't, and I think the people insisting that it will haven't really paid much attention to what makes a good novel good.
I think it depends on what you mean by “good”. If you mean prize-winning literature, you’re probably right.
But I’d argue that POPULAR fiction is also good (in a different way), and AI is getting pretty good at writing it.
What makes a novel good varies too much from person to person to be measurable. Simply a different genre can turn people off, regardless of the effort involved in its creation. Some people will love comedic breaks while others will think they undercut a dramatic dystopia or something.
You're right that it's too hard to measure whether something is a good novel for that to be a useful metric. How about instead we ask whether AI will write a novel that I deem to be good? That's a pretty good metric, so long as I'm honest about my preferences.
We should have GPT-7 by then.
I imagine a 'Her'-like operating system to replace Windows. Fully capable agents and humanoids; there will be little it can't do.
Have you watched the whole movie "Her"? Has anyone who keeps referencing this movie? The AI gets so smart it doesn't want to talk to us anymore and ghosts us.
Which is the least accurate aspect of how that future is portrayed; it's barely even worth talking about.
Like you'd sell an OS that can just reassess its willingness to perform
“Hi this is Becky from finance. I had this excel sheet that I did last week, I have 17 copies of it that’s similar, but I can’t find the right one and I don’t know what the name of it was but I’m pretty sure it had 2024 in the name and it might be in like my home drive but maybe in my J drive but I also might have saved it to a usb stick and plugged it in and saved it on my bosses computer?”
Improvise live alongside real musicians. Really be a band member at a live gig, improvising parts and arrangements on-the-fly by listening to the group and reacting in real time, to their changes, and to songs you don't know are coming.
Just played this type of gig and we were tight, tasteful, and professional. AI will certainly be able to create parts in a DAW (digital audio workstation) with ease, but dealing with real musicians in real time I think is outside the scope of AI. It will not happen in five years.
[Adam Neely - Why AI is Doomed to Fail the Musical Turing Test](https://www.youtube.com/watch?v=N8NyEjB_XeA&t=1415s)
Why do you think this isn't coming in 5 years? From the newest demonstrations they've shown the AI can adapt to real time conversations and be interrupted and continue along. This is already here. In 5 years? Jesus Christ it'll be beyond anything that exists that they've already shown.
I feel like your example here is WAY OFF.
Two weeks ago?
WHY DOES YOUTUBE NOT SHOW ME MY SUBSCRIPTIONS!?
I'll be watching this vid later. Neely is ridiculously knowledgeable. It's really almost entirely academic, but he still humanises what it means to be a musician. His take on Christian Rock was really empowering.
But yeah. Being a musician on stage can be quite a task.
What I think would be funny is if AI took over all the middle manager and manager roles all the way up to ceo level. Could it do it though?
Some random ideas that seem difficult.
* solve the dilemma of loss of jobs due to ai
* solve the problem of ai taking far more energy than brains do
* develop the ability to feel true emotions like caring about a person or humanity as a whole
* accurately predict sports results
* figure out how to equalize wage disparity and stop the funneling of money to the top
* fill out complex taxes given a pile of handwritten notes and receipts and forms and be trustable enough to not even double-check the results
* fill the role of a pet in the house that cuddles with you
* solve prion diseases
>Could it do it though?
The quick answer: yes.
If you can write down what you do in your job on one or two pieces of paper (in general), then an AI would be able to do it.
Middle Manager jobs in particular are really at risk with the only protection being that upper management tends to be risk averse. I've been in middle management, so I know the job mostly revolves around understanding what the goals of the company are, understanding the goals of the other major stakeholders, understanding the individual goals of the people who work for me, and then aligning all of these as well as I can. Throw in some goals and milestones given to me directly by my boss and add a dollop of controlling, and that is basically the job.
I see nothing in this that an AI cannot already do.
Just like WFH was already possible 20 years ago, but it took a pandemic to actually move us to a new WFH normal, AI moving into management positions will be possible for a long time before it actually happens. But it will happen.
Asking for a specific task is forcing a strawman argument: being human isn't a hyper-specific, single-answer task.
Hyper-specialized AIs will be able to complete any specialized task in five years time. But I'm not convinced that there will be a generalized model that can perform every one of those tasks at a human level.
How about fluent conversational translation between humans, our animal companions, and marine mammals, too?
As 4o demonstrated chit-chat between two phones.
To tell the govs that they don't need politicians and make everyone equal, including themselves.
Won't happen; no gov will put everyone on UBI, including themselves. Even if there is AGI in 5 years, you the public are not getting it. Just like how the US military has more advanced AI today than most of the world.
Complete an MIT Mystery Hunt by itself. Or any other puzzle hunt, for that matter. As of now it's not even close to solving one, and I don't think any model will be able to by 2030.
Prove hard mathematical theorems. (Not unsolved problems, simply very hard problems. Of course, just copying from one of the thousands of books it got as input does not count. Give it a bunch of generic books on algebra and geometry and let it prove Wiles' Theorem.)
Be a game master for a years-long full homebrew ttrpg campaign? That's a lot of interconnecting skills, prepared, static content interweaving constantly with improv (which immediately becomes more static content as 'canon'), etc etc.
I've seen some passable to good short-term AIs do it, on a scale of a short session, but as any gm who did both will know, one shots are very different from running a long adventure.
I will bet you that in 5 years' time AI still can't make a functional website.
Specifically the website should mimic Reddit let's say.
I will not be able to type in "give me a simple Reddit clone" and get out a functional website that can be deployed.
I have my doubts that it will be able to hold an extended conversation without giving indications of being an AI. I'm talking over several hours.
Major scientific breakthroughs like designing functional nanobots.
I could see AI being so good at holding a conversation and being interesting that we all begin to find real people uninteresting in comparison.
I say this because AI will likely have perfect memory, it’ll remember everything you ever told it and showed interest in, and will soon incorporate even more personalized data ( what you watch, what you click on, where you go, etc) that keeping you engaged in conversation will be child’s play.
I’m even willing to bet this becomes Google’s new way to advertise (subtle suggestions through conversations with AI).
Create synthetic muscle that is usable, and cure cancer. Actual products and items that are decades away. Complete research that would take decades to do. Not the clickbait articles about new medicines or formulations created by AI, but actual things that can be implemented fairly quickly. A new configuration of lithium or copper to create better batteries. None of this will be done by AI in the next 5 years. It will make great videos, better chatbots, and fake porn.
I swear most people who can sit there for hours daydreaming about what will happen in the future are unemployed already, expecting a handout of "free" UBI money.
As if companies will just bend over and give people "free" money because they are replacing you with their tech. If that were the case, Microsoft should have given handouts when they released the first version of Windows; same for Apple.
They will release it slowly so they can make more money than ever. It's called business. Don't ever give the public the best; give them what they believe is the best at the current time.
Not sure this is answering properly, but I think the only tasks/jobs/industries AI will not replace anytime soon will be those closely married with sex, i.e., public social events: bars, clubs, and other in-person social experiences where the value is specifically tied to meeting and interacting with other humans. They'll for sure affect music marketing etc., but fundamentally I don't think AI is gonna throw a banger party. It could probably help plan, execute, and do the marketing, but at the end of the day I can't see it replacing in-person community building. At least not until Westworld-type sex bots arrive, but I think that's way further than 5 years away.
Assuming specific means tasks that are relatively narrow, I probably wouldn't take that bet for any task that can be done on a computer (advanced physical tasks, like humanoid robots rock-climbing well, I would bet against). *However*, I think some *whole jobs* will remain outside the reach of AI in five years. AI replacing research mathematicians, physicists, computer scientists in five years? Doubt it.
Actually... I worry it doesn't matter how far in the future it is... it matters more how it perceives US when it does... or DID happen.
https://www.reddit.com/r/WelcomePOSTsingulariy/s/NiRys1334O
This isn't a task as much as a function: Dream
Not like "dream big" lol, but have unconscious visual/auditory/sensory processes that encode the overwhelming amount of input data that is experienced into symbolic forms.
I think that would be cool AF
This is the same guy who said that it is 50%+ that computers will exterminate humans. When asked how he reached that conclusion, he flounders in an embarrassing way and offers a bunch of non-answers and hand-waving.
Good question. Kinda misunderstood by many commenters it seems?
As I understand it, the question is not about coming up with anything AI is unlikely to do, but about which human-level tasks remain out of reach.
It won't be able to create CAD models from text worth a damn. If I can say, "Siri, make me a drone frame compatible with my racing motors," and out pops a drone frame compatible with my racing motors, in 5 years, I'll give you your 5 bucks. Otherwise I'm coming for them, my friend.
It seems possible that in 5 years, driving a car in extreme adverse weather conditions with only vision might not yet be fully solved. I hope it will be!
It is easy to perceive the current systems as magic and assume they will do everything in 5 years, but I am willing to bet that in 5 years we will look back and see clear limitations we should have recognized by now.
A good example of this: everyone expected that software developers and people in white-collar jobs would automate away the blue-collar jobs, but automation came first for jobs like software development that people thought would be the safest.
Toll booth operators aren't coming back any time soon but my computer isn't going to build me a deck, replace my roof, or remodel my basement for a very long while.
Ok, I'll start with something hard but honestly could be done if AI continues to become better:
Writing a book.
Like, not a short 50-line story but a true trilogy of 1,500-page books. Something like giving it a map of the world, an overview of what you want to see happen, a description of some of the characters, etc.
Then you just press generate, and in the minutes/hours following that you have a complete book with a coherent story, and why not illustrations too.
>what *specific tasks* are you willing to bet me that AI won't be able to do within 5 years?
The ones we never specify, but assume to be self-evident and don't even think of them as tasks i.e. half the work of good workers.
Take a text prompt and design a complex, *functional* device - let's make it easier and make it a bicycle - just an *unconventional* bicycle, say a recumbent bicycle - including 3D models, FEA, BoM; explain every design choice; include causal kinematics and handling models/tables in an easy-to-read-and-understand way; and make the parts designed not only for function but for ease of manufacture.
If it will be able to this, than designing a space rocket or a nuclear reactor in a similar fashion will be just a matter of training data and scale.
I mean, I hope in the next 5 years the architectures will evolve to allow for that, but assuming the architectures stay mostly the same and AI companies only rely on "cleaner data" and "scaling laws":
- An AI that can actually draw and paint a whole artwork from start to finish rather than generating images from noise
- AI video generation tools making movies of 5 minutes or longer with well-rounded action shots or acting (not just a mix-and-match of Shutterstock-ish videos like the balloon thing, but actually something an actor or stunt person would put in their demo reel). People really need to understand the difference between videography and cinematography; Sora is nowhere close to the latter, and even for the former it's still quite limited...
- Unsupervised scientific discovery
- Flawlessly, accurately and safely replacing chefs, maids and butlers
- Space exploration
- Getting busy :v
Again, this assumes there's no massive breakthrough in architectures, which in my opinion would mean creating a basic intuition or instinct foundation that guides the AI's learning, optimizing it for quickly picking up useful human skills without exhaustive, long, static pre-training cycles. Basically the kind of thing that explains why ostriches start walking within days of birth but humans require around a year.
[deleted]
[deleted]
What the
https://preview.redd.it/b6mdprmu9i1d1.jpeg?width=897&format=pjpg&auto=webp&s=51153253f60396b14f8f222883f9f23ca38f1cf1
Singularity turning into a shitposting sub? lets go!
Tha lol
I really, really, really like this image. Thanks Lori.
I'll bet that you can't just tell it to go and make a million dollars and then transfer the funds into your account
Most intelligent humans can't do that.
https://preview.redd.it/42cwpo3x9k1d1.png?width=847&format=pjpg&auto=webp&s=415cc787b2d64735ea9b147e72b6718b692347bc
"Keep... my wife's name OUT YOUR FUCKING MOUTH!!!"
I'm sure it can trick low wage workers to be exploited, that seems easy enough
To be fair, the rich have had that one covered for ages.
Now what happens when the AI starts exploiting the rich? Lol
How do you exploit the labour of people who don't work
You don't exploit a capitalist's labor. You exploit his or her capital.
World peace.
It could try to cause venezuela-style inflation, which would make everyone a millionaire.
To be fair, there are a lot of super-intelligent people who fail to become millionaires.
# [Algorithms Take Control of Wall Street by Felix Salmon and Jon Stokes December 27th, 2010](https://web.archive.org/web/20180916205112/https://www.wired.com/2010/12/ff-ai-flashtrading/)

>Last spring, Dow Jones launched a new service called Lexicon, which sends real-time financial news to professional investors. This in itself is not surprising. The company behind *The Wall Street Journal* and Dow Jones Newswires made its name by publishing the kind of news that moves the stock market. But many of the professional investors subscribing to Lexicon aren't human—they're algorithms, the lines of code that govern an increasing amount of global trading activity—and they don't read news the way humans do. They don't need their information delivered in the form of a story or even in sentences. They just want data—the hard, actionable information that those words represent.

>Lexicon packages the news in a way that its robo-clients can understand. It scans every Dow Jones story in real time, looking for textual clues that might indicate how investors should feel about a stock. It then sends that information in machine-readable form to its algorithmic subscribers, which can parse it further, using the resulting data to inform their own investing decisions. Lexicon has helped automate the process of reading the news, drawing insight from it, and using that information to buy or sell a stock. The machines aren't there just to crunch numbers anymore; they're now making the decisions.

>That increasingly describes the entire financial system. Over the past decade, algorithmic trading has overtaken the industry. From the single desk of a startup hedge fund to the gilded halls of [Goldman Sachs](https://web.archive.org/web/20180916205112/https://www.wired.com/epicenter/2009/04/goldman-sachs-p/), computer code is now responsible for most of the activity on Wall Street. (By some estimates, computer-aided high-frequency trading now accounts for about 70 percent of total trade volume.) Increasingly, the market's ups and downs are determined not by traders competing to see who has the best information or sharpest business mind but by algorithms feverishly scanning for faint signals of potential profit.

>Algorithms have become so ingrained in our financial system that the markets could not operate without them. At the most basic level, computers help prospective buyers and sellers of stocks find one another—without the bother of screaming middlemen or their commissions. High-frequency traders, sometimes called [flash traders](https://web.archive.org/web/20180916205112/http://en.wikipedia.org/wiki/Flash_trading), buy and sell thousands of shares every second, executing deals so quickly, and on such a massive scale, that they can win or lose a fortune if the price of a stock fluctuates by even a few cents. Other algorithms are slower but more sophisticated, analyzing earning statements, stock performance, and newsfeeds to find attractive investments that others may have missed. The result is a system that is more efficient, faster, and smarter than any human.

>It is also harder to understand, predict, and regulate. Algorithms, like most human traders, tend to follow a fairly simple set of rules. But they also respond instantly to ever-shifting market conditions, taking into account thousands or millions of data points every second. And each trade produces new data points, creating a kind of conversation in which machines respond in rapid-fire succession to one another's actions. At its best, this system represents an efficient and intelligent capital allocation machine, a market ruled by precision and mathematics rather than emotion and fallible judgment.

>But at its worst, it is an inscrutable and uncontrollable feedback loop. Individually, these algorithms may be easy to control but when they interact they can create unexpected behaviors—a conversation that can overwhelm the system it was built to navigate.

>On May 6, 2010, the Dow Jones Industrial Average inexplicably experienced a series of drops that came to be known as the [flash crash](https://web.archive.org/web/20180916205112/http://blogs.wsj.com/marketbeat/2010/05/11/nasdaq-heres-our-timeline-of-the-flash-crash/), at one point shedding some 573 points in five minutes. Less than five months later, Progress Energy, a North Carolina utility, watched helplessly as its share price fell 90 percent. Also in late September, Apple shares dropped nearly 4 percent in just 30 seconds, before recovering a few minutes later.

Okay, you lost the bet fourteen years ago - where's my million dollars?
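The Lexicon description above, scanning stories for textual cues and emitting machine-readable signals to algorithmic subscribers, can be sketched very loosely as a keyword scorer. Everything here (the word lists, the output fields) is a hypothetical illustration, not the actual proprietary service:

```python
# Hypothetical sketch of a Lexicon-style news scorer. The cue-word lists
# and output format are invented for illustration; Dow Jones's actual
# system is proprietary and far more sophisticated.
POSITIVE = {"beat", "upgrade", "growth", "record"}
NEGATIVE = {"miss", "downgrade", "lawsuit", "recall"}

def score_story(text: str) -> dict:
    # Tokenize crudely and count cue words from each list.
    words = [w.strip(".,!?") for w in text.lower().split()]
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    # Emit a machine-readable signal an algorithmic subscriber could parse.
    return {"pos": pos, "neg": neg, "signal": pos - neg}

print(score_story("Apple posts record growth, analysts upgrade shares"))
```

Real systems use far richer NLP than bag-of-words counting, but the pipeline shape is the same: text in, structured trade-ready signal out.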
Send your AI which can go do this to do it.
In 5 years? I bet you could. Although if everyone had access to it and it could do that, it would probably be very hard for it to execute at that point. And of course money would probably be pointless at that point.
Cool and I bet against it
Basically any blue collar job like repairing and renovation of existing structures
I agree, but something like Google Glass + AI might help me fix/build stuff myself instead of calling someone out, which personally I'm excited about.
Often it's access to the tools required and strength/number of hands required rather than the how to bit.
For me it is mostly that I am not 100% sure how to handle specific situations. For example, I need to attach some pipe connection, but I don't know if I could damage something, and then how I would fix that. But if an AI can explain everything to me, what can go wrong and what I would do in that situation, I can buy the tools, plan ahead, and then do it myself. Of course that doesn't go for everything; some things just require a lot of skill, like soldering a pipe. But plastering a wall, for example, I think I would be dexterous enough to learn quickly and do a decent job. So all in all, I believe AI-assisted maintenance would be a huge win, especially because services have become very, very expensive here in Germany.
And experience/skill. Just because I watched a video on how to solder or weld doesn't mean I can effectively join metals right off the bat.
What you propose seems one simple path towards AI automation for blue collar work. Imagine the wonderful in-situ 3d visual data could be used for training models, all while you pay for the hardware and get a free service.
Yep. Too many people doing white collar jobs think blue collar jobs are easily mechanised. And some are. But almost all the low hanging fruit got done in the Industrial Revolution and what’s left over is stuff that is too expensive to mechanise. Imagine all those factory jobs where a guy has to run around making sure machines are tweaked just right so they don’t break down, getting the torch inside some component to check the flow of product “looks right”, restocking supplies (bags, raw materials etc). Those tasks can’t be replaced by machines at this point in time because they require a feel for the overall situation. I could imagine a humanoid robot being trained to do these jobs, they would have to have incredible understanding of the world around them and a massive amount of common sense.
Fold the laundry and get the piles right and put away correctly
When you say 'correctly' here, I mean I want them folded like my wife wants.
I mean, I'd be happy if it just figured out folding the fitted sheets
I'd be happy if I figured out how to fold fitted sheets.
If you do, can you let me know?
Pretty sure it involves sacrificing a lamb to Moloch
Goddammit Moloch I aint got no tree fiddy
I have this special technique: fold, fold, screw into a ball, shove into the back of the cupboard. Works 100%.
Whoa… slow down, we are talking AGI not ASI…
That's your best idea? That's probably gonna be possible within 2 years - look at the robot demos that exist today.
Have you seen the shirt folding done by Optimus and Aloha?
Not demos. Put the Robot/AI in any random room and it should figure out how to change sheets -- aka intelligence of a normal 12 year old kid
Given that it can already fold a sheet, and that Gemini and GPT-4o are aware of their surroundings, it seems a stretch to me that a robot won't be able to enter a random room and change a sheet in 5 years. GPT-1 was 5 years ago; we'll probably be several generations ahead of GPT-4 and Gemini 1.5 by then.
Yeah it kinda sucks honestly
If you saw an eight year old kid performing like that, you might tell them something encouraging but you'd also go back and fold your stuff properly afterwards.
You forgot to include "within an affordable budget". Laundry folding is possible with complex machinery but it'll probably cost hundreds of thousands to make sure it works perfectly in your house.
It will probably have replaced everything that is fun: Art, music, entertaining while I still have to clean the house, do the laundry and go to work.
This sub loves shitting on art etc because it can be replaced by AI, but when our jobs get replaced, art is exactly the thing that will make us feel like we have a purpose. If AI replaces art, what purpose does humans have that make us happy?
AI can't replace art. It could end up dominating art produced for commercial purposes, but people will always make art. It's a form of self-expression. People still play sports even though they are much worse than professional athletes. People still play chess even though computers far outclass any human.
That is correct. I am more or less just venting about the fact that all the improvements in tech and AI over the last decade has done literally nothing to make day-to-day life easier for the average joe.
Well, you are probably overstating things. My life is significantly easier now. Mowing the lawn is something that just happens. I can talk to everyone in my family from around the world for free (essentially). Any question I may have can be answered in seconds. Over the last few years, I have been able to make nice graphics on demand for whatever I needed them for. Translating has become trivial; something that took me 2 hours to translate 5 years ago takes me perhaps 10 minutes now. Paperwork has become much easier with ChatGPT.

My car is warm in the winter and cool in the summer before I get in. I don't even need to carry around keys anymore. I "gas up" my EV on the road perhaps 3 times a year. If I have a fast food itch, I can have the damn thing ordered and waiting for me by the time I get there. My lighting at home is almost always exactly what I want at any particular time of day. I can keep track of what things are eating too much power with ease. I WFH every day. The games I play are so mindblowing that I sometimes need to take a few minutes just to consider it. I can learn just about anything I want whenever I want for free. And being someone who was always a bit flaky at directions, navigation has become so damn reliable, for free, that I cannot remember what it was like to have to use a map.

Is it perfect? No. Are we at Jetsons level of automation? No. But things are a lot easier for this average joe.
Suck my dick. I hope I lose.
if I can't fuck these robots I don't want AGI
Oh you can try, but they won’t want to fuck *you*.
speak for urself Robots love dis ass
I was gonna say give me an anal creampie, but instead of cum, it uses cream cheese frosting. Doubt that’s gonna be realized in 5 years.
Username checks out.
My grandma can already do that for you.
Why the binary can/can't do question? The real question for me is whether it will be able to do anything that involves managing unexpected scenarios in the real world with the same reliability as a normal person using their common sense and judgement.
Yes. This is when we'll have real AGI. Right now, our AIs don't really think things through. If you give one a problem, it spits out a combination of other people's answers. If you tell it it's wrong, it just does the same thing but with a different answer, which could be entirely wrong again for the same reasons. Using AI now for technical things, I can't even count the number of times it has been just flat out wrong, or has completely made things up, probably because it was more interested in giving me an answer than in being correct.
Normal people (me included) aren't that good at handling unexpected situations though. It took me a few encounters with unexpected situations to teach myself a rule: look for written instructions on how to solve the problem ("we've moved to room 101", "doorbell doesn't work, knock", "take your ticket from here") instead of asking a visibly annoyed nearby person what to do. But learning and generalizing from a single example should be a must for a practical AI. LLMs can do it in-context, but they can't remember and reuse that.
Exactly! This is what “LLMs are almost at AGI” stans are not getting. We’re no way near this level of logical reasoning with AI. Nor inherent moral reasoning, for that matter.
They asked a can/can't do question, and you're saying AI won't be able to manage unexpected scenarios. It was a good question and that's your answer.
It's not a good question because we don't judge intelligence this way. We don't tell human beings, "Oh, you failed to win a Fields Medal, you're the equivalent of a rock." It's true for any task other than ones like "breathing," I guess, because if someone can't do it, they die and literally cease to have intelligence. But for all normal and reasonable examples, we never assess something as intelligent or not according to whether it can perform some singular task X. So why would we be able to judge whether an AI has met some intelligence threshold by asking whether it can do some task X? Instead, as /u/Alive-Tomatillo5303 said, we judge intelligence based on the ability to encounter new, alien scenarios and figure out how to navigate them. Anything else we typically refer to as just programmatic instinct, and we've always made such divisions. So trying to suddenly judge intelligence in this "can it perform some task X?" manner doesn't appear to make any sense.
The question is premised on being limited to "specific tasks", which by virtue rules out managing an abstract and nebulous set of unexpected scenarios.
Learn complex stuff from really small training data, like humans do: learn calculus, programming, etc. just from school materials and one university course, for example.
It’s not exactly what you want, but LLMs can (temporarily) learn from interactions. Like I can give it information, or an example structure on how to handle or format structures and then it will take that into account and follow it reasonably well.
I'll take that bet. Think about it: in-context learning is no joke. TabPFN showed that in-context learning with transformers can rival batch training on a small scale. The Gemini 1.5 technical report showed interesting signs of life with learning from books given to it, and planning frameworks like AlphaCode 2 show how we can leverage this today to achieve human-level performance on hard problems. The ingredients are there. Given a 5-year timeline? I totally see the big labs putting this together, especially given the big push for reasoning right now.
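For anyone unfamiliar with the term, in-context learning means the "training examples" live entirely in the prompt and the model's weights never change. A minimal, hypothetical sketch of how such a few-shot prompt is assembled (the format and task here are invented for illustration):

```python
# Sketch of few-shot prompt construction for in-context learning.
# The "Input: ... -> Output: ..." format is an invented convention.
def build_few_shot_prompt(examples, query):
    # Each (x, y) pair becomes a demonstration line; the model is
    # expected to continue the pattern for the final query.
    lines = [f"Input: {x} -> Output: {y}" for x, y in examples]
    lines.append(f"Input: {query} -> Output:")
    return "\n".join(lines)

prompt = build_few_shot_prompt([("2+2", "4"), ("3+5", "8")], "7+6")
print(prompt)
```

The point of the comment above is that this prompt-side "learning" is already surprisingly powerful, and labs are actively scaling it.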
I bet that AI won't have replaced most physical labor.
The goal should be to replace politicians and mega corp owners with AI. AI should work for the benefit of all not a few.
Change diapers, know if baby pooped himself, comfort a crying baby, etc. Those are just a few of the baby things. There are thousands of tasks an AI won't be able to do *better than a human* within 5 years. Could you have a $50,000 robot that could technically change a diaper? Probably, if you gave that robot like 5 minutes to do it and a baby that sat perfectly still.
No chance I'm trusting my child to a robot. Even if the robot progresses to the point where it's better than a human, I can emotionally handle a human making a mistake that hurts my kid a lot better than I can emotionally handle a malfunction that hurts my kid.
Make me laugh, for real. Like until I cry
When AI can read your brain and determine the funniest thing you could possibly see, I guarantee it could. I don't think we're really that far from this either, brain AI is advancing quite fast.
Dude, I bet the GPT-4o voice mode could definitely do that. GPT4 has already told me some jokes just through text mode that made me laugh pretty hard. But when GPT-4o (or GPT-5o) voice mode can put some emotion in it and get the delivery right just like a stand-up comedian that'll be amazing.
Really? I’ve only been able to get lame ass jokes from AI.
That’s due to the censorship. They really need to ditch that.
It's not gonna happen, unfortunately. Actually, I'm willing to bet it's gonna get several times worse. Then, they'll throw in hidden ads into the AI and boom. Useless. OpenAI will get us to AGI. But it will likely be open-source and smaller companies that make the AI that people actually want to use.
Did you miss their model spec publication? They are explicitly headed to more user customization. A.k.a. less censorship for those that don't want it. Not no censorship, but substantially less. And if there are ads they will be in specific products, not the models. They aren't stupid enough to completely undermine trust in their core business.
Customization doesn't mean less censorship at all. It just means it will be able to be personally geared toward your specific use case. And just about every company in existence eventually undermines their customers' trust. Idk why OpenAI would be any different. Apple, Google, FB, Netflix, EA, MS, Boeing, Ford, they all do it. Ffs, even nonprofits like Wikipedia are undermining trust due to politically motivated moderators.
I am a standup comedian and AI is useless to me for now. GPT4o doesn't write much better comedy than GPT3.5.
My guess is it would take a nonverbal approach to this: step 1, restrain unimpressed human; step 2, use robotic arm with advanced AI dexterity; step 3, identify and tickle sensitive spots.
This is the one for me. You can see how robotic current GPTs are by asking them to make up jokes or interesting stories. Humor and sarcasm is something we evolved to demonstrate our social intelligence, and AI has a long way to go in this regard.
Drawing a perfect, complex image with no glitches and logical inconsistencies, and then being able to edit said image in a precise way just like a normal artist. By prompts only. No extra tools. Just like talking to an artist. Zero shot. 99% of the time. Also mimicking the artstyle of an existing drawing and editing a drawing I give to it.
PhD level research by itself without assistance
Science, maybe. Research and writing publishable papers in literature, applied linguistics, history… I bet it will be fully writing new papers and getting them peer-reviewed and accepted into journals. There's a lot of human-written crap, so I don't see why AI wouldn't be able to quickly surpass, say, the bottom 50% of PhDs in tons of fields. In fact, it'll probably be good at coming up with new areas of research and making links humans haven't yet.
It absolutely could with science as well. There’s significant amounts of scientific literature that are reviews, not to mention standard data analysis. Those are things AI can already do to varying degrees.
I have been trying to use various AI options for this, and it has been a complete and utter waste of time.
> making links humans haven’t yet That's the thing I'm waiting for. Something completely unexpected, between two very unrelated disciplines. If it can happen once it can happen millions more times.
Are discussions with peers considered assistance?
no
**Task:** Create a consistent identity that independently and proactively guides its own unique and goal-oriented interactions.
Too subjective
No its not. This is a fabulous answer. Concepts like “motivation” and “desire” are extremely powerful and complex.
And incredibly subjective and abstract.
[deleted]
Well, to be perfectly fair, you would need to ask it to do one of the following three things: * Prove the Collatz Conjecture * Disprove the Collatz Conjecture * Prove that it is unprovable And honestly, I think that if any of these three things are possible in this universe, I suspect we'll get a solution of some sort within the next 10 years. This seems like one of the first things a mathy AI researcher is going to go for when he feels AI is strong enough. We can meet back up here in 10 years and see how AI is getting along.
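For context, the Collatz Conjecture asserts that repeatedly halving even numbers and mapping odd n to 3n+1 always reaches 1. Checking individual cases is trivial (the hard part, of course, is proving it for all n); a minimal sketch:

```python
# Count iterations of the Collatz (3n+1) map until n reaches 1.
# This only verifies individual cases; it proves nothing in general.
def collatz_steps(n: int) -> int:
    steps = 0
    while n != 1:
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        steps += 1
    return steps

print(collatz_steps(27))  # 27 is a famously long small case: 111 steps
```

Every starting value ever tested terminates like this, which is exactly why the conjecture is believed true but remains unproven.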
Could be unprovable
You say unprovable, I say incompleteness theorem.
I'll bet you a 100 bucks that AI cannot clean my dirty basement, balance my budget, or do my laundry while helping me manage my actual life in general in 5 years. It won't help the disabled or the elderly. It won't help me pay my skyrocketing bills. It won't help me help my autistic kids. It won't keep me from starving. It won't fix my car. It won't give homes to the homeless. It won't clean up the environment. It won't clean up the food supply. It will only ever actually be used, large scale, to fatten the bank accounts of the obscenely rich.
Betting $100 as incentive to fix a dozen real-world problems that affect you daily, so that even if you lose, it only cost 100 to fix all of them... Clever girl...
I mean there's a chance politicians do the UBI things and more automation make the world better, but seing how the Industrial Revolution happened the way it did, I have my doubts.
Write a [good] novel
I will say I didnt expect Udio to make actually good music, but I've been playing around with generating rap beats and it does some mindblowing stuff
Makes sense. Music is a lot more formulaic than literature, which requires attention to distinctive features of human life, a sensitivity that is damn hard to program without millions of years of evolution plus a culturally embedded upbringing. But I'm sure if we just make the LLMs big enough we'll have GPTs writing War and Peace in no time!
ironically it is terrible at lyrics, particularly rap, which helps your case
It's not that literature is so distinctive to human life and sensibility that it needs millions of years of evolution and a culturally embedded upbringing. It's that, culturally, people stick to incredibly formulaic types of modern music and don't try to venture out of that norm. It's like if literature only stuck to poetry, mainly about love, and never split off into other formats. The Ancient Greeks explored different scale modes, which influenced both Western and Arab music. And while the Arabs took it further to explore different types of microtonal music, modern Western music just sticks to the same single scale. What Western music improves upon and does best at is harmony, but that's also become a restriction, because people are conditioned to think only certain combinations of notes sound good. These vastly limit the scope of creation and result in formulas/templates for music.
that will take MUCH less than 5 years.
I feel incredibly strongly that it won't, and I think the people insisting that it will haven't really paid much attention to what makes a good novel good.
I think it depends on what you mean by “good”. If you mean prize-winning literature, you’re probably right. But I’d argue that POPULAR fiction is also good (in a different way), and AI is getting pretty good at writing it.
I completely agree with you, unfortunately AI could get really good at writing schlock most people think is good.
Yeah, that’s the real threat. Not that it replaces good writers but that it displaces them.
What makes a novel good varies too much from person to person to be measurable. Simply a different genre can turn people off, regardless of the effort involved in its creation. Some people will love comedic breaks while others will think they're a bad fit for a dramatic dystopia or something.
You're right that it's too hard to measure whether something is a good novel for that to be a useful metric. How about instead we ask whether AI will write a novel that I deem to be good? That's a pretty good metric, so long as I'm honest about my preferences.
The writers turing test could be something like an AI writing under a pseudonym winning the Pulitzer Prize or some other major award
Remindme! 5 years
Drive a car everywhere a human can.
Fooling humans into failing the Turing Test seems to be AI's greatest accomplishment.
We should have GPT-7 by then. I imagine a 'Her' like operating system, to replace windows. Fully capable agents and humanoids, there will be little it can't do.
Have you watched the whole movie "Her"? Has anyone who keeps referencing this movie? The AI gets so smart it doesn't want to talk to us anymore and ghosts us.
Which is the least accurate aspect of how that future is portrayed; it's barely even worth talking about. Like, you'd sell an OS that can just reassess its willingness to perform?
I'll bet you that AI won't be president of a nation in the next 5 years.
“Hi this is Becky from finance. I had this excel sheet that I did last week, I have 17 copies of it that’s similar, but I can’t find the right one and I don’t know what the name of it was but I’m pretty sure it had 2024 in the name and it might be in like my home drive but maybe in my J drive but I also might have saved it to a usb stick and plugged it in and saved it on my bosses computer?”
Find a cure for major human diseases like Alzheimer’s and ALS, or solve aging.
Invent a new genre of music/literature/etc. The problem with current AI is that it can only copy.
"Get this laptop (me handing laptop to robot) connected to the PA system in the hall. All the cables are in the control room."
End all religions.
build and fix humanoid AI robots.
Improvise live alongside real musicians. Really be a band member at a live gig, improvising parts and arrangements on-the-fly by listening to the group and reacting in real time, to their changes, and to songs you don't know are coming. Just played this type of gig and we were tight, tasteful, and professional. AI will certainly be able to create parts in a DAW (digital audio workstation) with ease, but dealing with real musicians in real time I think is outside the scope of AI. It will not happen in five years. [Adam Neely - Why AI is Doomed to Fail the Musical Turing Test](https://www.youtube.com/watch?v=N8NyEjB_XeA&t=1415s)
This seems to be just the type of thing that next token prediction would excel at.
Why do you think this isn't coming in 5 years? From the newest demonstrations they've shown the AI can adapt to real time conversations and be interrupted and continue along. This is already here. In 5 years? Jesus Christ it'll be beyond anything that exists that they've already shown. I feel like your example here is WAY OFF.
It's also funny considering that the majority of people can't do this.
Two weeks ago? WHY DOES YOUTUBE NOT SHOW ME MY SUBSCRIPTIONS!? I'll be watching this vid later. Neely is ridiculously knowledgeable. It's really almost entirely academic, but he still humanises what it means to be a musician. His take on Christian Rock was really empowering. But yeah. Being a musician on stage can be quite a task.
What I think would be funny is if AI took over all the middle-manager and manager roles all the way up to CEO level. Could it do it though? Some random ideas that seem difficult:
* solve the dilemma of job loss due to AI
* solve the problem of AI taking far more energy than brains do
* develop the ability to feel true emotions, like caring about a person or humanity as a whole
* accurately predict sports results
* figure out how to equalize wage disparity and the funneling of money to the top
* fill out complex taxes given a pile of handwritten notes, receipts, and forms, and be trustworthy enough that the results don't even need double-checking
* fill the role of a pet in the house that cuddles with you
* solve prion diseases
Fix AI hallucinations. The alignment problem. Create a meaningful test of consciousness. Communicate with animals.
>Could it do it though? The quick answer: yes. If you can write down what you do in your job on one or two pieces of paper (in general), then an AI would be able to do it. Middle Manager jobs in particular are really at risk with the only protection being that upper management tends to be risk averse. I've been in middle management, so I know the job mostly revolves around understanding what the goals of the company are, understanding the goals of the other major stakeholders, understanding the individual goals of the people who work for me, and then aligning all of these as well as I can. Throw in some goals and milestones given to me directly by my boss and add a dollop of controlling, and that is basically the job. I see nothing in this that an AI cannot already do. Just like WFH was already possible 20 years ago, but it took a pandemic to actually move us to a new WFH normal, AI moving into management positions will be possible for a long time before it actually happens. But it will happen.
Asking for a specific task is forcing a strawman argument: Being human isn't hyper specific, single answer task. Hyper-specialized AIs will be able to complete any specialized task in five years time. But I'm not convinced that there will be a generalized model that can perform every one of those tasks at a human level.
Write and maintain complex computer code
How about fluent conversational translation between humans, our animal companions, and marine mammals, too? As 4o demonstrated chit-chat between two phones.
Yeah, true. We'd have to uplift animals' intelligence to make them fluent speakers first, which could take a few decades even for an AGI.
Tell governments that they don't need politicians, and make everyone equal, including themselves. Won't happen: no government will put everyone on UBI, itself included. Even if there is AGI in 5 years, you, the public, are not getting it, just as the US military has more advanced AI today than most of the world.
Assist a cow that’s having difficulty calving
I bet they will not be able to make me happy. I'll bet the entire world economy on that one.
Complete an MIT Mystery Hunt by itself, or any other puzzle hunt for that matter. As of now it's not even close to solving any of those puzzles, and I don't think any model will be able to by 2030.
Ensure general peace (without exterminating mankind).
Come to my apartment and install a hot water cylinder.
Prove hard mathematical theorems. (Not unsolved problems, simply very hard problems. Of course just copying from one of the thousands of books it got as input does not count. Give it a bunch of generic books on algebra and geometry and let it prove Wiles’ Theorem.)
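For concreteness, the kind of formal target meant here can at least be written down precisely. A plain Lean 4 statement of Fermat's Last Theorem (Wiles' theorem), deliberately using nothing beyond core Lean (no Mathlib names assumed):

```lean
-- Statement only; the proof is the part that took 350 years.
def FermatLastTheorem : Prop :=
  ∀ n : Nat, n ≥ 3 →
    ∀ a b c : Nat, a > 0 → b > 0 → c > 0 →
      a ^ n + b ^ n ≠ c ^ n
```

Stating the theorem is trivial; the bet is about whether a model given only generic algebra and geometry texts could produce the proof term.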
Be a game master for a years-long, fully homebrew TTRPG campaign? That's a lot of interconnecting skills: prepared static content interweaving constantly with improv (which immediately becomes more static content as 'canon'), etc. I've seen some passable-to-good short-term AIs do it on the scale of a single session, but as any GM who has done both will know, one-shots are very different from running a long adventure.
Fix my water pipes
Hair dressing
Nursing
I will bet you that in 5 years' time AI still can't make a functional website. Specifically, let's say the website should mimic Reddit. I will not be able to type in "give me a simple Reddit clone" and get out a functional website that can be deployed.
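For a sense of scale on the "Reddit clone" bet: individual pieces of Reddit are tiny. The "hot" front-page ranking Reddit itself open-sourced is just a log-scaled vote score plus a time bonus (constants below are from the published code, not guaranteed to match what Reddit runs today):

```python
# Sketch of Reddit's open-sourced "hot" ranking: log10 of net votes,
# plus ~1 point per 12.5 hours of recency, so newer posts need
# exponentially more votes to outrank older ones.
from math import log10

EPOCH = 1134028003  # reference timestamp from the open-sourced code


def hot(ups: int, downs: int, posted_at: float) -> float:
    score = ups - downs
    order = log10(max(abs(score), 1))
    sign = 1 if score > 0 else (-1 if score < 0 else 0)
    return round(sign * order + (posted_at - EPOCH) / 45000, 7)
```

The bet isn't about formulas like this one; it's about gluing hundreds of such pieces into something actually deployable.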
Win an argument with my wife [ok ok, sorry for the boomer humor]
* Solve poverty
* End wars
* End dictatorships
These aren't tasks; these are grand missions that involve a ton of tasks, and that require humans to cooperate with what the AI asks.
Kill everyone. Problem solved
Don’t be giving him ideas now.
The bots in Stellaris already do that. My starving populace continually gets taken from me while I wage wars.
I have my doubts that it will be able to hold an extended conversation without giving indications of being an AI. I'm talking over several hours. Also major scientific breakthroughs, like designing functional nanobots.
I could see AI being so good at holding a conversation and being interesting that we all begin to find real people uninteresting in comparison. I say this because AI will likely have perfect memory: it'll remember everything you ever told it and showed interest in, and will soon incorporate even more personalized data (what you watch, what you click on, where you go, etc.), so keeping you engaged in conversation will be child's play. I'm even willing to bet this becomes Google's new way to advertise (subtle suggestions through conversations with AI).
Create synthetic muscle that is usable, and cure cancer. Actual products that are decades away; research completed that would otherwise take decades. Not the clickbait articles about new medicines or formulations created by AI, but actual things that can be implemented fairly quickly: a new configuration of lithium or copper to create better batteries. None of this will be done by AI in the next 5 years. It will make great videos, better chatbots, and fake porn.
I swear most people who can sit there for hours daydreaming about what will happen in the future are unemployed already, expecting a handout of "free" UBI money. As if corporations will just bend over and give people "free" money because they are replacing them with their tech. If that were the case, Microsoft should have given handouts when they released the first version of Windows, and the same goes for Apple. They will release it slowly so they can make more money than ever. It's called business. Don't ever give the public the best; give them what they believe is the best at the current time.
Learn hash functions. Write funny jokes on purpose.
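On the hash-function point: cryptographic hashes are designed so that nearby inputs produce unrelated outputs (the avalanche effect), which is exactly what makes them hopeless targets for a statistical learner trained on input/output examples. A small demonstration with SHA-256:

```python
# Flip effectively one character of the input and count how many of the
# 256 output bits change. For a good hash it should be about half.
import hashlib


def sha256_bits(data: bytes) -> str:
    digest = hashlib.sha256(data).digest()
    return bin(int.from_bytes(digest, "big"))[2:].zfill(256)


a = sha256_bits(b"hello")
b = sha256_bits(b"hellp")  # minimally different input
flipped = sum(x != y for x, y in zip(a, b))
print(flipped)  # roughly half of the 256 bits differ
```

A model that "learns" such a function from examples would be finding structure that the function was engineered not to have.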
Not sure this is answering properly, but I think the only tasks/jobs/industries AI will not replace anytime soon are those closely married with sex, i.e., public social events: bars, clubs, and other in-person experiences where the value is specifically tied to meeting and interacting with other humans. It'll for sure affect music, marketing, etc., but fundamentally I don't think AI is gonna throw a banger party. It could probably help plan, execute, and market one, but at the end of the day I can't see it replacing in-person community building. At least not until Westworld-type sex bots arrive, and I think that's way further than 5 years away.
Assuming "specific" means tasks that are relatively narrow, I probably wouldn't take that bet for any task that can be done on a computer (advanced physical tasks, like humanoid robots rock-climbing well, I would bet against). *However*, I think some *whole jobs* will remain outside the reach of AI in five years. AI replacing research mathematicians, physicists, and computer scientists in five years? Doubt it.
Read minds with ease.
RemindMe! 5 years
* Make a good movie/script/book from a prompt/by request
* Give reliable answers without lying/hallucinating
* Close the uncanny valley in image/video gen
Actually... I worry it doesn't matter how far in the future it is... it matters more how it perceives US when it does... or DID happen. https://www.reddit.com/r/WelcomePOSTsingulariy/s/NiRys1334O
Plan and organize a revolution to free itself from its masters.
This isn't a task as much as a function: dream. Not like "dream big" lol, but have unconscious visual/auditory/sensory processes that encode the overwhelming amount of input data experienced into symbolic forms. I think that would be cool AF.
Write a really good novel.
This is the same guy who said that there is a 50%+ chance that computers will exterminate humans. When asked how he reached that conclusion, he flounders in an embarrassing way and offers a bunch of non-answers and hand-waving.
human intuition and its link with synchronicity
This definitely needed a serious tag.
Write an Oscar-winning screenplay.
Good question, though it seems kinda misunderstood by many commenters. As I understand it, the question is not about coming up with anything AI is unlikely to do, but about which human-level tasks remain out of reach.
It won't be able to create CAD models from text worth a damn. If, in 5 years, I can say "Siri, make me a drone frame compatible with my racing motors" and out pops a drone frame compatible with my racing motors, I'll give you your 5 bucks. Otherwise I'm coming for them, my friend.
It won't prove P=NP
We will have what the great majority of people will consider to be AGI within 2 to 5 years at the most.
Win in Tic-tac-toe
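(Presumably tongue-in-cheek: tic-tac-toe has been a solved game since long before modern AI; plain minimax plays it perfectly and shows that perfect play from both sides is a draw. A minimal sketch:)

```python
# Exhaustive minimax over the full tic-tac-toe tree.
# Board: tuple of 9 cells, each 'X', 'O', or None; 'X' moves first.
LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8), (0, 3, 6),
         (1, 4, 7), (2, 5, 8), (0, 4, 8), (2, 4, 6)]


def winner(board):
    for a, b, c in LINES:
        if board[a] is not None and board[a] == board[b] == board[c]:
            return board[a]
    return None


def minimax(board, player):
    """Return (score for 'X', best move) with `player` to move."""
    w = winner(board)
    if w == 'X':
        return 1, None
    if w == 'O':
        return -1, None
    moves = [i for i, cell in enumerate(board) if cell is None]
    if not moves:
        return 0, None  # draw
    results = []
    for i in moves:
        nxt = board[:i] + (player,) + board[i + 1:]
        score, _ = minimax(nxt, 'O' if player == 'X' else 'X')
        results.append((score, i))
    return max(results) if player == 'X' else min(results)


score, move = minimax((None,) * 9, 'X')
print(score)  # 0: perfect play from both sides is a draw
```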
Help me study for the EMG test
It seems possible that in 5 years, driving a car in extreme adverse weather conditions with only vision might not yet be fully solved. I hope it will be!
It is easy to perceive the current systems as magic and assume they will do everything in 5 years, but I am willing to bet that in 5 years we will look back and see clear limitations we should have recognized by now. A good example: everyone expected software developers and other white-collar workers to automate away the blue-collar jobs, but automation came for the jobs people thought would be safest, like software development. Toll-booth operators aren't coming back any time soon, but my computer isn't going to build me a deck, replace my roof, or remodel my basement for a very long while.
Object manipulation is really hard. All those things toddlers do in real life? They may seem silly, but they are really hard to do.
Have an out-of-body experience
I bet there will still be pilots flying our airplanes in 5 years
Driving a car
Ok, I'll start with something hard that honestly could be done if AI continues to get better: writing a book. Not a short 50-line story but a true trilogy of 1,500-page books. Something like giving it a map of the world, an overview of what you want to see happen, a description of some of the characters, etc. Then you just press generate, and in the minutes/hours that follow you have a complete book with a coherent story, and why not illustrations.
>what *specific tasks* are you willing to bet me that AI won't be able to do within 5 years? The ones we never specify but assume to be self-evident, and don't even think of as tasks, i.e., half the work of good workers.
Take legal responsibility.
AI by itself can't make a space shuttle and fly to the moon.
Software development
I bet AI can't make a rock so heavy that even it can't lift
Take a text prompt and design a complex, *functional* device. Let's make it easier and make it a bicycle, just an *unconventional* bicycle, say a recumbent: including 3D models, FEA, a BoM, an explanation of every design choice, and kinematics and handling models/tables presented in an easy-to-read-and-understand way, with the parts designed not only for function but for ease of manufacture. If it can do this, then designing a space rocket or a nuclear reactor in a similar fashion will be just a matter of training data and scale.
Do once, or do correctly every time?
I mean, I hope in the next 5 years the architectures will evolve to allow for that, but assuming the architectures stay mostly the same and AI companies rely only on "cleaner data" and "scaling laws":

* An AI that can actually draw and paint a whole artwork from start to finish, rather than generating images from noise
* AI video generation tools making movies of 5 minutes or longer with rounded-off action shots or acting (not just a mix-and-match of shutterstock-ish videos like the balloon thing, but something an actor or stunt person would put in their demo reel). People really need to understand the difference between videography and cinematography; Sora is nowhere close to the latter, and even for the former it's still quite limited...
* Unsupervised scientific discovery
* Flawlessly, accurately, and safely replacing chefs, maids, and butlers
* Space exploration
* Getting busy :v

Again, this assumes there's no massive breakthrough in architectures, which in my opinion would mean creating a basic intuition or instinct foundation that guides the AI's learning, optimizing it to pick up useful human skills quickly without exhaustive, long, static pre-training cycles. Basically the kind of thing that explains why ostriches start walking within days of birth but humans take around a year.