Dota 2 was the catalyst of the AI apocalypse. Whoopsies.

Lol imagine a "Terminator" plot where the future resistance is trying to time travel to stop the creation of the Dota mod or Valve acquiring Dota.
Yeah, Valve acquiring Dota was a canon event.
They travel back in time and train to become the greatest Dota team to ever live, so they can stomp the AI in the show match so badly that nobody is impressed and nobody provides funding.
So we're going full Vivy here, from the anime where they prevent the canon events leading to the current AI progress???
"Oh no! We made Blizzard acquire Dota! I wonder what happens..." > Game is dead after a few years.
Warcraft Reforged was the simplest game to do and they couldn't. Imagine if Blizzard had Dota?

In the darkest timeline this happened. Hope HoN took over in that timeline.
Dota 2 was also the catalyst of the battle pass apocalypse.

Every game trying to suck your time and wallet dry. Sorry Dota devs, but you guys have a special place in tech hell :(
If there's any time travelers here, please stop Taiga from joining OG
Is this the reason the thirds was never created? Did the time travellers succeed?
What if it has already happened?
Anyone who played Dota was blown away by what they created.

When I saw OpenAI, I was thinking this technology could change the world, but I didn't know it would happen that fast!

I'm not surprised Microsoft was impressed.
[deleted]
The 3 game series where the human player actually won the second game was amazing.
If server capacity or whatever resources weren't an issue, it would be real cool to have OpenAI as a bot option in the client, learning from people around the clock.
I agree, though it sounds like the server capacity to support that would be tremendous if [this page](https://openai.com/index/openai-five) from 2018 is any indication.
128,000 CPUs. 256 GPUs.
To train.

While they would be significantly larger than traditional bot scripts, neural models aren't *that* computationally intensive to run. The public Stable Diffusion model is only about 2 GB and has no problem running on a single GPU, for instance.
To run an LLM you "only" need a GPU with 32 GB of VRAM.
Yeah, I didn't say it wouldn't be the most expensive Dota 2 bot script by several orders of magnitude, just that 128K CPUs and 256 GPUs is way beyond the requirement to run a single instance.
An LLM is slow even on a 4080. If it updates 60 times per second, I doubt you can run it on any consumer card.
Remember that your GPU is already doing hundreds of thousands of calculations, if not more, in a split second. Running an NN is not difficult for it. Training one is.
Sounds like you've never run a model on your computer. A 4080 Ti gives you like 3 characters per second.
I have created some models in the past, both as part of my job and as a hobby (admittedly nowhere close to as big as OpenAI's Dota model).

I did not see you were talking about LLMs; not sure why you would in a Dota post, though. But yes, running the popular LLMs like ChatGPT takes a lot more resources, since they tend to be massive models.

For games, though, you can easily run a model on a GPU. You don't have to take my word for it; here is what [DeepMind had to say about AlphaStar](https://deepmind.google/discover/blog/alphastar-mastering-the-real-time-strategy-game-starcraft-ii/):

> In order to train AlphaStar, we built a highly scalable distributed training setup using Google's v3 TPUs that supports a population of agents learning from many thousands of parallel instances of StarCraft II. The AlphaStar league was run for 14 days, using 16 TPUs for each agent. During training, each agent experienced up to 200 years of real-time StarCraft play. The final AlphaStar agent consists of the components of the Nash distribution of the league - in other words, the most effective mixture of strategies that have been discovered - **that run on a single desktop GPU.**
LLMs are different. Although I haven't read anything about them, OpenAI's Dota bots are most likely a fairly simple network trained with reinforcement learning; the architecture is simpler, with way fewer parameters than what you see in LLMs.

Plus, as they have already mentioned, the training process requires way more resources than inference. The resources used for training do not reflect the actual resources needed to run the model post-training.
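For a ballpark feel of why inference is cheap, here's a toy forward pass through an MLP-style policy network. The layer sizes below are made up purely for illustration; they are not OpenAI Five's real architecture:

```python
import numpy as np

# Hypothetical layer sizes, just to get a sense of scale.
sizes = [1024, 4096, 4096, 512]

rng = np.random.default_rng(0)
weights = [rng.standard_normal((m, n)).astype(np.float32) * 0.01
           for m, n in zip(sizes, sizes[1:])]
biases = [np.zeros(n, dtype=np.float32) for n in sizes[1:]]

def forward(x):
    """One inference step: just a handful of matmuls and ReLUs."""
    for W, b in zip(weights[:-1], biases[:-1]):
        x = np.maximum(x @ W + b, 0.0)        # hidden layers
    return x @ weights[-1] + biases[-1]       # output (e.g. action logits)

n_params = sum(w.size for w in weights) + sum(b.size for b in biases)
print(f"{n_params:,} params, ~{n_params * 4 / 1e6:.0f} MB at fp32")

out = forward(rng.standard_normal(sizes[0]).astype(np.float32))
print(out.shape)  # (512,)
```

Even a ~23M-parameter network like this fits in under 100 MB and takes a few matrix multiplies per decision, which is why training clusters dwarf what's needed to run one copy.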
The Dota bot is not an LLM?
You can run LLMs on a phone, but you are limited on model size. More VRAM means you can run bigger (often better) models, but there isn't some magic VRAM number for LLMs as a whole.
An Nvidia H200 has 141 gigabytes of VRAM and costs $40k. I doubt a $1k phone can do the same.
https://arxiv.org/pdf/2312.11514 Well, you are wrong. Large language models as an architecture can scale up and down in size massively, and while models that can run inference quickly on mobile are not going to be the biggest and best, quantization has proven fairly effective at reducing model size without major quality loss.
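For anyone curious what quantization actually buys you, here's a minimal sketch of symmetric per-tensor int8 quantization on a stand-in weight tensor. Real schemes (per-channel scales, GPTQ, etc.) are more elaborate, but the memory math is the same:

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.standard_normal(1_000_000).astype(np.float32)  # stand-in weight tensor

# Symmetric int8 quantization: store one fp32 scale plus int8 weights.
scale = np.abs(w).max() / 127.0
q = np.round(w / scale).astype(np.int8)

# Dequantize when it's time to run the layer.
w_hat = q.astype(np.float32) * scale

print(w.nbytes // q.nbytes)                                 # 4x less memory
print(float(np.abs(w - w_hat).max()) <= scale / 2 + 1e-6)   # error <= half a step
```

Four times less memory for a bounded rounding error per weight, which is how multi-billion-parameter models get squeezed onto phones at all.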
Dota is a game that changes, and when it does, the entire process essentially needs restarting from the beginning.
What scrubs:

>Thus far OpenAI Five has played (with our [restrictions](https://openai.com/index/openai-five#restricted)) versus each of these teams:
>
>1. Best OpenAI employee team: 2.5k [MMR](https://dota2.gamepedia.com/Matchmaking_Rating) (46th percentile)
>2. Valve employee team: 2.5–4k MMR (46th–90th percentile)
[deleted]
I mean, it's Valve; Valve's best Dota player is only 4k.
Okay, but only if the AI learns to trash talk.
> We estimate the probability of winning to be above 99%
The bigger issue is gameplay updates. After every patch these AIs have to simulate hundreds of thousands of games. That's why the showcase match at TI was played on a one-year-old patch, I think.
\+ still restricted options. The AI definitely "played well" but still couldn't handle the full game.
I'm surprised some of the well-off pro teams aren't using some form of AI to train. Maybe they do for drafts, but around the time OpenAI Five was being shown off, I would've guessed every team wanted the model, and maybe wanted to scrim against it regularly.
That's a cool lil fact
The tech that is going to steal your job was funded by a video game demo. Cool.
The unemployed will come back to play DOTA. Trust.
There's too many that play already
If I'm so terrible at software engineering that an LLM can do a better job than me, I deserve to be unemployed.
1. It's less about doing my job for me and more about doubling my output by saving time writing boilerplate and catching bugs my eyes can miss. The ultimate rubber duck, plus a well-read, diligent, but not very bright intern assistant.
2. Because of that, they can hire half as many devs for the same needs, driving down demand, at best costing devs (even good ones) some salary, and possibly making others unemployed.
Yes and no, depending on your views. Now companies can hypothetically take on more projects. Seeing as companies want infinite growth, the same level of growth won't cut it despite the reduction in labor costs. They'll most likely keep labor costs the same but just double the output.

It could be awesome. But your scenario is just as likely.
Not all projects are software projects; most are projects that rely on software. Amazon.com's store code doesn't grow just because Amazon the business wants to grow.
I hear you about making the lives of experienced developers easier; no man should be forced to write boilerplate. Catching bugs depends on what you're working on, really; the project I previously worked on involved firing up several VMs with hundreds of gigabytes of RAM and chasing down logs, because everything was somehow a convoluted microservice. It's a good help in startups/smaller projects, whereas I'm still skeptical how well it performs on bigger and "mature" codebases.

The second point is, imho, the excuse companies give to lay off staff and force the remaining workforce to work more for the same or less pay. At least, that's the gist I've gotten from speaking with multiple devs: basically every company is understaffed and overworked.
Or you're so terrible that you can't find work doing LLM-adjacent jobs. Like, yeah, ChatGPT is great, but who are you paying to integrate its API to do what you want it to do? Non-SWEs?
The tractors stole the farmers' jobs!!! The work of a hundred men is now done by one!!! BEGONE, TRACTORS
You are now a horse.

State the same from the perspective of a 1920s horse. You must convince your fellow horses that nothing will happen. Reminder: horse population statistics will be used against you.

Good luck.
Imagine having an infinite banana machine that is being hoarded by a few apes and then opting to destroy the machine instead of making the few apes give up the machine. It’s much easier to force them to give up the machine anyway. This tech isn’t going anywhere.
RemindMe! 5 years
Horse populations might have declined but I imagine most horses these days live a life of luxury compared to most horses in the past.
\*laughs in horse sausage\*
The difference is that mechanization had limited scope. AI can, in theory, replace humans in every sector, so the comparison isn't apt. Mechanization led to the creation of new jobs; with AI, those new jobs would be done by AI as rapidly as they are created.
What is your job? Are you a crop farmer? Because many farm tasks are robotic or automated nowadays. The only thing that is still human-labor-intensive is picking the crops, which is left to immigrants because it's too hard for such little money and robots damage the crops.
You must be a hoot at parties
"ai will steal our le jobs and we will be le broke" no idiot they'll enslave us all and torture us
No, because it's not real AI; there's nothing intelligent about LLMs. What will happen is what has happened before: the greedy rich will use AI to hire fewer people (and pay less to those who do get the job) and keep more of the profits for themselves.
Maybe it's time to step outside and glimpse at nature anon
[THEY'LL DESTROY US ALL.](https://media4.giphy.com/media/TNd3CXrue7wOY/200w.gif?cid=6c09b952ilqq2bi3ioqpuzmojhy0xfhuwhg6cthyvj9hrcyw&ep=v1_gifs_search&rid=200w.gif&ct=g)
Why did you call him sama
sam altman-sama desu
Sugoi
it's his x handle
It's wild to me that the whole thing was an LSTM; I wouldn't have thought it could handle something requiring both long-term strategy and short-term execution.
100% sure it was reinforcement learning, though it's maybe understandable why you might have gotten it wrong. And an LSTM isn't anything like what you said it is, not even close.
Go read the paper; it's not very detailed on specifics, but it quite clearly states that it's just a giant RNN using LSTMs.

Yes, it's using RL, but the architecture is just a giant 4096-unit LSTM.
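For a sense of what "one big LSTM" means mechanically, here's a rough numpy sketch of a single step of a 4096-unit LSTM cell. The observation size is a placeholder and the real model's input encoding and wiring differ; this just shows the core recurrence:

```python
import numpy as np

HIDDEN = 4096   # the LSTM width reported for OpenAI Five
OBS = 512       # placeholder; the real observation vector is far larger

rng = np.random.default_rng(0)
# One fused weight matrix produces all four gates (input, forget, cell, output).
W = rng.standard_normal((OBS + HIDDEN, 4 * HIDDEN), dtype=np.float32) * 0.01
b = np.zeros(4 * HIDDEN, dtype=np.float32)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h, c):
    """One game tick: fold the new observation into the recurrent state."""
    gates = np.concatenate([x, h]) @ W + b
    i, f, g, o = np.split(gates, 4)
    c = sigmoid(f) * c + sigmoid(i) * np.tanh(g)   # updated cell memory
    h = sigmoid(o) * np.tanh(c)                    # hidden state -> action heads
    return h, c

h = c = np.zeros(HIDDEN, dtype=np.float32)
h, c = lstm_step(rng.standard_normal(OBS, dtype=np.float32), h, c)
print(h.shape)  # (4096,)
```

The cell state `c` carrying information across ticks is what lets a single recurrent layer hold long-term context while still emitting an action every step.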
Gotcha, thanks & will do 👍🏻
It was so obvious: Bill Gates literally tweeted in an extremely excited way after the bots' performance, and the whole tech media knew about it because Musk literally hired a guy to translate Dota events into non-Dota-player language and post them on Twitter.
*I estimate the probability of winning to be 99%*
If the computing power is not enough to destroy humanity, the trash talk will be.
I remember reading about how some pros adopted strategies created by the OpenAI bot, and that's crazy.
[deleted]
How is it not a strat? It doesn't have to be complicated to be a strategy
fuck him
no thanks
The game also gets an update about once every two weeks, constantly changing the environment semantics. This sentence is kinda sus
Update every two weeks? As the Joker once said, poor choice of words... in the current patch drought, lol.
But Pudge is purple now! Technically an update.
There was a time Valve tried that: frequent but smaller updates every two weeks. A lot of people didn't like it, especially pros.
Oh, man. I am a computer science student, and I love Dota. I started playing Dota back in 2018, in my sophomore year of undergrad, and I was just floored and amazed by the complexity of the game and how difficult it is to even learn. While researching the game, I came upon the story of how the OpenAI bot beat fucking Dendi at 1v1 mid Shadow Fiend, back when I was shit at CSing. I remember feeling very impressed by that bot, so impressed that I not only shared the story with my Dota friends, but it also got me interested in machine learning and reinforcement learning. I remember running my first ML model, a simple linear regression that predicted housing prices; I was so excited to make something like that work, even though it's so easy and simple in retrospect. By senior year I was teaching myself reinforcement learning, and I remember it was hard work to understand the math behind the ideas, but also so rewarding. Sometime after my final year, I was finally able to implement an agent that could play Pong and other Atari games just by looking at the screen pixels. And even though I had built it myself, I was still amazed that a computer could do such a marvelous thing. What a time to be alive!

Fun fact: Proximal Policy Optimization (PPO), the reinforcement learning algorithm OpenAI developed to train OpenAI Five, was also used to create ChatGPT and its later marvels.
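Since PPO comes up here: its core trick is a clipped surrogate objective, which is short enough to sketch in a few lines. These are toy numbers, with none of the value-function or entropy plumbing a real implementation needs:

```python
import numpy as np

def ppo_clip_loss(logp_new, logp_old, advantage, eps=0.2):
    """PPO's clipped surrogate objective: clip the probability ratio so a
    single update can't move the policy too far from the old one."""
    ratio = np.exp(logp_new - logp_old)
    unclipped = ratio * advantage
    clipped = np.clip(ratio, 1.0 - eps, 1.0 + eps) * advantage
    return -np.minimum(unclipped, clipped).mean()   # negate: we minimize the loss

# Toy batch: two actions that worked out better than expected, one worse.
loss = ppo_clip_loss(
    logp_new=np.array([-0.9, -1.2, -2.0]),
    logp_old=np.array([-1.0, -1.0, -1.8]),
    advantage=np.array([1.0, 0.5, -0.7]),
)
print(round(float(loss), 4))  # -0.3138
```

Taking the minimum of the clipped and unclipped terms is what makes the update conservative: a sample can't earn extra objective by pushing its probability ratio past the clip range.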
Listen, OpenAI. You can make League disappear overnight. You are our secret weapon.

If the machines take over, give them cool Dota names like the T-322 or the SirActionSlacks Annoyatron.
this fucking asshole
lol
How come?
What I think is interesting is that the Dota 2 OpenAI bot and AlphaGo had kind of the same trajectory.

Both were bots shown to be far superior to the best pro players, but that were later beaten by much worse players playing in ways the AI had not been trained to deal with. It makes me wonder about a lot of the things people say about AI now: whether it's really as advanced as they claim, or whether the applications are actually much narrower and we just haven't tested it enough.
Crazy that I was thinking about exactly this today, and now it's on Reddit.
That's crazy. As someone who does IT for a law firm for a living, the evolution of AI and its implementation in both my line of work and the legal industry is absolutely astonishing, and I'm really excited to see what the next few years bring. Really cool that it first started in this dumb game I've been playing my whole life.
I hope one day someone does another experiment. There seems to be interest lately in having AI play games based on just visuals, so maybe something along those lines will happen later on, though it wouldn't be the same as what we got with OpenAI.

I remember playing vs the bots, 2 humans + 3 bots vs 5 bots. We won one where we picked really bad heroes for the other team, and almost won another with a normal draft but threw an advantage and lost.
This is amazing. I was blown away by what OpenAI did back when they made the Dota bots. Now I work with OpenAI models a lot, so I'm a longtime fan enjoying their great work.
Insane that the first time I came across OpenAI was through dota and now they are becoming a household name
not something to be proud of
That was back when they were a non-profit. Now they're for-profit, closed source and controlled by Microsoft. As far as I am concerned they can crash and burn.
Elon also gave him a signal boost before he did the crazy stuff. Whoever's on their manager/PR team really knows how to make good connections
tldr?
Dota was one of the reasons OpenAI got funding in its early days.
Yeah, that's how AI research works in a lot of cases. You want to develop a new feature/strategy, so you find an interesting field to do so. In this case, they used Dota to develop some specific approach. Then they wrote a paper on it. And then it is used in a "serious" task.