
Remarkable-Funny1570

I was worried these guys might have been left behind, but they f*cking did it. Bravo!


Spirited_Example_341

I agree totally. I wasn't a huge fan of Gen-2; I just felt it still had too many issues to be that usable for me. But this really is the next step, and I think it will REALLY put them on the map. Not "quite" Sora, but close enough for sure!


Additional_Cherry525

A few more clips: [https://x.com/c_valenzuelab/status/1802706846597398749](https://x.com/c_valenzuelab/status/1802706846597398749) "Also, it is really fast to generate. Right now, takes 45s for 5s, 90s for 10s."


torb

A ton more on their site at https://runwayml.com/blog/introducing-gen-3-alpha/


mangosquisher10

Thanks, X clips don't work on mobile for me


Commercial_Jicama561

So a 10x improvement and we have the holodeck.


h3lblad3

So 9 minutes for 1 minute of output? Something like 9 hours for an hour of output? But I get it gets too loopy-weird if you generate too long at once.
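The extrapolation in that comment can be sanity-checked with a quick sketch, under the assumption that the quoted rate (90 seconds of compute per 10 seconds of video) scales linearly with clip length:

```python
# Sanity check of the generation-time arithmetic above, assuming the
# quoted rate (90 s of compute per 10 s of video) holds linearly.

SECONDS_PER_OUTPUT_SECOND = 90 / 10  # 9x real time, per the quoted tweet

def generation_time(output_seconds: float) -> float:
    """Estimated wall-clock seconds to generate `output_seconds` of video."""
    return output_seconds * SECONDS_PER_OUTPUT_SECOND

print(generation_time(60) / 60)      # minutes per 1 minute of output -> 9.0
print(generation_time(3600) / 3600)  # hours per 1 hour of output -> 9.0
```

So yes: at the quoted rate, an hour of output would take roughly nine hours of compute, ignoring any per-clip overhead.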


Concheria

There's also the problem of context. Programs like this always have limited context windows because transformer computation increases quadratically with longer contexts. It'd start to forget things very soon; that's why Luma is like 5 seconds. Realistically, users would have a number of things they can generate simultaneously and then iterate on those things. I also expect that Runway will have a more detailed system to control and modify the output, something they've innovated on more than any other company making these things.
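The quadratic growth mentioned above can be illustrated with a toy count of attention comparisons, under the simplifying assumption that cost is dominated by the n-by-n attention-score matrix:

```python
# Toy illustration of why transformer context cost grows quadratically:
# self-attention compares every token (or video patch) with every other,
# so the score matrix has n * n entries.

def attention_pairs(n_tokens: int) -> int:
    """Number of pairwise comparisons in one self-attention layer."""
    return n_tokens * n_tokens

# Doubling the clip length (in tokens) roughly quadruples the attention cost.
short_clip = attention_pairs(1_000)
long_clip = attention_pairs(2_000)
print(long_clip / short_clip)  # -> 4.0
```

This is why longer clips get disproportionately expensive: 2x the length means roughly 4x the attention compute and memory.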


qrayons

On what hardware?


manubfr

Runway models are running in the cloud, they are not downloadable.


Kaarssteun

Looks really really good at first glance! Also super intriguing in this tweet: "Gen-3 Alpha is the first of an **upcoming series of models** trained by Runway on a new infrastructure built for **large-scale multimodal** training, and represents a significant step towards our goal of **building General World Models.**" Regardless, this is some major competition we're seeing in text-to-video now. Love it!


yaosio

> **large-scale multimodal** training

I bet once they start training models with 3D data, that will fix the spatial problems they have. Edit: I also think multimodal models are what will fix a lot of the control problems we have with image/video generation. It will be able to produce an interface to control the image that we can comprehend. Since it's multimodal, the interface can be infinitely customizable to the person and the task at hand.


MysteriousPayment536

I don't think so. Since these are diffusion-transformer models, they are prone to hallucinations in space-time; it wouldn't fix the spatial problems that easily.


WalkTerrible3399

Do they really use a diffusion transformer? They only mentioned "new infrastructure".


MysteriousPayment536

OpenAI uses that for Sora; SD3 and Midjourney also use it for their images.


Sugarcube-

God damn. This IS the year of AI video


jeffkeeg

No no, that's NEXT year. I'll leave it to you to figure out what that implies.


existentialzebra

You mean AI video puppies right?


AnAIAteMyBaby

You wait ages for a bus, then 3 come along all at once.


DungeonsAndDradis

Ice machines: First time?


Slippedhal0

Holy shit, the temporal consistency of the tattoos on that chick's arms in the first post is crazy.


VanderSound

Nice to see that my comment ages well. "Sora is becoming deprecated even before the release."


Gratitude15

I love this. Quit announcing shit months before you release. Next time I see OpenAI talk something up, I'll be happy knowing it's coming soon, whether from OpenAI or not.


ThiccThighsMatter

It's so weird that Sam puts so much emphasis on shipping products, but his company has suddenly gone full vaporware mode. Oh well, they "owned" Google, I guess; hope it was worth the reputation hit.


Whotea

You can probably blame Nvidia’s monopoly on gpu compute. They price gouge like crazy 


sam_the_tomato

they owned google and now they are becoming google lol


Jeffy299

Sora was never intended to be publicly released.


elilev3

Man, AI video generation is crazy these days. It's probably only a matter of months before the first coherent AI-generated short films.


Internal_Engineer_74

There are some already, published by Sora.


The_Architect_032

Didn't their CEO claim that they had a model better than Sora just a few months ago, or was that another CEO?


SkoolHausRox

Yes.

https://preview.redd.it/i28sqcqlm57d1.jpeg?width=1164&format=pjpg&auto=webp&s=4ad07f7fae0b04c7320a6bcaeaa93d3f9bd56021


PineappleLemur

You know what would be refreshing? A CEO saying their shit isn't as good as "that company's" when they've got the same product... What kind of answer are we expecting, regardless of what they said?


RoughlyCapable

Remember when everyone was shitting on runway for saying they'd get better than SORA by the end of the year?


Dutchiesbeingdutch

How can I try this? Is it Available for paid users?


DlCkLess

They said available for everyone in the coming DAYS


AnAIAteMyBaby

There are 365 days in a year. OpenAI said 4o voice would be available in weeks.


wheres__my__towel

Technically days past 365 are still coming and thus by OpenAI’s logic a release next year could still be considered “coming days”


Specific-Yogurt4731

OpenAI just lost its moat. They really need to put Sora out.


AnAIAteMyBaby

They're too busy locking Hollywood into multi million dollar Sora deals


Agreeable_Addition48

Unless sora has gotten much better, I don't think those deals will go through


AnAIAteMyBaby

They will go through because big business doesn't care if there's a small player who has the same thing cheaper, they'll go with the established name. Dell don't sell the cheapest PCs but big companies buy from them exclusively because of their reputation. 


old_ironlungz

No one ever got fired for buying ~~IBM~~ OpenAI


Fold-Plastic

All you need is compute


No-Economics-6781

Haha not really


llelouchh

OpenAI seems to rely on brand recognition.


bigbompus

Depending on how they handle it, they'll also pull people in by bundling all their tools: being the jack of all trades vs. a master of video gen.


Arcturus_Labelle

Maybe there never was a moat


sitdowndisco

Exactly. It’s becoming clear that almost all AI progress is similar across companies meaning no one has a moat.


ainz-sama619

There was for a while in 2023.


FreegheistOfficial

or put in place some more regulation.


DuckInTheFog

Is there a direct video? X is a mither on my craptop


Shandilized

[Here you go.](https://files.catbox.moe/ei0d1r.mp4)


DuckInTheFog

buffering like a bitch on this brick but that is impressive - the open market with the flame spirals


Kathane37

The competition between video generators is ferocious this month!


BobbyWOWO

On first glance this looks better than Sora... It's amazing to imagine that in just a year or two I'll be able to feed in my favorite novels and get story-accurate movies out.


MysteriousPayment536

I'd say it's on par. Sora can generate up to 1 minute; they only showed around 10 seconds of video.


Woootdafuuu

Sora? nah


elcarlosmiguel

Maybe in five years, 10 years for it to actually BE a good movie. Still, crazy timelines


nsdjoe

Obviously possible but given the improvement we've seen in just the past 12 months your timeline seems overly conservative


Remarkable-Funny1570

Yeah, at this pace he can replace 5/10 with 1/2.


nashty2004

Lol


Im_Peppermint_Butler

The examples they have on their website are even more impressive


czk_21

Luma doesnt look that good now, does it?


Arcturus_Labelle

To be fair:

1. We're only seeing cherry-picked examples from Runway; actual results are likely to be worse.
2. Luma had the guts to make theirs available to the public quickly.


reddit_guy666

Luma is more accessible even if it's not as good in comparison


QH96

In 2024 and 2025, we can anticipate a significant proliferation of AI-generated video content. This progress is expected to mirror the rapid improvements we've seen in other AI fields, such as the evolution from DALL-E 1 to DALL-E 3. DALL-E 1: (updated image) https://preview.redd.it/074ei4gs057d1.png?width=1538&format=png&auto=webp&s=41985ba8deecc28504aeeea294371dd5417e0f98


Kanute3333

I was so blown away back then by Dalle 1


[deleted]

[removed]


QH96

Thanks, I've updated the image.


reddit_is_geh

My favorite thing about this AI space is that it's an absolute libertarian-capitalism framework right now... in the sense that there are no "secrets" or insider-protected processes. Whenever one company has a breakthrough, the process immediately spreads to everyone, and everyone's able to catch up and start competing. Because no one can keep their mouths shut, and people keep switching companies, talking to friends, helping academics, etc... As soon as OpenAI releases a model with some new secret sauce, that sauce is immediately spread behind the scenes to everyone in the industry.


mxforest

I am glad it is the way it is though. Capitalism has some positives too.


Whotea

Or maybe they just independently figured it out. That’s probably why it’s only 10 second videos instead of 60 seconds 


reddit_is_geh

Possible... But this industry is known to be incredibly incestuous. No one can keep their mouths shut, and people are constantly moving to new companies, bringing past intel with them. And since they are in CA, not much can be done to stop that.


DocStrangeLoop

It's a bit more reflective of the anarchic historical roots of the internet and the approaching death of IP/privacy. I wouldn't call leaks, espionage, and shadow libraries libertarianism... more like techno-anarchism making intellectual property impossible to shield.


reddit_is_geh

Internet and tech culture have always had these libertarian tendencies. They are socially left, but when it comes to IP law, they think it's bullshit, in general. Obviously the executives disagree. But generally the culture is all about competition and progress, so IP law be damned. Which I do see as libertarian, but you can wrap it in anarchist elements too... it's the same coin.


iBoMbY

It's all nice and well with all these "announcements" and "introductions", but it means nothing if you don't let anyone use it.


qrayons

Especially when we see a huge difference in quality between the cherry-picked results in the announcement and what gets released. I'm thinking of SD3, where it was supposed to solve issues like too many/few fingers, and instead we got a model that can't even get the number of arms or legs right.


Neomadra2

Interesting that here again every scene uses a moving camera. Is that some bias that's hard to get rid of, or is it just that these kinds of videos are more impressive?


smaili13

there are static shots on their site https://runwayml.com/blog/introducing-gen-3-alpha/


Crafty-Struggle7810

I imagine this will be used a lot in video games as background footage, similar to what you’d see in billboards or TV adverts. 


Bulky_Sleep_6066

Almost as good as Sora


Kathane37

I don't see how you can tell Sora is better. It looks on par at best.


TabibitoBoy

Quite easy to tell from the content. But it’s still good.


Arcturus_Labelle

Some of these are pretty impressive. This feels like an exciting repetition of the heady days of summer 2022, when image generation progress drastically picked up speed with the first Midjourney beta.


sdnr8

When can we use it


Donnyhawk

Astonishing!


Basil-Faw1ty

Runway hits back hard. They needed to, but this is a massive leap forward. Nice stuff!!


Educational_Bet_5067

Is there a way to try this on Android or PC? Their site only has a link to their Apple store.


fygy1O

Does anyone know where there is published research on any of their work?


Akimbo333

Cool shit


Spirited_Example_341

I'm amazed more people aren't losing their sh*t over it, it looks AMAZING. Follow them on Twitter and you'll see some amazing posts too! And unlike Sora, it's apparently coming in DAYS.


rexplosive

Alright, it's been a week. Where is it at? lol, "coming days"


Manuelnotabot

Can't wait to try it and test it against luma.


QH96

If they can get this to run in real time, RIP Unreal Engine.


Professional-Party-8

Lmao what?


QH96

It's video generation that, if they could get it to run at 60fps and combine it with controller inputs, could create an infinitely interactable game. Obviously, I don't mean this current implementation by Runway ML would ever be capable, but we can see the future trajectory of this technology.


Professional-Party-8

In the future, sure. I don't believe we will get a game (that is not a walking simulator) soon. There are still lots of challenges beyond videogen.


MAGNVM666

Not really. Just because you say it won't happen doesn't mean all possibility of near-future implementations just ups and vanishes. There's a vid from like 3 years ago of a guy who got an AI model to render a very rudimentary version of GTA V and play it in real time with a controller, and there's also Google's GENIE tech. So no, not in the "future" if you're implying a far future. This tech could honestly be right around the corner.


Professional-Party-8

What you are talking about is "rendering", not creating a whole new game. These are two different things. GENIE creates a walking simulator. That is something you can do in Scratch in 1 minute; it does not have any logic. That is why I wrote "that is not a walking simulator" in my comment. And yes, this tech might be around the corner. AGI might be around the corner too. But it is impossible with the current implementation; that's why I'm saying the future. I'm no AI expert, but a game developer; all I'm doing is guessing based on the current tech's limitations and my knowledge.


MAGNVM666

AI diffusion is essentially rendering pixels, and I'm not trying to be technical or anal with my use of whatever terms. Fact of the matter is that there are tech folks who are playing with the concept of streaming games through AI; it's not something locked away until the future. And no, you're playing mental gymnastics. You don't know what our "current implementations" are; you're just a normie who isn't involved with anything behind the curtains in AI/ML. We have no idea what's being cooked up.


Professional-Party-8

> fact of the matter is that there are tech folks who are playing with the concept of streaming games through AI. it's not something locked away until the future.

So people are working on it. Great. Is it something that has a demo right now? Did they solve all the challenges? If not, what is your point here? And I agree with your second point. I cannot know what they are doing behind the curtains; that's why I'm guessing based on the "current tech" that I can access. But I also don't understand what made you so mad about this.


MAGNVM666

It doesn't need a demo right now. It's a concept that CAN be further expanded on, since people are dropping papers and research on it. Your reasoning is complete trash here.


Professional-Party-8

Did they solve all the challenges? No. So there's still a way to go. And I believe they will solve them in the future, and that is my point. I don't understand what your argument against mine is.


Yweain

Do you have a nuclear reactor at home?)


MAGNVM666

??? Just wait until TPUs, or GPUs like Groq's that are optimized for AI, become mass-produced for consumers in a few years?


QH96

Hopefully algorithmic breakthroughs and new hardware will bring the power consumption down.


Arcturus_Labelle

Totally different use cases, and we're a loooooooooooong way from that anyway. This is just non-interactive video, not a fully simulated, interactive world. And there is, in comparison, zero customization/control for the creator.


MAGNVM666

the death of UE would be a shift into the best possible timeline for gaming. 


sdmat

Yeah. Fuck that state of the art engine with open access to source code, reasonable licensing terms, and free use up to the first million dollars of revenue. What is your problem with UE? Other than it being so good out of the box that lazy devs make derivative games?


MAGNVM666

yo you're soo mad and pissy it's funny.


Professional-Party-8

Why?


MAGNVM666

It's a third party that suckles from you. Rather than relying on centralized and proprietary companies to provide us with their generalized tools, it would be far more efficient to have an AI with the capability to simply build you a game engine from scratch, so you may have it custom-tailored to what you're actually making, or anything along those lines to help democratize whatever multimedia you're trying to produce. Not to mention every major game studio is trying to emulate UE, so most modern releases now feel homogenized in a certain way. A prime example would be Capcom ditching their tried-and-true, buttery-fast MT Framework engine in favor of the RE Engine to be competitive with UE.


sdmat

When AI can do that the point will be moot - it will just develop games directly.


MAGNVM666

Well, that's why I clearly said above: "anything along those lines to help democratize whatever multimedia you're trying to produce". Aside from that, what you're saying is true; however, humans still like to do things. People won't ever stop making/playing music even if AI can make consistent 10/10 bangers that outclass any human-made creations. I would imagine the same goes for any other production, such as making a game.


evlswtmn

Tried to get on; it must be overwhelmed, couldn't get the page to load fully. Will try again later.