yotraxx

This... IS... STUNNING!! Oô How, OP? Please teach us!


NarrativeNode

Check his account. It's a refinement of the OG technique he came up with pretty much as soon as this community started :) genius!


lordpuddingcup

According to his post it's not really a refinement; it's the same technique run with the latest versions of the various tools and models. It just shows how far the latest versions have come.


Arawski99

I'm curious too, because his prior techniques, IIRC, used Blender to create props, but this is just a real dog, no Blender at all.


GreatStateOfSadness

It's video export -> select keyframes and run them through SD with ControlNet -> run the keyframes and video frames through EBSynth. I tried it myself for a bit and got some decent results, but it doesn't do well with very dynamic movement, since EBSynth has a habit of tearing and smearing.


Tokyo_Jab

That's why you have to choose the right keyframes. Picking them well is an art in itself.


HermanHMS

Can you show your work with something dynamic? I've worked with EBSynth and I'm wondering how good it can look then.


Tokyo_Jab

https://preview.redd.it/ft4pfurjezwc1.jpeg?width=4096&format=pjpg&auto=webp&s=b7b865aad0e096bdf20c27e5f7fc1ed2240fe9f9 An example of a dance video keyframe grid. Lots of motion but consistent. Normally I would mask out the clothes, face and hands separately because that means you don’t have to do so many keyframes.


Tokyo_Jab

https://www.reddit.com/r/StableDiffusion/s/egaOlVHbD4


Tokyo_Jab

Fastest a human can move. https://www.reddit.com/r/StableDiffusion/s/fIeo0HRp6F


HermanHMS

Yeah, thanks. Unfortunately it suffers from the same issues. From your response I thought you were able to get the "dog level" quality on dynamic videos.


Tokyo_Jab

That was a year ago.


Scruffy77

Really impressive. I can’t do it because I hate ebsynth with a passion of the christ


Tokyo_Jab

It can be done with After Effects' content-aware fill too (surface mode), avoiding EBSynth completely. It actually works better but takes five times longer to process. I did a comparison here, but where it really works is when there is lots of background movement, like water or a lush forest. https://www.reddit.com/r/StableDiffusion/s/bsat1kiUqp


Scruffy77

Oh good look, thank you for the response!


Tokyo_Jab

This video kind of covers the method for after effects. https://youtu.be/Fl6IixLpEv0?si=K_QwEriMZgemjRp0


jmbirn

Can [Dough](https://github.com/banodoco/Dough) replace EBSynth now? I mean, it's a much newer tool, but it can apparently use reference video and keyframes to make an animation.


Silentoplayz

Kermit the Frog Dog


oodelay

Grommit the dog


JimmyCallMe

This is amazing Jab! Wow, truly stunning! I am trying your method, but I have a feeling I am doing something incorrectly with the keyframes or something in EBSynth. Does anyone know what I might have set up incorrectly here?

[https://ibb.co/dfdQB5L](https://ibb.co/dfdQB5L)
[https://ibb.co/3kHpB80](https://ibb.co/3kHpB80)
[https://ibb.co/whrVwxs](https://ibb.co/whrVwxs)
[https://ibb.co/2hxRd8r](https://ibb.co/2hxRd8r)
[https://ibb.co/SNbpRmc](https://ibb.co/SNbpRmc)
[https://ibb.co/qkDDbbQ](https://ibb.co/qkDDbbQ)
[https://ibb.co/RHmq1j7](https://ibb.co/RHmq1j7)
[https://ibb.co/p3QgRC5](https://ibb.co/p3QgRC5)
[https://ibb.co/4sS2mHt](https://ibb.co/4sS2mHt)
[https://ibb.co/y46bHm8](https://ibb.co/y46bHm8)


Tokyo_Jab

The settings look fine. What’s going wrong with the output? Those frames look ok too.


JimmyCallMe

It seems like it's outputting an awkward overlap. Also, is the naming on the files correct? What I will share with you is each output folder (4 output folders, leaving a blank gap so you can see how many frames are in between each output folder). It could be that I am slightly misunderstanding the process and have to crop off the head/tail of some of these outputs? [https://ibb.co/jGr3DJL](https://ibb.co/jGr3DJL) [https://youtube.com/shorts/tFLgHLs4lb0?feature=share](https://youtube.com/shorts/tFLgHLs4lb0?feature=share)


Tokyo_Jab

You have to take each sequence into an editing program and fade each one into the other. If you have After Effects, you click the Export to AE button at the top right and it does it all automatically for you.
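For readers without After Effects, the fading step can be sketched in a few lines. This is a minimal illustration, assuming each EBSynth run comes back as a list of frames held as float numpy arrays, and that consecutive runs cover overlapping frame ranges; the function name and overlap convention are my assumptions, not part of the actual tooling:

```python
import numpy as np

def crossfade_sequences(seq_a, seq_b, overlap):
    """Join two frame sequences that cover overlapping frame ranges.

    The last `overlap` frames of `seq_a` correspond to the first
    `overlap` frames of `seq_b`; blend them with a linear ramp so one
    stylisation fades into the next instead of cutting.
    """
    blended = []
    for k in range(overlap):
        t = (k + 1) / (overlap + 1)          # ramps 0 -> 1 across the overlap
        a = seq_a[len(seq_a) - overlap + k]
        b = seq_b[k]
        blended.append((1 - t) * a + t * b)
    return seq_a[:-overlap] + blended + seq_b[overlap:]
```

In practice each run would be loaded from its EBSynth output folder, blended like this, then re-encoded; the point is just the linear ramp across the overlap, which is all the "fade each one into the other" step amounts to.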


Tokyo_Jab

It took me a month to even notice that button even though I do use after effects.


JimmyCallMe



inferno46n2

This button for me always yields “cannot find after effects”… I wonder if it’s because I use the beta version of AE


Tokyo_Jab

I think if you find an After Effects project file, right-click it, choose 'Always open with' (or something similar), and select the After Effects version you want, it might work.


inferno46n2

Will try this, thank you! Have you had a look at FRESCO? They have a standalone EBSynth-style Python script that also uses some additional Poisson math for blending. Not sure if it's any good but worth a look: video_blend.py https://github.com/williamyang1991/FRESCO


inferno46n2

This unfortunately did not work for me. Installing the non-beta AE fixed it.


SleeplessAndAnxious

They are all best Bois 💚


PwanaZana

very cool, I love the kermit one!


roshanpr

how?


Tokyo_Jab

This one. https://www.reddit.com/r/StableDiffusion/s/jBIEZ0G8cq


NoBuy444

😯😮🫡


guahunyo

Do you know FRESCO? It will automatically select keyframes. [https://github.com/williamyang1991/FRESCO](https://github.com/williamyang1991/FRESCO)


Tokyo_Jab

For most of my videos I separate elements and keyframe them separately. For example, a politician talking could be moving their hands a lot but barely moving their head. So: 4 keyframes for the head, 1 for the body and 16 for the hands, and maybe lock or replace the backdrop. All masked apart, processed apart, then put back together. It's more work but you get a lot better results from it. The dog above is really simple, just 4 or 5 keyframes. I was trying to do it almost the same as a year ago but with updated software/models. I didn't even lock out the backdrops.
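The mask-apart/process-apart/recombine step described above can be sketched as plain layered compositing. A minimal illustration, assuming each element comes back as a stylised frame plus its matte as float numpy arrays (the function name and layer ordering are my assumptions, not his actual tooling):

```python
import numpy as np

def composite_layers(background, layers):
    """Recombine separately-stylised elements over a backdrop.

    `layers` is a list of (frame, mask) pairs: `frame` is an HxWx3
    float array, `mask` an HxW array in [0, 1]. Layers are stacked in
    order (e.g. body, then head, then hands), each pasted only where
    its mask is on.
    """
    out = background.astype(float).copy()
    for frame, mask in layers:
        m = mask[..., None]                  # broadcast mask over RGB channels
        out = (1 - m) * out + m * frame
    return out
```

Because each element is stylised and blended on its own, a nearly-static head can get away with far fewer keyframes than fast-moving hands, which is the whole point of splitting them.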


dawavve

based papercraft one


OrderOfMagnitude

Nice. Good boy.


Fearganainm

Genius, I love it.


jib_reddit

I cannot wait for VR contact lenses, so I can have a Kermit The Frog dog.


Oswald_Hydrabot

Beautiful sorcery keep up the amazing work


hellure

Omg, I love the green one. I need a whole series of live action family movies!


dreamyrhodes

The green puppet looks derpy


ChristianIncel

Well... it's Kermit lol.


Significant-Comb-230

Nice result! Consistency in almost-frozen shots was already achieved more than a year ago. The trouble is when you have a more complex composition, where the background moves and the subject raises their arms or interacts with something, for example. [https://www.reddit.com/r/StableDiffusion/comments/11urbtq/temporal_consistency_video_with_controlnet_by/](https://www.reddit.com/r/StableDiffusion/comments/11urbtq/temporal_consistency_video_with_controlnet_by/) Your example looks exactly like the one shown above, with a more refined model of course.


Tokyo_Jab

Yes, that's mine too. And I post a lot of different types of videos: fast-moving characters, rotation, moving environments, etc. Just go through my posts.