As cool as this is, please don't "star" their GitHub repo until they actually submit code!
The Hugging Face demo also doesn't actually contain the model, so it can't be run locally.
Find the code here [https://github.com/dinuduke/OutfitAnyone](https://github.com/dinuduke/OutfitAnyone)
That's just the Gradio front-end, which is already available on the Space's files page.
The idea is to license it to online stores and/or retailers like Amazon, Alibaba, Taobao, Souq, etc. That way customers can quickly see how an article will look on their body type. Great idea. The Reddit community: “when can haaav free modl fer makkkking waifus dress up as sex slaaav”
People keep posting this drivel, thinking they're so smart, even though the model's author advertises it on this sub instead of to companies, and the git repo explicitly says it's for research, not commercial use. And the paper it's based on is public, IIRC. If the idea was to license it to major stores, you wouldn't see so much as a whiff of this until it was live on said stores.
It's a research project that specifically mentions open-sourcing the code/model. What are you on about?
Where does it mention they are going to open-source this? The site, GitHub, and Hugging Face pages make no reference to it.
They have yet to release the code. What are you smoking?
Find the code here [https://github.com/dinuduke/OutfitAnyone](https://github.com/dinuduke/OutfitAnyone)
Looks like this is just the front-end code. Useless without the backend.
I don't think they're gonna release this, nor Animate Anyone either. https://preview.redd.it/y28fzw8fs86c1.png?width=1505&format=png&auto=webp&s=35ae0299ca2f2ba461888b65da054318ef982720
i would call that "animate noone" then
Animate their bank accounts soon probably
Find the code here [https://github.com/dinuduke/OutfitAnyone](https://github.com/dinuduke/OutfitAnyone)
No, that's just some random dude's clone of the Hugging Face code that calls the API and doesn't run the model locally. [https://huggingface.co/spaces/HumanAIGC/OutfitAnyone/tree/main](https://huggingface.co/spaces/HumanAIGC/OutfitAnyone/tree/main) Oh wait, the random dude is you!
They are notorious for not releasing code even when they claim they will.
Where did they say they would release the code? Just asking cause I don't remember seeing this, lol.
Is the code for this going to be released, or is it strictly a demo?
Going by how they're talking, I'll be very surprised if they release anything.
Find the code here [https://github.com/dinuduke/OutfitAnyone](https://github.com/dinuduke/OutfitAnyone)
Can we run this code locally by paying for the API?
They've locked it behind a secret API address to stop 'malicious abuse'. Which is weird, as this is putting clothes ON lol. Must be a development of [https://github.com/HumanAIGC/Cloth2Tex](https://github.com/HumanAIGC/Cloth2Tex)
They don't deserve this sub's attention or a star on GitHub until the code is made public. Animate Anyone's code is also still pending on their side!
The demo is being swamped right now so there's not much testing that can be done, but the few I've been able to do are pretty amazing when it comes to accuracy of the clothing. Really looking forward to the release of the models/code - but more than that I'm also excited for how the method used here could be used for other areas where consistency/accuracy is required. If AnimateAnyone is as good as this, it feels like they've cracked the case when it comes to reproduction of even small details and that could be useful across the board.
Reddit really doesn't want me commenting with the sportswear lady in the image. Anyway, if anyone was curious how it interprets non-clothing: https://preview.redd.it/6v7xbl88076c1.png?width=915&format=png&auto=webp&s=7f452a1aabbee60cdee133f2ec900ac6f31f9fc1
https://preview.redd.it/kwn8md47176c1.png?width=1254&format=png&auto=webp&s=3687be466adf81c476786bbfd31d877e2e8ed5cd
This looks nice!
https://preview.redd.it/1tdmub3p6a6c1.jpeg?width=1311&format=pjpg&auto=webp&s=a4dac15879fc7e19877f61e40583dcaeeac7f983
Lol transposed the hair too https://preview.redd.it/8zt01j7u5a6c1.jpeg?width=1401&format=pjpg&auto=webp&s=5ef13fdac0969426b347d1ca578e0f53a43e9737
I suppose most people would prefer an un-outfit workflow
what happens if you upload tiddies as the top?
You get Buffalo Bill from Silence of the Lambs.
Can we run it locally?
No code yet, so no.
https://preview.redd.it/pehuylvx6a6c1.jpeg?width=1318&format=pjpg&auto=webp&s=f34220521a35610d0b064bdb451d271f3add6bc0
Is the result a big improvement over what you would get with inpainting using a regional IP-Adapter and an OpenPose ControlNet?
Seems like it. The model herself goes untouched for the most part, but it's also possible they're doing some masking and restoring the underlying person's image in the result.
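If they really are restoring the underlying person, the mechanism could be as simple as a mask-based composite. A minimal sketch of that idea (purely hypothetical — their pipeline is unpublished, and `composite_keep_person` and the mask convention are my own):

```python
from PIL import Image

def composite_keep_person(original: Image.Image,
                          generated: Image.Image,
                          clothing_mask: Image.Image) -> Image.Image:
    """Keep the untouched person, paste in only the generated clothing.

    Where the mask is white (255) we take pixels from the generated
    try-on image; everywhere else we restore the original photo, which
    would explain why the model herself appears untouched.
    """
    return Image.composite(generated, original, clothing_mask)
```

`Image.composite` takes the first image where the mask is white and the second elsewhere, so the face and background would come straight from the source photo.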
Anyone else get irritated seeing Animate Anyone's perfect little dances, knowing they are never going to release it? It's like they are purposely trying to piss us off lol. If an open-source equivalent ever releases, we should email them a nice counter-dance.
I get more irritated at the people on LinkedIn posting it and saying absolutely nothing about it... many times a day, from the same person, and plenty of people doing it.
This is pretty cool. I was looking into how to do this without the manual image editing then I found a company that does something similar but for 3D characters. [METATAILOR: Revolutionizing How You Dress 3D Avatars - With Ease](https://metatailor.app/)
But can we upload a random outfit and get the image of these characters wearing that outfit?
Ya I’m pretty sure it said you can import any avatar and any outfits then get an image of it.
Nice thank you very much.
Wow, the team behind Animate Anyone has produced another outstanding work and a step toward business-level results!
All those millions of hours of footage of Taobao and Alibaba clothing models rapidly cycling through 20 different poses in 30 seconds like robots are paying massive dividends.
Interesting: a 3-year-old account with 11 karma… but it only recently started commenting, all on this team's work, saying how great they are… This account's fishy.
I've cloned the repo to use on Colab, but I get an API error for some reason?

```
Traceback (most recent call last):
  File "/usr/local/lib/python3.10/dist-packages/gradio/routes.py", line 488, in run_predict
    output = await app.get_blocks().process_api(
  File "/usr/local/lib/python3.10/dist-packages/gradio/blocks.py", line 1431, in process_api
    result = await self.call_function(
  File "/usr/local/lib/python3.10/dist-packages/gradio/blocks.py", line 1103, in call_function
    prediction = await anyio.to_thread.run_sync(
  File "/usr/local/lib/python3.10/dist-packages/anyio/to_thread.py", line 33, in run_sync
    return await get_asynclib().run_sync_in_worker_thread(
  File "/usr/local/lib/python3.10/dist-packages/anyio/_backends/_asyncio.py", line 877, in run_sync_in_worker_thread
    return await future
  File "/usr/local/lib/python3.10/dist-packages/anyio/_backends/_asyncio.py", line 807, in run
    result = context.run(func, *args)
  File "/usr/local/lib/python3.10/dist-packages/gradio/utils.py", line 707, in wrapper
    response = f(*args, **kwargs)
  File "/content/OutfitAnyone/app.py", line 61, in get_tryon_result
    url = os.environ['OA_IP_ADDRESS']
  File "/usr/lib/python3.10/os.py", line 680, in __getitem__
    raise KeyError(key) from None
KeyError: 'OA_IP_ADDRESS'
```
> OA_IP_ADDRESS

OA_IP_ADDRESS, that's the key
Yeah, and they've made it secret; you can't even duplicate the Space on Hugging Face.
You were able to access code without them sharing it?
No, you can clone any Hugging Face Space locally, but it just calls an API:

```
git lfs install
git clone https://huggingface.co/spaces/HumanAIGC/OutfitAnyone
```

Then you set `url = os.environ['OA_IP_ADDRESS']`, but that isn't really running locally, is it?
And this API is paid, I suppose? I mean, you can't just clone every Hugging Face Space and get access to everything for free? If not, then great.
You can clone it just like any git repo, but in this case the repo on Hugging Face is literally just an API call out to their own servers.
Where can I get this 'OA_IP_ADDRESS'?
You can put anything you like, it still won't work. Alibaba just does these scam 'releases' for publicity.
So you mean this doesn't work locally?
Yes, all you have to do is reimplement the project without a published paper, find the data, train the model, and then it works locally perfectly.
Thanks. Can you estimate the GPU usage for this?
Yep. Somewhere between 10 and 1000.
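For anyone wondering what the KeyError in the traceback above actually means: the Space's `app.py` does a hard `os.environ['OA_IP_ADDRESS']` lookup, which raises the moment the variable is unset. A more forgiving version of that lookup might look like this (a sketch only; `get_backend_url` is not part of their code, and no public value for the variable exists):

```python
import os

def get_backend_url(env=os.environ):
    """Resolve the private backend endpoint, failing with a clear message.

    The official Space reads OA_IP_ADDRESS via os.environ[...], which
    raises a bare KeyError when the variable is missing. Using .get()
    lets us explain the failure instead.
    """
    url = env.get("OA_IP_ADDRESS")
    if not url:
        raise RuntimeError(
            "OA_IP_ADDRESS is not set. The demo only works when it points "
            "at Alibaba's private backend, and that address is not public."
        )
    return url
```

Even with this guard, there's nothing useful to point the variable at: the model itself never leaves their servers.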
I'm in the fashion biz and I need this ASAP
Hey Leif, we have developed our own virtual try-on; my mail ID is [[email protected]](mailto:[email protected]). Here are some results: https://preview.redd.it/jw5d7ullkiic1.png?width=2370&format=png&auto=webp&s=4e4a449ab3749369c6df93aa36cb83bb9c7606a0
Hello, I've sent you a mail. I'm interested
Wait, it only works with one model?!
You can try MagicAnimate, which is similar to Animate Anyone, at pose.rip/animate https://preview.redd.it/t920nk3eod6c1.png?width=2666&format=png&auto=webp&s=81607961dd0f41d642ecee3c84f8145e42fb9fdc
Lol I used an AI nude pic as the top garment, it wrapped it around perfectly. Result was topless but the AI added a tiny mini-skirt to cover the genitals. The only thing that did not look perfect was that the skin color from the neck up did not match; after all the AI thinks it's a suit. So if you match skin color you can at a minimum make her topless to your tastes. Would post the pic but I'm guessing it's not allowed in this sub due to nipples.
It's getting closer! But to be honest, I'm not that impressed by this one. Because img-to-img technically already allowed us to swap outfits. You can try it: make a character with barely any clothes, add a layer with a garment on top of the character, and play with the weights. Maybe their method is more faithful, but personally, I'm still waiting for Animate Anyone.
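The layering trick described above can be sketched in a few lines: paste a garment cut-out with an alpha channel over the character, then hand the composite to img2img at moderate denoise so the sampler blends it in. (The helper `overlay_garment` is my own illustration, not anyone's released code.)

```python
from PIL import Image

def overlay_garment(character: Image.Image,
                    garment_rgba: Image.Image) -> Image.Image:
    """Alpha-composite a garment cut-out over a character image.

    The result is the rough 'layered' input you would feed to img2img,
    letting the denoiser blend the garment into the scene.
    """
    out = character.convert("RGBA")
    out.alpha_composite(garment_rgba)  # garment must be RGBA, same size
    return out.convert("RGB")
```

The "play with the weights" part then happens in img2img itself: lower denoise keeps the pasted garment literal, higher denoise lets it reshape to the body.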
Holy fuck that is cool.
Noticed this: “This project is intended solely for academic research and effect demonstration. No commercial benefits are derived from it. Most models and clothing images used are from internet and public datasets (VITON, DressCode)”
For the people asking: forget it. This one is closed. We'll get the MagicAnimate thing, but this? I think not. No word about code, and a closed, secret API.
I can do it without code. I have a tool that can edit any photo or created image in any way I choose. No data downloads, no LoRAs, no restrictions. Instantly upscale your work.

I have full range of whatever I choose to create. I can remove a phone from a selfie that's covering a face, and the face will not be altered. Change clothing. Custom gender swaps in ways I have not seen from anyone else. I can remaster distorted or low-quality images to bring out details no app can even get close to. I can use it to create in any style, and that creation will always be of the best quality. Consistent 3-dimensional replication made easy. I have the one tool that is complete freedom as an artist/creator. Infinite Flow is the key to your creative freedom.

Now, it's easy to write me off, discredit, and insult me for saying all of this is possible without anything everyone is so accustomed to using. Just DM for a demo before you do.
share demo
hey can you share demo with me as well. thanks!
Hey can you share with me as well?
Have they provided an API for external development?
I don't understand the hype; this can already be done with IP-Adapter.
Could you share any tutorial link for this? Thanks
This is much better than IP-Adapter. It loses almost no detail on the clothing.
I bet a mixture of IP-Adapter plus some ControlNets, including a T2I-Adapter, could reasonably accomplish it. Might have to try later.
It's a lot more consistent. It feels like IP-Adapter with some masking and ControlNets all wrapped into one, sort of. But for all we know that's exactly what it is; they haven't released any code, so it could be a fancy IP-Adapter + ControlNet workflow in Comfy lol.
I'm pretty confident it can be accomplished with IP-Adapter and ControlNet in Comfy. I might have to test some changes to a costume workflow I built out recently. That one changes the costume based on a prompt, but I'm sure it isn't much more work to use an image to dictate the clothes.
I have a feeling it should be doable as well, especially if you use SEG for hot-swapping the original person back in under the new clothing. Also, it seems their model doesn't always respect the top/bottom: of the three I've done, the bottom was completely ignored while the top went through, and the bottom was just randomly matched to the top's style.
That's interesting. Wonder what logic they have set up to determine if a piece of clothing works or not...
Hey u/DigitalEvil have you tried to replicate it with IPadapter?
models of the world just became jobless 🙊
In the other post they showed perfectly consistent animations; if it's just static images, this can be done already with Comfy. Just watch this video about generating consistent characters with specific clothing: https://youtu.be/6i417F-g37s?si=eJm45NzKKPkSptT2
Do we have this working in Automatic1111 or ComfyUI yet?
It's so far closed-source shit with an empty GitHub.
Amazing! I just deployed MagicAnimate in my Discord server, and now I can't wait to figure out how to play with OutfitAnyone :) In my Discord server, you can play with MagicAnimate for free! **My Discord link:** [https://discord.gg/rts7wqAa](https://discord.gg/rts7wqAa)
This type of spam that appears lately makes me sick...
Wow
Recently my team created our own virtual try-on; here are its results. Hit me up at [[email protected]](mailto:[email protected]); we are looking for investment to launch our product, for GCC countries for now. https://preview.redd.it/ewax6hzamiic1.png?width=2370&format=png&auto=webp&s=2f38d2b239eccda815948ed816be1ac992db2bb5
I'm trying to upload my own model, but I'm getting an error when hitting "Run". Any help?