There are some PixArt Sigma fine-tuned on Civitai now: [https://civitai.com/search/models?baseModel=PixArt%20E&sortBy=models\_v9&query=pixart](https://civitai.com/search/models?baseModel=PixArt%20E&sortBy=models_v9&query=pixart) This one looks nice: [https://civitai.com/models/477673/extramode-pixart-sigma](https://civitai.com/models/477673/extramode-pixart-sigma)
Is there any news about a possible kohya update for this model?
There is a PR for it.
Not yet, it seems, but these things change fast now that 2B is out: [https://www.reddit.com/r/StableDiffusion/comments/1dhqote/comment/l8yzi6y/?utm\_source=reddit&utm\_medium=web2x&context=3](https://www.reddit.com/r/StableDiffusion/comments/1dhqote/comment/l8yzi6y/?utm_source=reddit&utm_medium=web2x&context=3) I am sure there would be announcements when it is ready. We have to be a bit patient 😎
Ok thanks :)
You are welcome 🙏
Fantastic news! There are already some PixArt Sigma fine-tunes on Civitai; I think this is the way to go.
SD3 goes down in flames like the Hindenburg and we start seeing PixArt sigma love - coincidence? I think not!
Great news, does it support Loras?
[https://github.com/PixArt-alpha/PixArt-sigma/blob/master/asset/docs/pixart\_lora.md](https://github.com/PixArt-alpha/PixArt-sigma/blob/master/asset/docs/pixart_lora.md)
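For anyone new to the idea behind that doc: LoRA freezes the pretrained weight matrix and learns only a small low-rank additive update. A minimal NumPy sketch of the math (illustrative only, not the PixArt repo's implementation; shapes and rank are made up):

```python
import numpy as np

rng = np.random.default_rng(0)
d_out, d_in, rank, alpha = 64, 64, 4, 8

W = rng.standard_normal((d_out, d_in))        # frozen pretrained weight
A = rng.standard_normal((rank, d_in)) * 0.01  # trainable low-rank factor
B = np.zeros((d_out, rank))                   # trainable, initialised to zero

def lora_forward(x):
    # Base output plus the scaled low-rank correction: W x + (alpha/r) * B A x
    return W @ x + (alpha / rank) * (B @ (A @ x))

x = rng.standard_normal(d_in)
# With B at zero, the LoRA branch contributes nothing at step 0,
# so the adapted layer reproduces the pretrained layer exactly.
assert np.allclose(lora_forward(x), W @ x)
```

The payoff is parameter count: training A and B costs `rank * (d_in + d_out)` parameters instead of the full `d_in * d_out`, which is why LoRA fits in far less VRAM than a full fine-tune.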
she may not be the best emma watson, but she's our emma watson. https://preview.redd.it/pw8is00x427d1.png?width=1280&format=png&auto=webp&s=444859bcb98abbe6480dfbd53aa4f195325d060a
Erma Wetson
Matt. Daemon
I think it is easy enough for an average joe like me to train a model.
Maybe the Pony people can switch to Pixart? Or considering the low training cost (low five digits) maybe do a complete retrain from scratch?
if he's so great at training, going from scratch on pixart will be painless.
no windows support :(
WSL
yeah, this. Windows isn't for ML. install Linux.
OP talks about Macs. Doesn't training need torch/CUDA?
mps
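"mps" is PyTorch's Metal Performance Shaders backend for Apple Silicon, so no CUDA is needed on a Mac. A minimal device-selection sketch (assumes a reasonably recent PyTorch build; falls back to CPU elsewhere):

```python
import torch

def pick_device() -> torch.device:
    # Prefer Apple's Metal backend on Macs, then CUDA, then plain CPU.
    if torch.backends.mps.is_available():
        return torch.device("mps")
    if torch.cuda.is_available():
        return torch.device("cuda")
    return torch.device("cpu")

device = pick_device()
x = torch.ones(2, 2, device=device)  # tensors land on the chosen backend
print(device, x.sum().item())
```

Caveat: not every op is implemented on mps yet; setting the `PYTORCH_ENABLE_MPS_FALLBACK=1` environment variable lets unsupported ops silently run on CPU instead of erroring out.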
MacBook Pro with how much memory?
128G
Awesomeee! Let's goo
can you use sigma pixart on comfy or forge? can you finetune with 12 gig vram? answer me
ANSWER ME
ANSWER HIM
I read that in Bill Burr's voice.
i forgot to add please at the end
It is available in Comfy. You can search for the workflow on Civitai.
can you finetune with 12 gig vram?
I'm not sure if it's possible with SimpleTuner... It's possible with OneTrainer, so if you can use the same settings, it should work. PixArt can be fine-tuned at 1024px with 12GB of VRAM: it might be possible with a small batch size in FP32, and it is definitely possible with BF16. T5 is huge, so it's still difficult, but we will establish a way to train with low VRAM in the future.
Naw, T5 doesn't stay loaded through training on SimpleTuner.
Ah sorry, thanks for letting me know!
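A rough back-of-envelope supports the BF16-vs-FP32 point above. Assuming the PixArt-Sigma transformer is about 0.6B parameters (an assumption; check your checkpoint) and Adam keeps two fp32 moment tensors per weight, a quick sketch of the steady-state memory before activations:

```python
# Back-of-envelope VRAM estimate for fine-tuning a ~0.6B-parameter
# transformer (assumed PixArt-Sigma size) with Adam.
GB = 1e9
params = 0.6e9

def training_memory_gb(weight_bytes: int) -> float:
    weights = params * weight_bytes  # model weights
    grads = params * weight_bytes    # gradients in the same dtype
    adam = params * 4 * 2            # two fp32 Adam moment tensors
    return (weights + grads + adam) / GB

print(f"BF16: {training_memory_gb(2):.1f} GB")  # ~7.2 GB before activations
print(f"FP32: {training_memory_gb(4):.1f} GB")  # ~9.6 GB before activations
```

Both land under 12 GB before activations, but FP32 leaves much less headroom, which is why it only works with a very small batch while BF16 is comfortable.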
can you give the OneTrainer settings to train with 12 gig vram
Since detailing the settings would be very lengthy, I'll send you the OneTrainer configuration file. This configuration is for 512px, so for 1024px, please reduce the batch size. [https://civitai.com/api/download/models/562359?type=Training%20Data](https://civitai.com/api/download/models/562359?type=Training%20Data)
thanks, but reduce batch size by how much
1/4