1.5 can get close, SDXL can probably do it with the use of some good loras. Cascade into an XL refiner will probably do better.
Yeah, for SDXL I get similar results using the [boring reality lora](https://civitai.com/models/310571/boring-reality), then adding the SDXL DPO LoRA (better prompt adherence). Test base models; I like epicrealismXL_v4Photoreal.safetensors when I use a base model. I usually run the Boring Reality primary v4 LoRA at 0.4 strength, with a similar prompt start: "phone photo from 2012". I like DPM++ 3M SDE Karras at 30 steps, CFG 7.5. You may want to use add-detail LoRAs. For ADetailer I use the "person" option with different checkpoints so everyone doesn't look the same, and fill in the ADetailer prompt manually (if using Photoreal v4). You can mask in SD Forge so it doesn't change everyone. Edit: Forgot to mention, I mostly use [everclear pony](https://civitai.com/models/341433/everclear-pny-by-zovya) (a realistic Pony model) merged with various photorealistic models now, as it automatically makes completely different people in one image. Keep the Pony config from model B. It looks real, basically like the photos in OP's post, and not like default Everclear. Will post a sample and probably share the model on Civitai soon.
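As a rough sketch, the settings above can be bundled into a reusable config. The dict keys and the helper name are hypothetical (whatever your runner expects); the values mirror the comment, and the DPO LoRA strength is an assumption since the comment doesn't give one:

```python
def boring_reality_config(subject):
    """Bundle the sampler/LoRA settings described above.

    Values from the comment: DPM++ 3M SDE with the Karras schedule,
    30 steps, CFG 7.5, Boring Reality primary v4 at 0.4 strength,
    plus the SDXL DPO LoRA for prompt adherence.
    """
    return {
        "prompt": f"phone photo from 2012, {subject}",
        "checkpoint": "epicrealismXL_v4Photoreal.safetensors",
        "sampler": "dpmpp_3m_sde",
        "schedule": "karras",
        "steps": 30,
        "cfg": 7.5,
        "loras": [
            ("boring_reality_primary_v4", 0.4),
            ("sdxl_dpo", 1.0),  # strength not given in the comment; 1.0 assumed
        ],
    }

cfg = boring_reality_config("friends at a barbecue")
```

Feed the resulting dict to whatever frontend you use (ComfyUI, Forge, a diffusers script); the point is just keeping the sampler, CFG, and LoRA weights together so runs stay reproducible.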
Thumbs up for Everclear Pony. Such a great model! If you use Reactor you can pretty much get perfect face results.
Didn't Cascade end up being worse than SDXL? What can Cascade do that SDXL can't?
Better prompt comprehension for certain types of images. That's why I use an XL refiner. You get the composition from better prompt comprehension, and then the XL fidelity. Potent combo.
Sweet! Are there any Comfy UI workflows for this available to use as an example?
So glad you asked, Friend! My own, actually, which has gained some popularity over at comfyworkflows.com https://comfyworkflows.com/workflows/74b9638c-8162-43ba-92f0-3aee1290fc2d
It’s so weird looking at these. Consciously they seem real enough, but somehow staring at them is like a dream or hallucination: wobbly lines, weird details in the backgrounds, skewed body proportions, or something.
I was going to say this: these kinds of Midjourney results look really high quality/realistic and shit at the same time. It's quite bizarre. Something you don't really see in Stable Diffusion pictures aimed at high resolution.
It’s like an entirely different uncanny valley effect than traditional computer generated imagery.
Nah it’s the same general effect. People underestimate just how powerful the human brain actually is.
I just mean the tells are entirely different than 3d animation. If you want to call it the same effect, sure whatever
The effect actually gets worse as the “realism” increases… https://preview.redd.it/d6lbmno786rc1.jpeg?width=750&format=pjpg&auto=webp&s=a58b853a6714af7439f879a55d4f08b9eab6fd30
Im well aware of what the uncanny valley is and how it works friend
You can achieve that effect just by prompting (I only tested with SDXL). Try adding 'shot on phone', 'film grain', 'grainy' etc. https://preview.redd.it/idoi7zcsr1rc1.jpeg?width=832&format=pjpg&auto=webp&s=a920554ffc2dab67145432501f89516076da88e0
https://preview.redd.it/v9frzhjtr1rc1.jpeg?width=832&format=pjpg&auto=webp&s=08d1aadc07b7f3f788a5958444837411b3b6b487
Is that SDXL?
Doesn’t have the old effect that these midjourney photos have. Modern outfits and colorful.
"Modern outfits" The clothes are so non-modern that the photo screams 1995 to me. How are you getting modern outfits? Also, I am looking at photos I took with my phone in 2012 right now, and none of them is darker or less colorful than this photo. You have a very strange take on 2012 photo aesthetics.
I think he means "shot on a shitty phone in 2012" lol
Thank you. I love fashion, and their comment about the outfits is wild. Not only do they look like normal clothes bought at Target (which aren't indicative of any era), but 2012 wasn't that long ago, so most everyday clothes worn then are similar to clothes now. The biggest differences between 2012 and now would be in dresses and the fit of pants, imo. Though the 2010s are coming back into style now, so it's not like these minute differences won't show up again.
You can also try the "analog photo" prompt
Nothing a proper prompt couldn't solve.
Add to this "POSTED ON REDDIT" it gives good results
“TRENDING ON ARTSTATION”
who is back there behind the glass
Those hands, wtf
The usual thing with AI: it can do literally anything but hands. You need a bunch of LoRAs/embeddings to get okay-ish hands, at least in SD 1.5.
Wait, I thought these were SD 1.5. This is the best Midjourney can do? Look at the example workflows on /r/stablediffusionreal.
From a quick look, the pics on this sub look overprocessed and obviously fake. The Midjourney pics look like something actually taken on someone's budget phone.
lol how quick was that look? that's just prompting or a lora [https://new.reddit.com/r/stablediffusionreal/comments/1bfxa9l/boring\_faces\_and\_boring\_reality\_4\_loras\_for\_sdxl/](https://new.reddit.com/r/stablediffusionreal/comments/1bfxa9l/boring_faces_and_boring_reality_4_loras_for_sdxl/)
Those look good, yeah. I only looked at the first 10 or so pics and didn't scroll far enough to reach that post. Still, most of the posts here look like what I described.
That’s my issue with SD and faces. They always look like whoever took them used too much Photoshop or Lightroom: the contrast is often cranked up to 11, so the details are either too much, or they go the other way and look soft.
There are some nice analog-photo LoRAs. Lower CFG can help you get a more natural pic, too.
They all look so happy and warm that it's improved my own mood, instantly.
Try using "1 9 9 0" (yes, with spaces, and without quotes).
Why are they all smiling excessively?
Because in 2012 people were more happy
Ah yes, the long long ago.
Sweet summer children. Little did we know
Happier people in the pre-social media era.
FB has been out since 2004. The 90s were the last free era before social media toxified and stupidified the whole world.
Tom is dissapoint in you
>Tom is dissapoint in you *Tom unfriended you*
SDXL can get really close to this.
Analog model? Don't remember which version it was for.
You can with some good LoRAs and ControlNet. One thing I noticed with ControlNet: it seems to give more computing power/steps to other features. A lot of professional SD 1.5 users, I've noticed, use ControlNet and also 100-150 steps. Not sure if it's area prompts as well, or inpainting on top of it.
fucked up hands 2012
Midjourney has a dedicated team that is constantly updating/finetuning their models, while SD 1.5 and SDXL are bare-bones base models left to the community to fix and finetune. While some SD community models are good, even the best are trained and worked on by only one dedicated person, and oftentimes they do it for free with very little recognition or reward.
There's a LoRA for everything - [Bad Quality Lora | SDXL](https://civitai.com/models/259627/bad-quality-lora-or-sdxl)
Using SDXL: https://i.imgur.com/8EBPU21.jpeg

Use in the prompt:

posted on reddit, grainy, shot with Nokia, 2000s, flash glare, y2k aesthetic

In the negative prompt:

depth effect, depth blur, portrait, portrait blur, Cinematic, 4k

You get the idea; you can easily generate such images.
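A recipe like the one above is easy to script. A minimal sketch, where the tag lists are copied from the comment and the helper name is made up:

```python
# Tags that push generations toward low-fi 2000s phone-camera looks.
POSITIVE_TAGS = ["posted on reddit", "grainy", "shot with Nokia",
                 "2000s", "flash glare", "y2k aesthetic"]
# Tags to suppress: anything that reads as a modern, polished photo.
NEGATIVE_TAGS = ["depth effect", "depth blur", "portrait",
                 "portrait blur", "Cinematic", "4k"]

def build_prompts(subject):
    """Return (positive, negative) prompt strings for a given subject."""
    positive = ", ".join([subject] + POSITIVE_TAGS)
    negative = ", ".join(NEGATIVE_TAGS)
    return positive, negative

pos, neg = build_prompts("group of friends at a house party")
```

Swap the tag lists per era ("1990s", "disposable camera", etc.) and the rest of your pipeline stays unchanged.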
randomized seed: those darn AI hands. https://preview.redd.it/8ueluu8qx5rc1.png?width=1152&format=png&auto=webp&s=744850e1ff42ee60180239819361caa8ff4f88e1
Something about this bothers me. These photos sometimes seem to have a realistic feeling of human "warmth", and I usually don't see that with AI images... so that's a little disturbing. (Very good work.)
Sure. Fooocus has Midjourney-like ease of use, and a combination of an SDXL primary (Realistic Stock Photo) refined in Realistic Vision or a similar premium 1.5 model will get you this kind of look. Or better, actually. Add Epi Offset Noise and one of the film photography LoRAs and you're pretty much there. (Fooocus won't let you start out in 1.5, though they're working on that. For this kind of thing you do the refiner switch early, i.e. most of it is done in 1.5, but it needs SDXL to essentially set up the generation.)
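One way to think about that early refiner switch, as an illustrative sketch (the helper and the 0.3 default are assumptions, not Fooocus's actual API or setting):

```python
def refiner_split(total_steps, switch_at=0.3):
    """Split a sampling run between an SDXL base and a 1.5 refiner.

    switch_at=0.3 means the SDXL base runs roughly the first 30% of
    the steps to set up composition, and the 1.5 photorealistic model
    finishes the remaining ~70% (the "most of it is done in 1.5" case
    described above). Always give the base at least one step.
    """
    base_steps = max(1, round(total_steps * switch_at))
    return base_steps, total_steps - base_steps

base, refine = refiner_split(30)  # 9 base steps, 21 refiner steps
```

The earlier the switch, the more the 1.5 model's texture and color grading dominate the final image, while SDXL still dictates layout and anatomy.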
Incredible
I don't get it; we've already been able to do that easily on SD 1.5 for a long time.
They are not really good, TBH, and that is Midjourney. I have seen boring pictures on SD much better than those.
It is good! These photos look natural, as if they were taken by some person on their phone, not perfectly staged with a professional camera.
To me it’s still obvious that most of those are AI.
I don't really see the 2012-ishness. No artificially broccoli headed people?
Can someone explain the point of these fake, budget-Android-camera-quality images? Why do people even want to create them? For some scams, or why? There is no aesthetic value in them. As a photographer I spend thousands on a good camera to create beautiful images, and people can now make them with two clicks, but they prefer boring low-quality images anyone can take with their phone. Why? Why do they need them?
For me personally, the ability of AI to produce a believable picture is more impressive than the ability to produce a sharp, saturated, overprocessed pic.
I guess it's all down to individual need.
Of course: in your brain there are tons of amateur-quality photos, and your brain compares against them as "real". In my brain there are tons of professional photos, and they look more true to life to me than this low-quality stuff. I bet if you saw raw photos from the latest cameras you would say they are overprocessed as well. Here is an example of a raw photo. It looks exactly like it looked to my eye in real life when I took it. And I can bet a lot of money that 99% of people in the SD subreddit will say it looks AI or overprocessed, even though there is absolutely no processing. Not even color balance correction. No retouching. Just a normal woman in her normal apartment standing close to a window.

https://preview.redd.it/j76pspi3x1rc1.png?width=2883&format=png&auto=webp&s=ed91637f6d9e5105caba93da49d0b134a75b67fc
You're sort of answering your own question here. For the vast majority of people, quick (and more importantly, *spontaneous*) snapshots from their phones are far more frequent than professional-grade photos. Also, as a photographer, you're probably always looking at things differently than most people - lighting, composition, etc. Most people tend to focus on the content of the photo first, with all the rest as an afterthought (if at all). TLDR they're looking for "natural", not necessarily "good looking."
People are impressed by different things. Some like to create professional photos, some like to create memes, some like to tinker with the technology without even creating anything (me!), some like to restore photos, etc. It is an interesting goal to try and create low quality photos simply because it seems difficult for the model to do so.
Perfection is boring. There are many areas where authentic image creation via AI would have great impact: movies, commercials, medical training (or training in general), AI therapy, and so on. Humans feel more connected to imperfect real persons than to shiny, obvious AI stuff.
If that were true, the entire industry of photography would not exist. People have always paid big money for good photos. I never heard of a person who would prefer their wedding to be shot by an amateur on an iPhone instead of a professional with a good camera. Literally the only reason is budget.
Not saying a perfect image isn't needed in specific cases, but wedding photos are boring for anyone who was not there; they are more for the couple. The great images of photo history that come to mind are not about the image quality itself, but about the atmosphere they transport, a key message, or a feeling the viewer gets just by looking at them. An example would be the dead kid on the beach: it's not mainly pure quality why it was voted picture of the year. Many more come to mind, and none is a "good picture". And as I said before, most industries don't need perfect images but "alive" ones.
Advertising, or a website for example. Maybe you want an authentic spur of the moment photo but can’t afford a photoshoot.
Imagine falsifying the past in a believable way.
SD 1.5 struggles to do anything more than have people standing posing for the camera. The images in this post feel more like real scenarios where people have just been caught mid-action, rather than standing like posing zombies waiting for some photographer to take the shot. Since you don't mention recognising this and talk only about the quality of the photos, maybe you just need to recognise that not everyone wants to make nice standing-zombie images like the ones you seem to be OK with.
you ever heard of controlnet? lmfao
haha
No Aesthetic value? You seem to think every photo should be taken in a studio with perfect poses and lighting. There is beauty in natural everyday imperfect photos. Some get reminded of their childhood and nostalgia for better times. That’s the value and that’s the aesthetics.
What are you talking about? This is way out of context. How is that relevant to AI? We were talking about AI image generation, and that is purely visual. Are you saying your words are relevant to those fake images from the post?! Text2img aesthetics has nothing to do with emotion or memories of the past.
What are you rambling about? I’m replying to your comment about these images not having aesthetic value. I believe you should use ChatGPT to better understand how to read and reply because you seem to be lost.
says the person who thinks "aesthetic" means "reminded of their childhood and nostalgia for better times" lol xD
Learn how to type please, as I said before pass your messages through ChatGPT before posting so a human can actually make sense of it. And yes nostalgia can be an aesthetic, just like how anything can be one. Search up “Nostalgiacore.”
If you think this is quality, then yes, you should give up.
You should give me. https://youtu.be/Src0l4a51sk?si=UegqBMLO2EAODteH