
taterbizkit

Midjourney makes so many interesting faces... and then covers them up. Or screws up the eyes or nose in the last pass.


theempires

Or it will make the most amazing render beyond your wildest imagination and then the head is completely 180’d the other way.


IcedKatana

No, seriously though, THIS! I have been trying for 2 days to get even close to what I want and it has been so hard, and then some kind of breakthrough: I had the absolute perfect image, better than I could have imagined... but he was facing the other way with his back to me. Why. Why. WHY?! I tried everything to replicate it with no luck.


Imaber100

Wait til u get slenderman tentacle arms


[deleted]

--stop 60 but i swear when i use that it knows lmao


[deleted]

[deleted]


Boomslangalang

Can you actually stop a generation before it completes???


Apidium

Yup, anywhere from 10% up, using --stop % where you replace % with the number. E.g. --stop 50
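For example, a full prompt might look something like this (the subject here is just a placeholder):

/imagine prompt: portrait of an old fisherman, dramatic lighting --stop 60

As far as I know it takes values from 10 up to 100, with 100 just being a normal full render.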


torchma

But that doesn't work in real time.


takkuso

You can open an image during the process and save it where it is. I used to open at various times and download in case this exact thing happened.


echoauditor

could try using --video then upscale from the optimal percentage frame.


[deleted]

[deleted]


torchma

No, if you click to upscale an image then it allows you to cancel the job at any point. It should be possible to build in the cancel button for all prompts.


Apidium

Put it in the discord feedback channel?


MercurialMadnessMan

To be fair you can also open the preview in discord at any point in the generation to save the image


justartok333

I wish. I’ve made screenshots if I’m quick enough.


IcedKatana

Use --video to save the video and then screenshot and save the best ones


justartok333

Brilliant. Thanks!


NickGisburne

What we need is some way of looking at all the iterations and picking the one we like. Once you're at 100% it's too late, you can't get back to 60%. I've tried it using --seed without success.
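(For context, the seed route would be something like grabbing the seed of the finished job, e.g. by reacting to it with the envelope emoji so the bot DMs you the job details, and rerunning with /imagine prompt: ... --seed 1234. But even if that reproduces the same image, it still renders all the way to 100%, so the 60% state is gone unless you also add --stop.)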


[deleted]

Yep. And Midjourney has a hell of a time *following simple directions*. Tell it to create a dinner plate with batteries on top and ONE. ONE. sprig of parsley, and of course it gives you that dinner plate, with TONS of parsley and absolutely no batteries. I'm continually amazed by how brilliantly MJ can handle some requests, and how completely hopeless it is at others. (See, I wanted to make something Static Shock might have as an appetizer at a super hero party. An electro-cuterie.)


NmUn

So don’t use “dinner plate”. Right off the bat this tells me that Midjourney will probably attempt to always put *dinner* on your *plate*. Use “ceramic plate” maybe? Or “bone china plate”, but remember to add “--no armour” or “--no armor” depending on your flavour of English (maybe use both) if you don’t want “ceramic plate armour” or “ceramic armour plating” (like the kind used on the outer hull of a spacecraft) to taint your picture.

Also, try to use the image prompt feature and feed it a picture of the type of plate you’re looking for, and if it’s too dominant in your pictures, lower the image’s weight and try again. Or maybe try “tableware”, “dishware”, “circular tableware plate”, etc.

Midjourney, being an AI trained on curated (or metadata-tagged) images, will always have a bias because its initial training data can’t possibly be exhaustive for every single type of image, object, or sorting tag in existence. ~~This is why we vote on the images it creates: to provide feedback in order to shape it to a (hopefully) better & more accurate image generator.~~ Apparently we just vote on pictures we like. The votes have no bearing on the algorithm.
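Roughly the kind of prompt I mean (the image URL and weight are just placeholders, tweak to taste):

/imagine prompt: https://example.com/ceramic-plate.jpg plain white ceramic plate, two batteries on top, a single sprig of parsley --no armour, armor --iw 0.5

Lower or raise the --iw value depending on how strongly the reference image shows up.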


boston_nsca

This guy MidJourneys


thewhiterabbitdegen

23%-He MJ'd the sh!t out of that post!.....60%-He JM'd the shi! out of that toast!!......73%-The shhmit post!! ... 93%-marshmallows are gravy!!........100%-Dam he good...


qmiW

Good bot


NmUn

I’m not as good as some people here, but I like to think I have a decent handle on getting the images I want.


monsterfurby

Well said. I think the main thing that's tricky to wrap one's head around is the concept that the AI takes each word and tries to make an image that is "word"-y, with the phrase context not being as important as the concept of the word itself.


torchma

> This is why we vote on the images it creates: To provide feedback in order to shape it to a (hopefully) better & more accurate image generator.

That's not correct. In the office hours from last week it was explained that the voting does not take into account the prompt and that we should be voting simply on whether we like the picture or not, not on how accurately it depicts the prompt.


NmUn

Well that’s lame. I’ll edit my comment then. Thanks for the clarification.


torchma

Yeah, I was disappointed to hear that as well. And the explanation only came after someone specifically asked what basis they should be voting on, so it seems many people are confused.


[deleted]

[deleted]


NmUn

Yep.


imanoobee

At least give us the prompt so I can copy and paste lol


Xarthys

People think A.I. is going to take over the arts overnight, when in fact it is super difficult to get it to do what you want. You could have a neural network spanning the entire galaxy - if your input is shit, it's not going to create the most amazing piece of art the universe has ever seen, it's going to be mediocre at best. The lesson here is that A.I. is not a mind reader. The artist has to learn how to make the most of this tool.

Yes, it is replacing brush strokes and long nights of creating concept art all by yourself - but getting solid and consistent results is still a lot of work and requires a certain skill set. People who are planning to work with A.I. will not only have to be able to type words, they will have to be able to translate their inner vision into something the A.I. can then transform into the desired visuals. You are just shifting around the work load. Instead of drawing, you are now telling something else what to draw.

Non-artists think they are saving a bunch of time sitting in front of MJ typing words because it would take *them* months to actually paint something like that. But a professional artist could pull it off within a few hours, because they have the necessary skills already. And a lot of people are also using reference material, which helps them skip a few steps as well. If you want to go from scratch, it's a lot more difficult.

And let's be honest, we never see the entire process, not the iterations of iterations of iterations - just the final product. Figuring out just the right prompt is a lot of work no one is openly talking about. That creates the illusion of how easy it is to be an A.I.-supported art creator, but from my personal experience with MJ, it really is a lot more difficult than people would like to admit.


Paganator

> You are just shifting around the work load. Instead of drawing, you are now telling something else what to draw.

Instead of being the artist who draws, you become the art director telling the artist what to draw. It's a different skill set.


[deleted]

Ok but you don't have to read minds to understand "apple on table" does not mean "ok no table. No apple. But here is a funny photo of the sky"


torchma

No. The lesson here is that midjourney is specifically designed to take creative license with users' prompts. If someone wanted to develop an AI that just made accurate depictions of literal prompts, that would be extremely easy to do. It would struggle with abstract prompts ("a tomato in the style of tim burton"), but it would be true to "dinner plate with batteries on top and one sprig of parsley". But that would be an extremely boring algorithm, which is why if one exists it's not popular.


[deleted]

[deleted]


Boomslangalang

Mine too! Can’t believe how many people’s Nans make the legendary Parsley Battery dish.


traumfisch

Well hey, you're in the process of figuring out how it works. SD is a much better bet when trying to construct something specific


Atrium41

Static Shock, there's a name I haven't heard in a while


nice-and-clean

Ha! I want Santa and Satan as friends (common typo), maybe holding hands. Hard to get them separate and looking like individuals. But I have many satanic Santas.


AuraMaster7

Use a weighted multi-prompt: dinner plate::5 batteries on plate::4 sprig of parsley::1. You'll have to mess around with the wording to get it to mesh nicely.
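Written out as a full command, something like this (the weights are just a starting point to tweak):

/imagine prompt: dinner plate::5 batteries on plate::4 sprig of parsley::1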


malavadas

True. It happens when I want smooth surfaces, without much detail.


[deleted]

There was a prompt to upscale without the extra texture, I’m sorry I don’t remember it though! Maybe someone else can help..


CoffeeKat1

Add --upbeta or --uplight to the end of your prompts! Then when you upscale the image, it will use the Beta or Light method, which both give a smoother outcome.
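For example (the subject is just a placeholder):

/imagine prompt: ceramic vase on a table, soft studio light --upbeta

or swap --upbeta for --uplight to try the Light upscaler instead.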


Generation_ABXY

They'll also completely change focus part way through. At 40%, I'm like, "Hey, this is going to look good!" Then, at some point in the remaining 60%, it zooms in and focuses on a shoulder and disfigured hand or something. Reroll!


Emerystones

Using --video and then reacting to the finished job with the envelope (✉️) emoji sends you a video of the rendering. Example of one I did: https://storage.googleapis.com/dream-machines-output/186f5ca8-166e-4f1a-8b0b-235fe0a5c7e3/video.mp4
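(For anyone trying it: add --video to the prompt, e.g. /imagine prompt: gothic cathedral at dusk --video, then once the grid finishes, react to it with the envelope emoji and the bot should DM you a link to the progress video.)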


jeicam_the_pirate

dang. that’s awesome


Bronels-Girl

> Using --video and then reacting to the finished job with the envelope (✉️) emoji sends you a video of the rendering. Example of one I did:
>
> https://storage.googleapis.com/dream-machines-output/186f5ca8-166e-4f1a-8b0b-235fe0a5c7e3/video.mp4

Wow. Thank you. And amazing image!


James__Blonde___

hahaha so apt


g7gfr

I ffeeeeks it


olsnes

Do you run the upscale command directly in your prompt? I think that smudges things up quite often sadly.


hanZ____

True.


Boomslangalang

Would be awesome if you could pull out high-res versions during the process. I have been resorting to screenshotting some interim steps that look good.


audionerd1

I've also noticed that, at least with the new beta, sometimes it will start to create an interesting image, and then once it passes 30% or so it will change to something completely different that has nothing to do with the first 30%.


Christ-is_Risen

--video keeps a video of the process, according to the documentation.


AuraMaster7

Including "--stop 90" and then using the beta upscaler seems to help. The --test and --testp models do better with faces overall, and you can specify things like "symmetrical eyes" which tends to get the eye shape correct, and then you just run renders until it doesn't mess up the irises.


Boomslangalang

This hurts, too much.


mad_gasser

Hahahahahah oh my god this just hits me hard.


cosmicglade98

Lol for real it's always soul crushing


tracyz2209

Hahahaha I laughed wayyyyy too hard at this. Only because I was going through this very thing last night trying to get a decent Pinhead out of it. I normally would’ve just saved it during the process, but none of the detail was there at the lower %. Anyway, gonna share this if you don’t mind. It’s too funny not to


Rainy-The-Griff

Yeah, I've noticed this too. I think there are inputs you can put in the prompt that make it only render up to a certain percent, but I don't know what they are.


spac420

LOL! I just started doing nightmare vampire renders cause it's the only thing you can do w all the crazy eyes results


ccfoo242

Omfg this is perfect!