Hey Christian, thanks for all your work.
A few notes: I'm working with realistic models, not illustrations, so I'm not sure it's the same, but I suspect it is, so here goes:
If you want a very consistent character, 4 URLs isn't nearly enough. I'm using up to 16 URLs...the characters, well, they look like the same person in a photograph.
If you want full-body images, it helps to find a full-body photograph of a celebrity or whatever, use a cutout if you can find one, or remove the background yourself, then add that URL to the prompt. If you search online for images, most photographs don't include the feet, so the references MJ has in its database tend to show the character or person either as a portrait or cut off at the waist. Including a full-body image in the prompt gives it a reference to use.
Once you have a very consistent character, i.e. all four images generated look basically identical, THEN you can add things like "put her in Paris" or "on a shooting range with a Glock" or whatever to the word prompt. MJ still messes up the way a character holds a gun or any object -- but the character will be identical, and you can reroll a few times and eventually get a good one. I'd say one in 8 or 12 has my homicide detectives holding a Glock reasonably well, but regardless, the characters are the same.
I pulled some of the images I liked (the URLs) from MJ v4 and added them to photographs of celebrities I liked. Mixing a somewhat cartoon/illustration style with a real photograph can then be massaged towards the style you like. Just remember to keep the character consistent: you need these 12-16 URLs in EVERY prompt... then MJ is forced to create a composite image and, voila, they end up being the same person or character.
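Just to make that concrete, here's a rough sketch of what one of those composite prompts looks like (the URLs below are only placeholders, not real links):
/imagine prompt: https://example.com/ref-01.png https://example.com/ref-02.png https://example.com/ref-03.png ... https://example.com/ref-16.png photorealistic full body shot of a female homicide detective, plain white background --ar 2:3 --v 5
Same stack of reference URLs at the front of every prompt, then the text description, then the parameters.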
Of course, like everyone, I'm still learning and I'm sure in two months, with V 6, we'll have to relearn a bunch of things...haha...
Oh, one more thing: instead of using "Marvel Comics", you'd probably get a much more consistent style if you tell MJ to adopt the style of a particular illustrator (or character), as "Marvel" is too broad, or at least, that's how it seems to me. As I said, I'm working more with things that require a photorealistic model (so I was so happy with v5 and had to redo a bunch of stuff), but I suspect that if you tell it to use the style of a specific artist, it'll get closer to what you're looking for.
Well, thanks again, I've been watching your channel with every new video and you've helped me so much.
this is very detailed, can you make a video? ;)
I agree. I wouldn't classify Carla Caruso as a consistent character via his methods. Much more input is needed, and in a different style of prompt, to create consistent characters. To truly get a customized character you need to utilize both Midjourney and Stable Diffusion models. Also, 16:9 doesn't mean it's a close-up shot; it just means the aspect ratio is 16:9, and your prompt still generated a close-up, despite entering wide angle or anything like that in your prompt.
@@ywueeee I wish I had time... but I don't. I'll leave it to Christian to experiment and maybe he'll follow up with his views on the methodology. What I can tell you is that if you follow Christian's video on "emotion", the one from version 4, you'll see the basics, i.e. he was using like 8 URLs or more. It's the same thing, and you don't need the seed, just the URLs of images you like.
So, you do a prompt, pick out the image or images you like, run it again and again, only keeping the URLs of the images that match closely what you want. Then, finally, you'll run a prompt with 8-16 URLs and you'll get 4 outputs that are similar (or exactly the same).
That's when you've got it...the AI will give you the same basic person over and over and over because you're forcing it to mix 16 images into one and it has to be very close each time because you've given it such a narrow range.
Only then, after you've got the consistent image, do you start adding things like smile, sad, rainy day with an umbrella, or at a firing range with a Glock, or in Paris...
Don't worry about the situation until you have a perfect character over and over...
Also, it helps to start off with telling MJ to use a white or transparent background to eliminate noise...then, if you start with those images, the new prompt, like "on a city street at night in the rain" will work better.
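Roughly sketched, that two-step workflow looks something like this (the prompts are illustrative, not the exact ones I used, and the URLs are placeholders):
Step 1: /imagine prompt: photorealistic portrait of a female homicide detective, short dark hair, plain white background --v 5
Step 2, once you've collected the URLs of the keepers: /imagine prompt: <URL 1> <URL 2> ... <URL 8> the same detective on a city street at night in the rain --v 5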
Hello Paleoism, I agree with what you said, but if I try to use 16 pictures I get this error:
"Please check that your URL is a direct link to an image, not a webpage." I'm sure all of them are direct links to images.
How can I get around this error?
Can we use the same image 16 times in our prompts? V5 allows us to reuse the same image, unlike V4. Otherwise it is difficult to get 16 very similar images.
very useful. thank you.
You are welcome!
I watched it 2 times, it was very useful
I stumbled across your channel yesterday; thank you so much, the videos are really instructive!
Thank you very much! 🙂
EXACTLY what I wanted to know! Thanks, Christian!
Awesome
The fact that I can follow along based on prior knowledge rules! The results of your experiments are priceless!!! Thanks again for the lessons.
Pleasure, as always.
Thanks
You're the kingggg my brotherrr... I've learned a lot on your channel. Greetings from Berlin
Happy to hear that!
You, sir, have the highest quality, most straightforward lessons on Midjourney.
Thank you for your kind words 🤗
Brilliant, thank you Christian. Been struggling with my characters in V5 until you dropped this. Thanks for being such a damn great MJ guru 👊
Pleasure!
u can still use v4 tho
Thank you very much for your video! From it, I not only learned the method but also, most importantly, the way you think.
Which is much more important than any particular prompt! 👍🏻🙂
Christian, you're priceless. Thank you.
🙏🏻
Best MJ tutorial channel as always
Best viewers on YouTube, as always! Everyone's so civil around here 🙂 Thanks!
I always learn something new on your channel! TYSM
Awesome!
Love your videos!!! Having a VERY hard time with different camera angles in V5 with consistent characters
If you're using image prompts, then your camera angles mostly go out the window.
I finally found the style I wanted for my character concept art. Thanks!
Pleasure!
Thank you for making this!
My pleasure!
We love the current MidJourney V5 Pro version with all of its fine adjustments within prompts, and I have been told we can still access the Pro version when the non-Pro version becomes the default. Keep creating; you're changing the world at the speed of light.
Freaking awesome. Thanks so much!
You're very welcome!
That's very useful. Thanks, sir !
You are welcome!
VERY HELPFUL
Thanks for sharing these tips
My pleasure 😊
Hi @ChrisHeidorn, first, thank you for sharing your hard work with us, and second, there is no other YouTuber who understands Midjourney better than you, and yes, we all learn a lot from your videos.
Much appreciated!
Brilliant work! As always, thanks for all you do!
Pleasure!
Great work! Thanks Christian!
Thanks!
I'm so happy I found this channel 🤟. Subscribed 🤜🤛
This is one of the most comprehensive videos for character building. Thank you so much - when I finish what I am working on (still a secret :)) I’ll send you a copy! Because it will be thanks to your comprehensive videos that I have learned how to compile a proper image of a character in action. ❤
SPECIFIC QUESTION:
How do I include an image of something (the title of a book, with author, an illustration from a book, etc.) in a Midjourney white space designated for that purpose?
I know I’m asking too much. This is not a cat system, nor is it Photoshop, but I am almost positive there is a way to do that. If you have a video that covers that, just put the link below, please. Thank you.
I.e., let’s say the character is holding a book and we can see the title of the book. If we cannot achieve that, then there’s always Photoshop... :)
I'm afraid you can't do that with Midjourney. You'll need good old Photoshop for that.
Mindbending hacks. Thank you for sharing your time with us out here.
Pleasure
Enjoyed your walkthrough 👏
🙏🏻
Blend is your friend for bringing images from past versions to v5
/blend is just the lazy man's image prompt.
And yes, as you probably saw in the video, I did just that at the end of the video.
Thank you. Very clear and helpful!
Thanks to the Olivia guy who was using your prompts and let us know that there is a channel under your name where we can understand the work in more detail. You are doing a fantastic job
You mean Olivio Sarikas? 🙂🙏🏻
Very nice tutorial. I'm about to dive into a comic using v5 and this was super helpful.
Glad to hear!
Learned a lot! Liked and sub'd! Thanks for the detailed info.👍
Thanks for the sub!
You're an amazing instructor.
Thank you 🙏🏻🤗
Great video series. I have been using Runway AI in an attempt to create reliable AI models but hit a wall and started playing with MidJourney. Your videos have been a big help trying to navigate this other style of AI. Still running into walls but the walls get pushed back a little more each time.
I run into walls myself all the time. Comes with the territory 🤣
Thanks so much! This will help a ton for my next couple videos 🙏
Glad it was helpful!
This has been a really interesting experiment. I've been trying it out with a character that I got from MJ earlier. One thing I've noticed is that you really have to battle Midjourney to stop it from making your character younger and more attractive with every iteration - I'm trying to make a somewhat normal-looking guy, and the bot really wants to turn him into a supermodel 😂. I've been putting in age prompts (e.g. "25-year-old man"), and even then, I'm noticing you have to age the prompt up by a few years as it still wants to make the character younger.
If you're using --stylize, this might be causing the Benjamin Button effect.
Great video - thanks!!
Great vid! Been struggling also.
Glad it helped!
For you to point out the bias, that when you asked for "beautiful" the AI model seldom depicted anyone of color, makes me a fan of yours for life. I wish I could subscribe 1000 times. Thanks for that. 👍
To be fair, it's gotten much better recently.
Some of these tasks like switching pieces of clothing, adding background or showing more of the character are better suited for inpainting and outpainting. Using inpainting, you can regenerate parts of the image with a prompt that focuses more on that specific part of the image. Using outpainting, you can extend your image like zooming out with a camera. I think these are coming if not already available to Midjourney v5.
Agree. Which is why we are desperately waiting for MJ to add these features.
@@TokenizedAI Well there are alternatives that already have them
Yes, but they don't always output the same image quality. Sometimes it works, but usually the final result looks weird too.
@@TokenizedAI I like to combine them. For instance have mj give me something to work from then use advanced features elsewhere for fine control. sometimes even cycle that process.
@@cmilkauWhat are your go-to other sources?
Ah, another thing about POC. In creating a consistent white character, it seemed to me that it took fewer images -- why, I'm not sure.
When I needed to build a Black male and an Asian female, both as homicide detectives, it took many more images to get the character consistent. BUT, once done, once I had a prompt with like 20 URLs, the images came out consistent, pretty much every roll. Then, adding "in dress blues" or "at her desk on a computer" or "with a Glock on the shooting range" all worked AND kept the detective as the same person. The minor variations are what we'd expect from a real photo session with a live model; sure, some variations depending on lighting, mood, etc., but recognizably the same person.
Is it with V5?
@@umaruly_ai Yes, that's what I found in Version Five.
Great job you have done!! 👍
Thank you! 👍
Awesome Midjourney tutorial. I love to see how Midjourney gets better with every update
Me too!
Be damned if I can get a two-bun hairstyle in profile shots. Even tried prompting Princess Leia.
😆
Thank you!
You're welcome!
Thanks so much for putting in the work, Christian. It's still so frustrating how much variation MJ introduces each time. I see that you did get more consistency, but still, the clothes vary, and even the face varies. When will they simply let you create a character and stick to it?!
Well, I didn't really put much work into the clothes this time, so perhaps it's actually possible.
Great video, but V5 is just a clusterf*ck and does whatver the f*ck it wants.
Thanks! This is so helpful.
You're welcome!
awesome tutorial thank you
You're welcome!
Great tutorial! Subscribed!
Thanks for the sub!
10:55 Miss Carla here has a different cup size! While her eyes, face structure and hairstyle are consistent... the chest area and body shape are completely inconsistent (well, to an expert eye 😉)... lovely video as usual, thanks Christian 🙂
I guess my focus isn't so much on cup sizes 😉
You probably also realized that this exercise primarily focused on her face. The consistencies from before, which you are referring to, were mainly because v4 had very heavy default stylization and also, because my image references included full body images. So you're not really making a fair comparison.
Coloring pages in V5 are way better; after 3 days of trial and error, I managed to "cook" the right prompt for my needs 😉
Care to share what you learned? :) I’ve been getting decent coloring pages with “black and white vector coloring page” as the first part of the prompt, but it still struggles with varying detail, i.e. for kids' books vs adult books.
@@tsamb3756 did you learn something new for coloring pages?
V5 also doesn't understand what "boy" and "girl" are, and if you don't say directly that you want a male or female character, it will blend and mix them into the image in different proportions, sometimes to the point where you just want to "unsee this". This version of the product is making us use more words in the description, that's for sure.
great video, very informative
Glad it was helpful!
Agawam Skyline done with Midjourney 5😊
So basically, you might as well choose a version, start from scratch, and complete the character creation only with the chosen version. I am very new to Midjourney and trying to create a children's illustrated book. Yesterday I worked with v4, but the hands always turned out deformed, so I tried referencing a v4 image with a v5 word prompt... now I understand why I am NOT getting consistency. Thank you so much🙏
You're welcome!
When you use a seed with version 5, the characters are very similar, but you have to use it directly in your prompts with --seed and the number.
Using the image link and the same seed gives very similar results, much better than v4.
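For example (the URL and seed value here are made up, just to show the shape of the prompt):
/imagine prompt: https://example.com/carla.png portrait of the same woman, smiling, city street background --seed 1234 --v 5
tends to stay much closer to the original than the same prompt without the image link and seed.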
It took me a while to find out the importance of the order of the prompts. "scifi fantasy armor" will give you quite a different result to "fantasy scifi armor".
Sometimes it matters. Sometimes it doesn't. It's weird.
Excellent
Thanks
Thank you, this was informative. Are you going to do one for environments?
We'll see. I already have so many topics to cover 😞
Hey, really great video! I'm new to Midjourney prompt engineering and I was wondering about text prompt weights, like ::5. Couldn't you, for example, give the color of the pants a weight of ::3 to make Midjourney care more about it? Or do they not really work with v5?
In principle yes, but not with the granularity of control that you might think. Try it and you'll see what I mean 😉
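For reference, a minimal sketch of the multi-prompt syntax as I understand it (the weights here are just examples):
/imagine prompt: marvel comic illustration, female detective:: bright red leather pants::3 --v 5
Everything before a :: is treated as a separate part, and the number after :: sets that part's relative weight (1 is the default), so the pants get weighted more heavily here.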
So how do I get full body? I want a News Reporter standing at a crime scene.
"fully body image" or "full body shot". If that doesn't work, then you you're describing too many elements in detail.
@@TokenizedAI Thank You!
Are you creating all new prompts every-time you change something or are you remixing it and adjust the prompt?
Unless I say I'm remixing, I'm always creating new prompts.
Now there is a "describe" feature in midjourney where you put in an image and get out a description to learn how to write better prompts.
If that's what you think then you didn't pay attention to the MJ release notes for the describe feature. The MJ team has made it very clear that the /describe command does NOT output prompts that can be considered "good" or "well-formed". Quite the contrary. It's primarily useful for discovering words that have power.
@@TokenizedAI you are right I didn't read the release notes. I just discovered the describe command and made up my own mind about it. I noticed that its output is probably not of the quality of a complete prompt but it helps understanding what the right words for describing a certain style or object are. So you still learn how to write better prompts.
Hello Christian 😁 Thank you very much for your input! Is it possible to create multiple characters for one story?
Please check the playlist on Character Design. Your answer is in there.
Maybe use /describe on the old images to see which terms V5 uses to "see" them? Then copy those terms into your new choices for /imagine?
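I.e., something along these lines (just a sketch): run /describe with the old image attached, pick out the words that actually capture the look, and reuse them, e.g.
/imagine prompt: <the words you liked from the /describe output> --v 5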
So the rules have changed. What do you recommend using V5 - are Parts 1-5 still working prompt-wise, or is it so different that I'd have to do it totally differently? Thanks for explaining all the -- additions (--ar and so on)
Most of it, yes. Some of it is outdated.
Thank you so much for publishing these videos. They're very helpful. I am curious, have you found a successful prompt for a full-body, low-profile shot? I have found that version 5 doesn't produce consistent results using that parameter. Any suggestions?
Haven't worked on characters much lately, so don't really know 🤷🏻♂️
@10:21 the image on the right truly does look straight from a high quality comic book. You can visibly see single hatch line strokes to help build form, a nice line weight outline and nice lighting/shading overall.
How can you keep that style consistent across all the images? Any suggestions or thoughts?
That's still very difficult to achieve without further editing. To be honest, all of the artists and designers I've spoken to so far don't do their entire workflow in Midjourney. They literally use it to get "half way" (mid-journey) and then use other tools to finish it off and polish it. Trying to do everything 100% in Midjourney isn't a pragmatic approach.
Also how do you feel now that bookmarks are gone inside the midjourney app? The app continues to change. When I first started back in August of last year you could click on environments, characters, abstract, all sorts of categories. Now any image you bookmark is conglomerated into the liked or loved images. Bummer... k, im done now. Thanks for sharing!
I don't use the Web App, so I can't really tell you.
You said "stylize" when you mentioned the "--s" part of the prompt, but I was under the impression that in v5, we have the original "--s" flag and now we also have the "--stylize" flag too (which is different from --s).
Is "--s" just the shortened version of "--stylize", or are they different.
I thought "--s" was for the amount of closeness MJ would stick to your prompt text (higher numbers giving more artistic hallucination), and "--stylize" (in v5 only) was about getting MJ to add more v4 style creative flair (rather than realism).
Am I missing something?
docs.midjourney.com/docs/stylize
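If I recall the docs correctly, --s is simply the short form of --stylize (in both v4 and v5), so these two should be equivalent:
/imagine prompt: portrait of a detective --s 250 --v 5
/imagine prompt: portrait of a detective --stylize 250 --v 5
In v5 the value controls how strongly Midjourney applies its own aesthetic, with 100 as the default, but double-check the linked page rather than taking my word for it.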
Wonderful video, and a great learning resource. How did you get the location of the image prompts? I know how to generate the seed, but I don't know how to get the URL for the generated images you want to use as a prompt?
Right-click and copy link? 🙂
Hey Christian, thanks for all your work!
I watched your series on creating consistent characters, and there must be a part where you said how to make the URLs, but I just can't find it 🤣 Could you please repeat yourself one more time? 😅 Thx!!!
Right-click> Copy image link? 😉
Oh, I found it. Don't bother. Thx anyway~
I'm curious... I see that you are putting --v 5 at the end of your prompts. So does that mean in your settings you are on Version 4 or Version 3??? I stay in Version 3 and only upscale to the others when I'm not getting what I want. Also, version 3 tends to give too much detail, so I put --stop 90 at the end. It's a whole Midjourney vibe that I just love Midjourney for. Have you heard of anyone else staying in the artistic version 3 and only upscaling to newer versions using --v 4 or --v 5????
I've just added it to clarify that the prompt was used for --v 5
This video is great. It has gotten me further along with what I am trying to do than any training I've used so far. I'm stuck on one thing and cannot seem to get around it. I am working on creating realistic characters from the Underdog cartoon show. For now I'm just working on headshots to get the characters' facial features down. I can generate the character's upper body exactly the way I want it, style-wise. But the character needs to end up with a canine face because, well, Underdog and his love interest Polly Purebred are canines. Any way I configure the prompts gives me a perfect depiction of Polly Purebred but with a human face, and/or a human Polly Purebred with a dog off to the side in the image. Any ideas on how to create a prompt that will morph a dog-like face onto an otherwise human head would be greatly appreciated.
Anthropomorphic canine. I added that to my prompt and it is now generating characters with the facial characteristics I was after. Not perfect yet, but gives me what I need to build on. Just passing it along in case anyone else had similar issues.
I see you've already found the right term for this. Anthropomorphic is indeed the keyword you're after.
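For anyone else running into this, a sketch of the kind of prompt that keyword leads to (the details are illustrative, not the exact prompt used above):
/imagine prompt: photorealistic headshot of an anthropomorphic canine female news reporter, blonde hair, blue dress, studio lighting --v 5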
@@TokenizedAI Thanks for the reply back. I literally just took a shot in the dark that anthropomorphic might work. I didn't know at the time there was an "anthro" keyword I could have been using. 🙃
@@TokenizedAI I still have one problem which, according to the video here, you have as well. Underdog wears a red suit and a blue cape. Superman has a blue suit and a red cape. I have a feeling this is my problem. I keep getting a "Superman" color preference versus an "Underdog" color preference. No matter how I configure my prompt, I almost always end up with Underdog in a BLUE SUIT with a RED CAPE, like Superman, not like Underdog. I know that sounds silly, but that is what I am trying and needing to accomplish. I did already try configuring my prompt to tell MidJourney that I want my Underdog character to have a RED CAPE and a BLUE SUIT, the opposite of what I want, to see if it would make a difference. It made a difference in one out of four of the grids that I received back from MidJourney, but still not quite what I am looking for. Any further insight into how to "force" a color preference? Again, I know from reviewing your videos and documentation so far that you are seeing the same issue. (A character with a certain color preference for their pants ended up with a different color, in my recollection.) Just checking to see if you have come up with anything else that can be used to enforce a clothing option for characters. Thanks for any help you can provide and I'll forward over anything else if I can figure it out first. Thank you, sir.
My bro, thanks for all the tutorials. Small suggestion: I think you can condense the videos by using fewer face shots of you talking and more screenshots of Midjourney (maybe put yourself talking in a small circle on top of your screen share). But overall very good content, keep it up.
To be honest, video length is crucial for monetization, which is the only compensation I get for these. So, as much as I understand your point, my recommendation would be to just watch at 1.5x speed. I talk fairly slowly, so you should be fine 😅
I am thankful for the diversity
Any tips on getting Character emotions?
As in "facial expressions"? I showed this in this exact video. Alternatively, check out my Character Design series. There's a dedicated video on that in there.
Very interesting! Thank you! Would it also be possible to just continue using v4?
Yes, absolutely
I've been trying for days. I have one image of a character. How can I get multiple images from it? It's just not fully working...
I'm about to watch this video, hoping it will help. For some reason, MJ is ignoring my seed instructions and I get lovely images but with a different face. Even if I specify the seed number, the images turn out with a different seed.
The approach in Part 7 doesn't require any seeds. It uses image references.
@@TokenizedAI thank you Christian - Ill let u know how I got on with that
At 10:27 I think you made a mistake. You say you have changed the aspect ratio to "--iw 9:16". Don't you mean "--ar 9:16"?
Yes
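To spell out the difference (as I understand the v5 parameters): --ar sets the aspect ratio, e.g.
/imagine prompt: full body shot of Carla walking down a rainy street --ar 9:16 --v 5
while --iw is the image weight, which in v5 takes a plain number such as --iw 2 rather than a ratio.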
Did you notice v5 often adds some kind of watermark to images? E.g. image 3 @10:45, and there are a few other examples during this video. It's not a problem if the image isn't your best result, but I got some images that would be great except MJ gave them a watermark :/
I'll have to look into that. Have you considered using an AI eraser tool? I believe Google Photos has that integrated for free.
@@TokenizedAI As it was a really small watermark, I fixed it in Photoshop. I didn't know about the AI eraser tool, will definitely take a look, tnx.
You are far too kind Christian. v5 has killed Carla...and they have a LOT of work to do to bring her back to life. Imma gonna give Midjourney a break until they get their sh*t figured out.
You guys are way too picky 🤣
Tks
Version 5 requires new prompt structures compared to version 4. To improve results, move the style part to the front of the prompt and experiment with combining various styles, such as Marvel comic illustration, 3D animation, and anime. Embrace the learning experience and continue experimenting with different prompt structures and styles to achieve your desired outcome.
You can't "learn" something that keeps giving you different shit when given identical prompts.
Did you just paraphrase what I said in the video? 🤣
@@TokenizedAI Just kind of like mentioned key points 😁
I am brand new to Midjourney and trying to figure it all out, which is harder than I thought :) But where did you get the image prompts from? I thought it was the browser link of the image when you open it in the browser, but then it did something weird to my picture (I might have nightmares about that one :)).
You can use whatever you want as an image prompt.
Hey Christian, first thank you for the amazing content. Please let us know how we can support you so we get more content like this. So my question: what about the seeds for characters that we used in MJ v4? I am trying to get a consistent anime-style character. I was happy to see you add anime, and it is very, very amazing, but I am still struggling to get the result that I am looking for. If you can, please help us with different styles for anime and how we can keep them consistent, because it keeps changing all the time.
Thank you
I haven't experimented with consistent Anime characters to be honest. I also feel I'm not well-versed enough in Anime in general to do so.
Hello! Super useful video, I wonder - My wife is an artist and we thought of using a fairytale character she would draw and we would use MJ to create various other scenes, backgrounds etc, while keeping same visual style etc of the character.
But I am currently unable to recreate the exact/almost exact style of the same character in different settings. Would that be even possible now? Thanks!
Unless it originated from MJ directly, that would be difficult.
@@TokenizedAI Thanks for the reply!
Is it possible to produce two or more consistent characters across multiple frames? For example if I have generated character A in a series of images using your above method; and do the same for character B -- but now I want to generate a new series where character A and B are interacting with one another -- it always seems to blend them together -- making this impossible. I'd be fascinated to hear if you've experimented with this process and made any headway. Thank you.
I suggest you check out the full character design series.
Thanks. I noticed you struggled with getting Midjourney to draw Carla in her full body pose. I wonder if there are any tricks to force MJ to show the full body, perhaps pose style names?
I didn't struggle. I explained at the end of the video why you can't use a headshot as a reference and expect a full body.
@Tokenized AI by Christian Heidorn oh I see, didn't catch that. Makes sense
How do you obtain the link of the image at 7:54? Just with right-click and "Copy image link"?
Yes
3:47 Bruh…
I am trying to base a character off of myself, but this is impossible if I describe the image in any way. Evidently, Midjourney does not like my face, and if I say "woman" in the prompt it tries to make me better looking. I have gotten some success with using the images only, but when I try to add descriptions of the scene, it changes so much that I have to do just as many re-rolls as I did to perfect the character from the start. Any tips?
I would recommend not trying to replicate yourself. It's the quickest path to frustration 😅
Super, thanks!! If I succeed with my project I won't forget to send you money!!
No more seeds to generate consistent characters? I can't even get Discord to give me the seed with V5, any ideas?
You can only get the seed for image grids, not upscaled individual images, because V5 doesn't actually have an upscaler yet
How do I find or get my image references??? Can't find them.
I don't understand your question. Only you know where your image references are.
@@TokenizedAI Sorry, I used the wrong word. I mean at 6:02 in the video there are 3 image prompts, one for each of your images. How do I find the image prompts for my image? I tried copying the link, but it's not the same, and the character image is never consistent. It's always giving me random designs of my character.
What about Niji 5? Is the logic for style prompts the same? I used the same prompts and got different results relating to style, and I was very specific. I also liked one of the styles and used the describe function, but just couldn't replicate that specific style :(. What I would like to know is how to extract just the style, but not the character, because if I use images the character also gets inspired by them, and I don't want that, since my characters will all start to have the same facial features.
But overall I liked this video: very interesting, informative, simple, with good examples of actions and so on.
What about it?
@@TokenizedAI In Niji 5, I generated one style that I like, but using the same prompts I would get different results. If I use an image as a reference, then the style transfers, but the character also gets inspired by the reference, when I want totally different features and just want to maintain the style from a specific image.
I can't really say if the method works for Niji as well because it's an entirely different diffusion model and works differently from the regular v5.
@@TokenizedAI okay thanks, I'll experiment
Yes🎉
I‘ve experienced that midjourney is only bad when it comes to specific clothes. White pants will not work well while a red hat is no problem. Seems to be more of a filter than a capability issue.
Assigning colors to clothes is such a pain.