Really nice tutorial!! A question: did you ever try to put 2 different characters with photo references in the same scene? I have a problem: both characters mix into a single character... Thanks a lot
That's a really common problem in Midjourney. There is no 'solution' for that right now, but my best suggestion would be to create the picture you want using another program (like going into Photoshop, putting your two characters beside each other, then saving that image and using it as an image prompt)... otherwise you'll probably get that blending/mixing
@@FutureTechPilot yes, I’ve been searching in other tutorials and also investigating on my side, and that seems to be a good solution. Still a little bit of mixing, but almost there. Thanks!
Thank you, as always. Is it only possible to take pictures from Midjourney? Is there any way to give my own photos various poses and facial expressions, for example?
You start out by recommending against trying to do animated characters. But what steps might you recommend if you did want to do some kind of animated style character?
I'm also looking for a way to create consistent comic book characters. Perhaps picking a genre or cartoonist will help with consistency, and then using some of the tricks from the video.
I'll make a video on this for sure. But I'll tell you my rough notes --

Process
- getting the character in a scene will be difficult, so you're probably going to have to make a bunch of different elements (backgrounds, characters) and then composite them together in something like Photoshop
- you're going to get pictures of your character using a phrase like 'character sheet', then divide them up in Photoshop and use the individual pictures as image prompts
- you can try generating in Niji Journey as well for a more consistent style
- utilize the 'variation' buttons - whenever you find a picture you like, hit the variation on it and get as many looks of your character as possible

For your prompt
- keep the design simple, as simple as can be
- mention an artist (I don't personally feel comfortable doing that, but it is something that would work)
- use 'multiple expressions, dynamic poses'
- try '6 different panels' if you're in a wide aspect ratio, or '4 different panels' if it's more vertical
- try 'storyboard'
- try 'character design'
- try 'character study sheet'
- image prompt successful pictures

Hope that helps!
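To make those notes concrete, here is a sketch of the kind of prompt they describe - the subject and exact phrasing are illustrative, not a guaranteed recipe:

```
/imagine prompt: character sheet, simple cartoon fox character, multiple expressions, dynamic poses, 6 different panels --ar 16:9 --niji 5
```

You would then crop the panels you like in Photoshop and feed them back in as image prompts.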
Thanks for the tutorial. Can I upload images from my vast image bank of fashion work using existing faces or mixing existing faces into one face as a consistent "character" for all of my images? A virtual model to work from and also my photography style?
No, I don't think you can do what you're asking for in Midjourney. But check out this video - ua-cam.com/video/JupUUoQQe0g/v-deo.html - maybe you'll find some way to use that technique
Thanks for exploding some of these persistent myths! Consistency is not what MJ is about right now. To me, it seems easier to create characters and backgrounds separately and then combine them in Photoshop or Clip Studio Paint. I'd like to try these tips with comic book characters. Perhaps getting consistency by naming a genre or artist. Thanks again!
I think it could be pretty powerful, but the biggest problem is going to be keeping the same outfit/clothing from scene to scene and I think that is what most people are looking for
About the looking-away-from-the-viewer problem: at times you got images where she was looking just slightly away. Couldn't you use those to get new references, pile them onto your prompt, and slowly move away from your original pose, getting a more complete set of references, maybe selecting the best ones in the process?
Thanks a lot for this - amazing video. Question: can you also create new photos based on a real person (when uploading a photo) to create a story around an existing person? Thank you.
No unfortunately, not at the moment. I mean, not in the way you're thinking. But if you wanted to try, it would be the same ideas as this video (including reference images in your prompt)
Remix is MJ's version of img2img (from Stable Diffusion), but without any control over degree of change (and of course they don't let you upload your own images) - I guess it's similar to always setting original image strength to 75%. So if your new prompt wants large changes, that's when the weird stuff happens - when it tries to fit the pose or extra details you want into the colour blocks that already exist.
To have the character shopping or in a different position, you can try lowering the image weight. The range is 0 to 2, and the new version lets us use values below 0.5. Example: (--iw 0.4)
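For example, a hypothetical prompt pairing a character reference with a lowered image weight (the link is a placeholder for your own uploaded image):

```
/imagine prompt: https://example.com/character.png the same woman shopping in a grocery store --iw 0.4
```

A lower --iw tells Midjourney to lean more on the text and less on the reference image, which makes pose and scene changes easier.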
Thanks, there were some ideas here. My biggest issue is using a character's face image in other images while requesting subtle changes like eye colour or hair styles. I have had some success with "use this image for the face", but it just copies the face and hair from the source image. Another thing: I expected you to mention the envelope reaction and using --seed, but I noticed it wasn't included. Does this have any value?
I think there is a lot of bad advice out there about seeds and I really don't think they help at all with consistency. In fact, they probably hold it back. Seeds are great for testing out changes to a generation (like a person with a happy face/sad face), but they don't help you take a character and put them in different scenes.
AGREE about Remix. When it first came out, the 'promise' of what it said it would do was great. But in reality, all I ever get is a hot mess. Would like to know if ANYONE is using Remix with any kind of success.
I heard a funny trick recently... hit reroll on the remix? or maybe leave remix on, hit 'reroll' and run the prompt again through remix. Sorry if that's hard to explain, I think I'll make a video about it soon
Your tips are great. I do notice it is hard for midjourney to change clothing style. For example once they are in pants it seems they will always be in pants.
@@FutureTechPilot It is like the old 60s western shows like Big Valley and Bonanza where they change everything but their costumes. It will change the color and pattern, but the basic style remains the same. I have not been able to get it to change from cargo shorts to a dress.
@@FutureTechPilot I did play around with weight prompts and seem to get some change in the clothing. There is a new feature I noticed called remaster. Do you know what that does?
@@jamesm7653 Remaster was a thing in Version 3 to update a picture to a newer algorithm.. I just looked for it in V5.1 and didn't see it. Where did you find it?
Remix in MJ has (almost) _always_ let me down. Loss of quality or definition, added artefacts, disfigurements, loss of detail, etc. Perhaps I'm not using it well, but it's not the best. Bottom line, it's not that easy to create consistent characters. Thank you for the video though. 👍 I need to try the blend idea.
I am trying to create website cover photos from my pictures, and I want the output to use my character. Is this possible using any AI tool? Or will each of these tools fail to replicate the character exactly?
Favorite doesn't work, but rating your images on your Midjourney profile should. You have 4 rating options on that screen for every individual image you upscaled. Theoretically it gives better results over time.
TY for another great video. If you include "candid view" or "candid angle" you can get a less self-aware result, more fly-on-the-wall style. Favouring the word "view" or "angle" instead of "shot" usually reduces the likelihood of including a camera in the image. Using the phrase "instagram model" is way too self conscious and loaded toward a "fashionista" "look-at-me" "ain't I pretty" outcome. It's not well suited if you're looking for a CSI style. IMHO.
Is it possible that Midjourney is seeing 'instagram model' and making sure it's a model pose, rather than a more natural or film pose? Maybe 'beautiful woman' or 'female detective' would work better to create a character that doesn't have as much of a 'style'.
That's a good point! My personal belief is that it will just naturally make any subject face the camera, unless you prompt otherwise. Which is something I should mention in a future video!
@@FutureTechPilot thank you, it's a bit confusing, but I'll work with it. I regret ever using it; now every time I don't specify, it gives me that same face. I have several characters, and I don't always want to see that one.
@@LolaMoonflower unfortunately that's just how the AI looks right now! You can hit 'Creative or Subtle Upscale' underneath a picture you choose - but if you want it at 300 DPI, you might need to use another program like Photoshop
Hey Nolan! I have an idea for a video that would be perfect for your channel, and I think it would be helpful for a lot of people as well. I’d love to have a quick chat with you if you were open to it. Is there a way to contact you outside of YouTube comments to discuss? Cheers!
I've made almost 40 thousand images and I don't think 'seed' helps in the way people think it will. There is lots of misinformation around it. And in my opinion, it's only going to limit your generation. There are over 4 billion seeds for every prompt, and I like letting Midjourney choose the random seed
Funny - Midjourney immediately ignored 'full body'. It would be cool if MJ had something like a thumbnail of poses it could reference, so that's never an issue.
Sorry, but adding a weight to text prompts ONLY influences the weight of that text prompt relative to other text prompts. If you want to influence the text vs image prompt weight, you use --iw [0-2]: 0 being not at all, 2 being the highest it can go.
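As a rough illustration of the difference (subjects and values are arbitrary): the :: syntax weights parts of the text against each other, while --iw scales the image prompt against the text as a whole:

```
/imagine prompt: https://example.com/ref.png cyberpunk street::2 rain and neon::1 --iw 1.5
```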
lmaoo you gotta add in way more prompts than that. Just type in that she should look off camera and done. You can't be specific with the parameters and not type in more prompts where you actually tell Midjourney what it should do
Is there something wrong with the video, or is YouTube just breaking down??? It won't load. Other videos will... This one takes such a lloooooooonnnnng time to load.
Maybe! But my understanding is that it wouldn't really matter. I think the better solution would be to get as many views of your character as possible, and then use what you needed for the prompt. Maybe I should make an updated version of this video!
Thank you for your tutorial, but you jump around too much between phrases - it's very distracting and we cannot follow it. Everybody does that now on YouTube, and it's a very wrong way to explain something. Please slow down a little bit. Thank you very much.
Check out my NEW Prompt Pack for sale - www.futuretechpilot.com/shop/p/midjourney-variety-prompt-pack
No description about what the prompts are NOR what they do. SMH
@@SnipsOfTime69 I'll try to do better in the future
@@FutureTechPilot - Just saying it because I know you would get more sales in the future if you do.
This is so good, especially the explanation of those magic -- arguments, thank you!
Thanks a lot for the feedback! I'm really glad my video worked well for you
Man, your videos are really satisfying to watch. Thanks
I appreciate the comment! Thanks a lot, pal
Finally someone who doesn't want to make the viewers believe that he has the holy grail. Keep it up!
haha just trying my best. Thanks for the comment!
Excellent! ...and you mentioned the magic word: Photoshop! For graphic designers this AI is invaluable... most people will just play around with it! EXAMPLE: A colleague of mine designs and makes bespoke clothing. I can reproduce an exact 'look' in a number of poses without needing to do a location camera shoot! This becomes particularly useful when you use the 'merging' of image and background. I found the process frustrating at first, but as you say... once you have the images you need and the suitable prompts 'pasted', the world has no limits! Thank you.
I couldn't agree more! Thanks for sharing your use case. I think lots of people will be able to relate, especially within the next 2 years or so
You are successfully keeping character and outfit for different scenes? Or even different poses?
@@mattduncan5500 Usually different scenes, but poses are workable...
That's just what I needed to use Midjourney more!! Thanks man, always appreciate your content 🎉
And I appreciate the comment, cheers buddy
I teach Midjourney full time on Fiverr and I do commissions and prompt building. I just wanted to say I am really impressed with this video; it is definitely the best I have seen so far. The rampant misinformation that is spread on YouTube is highly annoying, so this is truly a breath of fresh air. I have subscribed and I’m looking forward to seeing more.
haha I know what you mean about misinformation ... thanks for the comment and I'm glad we share a common interest!
Hey man, just wanted to say thank you. There have been a lot of content grabs around this challenge with really low quality. It was heavily affecting the use cases for Midjourney for me. 🙏🙏
haha cheers buddy! Thanks for the kind words
0:30 "Keep your prompt simple."
All of this advanced, high-level tech today... and still the programmers missed - by a mile - interpreting the prompt: "same character, different scene."
I can't wait to see what version 7 is like!
Do you have any tips for using two specific characters in the same scene, considering I have portraits of both and need them in a particular setting doing particular things? Let's say you want that exact (or very similar) silver-haired model girl sitting on a bench next to that purple-haired girl, exchanging a handshake - how would you do that?
Also, I'm curious if there's any way to make custom variables in Midjourney. I know we can set an option list, but it doesn't seem quite the same. What I mean is something like this:
_Alice = "(link to portrait) instagram model with pointy ears dressed like a rabbit"
_Bob = "(link to portrait) middle aged man, blue hair, colorful clothes"
_My_style = "art style blend of photoreal and whatever specific artist"
/imagine _Alice and _Bob walking through the modern city, looking surprised, _My_style
Something like that could make creating illustrations a lot less of a pain, especially if variables with an image link were assumed to describe a character that is supposed to look similar and consistent.
Anyway, thanks for this video and your channel overall, I find it very informative, considering I'm just starting to learn Midjourney.
To answer your first question, Midjourney doesn't handle more than one subject very well right now. However, there is something you could try: I would get pictures of both characters I want, then 'stitch' them together in Photoshop, and then use that image as an image prompt.
^ I don't know if that would work, but it came to mind when you asked.
And about the custom variables, no, that's not a part of Midjourney yet. But they are working on something like that for sure - the ability to 'save' characters and recall them in different scenes. I don't have any inside information; that's just what I understand from hearing them talk about it
@@FutureTechPilot thanks for the reply! That's a good idea, I'll try to stitch them in Photoshop. Or, actually, I don't even need to hehe - they are 3D models, so I can roughly position them and render in any pose. It's usually getting details that makes it hard, plus DAZ Studio Genesis 8 characters are still on the verge of the uncanny valley, so readers don't really like them, unlike Midjourney stuff.
And glad to hear there are at least talks in that direction. I'm honestly looking forward to what the future brings to Midjourney, and it would be nice to learn this tech before it fully gets there. Someday maybe I'll be able to visualize my writing myself, whatever way I want, despite not being able to draw much at all.
@@LoneIrbis haha the future will be fun!
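As an aside, the 'custom variables' idea from this thread can be approximated locally today: a small script can expand saved character descriptions into a full prompt string before you paste it into /imagine. Everything below (names, URLs) is illustrative - this is not a Midjourney feature:

```python
# Hypothetical local workaround: keep your "variables" in a dictionary and
# expand them into a full Midjourney prompt before pasting it into /imagine.
CHARACTERS = {
    "_Alice": "https://example.com/alice.png instagram model with pointy ears dressed like a rabbit",
    "_Bob": "https://example.com/bob.png middle-aged man, blue hair, colorful clothes",
    "_My_style": "art style blend of photoreal and a specific artist",
}

def expand(template: str) -> str:
    """Replace each _Name token with its saved description."""
    for name, description in CHARACTERS.items():
        template = template.replace(name, description)
    return template

print(expand("_Alice and _Bob walking through the modern city, looking surprised, _My_style"))
```

You still have to paste the result into Discord by hand, but it keeps each character's description (and reference link) identical across prompts.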
I think the company is actually working on this feature to get consistent characters, especially when doing character concepts.
Yeah you're right! Maybe in a couple of months, this will all be very easy lol
Is there a way to get the latest updates on this topic?
@@alexanderg8144 Did you ever find out?
It will be a lovely day when we can generate consistent characters in Midjourney! I think it will happen eventually. I'm interested to see all the stories people will create once we can generate consistent characters.
I completely agree! It's going to be super cool to see what gets created
Where did you get the link "that was shortened"? @3:56
Once you generate a prompt, the link will shorten
@@FutureTechPilot Will try it. Thank you.
This was great! I'm really new to Midjourney, and I have already gotten nice results.
Awesome! I'm glad I could help
Just what I needed a reference image with different scenes, thanks 🤩💙
hope it helps!!
It is a LOT easier merging prompts separately - i.e. person, and then background... you'll have a lot more success this way... Enjoy! Phil
Thank you. If you don’t mind, I will mention the same in one of my videos and quote you, because I feel you nailed it and there is no point in me covering it again; I reached the same conclusion.
Sure thing pal!
My video is almost ready and should be live on Friday at 8:30 Eastern time. I will link your channel, as I have managed to solve all the issues and gain 100% consistency using another tool together with Midjourney. Thank you.
Thank you for your simplicity.
Happy to help, pal!
I'm receiving this error, how can i get past it: Cannot use --version 5 with only a single image prompt.
Please add another image prompt, or a text prompt.
Yeah you need to include some words after the image link, or another image link
There is one exception to reacting to your images in Midjourney - if you react with the envelope emoji, it will send you a private message with that image, so you can find out the seed number and have a link to go back to the image whenever you want.
Yeah super helpful
Midjourney views things as archetypes, from my understanding, and will stick to general things about them. The strength with which Midjourney sticks to them depends on the "version" you're using.
Consider re-wording: "model" is an archetype, and one of the things models are known for is looking into the camera, since they are generally posing for one. Using the term "model" will increase the likelihood of the subject looking at the camera. Try to find a replacement for "model." Here are some examples to further demonstrate what I mean:
Example 1 - say I wanted a cat with purple hair. In this case, the archetype is "cat" and cats don't have purple hair. As a result, Midjourney will resist creating this. Again, to what degree depends on the "version" you are using. So, in this case, you might try saying "felinoid" (I believe that's the term - like humanoid). Now Midjourney doesn't view the subject as a cat and will be more likely to produce what you want, because you have changed the archetype.
Example 2 - If I wanted to create an image of She-Hulk (the comic version not the show lol) I would not want to say "A woman with green skin" because women don't have green skin. So instead I might say something like "Female humanoid body-builder" so now Midjourney focuses on the subject being a humanoid instead of a regular human. Again, this changes the archetype.
This is how I have come to understand it so far. Results will still vary, because of course they will lol, but it does help in guiding Midjourney closer to what you are looking for. I hope this helps.
love your content, keep up the great work my friend. 😊😊😊
That is a fascinating way of looking at things, thank you for sharing. I knew 'model' came with a 'look' but I never thought of the eye contact.
I really like that idea of prompting in terms of broader categories of description ... it's kinda hard to wrap my mind around, in terms of how to teach that to others. But I'll sit with it and try to make a video on it soon. Thanks again for explaining it in clear terms, I'll definitely give you credit when I make the video.
Cheers!
@@FutureTechPilot No problem, I am glad to help where I can. I forgot to mention one other thing as well, regarding the part of your video where you mentioned trying to get your subject driving a car. When it comes to different scenes, try the action first, like "driving a car" and see if it works. If it doesn't, then try giving Midjourney a small scenario instead. In this case, a scenario in which a person would logically be driving a car. So basically, try the action you want first and if that fails, switch to a scenario which would contain the action you are looking for. I hope this makes sense.
The first subject you describe in the prompt, in this case a woman, is going to be the center point of the image. The second subject, the car, will be given less weight. Try switching the order and make the car the center focus. Also, make sure there are some words between the subjects. I have heard between 4-6. I believe that's what it was:
[Subject 1] Descriptive words for subject 1, [Subject 2] Descriptive words for subject 2.
The best analogy I have come across is that of a story. Your first subject is the main character and the second subject is the supporting character.
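To make that ordering concrete, here is a hypothetical prompt where the car is the main character and the woman is the supporting one, with a few descriptive words between the two subjects:

```
/imagine prompt: a vintage car, gleaming chrome and rain-soaked paint, a woman in a trench coat leaning against its door
```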
If you put a theme or a style before the first subject, it should affect the rest of the prompt. Not always, though. Usually, style/theme is the only thing I tend to put before my first subject. For more on that, I recommend the following YouTube channels (I have no affiliation with either one):
1. Making the Photo
2. Thaeyne
Also, make sure not to use unnecessary words. Every word or special character (like + \ - and so on) adds "noise" to the image. The more noise there is, the less accurate the bot will be. I have heard that anything after 20 or so words begins to lose relevance. I have never figured out an exact word count, so results will vary. But there does seem to be something to that.
Once again, I hope this helps. We're all learning these things together.
Really helpful video. Thank you!
That's great to hear, I'm glad I could help!
Did you also try keeping the same seed number of a rendered scene you like, and developing that to try to maintain some consistency as you guide the new prompts?
It's a good suggestion! My instinct tells me that locking in a seed helps more for finding differences than keeping similarities, but it's worth exploring more. Thanks
@Future Tech Pilot Is there now any feature in midjourney to keep the same character? Thanks
No unfortunately. I'll say maybe we get something in the next 6 months (but I really don't have any inside information)
@@FutureTechPilot what about Leonardo. Does it give consistent same character ?
@@mania7927 kind of, but probably not in the way you're thinking
This was very helpful, thank you.
Thanks a lot for the feedback! I appreciate it
I have found something that may work with the remix feature .. I start with a prompt (say "abstract whimsical cow") .. I choose the option I like best and use the remix feature on it (to change it to say "abstract whimsical tiger") .. now this usually produces a weird looking cow that is slightly tigerish .. but the trick is to then select the reroll feature and I have found this usually gives me what I was hoping to remix to (i.e. abstract whimsical tiger) in the style of the original selected version ... my goodness I hope this makes sense. The main trick is to hit the reroll after the initial remix change
Wow! What a cool tip. I'll have to try it out now. Thank you!
Just tried that out, and holy cow, that was incredible!
@@edmoala so glad it worked for you
holey cow! that worked. thanks for the tip
This is amazing man
Cheers buddy!
Why does it almost always cut off the top of the model's head? How do I keep the entire head in frame?
It's a tricky one for sure, 'full body' usually helps with that. Or you could try changing the aspect ratio - (I know that's not super helpful but it could work for sure)
This was awesome! Thanks a lot!
You're very welcome!
Really nice tutorial!! A question: have you ever tried to put 2 different characters with photo references in the same scene? I have a problem: both characters mix into a single character... Thanks a lot
That's a really common problem in Midjourney. There is no 'solution' for that now, but my best suggestion would be to create the picture you want using another program (like go into photoshop and put your two characters beside each other, and then save that image and use it as an image prompt) ... otherwise you'll probably get that blending/mixing
@@FutureTechPilot yes, I’ve been searching in other tutorials and also investigating on my side, and that seems to be a good solution. Still a little bit of mixing, but almost there. Thanks!
Fantastic! thank you!
Cheers man!
Hello. How can I edit a portrait photo I took using this method, for example?
Right now I don't think you can edit pictures using Midjourney in the way you're thinking
Thank you as always. Is it only possible to generate pictures in Midjourney? Is there any way to give my own photos various poses and facial expressions, for example?
No, not right now. But they're definitely working on character consistency and posing tools! Keep an eye out for updates within the next few months
You start out by recommending against trying to do animated characters. But what steps might you recommend if you did want to do some kind of animated style character?
I'm also looking for a way to create consistent comic book characters. perhaps if you pick a genre or cartoonist that will help with consistency. And then use some of the tricks from the video.
I'll make a video on this for sure. But I'll tell you my rough notes --
Process
- getting the character in a scene will be difficult, so you're probably going to have to make a bunch of different elements (backgrounds, characters) and then composite them together in something like photoshop
- you're going to get pictures of your character using a phrase like 'character sheet' and then divide them up in photoshop, and use the individual pictures as image prompts
- you can try generating in Niji journey as well for a more consistent style
- utilize the 'variation' buttons - whenever you find a picture you like, hit the variation on it and get as many looks of your character as possible
For your prompt
- keep the design simple, as simple as can be
- mention an artist (I don't personally feel comfortable doing that but it is something that would work)
- use 'multiple expressions, dynamic poses'
- try '6 different panels' if you're in a wide aspect ratio. try '4 different panels' if it's more vertical
- try 'storyboard'
- try 'character design'
- try 'character study sheet'
- image prompt successful pictures
hope that helps
@@FutureTechPilot Thank you so much!
Great video 🎉
🤝 thanks!
Thanks for the tutorial. Can I upload images from my vast image bank of fashion work, using existing faces (or mixing existing faces into one face) as a consistent "character" for all of my images? A virtual model to work from, in my own photography style?
No, I don't think you can do what you're asking for in Midjourney. But check out this video - ua-cam.com/video/JupUUoQQe0g/v-deo.html - maybe you'll find some way to use that technique
Good stuff 👏
Cheers buddy!
Very helpful, but it also is more evidence that I need to start learning Stable Diffusion 😂
hahah yeah I know what you mean...
Thanks for exploding some of these persistent myths! Consistency is not what MJ is about right now. To me, it seems easier to create characters and backgrounds separately and then combine them in Photoshop or Clip Studio Paint. I'd like to try these tips with comic book characters. Perhaps getting consistency by naming a genre or artist. Thanks again!
And thank you for your thoughts! Cheers buddy
I struggle so much with this. So many characters. Getting two in same scene is tough.
yeah two coherent characters isn't really possible right now
how about using this technique together with InsightFaceSwap?
I think it could be pretty powerful, but the biggest problem is going to be keeping the same outfit/clothing from scene to scene and I think that is what most people are looking for
@@FutureTechPilot certainly. the holy trinity of consistency that we're all waiting for is: faces x clothes x setting
how about consistent of multiple characters?
Not possible right now!
Can't we use --no looking at camera , to not have her look at it ?
That's a good idea!! Thanks for the suggestion, I'll try it out
About the looking-away-from-the-viewer problem. At times you got images where she was looking just slightly away. Couldn't you use those to get new references, pile them onto your prompt, and slowly move away from your original pose, getting a more complete set of references, maybe selecting the best ones in the process?
Yes, excellent point! It's definitely an iterative process and doing as you suggested would be a good routine to try
Thanks a lot for this - amazing video.
Question: can you also create new photos based on a real person (when uploading a photo) to create a story around an existing person?
Thank you.
No unfortunately, not at the moment. I mean, not in the way you're thinking. But if you wanted to try, it would be the same ideas as this video (including reference images in your prompt)
Remix is MJ's version of img2img (from Stable Diffusion), but without any control over degree of change (and of course they don't let you upload your own images) - I guess it's similar to always setting original image strength to 75%. So if your new prompt wants large changes, that's when the weird stuff happens - when it tries to fit the pose or extra details you want into the colour blocks that already exist.
Yeah it's not very intuitive to work with
damn, bro you're super sharp, 20/20
😂🤝
Solid video!
Thanks!
To have the character shopping or in a different position you can try lowering the image weight. It goes from 0 to 2, and the new version lets us go below 0.5. Example: (--iw 0.4)
That's a really good point that I should have thought of! Thanks for the suggestion
can you expand?? what would the prompt be?
this is with image reference ?
Be great if you revealed some Leonardo ai tips & tricks too! Thanks
I haven't spent much time with Leonardo but it's on the to-do list!
Thanks so much 💭🎨🎬👥
Cheers!
Thanks, there were some ideas here. My biggest issue is using a character face image in other images while requesting subtle changes like eye colour or hair styles. I have had some success with "use this image for face" but it just puts in the face and hair from the source image.
Another thing, I expected you to mention the envelope reaction and using --seed but I noticed it wasn't included. Does this have any value?
I think there is a lot of bad advice out there about seeds and I really don't think they help at all with consistency. In fact, they probably hold it back. Seeds are great for testing out changes to a generation (like a person with a happy face/sad face), but they don't help you take a character and put them in different scenes.
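To illustrate the testing use case described above (and not consistency across scenes), locking the seed lets you change one word and compare results; the seed number here is arbitrary, just for illustration:

```
/imagine portrait of a woman, happy expression --seed 1234
/imagine portrait of a woman, sad expression --seed 1234
```

Because both prompts start from the same initial noise, the differences you see come mostly from the changed word rather than from random variation.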
AGREE about Remix. When it first came out, the 'promise' of what it said it would do was great. But in reality, all I ever get is a hot mess. Would like to know if ANYONE is using Remix with any kind of success.
I heard a funny trick recently... hit reroll on the remix? or maybe leave remix on, hit 'reroll' and run the prompt again through remix. Sorry if that's hard to explain, I think I'll make a video about it soon
Your tips are great. I do notice it is hard for midjourney to change clothing style. For example once they are in pants it seems they will always be in pants.
Really? I hadn't tried to change the fashion style, but I believe you lol. Doesn't sound like something Midjourney is capable of solving right now
@@FutureTechPilot It is like the old 60s western shows like Big Valley and Bonanza where they change everything but their costumes. It will change the color and pattern, but the basic style remains the same. I have not been able to get it to change from cargo shorts to a dress.
@@FutureTechPilot I did play around with weight prompts and seem to get some change in the clothing. There is a new feature I noticed called remaster. Do you know what that does?
@@jamesm7653 Remaster was a thing in Version 3 to update a picture to a newer algorithm... I just looked for it in V5.1 and didn't see it. Where did you find it?
The experimenting is the most frustrating issue I have with working with Midjourney. Especially when you need to post your content w/in the next hour.
haha yeah Midjourney is all about experiments!
Remix in MJ has (almost) _always_ let me down. Loss of quality or definition, added artefacts, disfigurements, loss of detail, etc. Perhaps I'm not using it well, but it's not the best. Bottom line, it's not that easy to create consistent characters.
Thank you for the video though. 👍 I need to try the blend idea.
😂 I've never met someone who felt confident using remix. And /blend worked out waaay better than I would have thought lol
I noticed you did not mention seeds. There were times I used seeds and the images got much worse.
That's one of the reasons I didn't mention them! They aren't a key to consistency at all; in fact, I think they hold you back!
I am trying to create website cover photos from my pictures. I want the output to use my character. Is this possible using any ai tool? Or each one of these will not replicate the character exactly?
unfortunately - you can't accomplish that with Midjourney! But you should search for 'Stable Diffusion' on UA-cam - you might be able to use that!
Am I the only one who did not see how he placed the separate background? I watched twice and still didn't see where you put the background link
I placed the background link right next to the link of the character! You paste them back to back in the prompt (leave a space between them of course)
There's no copy image address for me 🤷🏻♂️
very very strange! Let me know if you figure out what's wrong
@@FutureTechPilot thanks I'll run through it again today, love your vids by the way, cheers 🥂
Favorite doesn't work, but rating your images on your Midjourney profile should. You have 4 rating options on that screen, for every individual image you upscaled. Theoretically it gives better results over time
Rating your images helps Midjourney but it doesn't teach a specific style, just whether an image was 'pretty' or not
TY for another great video.
If you include "candid view" or "candid angle" you can get a less self-aware result, more fly-on-the-wall style. Favouring the word "view" or "angle" instead of "shot" usually reduces the likelihood of including a camera in the image. Using the phrase "instagram model" is way too self conscious and loaded toward a "fashionista" "look-at-me" "ain't I pretty" outcome. It's not well suited if you're looking for a CSI style. IMHO.
Lots of good points! Thanks for your thoughts. I wrote them down and I'll point them out in a future video
@@FutureTechPilot Just my opinion. Thanks for remaining open to feedback. We're all learning to shake the berries from this bush. 🪇🫨
Is it possible that Midjourney is seeing 'instagram model' and making sure it's a model pose, rather than a more natural or film pose? Maybe 'beautiful woman' or 'female detective' would work better to create a character that doesn't have as much of a 'style'
That's a good point! My personal belief is that it will just naturally make any subject face the camera, unless you prompt otherwise. Which is something I should mention in a future video!
Variations are worthless, they are always worse pictures. Not even sure why they have this feature.
I find that variations can help a lot! Especially fixing a 'broken' picture
Used to be good, but lately they kinda get all ugly very fast👀
Nope
How do I change to a new cref?
ua-cam.com/video/zgeI_ffCudY/v-deo.html - maybe that video might help you?
@@FutureTechPilot thank you, it's a bit confusing, however I'll work with it..
I regret ever using it; now every time I don't specify it gives me that same face. I have several characters, I don't always want to see that one..
Maybe you can point me in the right direction for getting 300dpi .. my images come out blurry..
@@LolaMoonflower unfortunately that's just how the ai looks right now! You can hit 'Creative or Subtle Upscale' underneath a picture you choose - but if you want it to be 300dpi, you might need to use another program like photoshop
@@FutureTechPilot thanks I'll try it!
Hey Nolan!
I have an idea for a video that would be perfect for your channel and I think it would be helpful for a lot of people as well.
I’d love to have a quick chat with you if you were open to it.
Is there a way to contact you outside of UA-cam comments to discuss?
Cheers!
You can email me - futuretechpilot@gmail.com
@@FutureTechPilot will do shortly! Thanks
what about --seed
I've made almost 40 thousand images and I don't think 'seed' helps in the way people think it will. There is lots of misinformation around it. And in my opinion, it's only going to limit your generation. There are over 4 billion seeds for every prompt, and I like letting Midjourney choose the random seed
@@FutureTechPilot thanks interesting point of view. I will experiment more
funny - midjourney immediately ignored 'full body'. Be cool if MJ had something like a thumbnail of poses it could reference too, so that's never an issue
Yeah I bet one day they'll have something like that
as seen from the back
Yes good suggestion! Thank you
sorry, but adding a weight to text prompts ONLY influences the weight of that text prompt relative to other text prompts. If you want to influence the text vs image prompt weight, you use --iw [0-2]: 0 being not at all, 2 being the highest it can go.
Yes that's a very good point! Sorry for the misunderstanding
All they would have to do is add a "remember character button"
I think that's how it will work in the future!
lmaoo you gotta add in way more prompts than that. Just type in that she should look off camera and done. You can't be specific with the parameters and not type in more prompts where you actually tell Midjourney what it should do
Yeah you're right! I guess I was just showing what happens when you don't change things up, but you're right, I should have experimented more
Reacting to the photo gives you a fast hour, man. If it did nothing, they wouldn't give you free fast hours for it
it helps the AI learn what is 'beautiful' - not what you want from a prompt
👋
🤝
Is there something wrong with the video, or is UA-cam just breaking down??? It won't load. Other videos will... This one takes such a lloooooooonnnnng time to load.
You're the first person to say that. Hope it starts working better soon
It's not the same character though... it never will be with "interpretive" prompting software.
They're working on a Consistent Character feature ... so never say 'never'
I am sure it's already in for select users
Maybe it's because Instagram models don't shoot the backs of their heads? I guess they're a terrible choice for a character.
Maybe! But my understanding is that it wouldn't really matter. I think the better solution would be to get as many views of your character as possible, and then use what you need for the prompt. Maybe I should make an updated version of this video!
First 🎉
🥇
Thank you for your tutorial, but you cut too much between phrases; it's very distracting and we cannot follow it, it's still too quick. Everybody does that now on UA-cam and it's a very wrong way to explain something. Thank you very much, but slow down a little bit
I know what you mean. Sorry. Maybe I can work on my own digital course soon where I can explain things more slowly
The poses are never consistent with Mid. It's annoying tbh. Other AIs keep the pose the same.
Yeah it's a slot machine for sure