I think prompting will definitely be a skill to have, but it will probably standardize in how models are trained and thus how they're used. I see a LOT of pointless tokens being used and YouTubers teaching superfluous words that only make it harder to learn. Sometimes I see people basically writing poetry in their prompts, not because it's interesting and fun but because they actually believe the poetic way of writing prompts will generate a better result. It does not.
@@Phraxas52 Absolutely! :D ...I'll wait until all these AI models can be controlled in a more natural/standardized and artistic way. At the moment it's more guessing/hoping than guiding. That will be the point in AI art when real artists can profit again from their knowledge and creativity.
Guys, is there a tutorial on how to use this model? Is it a kind of program that I can download and use, or how is it done? I have downloaded the files, what's next? Would appreciate your help!
The person who claimed this model is the same is just plain wrong - I assume it's the one that posted an X/Y/Z plot. You can clearly see on his plot that the images are different. The guy just didn't read the model description :) People expect every model to give completely new images, but they don't understand that new versions won't necessarily give something new, especially when the author continues the model training or merges two similar models that differ in style. The model author even explicitly stated that he separated his model versions, as I guess he didn't want to post a checkpoint merge (or continued-training merge) under the same model page for some reason. After plotting both Reliberate and Deliberate v2 (also Realistic Vision v2 and RunDiffusion for a test), Reliberate seems to give similar results with less exposure for most prompts.
Hi, I would like to know if it's possible to create bulk images for more than one person - for example, I want to generate 10 different people with 5 images of each. If someone knows how to do this, feel free to answer.
this has become my favorite model to use very quickly since I discovered it! ty for the vid!
Awesome. Same here. It also creates more playful results than most other models
Realistic Vision V3.0 came out today; that seems like an even bigger deal, as V2.0 was one of my favs.
Thanks for the heads up. Getting it asap!
@@Devalocka Realistic Vision 4.0 is out now!
@@Showbiz_CH lol, the humanity
@@Devalocka Realistic Vision v5 has been released!
Reliberate is great but I've been loving aZovyaPhotoreal V2 recently, with the Heun sampler and it's giving out really great results.
I love it too
Zovya is too biased; it always renders the same girl
@@felipeitsui You might look into the roop extension for A1111. Put any face you want on her that way.
@@felipeitsui then prompt better: use nationality prompts, prompts that specify looks, use face LoRAs. Generally, models generate the same face if you are too vague - the model will go for whatever has the highest weight internally, since you didn't precisely say what you want.
@@felipeitsui Not if you know how to prompt. The only times I've got the same person is when I've deliberately written the prompt to get the same person. If you keep your prompt simple, vague and worded the same, you probably will get the same person. That's how it works. 🙂
Thanks for another great video Bro ! Pro tip - on Civitai, if you see an image you want to "borrow", just drag it to the "Process Image" tab in SD (I am using Vlad). If the prompts are in the meta, they will come with it. Cheers !
I have 900gb of checkpoints and my favorite so far is AWPortrait. No one really talks about it, but it's the most realistic model I've ever tried across many different styles
Not heard of that one. I'll check it out and I know that feeling of having a lot of checkpoints. I don't have anywhere near as much as you do but they still need pruning back.
Not everyone is just making waifus Bro ! 🤣🤣🤣
Sounds like something I’m trying to do 😂have any videos suggestions? I’m a super beginner
Great review 🤟
There it is - well-deserved recognition! :)
Trying out this model and am very impressed. In many cases, does a better job than my old favorite realisticVision 2.0. Thanks for bringing this model to our attention.
I have a server rack with a few boxes in my basement. I need to get something let this setup locally. This is awesome!
New to this! Mind blowing if I am looking at what I think I am looking at. Can you please provide a link to a basic intro to this software/process? MUCH appreciated!
Realistic Vision V2.0 has been my go to for a while. I'll check this one out for sure
They uploaded V3.0 today.
Thank you, Olivio. What you do here is valuable. You have just the right amount of enthusiasm and technical ease for being a great person for getting this info out.
I use DDIM for everything lately; I don't know what changed, but it seems to work best since the last major A1111 update for some reason. Also, I would strongly recommend using ADetailer instead of face restore, which gives terrible results most of the time. As for the upscaler, I suggest tiled diffusion - it gives the best results out of all the methods I've tested and is very fast.
You have helped me so much since I started watching your videos
Loving your videos. Making the complex easy to understand. Thanks!
Olivio, let me ask: what is your trick to avoid NSFW on livestreams? Do you use only negative prompts, or something else? Nice video btw
Cool video. Will all these LoRAs work with SDXL too? If not, how can I know which ones will or won't?
Me: Ok, time to clean house and prune out all the models I no longer use to free up space and make it simpler to find what I do want.
Actual Me: Ok, time to add Reliberate and LowRA.
Reliberate == Deliberate. So there's some spring cleaning for you :)
Hey, Mr. Olivio... thanks for another great video. I had not heard of Reliberate, so thank you for the info! Just FYI, though... Euler [Leonhard Euler, the Swiss mathematician/ engineer/ astronomer/ and much, much more] is actually pronounced "oil-er" rather than "yule-er". So... now we know!
Do you also pronounce French words with perfect pronunciation? What about words of Asian heritage?
Euler can be pronounced several ways. It all depends on nationality, geography etc. Let's not bog down content creators with silly stuff like this.
@@Mocorn It's a man's name and it has an actual pronunciation. Yes, I do speak a little French and a little Cantonese - none of it perfectly, but I do make an effort to be respectful of other languages. Mainly, I brought up the pronunciation in this case because the term Euler appears throughout a number of CG packages, such as the Euler Filter in Maya's Graph Editor. If I mention it to Olivio - who is kind enough to put all these wonderful videos together and share them with the rest of us - it is done thoughtfully and considerately, to help inform him in the same way he generously informs us. My guess is that if someone mispronounced your name, you'd correct them - as well you should. Where is the harm in that?
yeah it is the worst claim😄
@@atlanteum people mispronounce my name all the time actually and I do not correct them because that is how the name is pronounced in this country. Where I was born the name is pronounced differently. Interestingly, my name is pronounced a third way if you look at the country of origin which is only an hour away by flight.
So, three different ways to pronounce my name, which one is the correct one!?
@@Mocorn It doesn't matter - your name is not Euler.
3:36 he did it again ! The G !
As a long-time ChaiNNer user, I'm surprised you've never made a video about it... It's a must for graphic designers like me, especially for batch processes... free of out-of-VRAM errors.
Also, I'm having much better results using SD Upscale than Ultimate Upscale.
ChaiNNer is amazing. Ultimate Upscale is really bad. SD Upscale is really all you need. If you know how to use it, of course
@@The_Daily_Meow Both SD Upscale and Ultimate Upscale are bad. You can test your images in PS: add a curves layer and push it to the extreme, and you will see that your images are in tiles (if you can't see it without the curves layer). Only the tiled diffusion method works without tiles, though at larger scales it loses details.
@edu_machado correct me if I'm wrong: ChaiNNer is the same type of upscaler as Topaz Gigapixel (except that in ChaiNNer you can load multiple upscaling models)? If so, I don't see the point of comparing, as it acts totally differently from Auto1111. In Auto1111 you can add extra small details with denoise and sampling steps. Is that possible in ChaiNNer? There is also another program called Upscayl where you can also load your own upscaling models. But again, it's not the same as Auto1111 - you just upscale and can't add extra small details.
@@relaxation_ambience No. It doesn't lose details but adds them when you go in smaller steps. I said 'if you know how to use it, of course'. Otherwise, it's bad
@@relaxation_ambience Topaz also uses AI to upscale, but I believe they use a proprietary model. With ChaiNNer you have the freedom to use different models... So yes, you're right. I believe it's an obsolete app now (Topaz), but they have another app for video that will stay relevant for some time IMO.
SD Upscale? What happened to *Ultimate SD Upscale*, which you used to choose? Did SD Upscale end up better?
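The curves-layer test described a few comments up can be approximated in a few lines of Python. This is a rough sketch using Pillow and NumPy; the two-tile demo image is invented for illustration (in practice you would load your upscaled image instead):

```python
import numpy as np
from PIL import Image

def exaggerate_contrast(img: Image.Image, factor: float = 16.0) -> Image.Image:
    """Stretch pixel values around the mean so faint tile seams become visible,
    similar to pushing a curves layer to an extreme in Photoshop."""
    arr = np.asarray(img.convert("L"), dtype=np.float32)
    amplified = (arr - arr.mean()) * factor + 128.0
    return Image.fromarray(np.clip(amplified, 0, 255).astype(np.uint8))

# Demo: two "tiles" whose brightness differs by a single gray level --
# nearly invisible to the eye, obvious after amplification.
tile = np.full((64, 64), 100, dtype=np.uint8)
seamed = np.hstack([tile, tile + 1])          # faint vertical seam at x=64
revealed = exaggerate_contrast(Image.fromarray(seamed))
left = np.asarray(revealed)[:, :64]
right = np.asarray(revealed)[:, 64:]
print(int(right.mean() - left.mean()))        # prints 16
```

A 1-level brightness step becomes a 16-level step after amplification, which is why tile seams that pass unnoticed at normal contrast jump out after the stretch.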
There is one more I haven't seen anyone cover yet - epiCRealism. In my tests it was always better in realistic images than reliberate. I would love to see you cover this one as well. Always great videos!
In your opinion, if you were to rank these models by their ability to produce realistic images, how would the ranking go?
Cyberrealistic, Deliberate, epiCRealism, ChillOutMix
I came to the comment section to say this as well. I've tried many others, most of them being the ones mentioned in the comments here, but epiCRealism is just better.
@@ryansetiawan2202 Imo, epiC > Cyber > Delib > ChillOut, but they aren't poor quality by any means.
@@Phraxas52 I agree! I would put them in the same order.
However I rarely stick with just one checkpoint. Usually I test my prompts with different combinations of checkpoints and sampling methods to find which result I like the most.
X/Y/Z plot is my favourite tool by far :D
@@Phraxas52 thx a lot for the answer.. appreciate it.. 🤝
it took you some time 😁 Usually I use Euler, but I will try DDIM. I also use add_detail:1 - anything higher than 1.2 is already too much. But I apply it to WarpFusion mostly. Thanks for the tutorial 🖐
Hi @Olivio can you show us how to make architectural renders more realistic please 😇🙌🏻
Yeah... I'm looking for something like that. Sadly I didn't find anything helpful related to architecture.
amazing work as always my friend
DDIM seems to be great for inpainting especially
would be great if you could provide links to negative embeddings too! would be easier for viewers to click on it and download
I recommend checking out "C3" (named because its original model is a merge of "Colorful", "Clarity", and "Consistency").
As usual... it would be nice to share links to the videos you mention.
Thanks Olivio for this new video!
@OlivioSarikas Do you have a reliberate v1.0 available to dl anywhere?
Thank you for your work!
where can I find those models now that they are not available at Civitai anymore?
man, these prompt artists are insane.
My fav realistic models are cyberrealistic, epicrealism and realistic vision
+1 for epicrealism.
Is there any progress on hands and feet? Seems like models and embeddings still don't do too well with them. Arm proportions get strange at times too.
I've been trying to do something and I can't seem to make it so I'm here asking for help. Is it possible to create a character turnaround by using 2 different controlnet models, one that for example becomes the reference character and the second the poses? Because I created a character I really like but by using the img2img, and of course I can't simply recreate it in txt2img, and whenever I do try with any settings, any combinations (and even without poses but only reference image) I can't do it :/
Found people trying to do the same in reddit 6 months ago but with no follow up. Do you know any way with the new reference only control that can make it happen?
You can probably recreate it in txt2img by placing your character image in the PNG Info tab and copying the settings from there. Maybe? I have a character woman I love, but she was an accident, so I only had three pictures. I trained her face as a LoRA and it came out perfectly. Now I insert her LoRA anytime I want. Didn't think it would work, but it did. If you only have one face, I would try making a LoRA from it. Who knows
@@SantoValentino if it was a character completely made from txt2img it would work, but since it's from img2img from another image I had created, the parameters aren't accurate to those in infotab
@@Kuresuto this is where the fun begins
Hey, great video. Before I research the whole internet, I'll ask here. I got a good model on Tensor Art, but every picture is a bit different. Now I'm searching for a method to create a model in Tensor Art so that it will be the same face every time.
Or good free software where I can create a model which doesn't change
You should try out "Lunar Diffusion 1.28" . I'd love to hear your thoughts on it :)
Help...! How can I keep the detail when swapping face with "Roop" extension.
Whenever I try to swap face, it renders a clean face.
I've been defaulting my portrait renders to a 640 x 800 resolution instead of, like, 512x768. Just a slight bump; it very rarely doubles up a head, and I feel the bump is worth it.
Great! I will try that. Thanks for the tip!
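For context on the 640x800 suggestion above: Stable Diffusion UIs such as A1111 typically step width and height in multiples of 8, so any custom resolution you try should land on that grid. A tiny helper (just a sketch; the step size is an assumption you can adjust) can snap an arbitrary target size to a valid one:

```python
def snap_resolution(width: int, height: int, step: int = 8) -> tuple[int, int]:
    """Round a target resolution to the nearest multiple of `step`.
    Stable Diffusion UIs such as A1111 typically step dimensions by 8."""
    snap = lambda v: max(step, round(v / step) * step)
    return snap(width), snap(height)

print(snap_resolution(640, 800))   # already on the grid -> (640, 800)
print(snap_resolution(650, 811))   # snapped -> (648, 808)
```

640x800 passes through unchanged, which is part of why it is a convenient "slight bump" over 512x768: both are already multiples of 8 and keep a 4:5 portrait aspect ratio.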
Greetings to ХрисТ from a subscriber!
I love this but it's still too "cartoony" for me. Still, this is all fascinating! Thanks for the video!
thanks for the video!!
mmm, quite nice for architecture too!
I've been trying to get a character LoRA from one SD generation. I made an image with a 5x6 grid of OpenPose head poses to use with ControlNet. That way I got 30 head poses of the exact same person, enough to train a LoRA. However, some of the 30 images missed detail (and were shot from the front instead of the back of the head - standard problem). I think what you show in this video can help with some of my challenges here.
that sounds like a really interesting idea :)
Hi Olivio!
I never understood what the role of the sampler is and what the differences are.
Of course I can render an X/Y/Z plot to check the diff, but I would like to understand more about why and when to use which sampler...
Can you make a video on that, or maybe a live?
thx a lot
stumbled into this. got very little idea what it is about. it reworks your pics and makes them better? so why didn't he show a before and after? what's so great about it?
Thanks!
love this skin. wish there was a lora model just for the skin texture
ChaiNNer? What's that? If you're making a tutorial about that: SUBSCRIBED right now. Hope for that video; liked and shared. Thank you.
Amazing , great video
Awesome!!!!!
Great!
Hi @Olivio Sarikas or anyone who can help me. Thanks for your videos, excellent stuff! When I go to my Scripts section, there's many Scripts missing for me compared to yours, & SD Upscale script is also missing. Can you please let me know why?
wow
If the face in the picture wears glasses (or sunglasses), Roop will erase the glasses when it replaces the face in the video, but it does not erase them cleanly.
Roop cannot choose to preserve the glasses (or sunglasses) in the video. Can this problem be solved?
Can this be used with Openpose? Also can we make a full body image with this model?
I'm new to this stuff. Is there particular software needed? I keep seeing things about models, checkpoints etc., but I never come across what program everyone's using to run these models.
the "software" is called *stable diffusion* and it is commonly described as a picture-generating artificial intelligence (ai for short).
in the description of the model there is information that, among other things, it is not allowed to sell images created with it. Does this also mean that you can't use this model to create sets of your own commercial projects (e.g. in your own comic or game)?
Is there a way to quickly apply the trigger words needed for LoRAs, like there is when you apply a negative prompt? Or do we just make like 20 different styles then?
Olivio, it is interesting that you mentioned Restore Face feature. The author of this model hates restoring face this way and says he even can see it when others use it. He is an awesome smart guy who definitely knows what he is doing.
Does that creator have a YouTube channel or something?
@@anonymousmuskox1893 Yes, he has a YT channel but it is in Russian. The channel's name is ХрисТ (copy my text because it is in Cyrillic though it looks like Latin letters).
@@1Know1tHurts you're wrong - his channel name is actually written in Latin letters.
4:39 How did you do these sample images? By hand or is there a plugin to do this?
x, y, z plot script
Is there an updated version of this? It looks like it's no longer available. Can someone send over the working link?
dead link for model..error 404
Hi, great tutorial. I am new to Stable Diffusion. What is Automatic1111 for? Have you got any tutorial on how to install and use it?
Thanks
Automatic1111 is a tool for using Stable Diffusion. There are a couple of videos explaining how to install it - search for one that's not too outdated.
ua-cam.com/video/3cvP7yJotUM/v-deo.html
Am i perfection is a solid base model, have you used it??
hi.. I've been using Stable Diffusion since the beginning, but I've been studying hard for some exams, and after some months it's like I don't know anything anymore... LoRA? checkpoints? CLIP?? I will need a summary of what is happening...
Another one just released is Epic realism pure evolution v3. Prob the best I've used so far
I like your shirt amigo!
These images are almost too perfect - they look like photos from the best digital camera and heavily edited. Can this model be used to generate images that appear realistic but are of lower quality? So that they resemble photos taken with a medium-quality mobile phone?
Where's the redhead? Khachatur will punish you.
On Civitai, I noticed that if I right click and save the image, then load it into PNG Info, it loads all the prompt data when I send it to Text to Image. Saves a lot of copy and pasting. Can delete the image afterward if you want. I don't since I want to see if I can replicate what the artist did.
How is that easier? You are just doing the same thing only with an image, but have the extra step of having to save it then browse for it, etc. Whereas with copy/paste you just click two buttons.
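For anyone curious what the PNG Info tab is actually reading: A1111 embeds the generation settings as a PNG text chunk (the key is, to my knowledge, "parameters"), which survives a plain right-click save. A minimal round-trip sketch with Pillow; the prompt string here is invented for illustration:

```python
from io import BytesIO
from PIL import Image
from PIL.PngImagePlugin import PngInfo

# Write: embed a prompt string the way A1111 does (key "parameters").
meta = PngInfo()
meta.add_text("parameters", "a castle at sunset\nSteps: 20, Sampler: DDIM, Seed: 42")

buf = BytesIO()
Image.new("RGB", (8, 8)).save(buf, format="PNG", pnginfo=meta)

# Read: essentially what the PNG Info tab does with a dropped image.
buf.seek(0)
img = Image.open(buf)
img.load()                        # make sure all text chunks are parsed
params = img.text["parameters"]
print(params.splitlines()[0])     # prints: a castle at sunset
```

This also explains why the trick breaks on sites that recompress uploads: re-encoding the image usually strips the text chunk, and with it the prompt data.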
Olivio, you are the best... Please help: I just learned about this AI image-generation medium and I'm already blown away. I just tried "hypernetwork" training and got a beautiful image, but not photo-realistic. May I know if it's possible to combine this amazing model with my hypernetwork-trained models? I'd really appreciate your advice, thanks
try Midjourney; it's simple and has a lot more creative ability out of the box. You will get photorealism straight away just by saying "I want it photorealistic". It has its limits, but they all do. Enjoy
I'm getting a blue tint to faces with "restore faces" on. It appears on the final render frame. I've reloaded SD/A1111, and it still happens.
nice
Is it also possible in Midjourney?
FYI "Euler" is pronounced "Oiler"
Can you show how to work with tiles to make big resolution AI images?
4:56 btw are you sure Euler is pronounced like that? I thought it's pronounced the same as the mathematician it is named after.
Just saw the video thumbnail on my feed and came here, so it is basically a tool to convert AI generated faces to realistic faces ?
@OlivioSarikas Do you have a video where you teach how to create an RPG or NPC character with the face of a friend or from an existing photo? How could I do this?
3:34 someone definitely added more weight ;)
When I use SD Upscale, I don't have the "4x-UltraSharp" option.
I find this model has slightly too much anime weight to be truly photorealistic (it can look nice and arty if that's what you're going for). I use epiCRealism_pureEvolution v3 for better photorealism.
Is there any model I can use to create illustrations and art for T-shirts?
I get a 404 error for the model link, and searching for it on Civitai gives no results either.
3:17 Wait, am I hallucinating, or does the title on the paper say "Уже который год во всём мире п*здец" ("yet another year of the whole world being f*cked")? 🤣🤣🤣
How do I batch-process photos with inpainting?
Hey??? What happened to the Colorful Shirts!!! You've Changed Brah! 😥
Sometimes a man feels the need to go all hard rock and gates of hell and wear a black shirt from The Office 😅
I'm doing exactly the same steps, but when generating it always makes drastic changes to the final result, like adding faces, extra fingers, or deforming the existing ones. The image never stays the same.
Same here... but no one explains why this happens or how to fix it 🤪
@@sheedee2 I discovered that it's related to the prompts. Make sure there aren't any prompts describing the content, only ones related to render quality.
I'm not sure all that prompting is valuable to learn, because this stuff will shortly change into much easier (more artistic) tools and will be far more controllable without all of it.
I think prompting will definitely be a skill to have, but it will probably standardize in how models are trained and thus how they're used. I see a LOT of pointless tokens being used, and YouTubers teaching superfluous words that only make it harder to learn.
Sometimes I see people basically writing poetry in their prompts, not because it's interesting and fun but because they actually believe the poetic way of writing prompts will generate a better result. It does not.
@@Phraxas52 Absolutely! :D ... I'm waiting until all these AI models can be controlled in a more natural/standardized and artistic way. At the moment it's more guessing/hoping than guiding. That will be the point in AI art when real artists can profit again from their knowledge and creativity.
3:15 Is the headline also generated? It translates from Russian to something like "the whole world has been f*cked up year after year". Is it deliberate? 🤔
I think they made a joke on their page.
Could you please advise how to create a picture of a woman smoking? I always have issues with the hands and the cigarette.
have you tried controlnet or photobashing?
So you know, “Euler” is a actually pronounced “Oiler” like “oil” plus “er”. It is the last name of the Swiss mathematician Leonhard Euler.
Guys, is there a tutorial on how to use this model? Is it a program I can download and use, or how is it done? I have downloaded the files; what's next? I would appreciate your help!
Is there any model like Midjourney?
Isn't this the model where someone commented that it's just a renamed Deliberate model? I didn't download it for that reason.
@@weirdscix I'll try it for myself... that person claimed to get the same results with the same seed etc.
The person who claimed this model is the same is just plain wrong - I assume it's the one that posted an x/y/z plot. You can clearly see on his plot that the images are different. The guy just didn't read the model description :) People expect every model to give completely new images, but they don't understand that new versions won't necessarily give something new, especially when the author continues training the model or merges two similar models that differ in style.
The model author even explicitly stated that he separated his model versions, as I guess he didn't want to put a checkpoint merge (or continued-training merge) in the same model post for some reason. After plotting both Reliberate and Deliberate v2 (also Realistic Vision v2 and RunDiffusion for a test), Reliberate seems to give similar results with less exposure for most prompts.
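For anyone wanting to check a "same model" claim themselves: with an identical seed, sampler, and settings, two checkpoints that really are the same file should give byte-identical pixels, so a simple pixel-difference metric settles it. A toy sketch with made-up pixel lists standing in for real renders:

```python
# Compare two small grayscale "renders" (flat pixel lists) by mean squared
# error. MSE of 0.0 means identical images; anything above 0 means the two
# checkpoints produced different pixels. The pixel data here is invented.
def mse(a, b):
    assert len(a) == len(b), "images must have the same size"
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

render_deliberate = [10, 20, 30, 40]
render_reliberate = [12, 18, 33, 40]   # similar but not identical

print(mse(render_deliberate, render_deliberate))  # 0.0 -> same checkpoint
print(mse(render_deliberate, render_reliberate))  # 4.25 -> different output
```

In practice you'd load the two saved PNGs into pixel arrays first; the point is that "looks similar in an x/y/z plot" and "is literally the same model" are measurably different claims.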
Hi, I would like to know if it's possible to create bulk images for more than one person. For example, I want to generate 10 different people with 5 images of each. If anyone knows how to do this, feel free to answer.
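One way to think about the "10 people x 5 images each" question: identity consistency for a single "person" usually needs a trained face LoRA or embedding per person (as other commenters note); seeds alone won't keep a face stable. Assuming you then feed each job to your own backend (for example A1111's txt2img API), a minimal sketch of just the batch plan might look like this. The `<lora:person_N:0.8>` tags are placeholders, not real LoRAs:

```python
import random

# Plan a 10-people x 5-images batch: every person gets a hypothetical
# LoRA tag (for identity) and five distinct seeds (for variety).
def build_jobs(num_people=10, images_per_person=5):
    rng = random.Random(42)  # fixed seed so the plan is reproducible
    jobs = []
    for person in range(num_people):
        base_seed = rng.randrange(2**31)
        for i in range(images_per_person):
            jobs.append({
                "person": person,
                "seed": base_seed + i,
                # placeholder LoRA tag -- swap in your real trained LoRA
                "prompt": f"photo of a person <lora:person_{person}:0.8>, portrait",
            })
    return jobs

jobs = build_jobs()
print(len(jobs))  # 50 = 10 people x 5 images
```

Each job dict could then be posted to whatever generation backend you use, one request per entry.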
^^! Great work ^^! :3