Miniature LoRA: civitai.com/models/766577
I downloaded this beautiful model the day it was released, and today I'm already watching your tutorial )) Beautiful work! Thank you so much!
Thanks for watching!
I really appreciate your talk-through on this.
Super helpful! Thank you!
You're welcome!
Fantastic video, a lot of good solid information. The lora came out great!! Thanks for the shout out!!
Thanks, this is very helpful!
You're welcome! I am not sure how useful it is, but I figure the more info people have, the better we can all learn.
I am not an expert either, but I was thinking: maybe try coming up with a non-real trigger word, because when you write "miniature person" it triggers all the neural paths responsible for plastic miniatures. If you switch the word to something like "m1n1ppl" and avoid using "miniature" in your prompt, it might produce better results, closer to your LoRA training data. But as I said, I am just guessing :D
Edit: Oh, I am just now at 10:00 and I see you did that :D I should comment only after watching the full video next time :D
PS: You just earned a subscriber :3
This is awesome!
Works well. I got realistic renders after 5 tries. Using Mini-People and Faetastic-Details at a strength of 1.0. Would love to see a V2 of this cool LoRA. Thanks! Liked and subscribed.
Thanks! I do hope to improve it soon. I'm working on more source material to allow for more options. I keep trying different detail LoRAs, but in my testing I was not fully satisfied with most. I am using my Hyper Detailed Illustration LoRA at low weight since that seems to help.
Wowww BIG UPS brah!! 🎉👏🏻💯
Pretty dope!
Thank you very much for your Stable Diffusion tutorials. They're always great! Is it possible to install Fooocus on my local machine?
@@mauricioc.almeida2482 It's an older video, but the install method is the same. ua-cam.com/video/j1WuQndmgFE/v-deo.html
@@KLEEBZTECH Hello, thank you for your response. I may have asked the question incorrectly. Let me try again: is it possible to install the miniature (safetensors) model in Fooocus 2.5.5? I already have Fooocus installed locally. I tried with the current version (Fooocus 2.5.5) and the result was people with plastic doll characteristics. I'm running the checkpoint "juggernautXL_v8Rundiffusion" in Fooocus.
No. I have not released an SDXL version. I have tried and will attempt again in the near future but so far the results have always been terrible.
I would have 1. used Photoshop to scale people down and position them on objects, then used those images for the data set, and 2. used a unique trigger word that wasn't likely to be in the base model's training set.
I tried a unique trigger. It made no difference in the end. I thought about scaling down images of people, but I just didn't feel I would get them looking right. I am not an expert on that.
Plus, Flux knows what miniature people are but it makes them plastic, so I figured this would just replace that look.
Where have you gone? I haven't seen your text-to-image content for a long time.
New videos soon. Things outside of YouTube have been taking up my time recently.
In Fooocus
How do I use the program with the CPU?
Some random tips: Have ChatGPT write a Python program for VS Code that does screen captures every x seconds from a video. Tell ChatGPT that you also want to specify the start and end time, and that the filename has to include the video name and the timestamp. Then you can start with some screenshots from the full video and go into detail based on the result. No need for Resolve and going through 100000 captures ;-)
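(For reference, a minimal sketch of the kind of frame-capture script described above, not the commenter's actual ChatGPT output. It assumes OpenCV is installed via `pip install opencv-python`; the file name, function name, and command-line layout are just illustrative.)

```python
# capture_frames.py - grab a frame every N seconds from a video,
# between an optional start and end time, naming each file with
# the video name and the timestamp.
import os
import sys
import cv2

def capture_frames(video_path, out_dir, interval_s=5.0, start_s=0.0, end_s=None):
    """Save one frame every `interval_s` seconds between `start_s` and `end_s`."""
    name = os.path.splitext(os.path.basename(video_path))[0]
    os.makedirs(out_dir, exist_ok=True)

    cap = cv2.VideoCapture(video_path)
    if not cap.isOpened():
        raise RuntimeError(f"Could not open {video_path}")

    fps = cap.get(cv2.CAP_PROP_FPS) or 30.0
    total_frames = cap.get(cv2.CAP_PROP_FRAME_COUNT)
    if end_s is None and total_frames:
        end_s = total_frames / fps  # default to the full video length

    t = start_s
    while end_s is None or t <= end_s:
        cap.set(cv2.CAP_PROP_POS_MSEC, t * 1000.0)  # seek to timestamp
        ok, frame = cap.read()
        if not ok:
            break  # past the end of the video
        # Filename includes the video name and the timestamp, as requested.
        out_path = os.path.join(out_dir, f"{name}_{t:08.2f}s.png")
        cv2.imwrite(out_path, frame)
        t += interval_s

    cap.release()

if __name__ == "__main__":
    # Example: python capture_frames.py my_video.mp4 captures 5 60 300
    video, out_dir = sys.argv[1], sys.argv[2]
    interval = float(sys.argv[3]) if len(sys.argv) > 3 else 5.0
    start = float(sys.argv[4]) if len(sys.argv) > 4 else 0.0
    end = float(sys.argv[5]) if len(sys.argv) > 5 else None
    capture_frames(video, out_dir, interval, start, end)
```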