The art of explaining something succinctly after getting straight to the point seems to be lost on the internet for the most part. You my friend are an exception. Thank you!
Bro, you literally explained in 4 minutes what it took other videos 20 minutes to do. THANK YOU!!!
:D I should do more of the short ones. The problem is there aren't too many you can really do that with, because there's sooooo much that depends on settings for some stuff.
"This does *not* mean it's working." 😊 Love it. Thanks for putting this together.
Short, to the point, very nice. Thank you very much, you da best!
Short and to the point. I had been wondering why some of my LoRAs didn't seem to work - turns out they're just incompatible with the checkpoint 😅👍
Just switched to ComfyUI and this was exactly what I was looking for. Thank you!
:D happy to help. Most of my tutorials are pretty long; the short ones are soooo hard for me to do, I just love waffling on.
Exactly how all tutorials should be. Thank you so much. Subscribed and liked. :)
:D welcome aboard
I always knew, somewhere in the back of my head, that somebody must put the Windows taskbar at the top. But I never saw it until now. Anyway, shitposting aside, appreciate the quick and straightforward info.
:D, my other monitor has it on the side ;) I just like to have the clock visible up the top. I think the main reason is that I often end up with drink cans sitting either side of and below my screen, or I prop my phone up on the screen if I'm waiting for a message. So I just kinda gave up and put it up top.
thanks for this man very simple and quickly explained!
I was under the impression (based on a lot of testing) that you can't actually just put the trigger word in the prompt. Do you have something special installed, or is there something special that needs to be done?
Great tutorial - but when I click on the LoRA node I don't get any LoRAs to come up. I have them loaded in the loras folder in ComfyUI, and I have everything set to load from Stable Diffusion as well. It's like that field is not functional. Any idea what might be wrong? Thanks
If you have just downloaded models, make sure you reload your browser tab to refresh the list; sometimes you may need to restart the server too. This happens because it won't update node or model lists in real time; it only checks for them when the page loads.
thanks I got it sorted.@@ferniclestix
How did you get the image generation previews under the KSampler as it generates? Mine doesn't seem to do that.
Hi, thanks for the video. Any hints welcome. I can load the LoadLoRA node but can't access the LoRA files in my ComfyUI>models>loras subdirectory. Tried using Load and ComfyUI Manager etc. Thanks!
When you install LoRA models (you need to get some; they don't come with ComfyUI by default), you need to put them in the loras folder. Once you have some, it will let you select from that list. Make sure you reload the browser tab once you have installed models to make it refresh the list.
How can I wire up several LoRAs? The model output from the LoRA node to the KSampler accepts only one connection.
My biggest issue is remembering trigger words; that's why I like the webui version, since I can look at my LoRA trigger words... I need a ComfyUI plugin for this.
We all do. I think one exists, uh... you right-click on the node and it gives you an option to print it out or something. I cannot remember the name, but I think it's from Pythongossss.
@@ferniclestix Ight, i'll look into it. Might look into making my own node for people to download as a fun project and real world experience
Is there an easy way to make the trigger words accessible? I can't remember them, and noting them down somewhere else is inefficient.
There's a node... I cannot remember the name... that tries to find the trigger word automatically. But yeah, I haven't checked recently; generally it's a good idea to try and find out the trigger word when you get a model. Often it's just the name of the model; I wish more models did that.
damn ok. thnx@@ferniclestix
great video. thanks!
The lines between your nodes are straight; how do you do that?
there is an option in the settings menu to switch from splines to lines.
to line them up straight you can enable the always snap option in the same menu
Very good, thank you.
Finally, I understand LoRA now. I searched your other vids for how to add an image to the workflow that I can use to generate another image through a LoRA; is there a specific video you cover this in?
Uhh..... I don't think I've ever done an img2img-specific tutorial XD. If you just put in an 'image load' node, then use a 'VAE encode', you can plug that in instead of the empty latent source; just make sure to lower the denoise value.
I've definitely got it in one of my tutorials; it's kind of one of those basic skills I refer to in lots of videos rather than a video specifically devoted to it.
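That rewiring can be sketched in ComfyUI's API-prompt format. The node class names (LoadImage, VAEEncode, KSampler) are stock ComfyUI nodes; the node IDs, filenames, and sampler settings below are placeholder assumptions, not values from the video:

```python
# Sketch of the img2img rewiring: a LoadImage feeding a VAEEncode replaces the
# EmptyLatentImage, and the KSampler's denoise is lowered. Node class names are
# stock ComfyUI; the IDs, filenames, and settings are placeholders.
img2img_patch = {
    "10": {"class_type": "LoadImage",
           "inputs": {"image": "source.png"}},
    "11": {"class_type": "VAEEncode",              # instead of EmptyLatentImage
           "inputs": {"pixels": ["10", 0],         # IMAGE output of LoadImage
                      "vae": ["4", 2]}},           # VAE output of the checkpoint loader
    "3": {"class_type": "KSampler",
          "inputs": {"model": ["4", 0], "positive": ["6", 0], "negative": ["7", 0],
                     "latent_image": ["11", 0],    # the encoded source image
                     "seed": 1, "steps": 20, "cfg": 7.0,
                     "sampler_name": "euler", "scheduler": "normal",
                     "denoise": 0.6}},             # below 1.0 keeps some of the source
}
```

Nodes 4, 6, and 7 stand in for the checkpoint loader and the two prompt encoders from the standard default workflow.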
@@ferniclestix Thank you for replying, much appreciated. I guess there's only one thing for me to do, and that's start at your first video and binge watch every ComfyUI video.
I've only been using it for about a week, so a LOT is alien to me :)
Keep up the excellent vids
\m/(^_^)\m/
Thanks dude!
Super informative. Do loras work well with image-to-image and inpainting?
they should work fine, they just tune the model you are using.
Thanks man
I'm new to ComfyUI. I noticed that your KSampler & Main widget shows a live preview of each step of the image generation as it generates. How do I enable this?
To future readers: I figured it out. It's called "ComfyUI Manager" and is easy to install by git cloning the repo into the right directory. The guide is on the GitHub repo page.
Yah, it's in the settings; ComfyUI Manager has a little option to choose the preview method. You can choose from a few different versions, which are varying degrees of speed or accuracy. (Generally, renders are slower with it on.)
thank you very much!
I have a question: how did you make the KSampler show the steps of the image processing in real time?
edit: nvm I got the extension
There's just an option you can add to the launch line in the .bat file; I cover it in the basic ComfyUI introduction tutorial.
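For anyone hunting for that option: ComfyUI exposes a `--preview-method` flag on its launch command. A sketch of a standalone-build .bat launch line follows; the paths and other flags are assumptions that depend on your install, so adjust to match your own .bat:

```bat
rem Assumed standalone-build layout; adapt the path and flags to your install.
rem "auto" picks an available preview method; see ComfyUI's --help for others.
.\python_embeded\python.exe -s ComfyUI\main.py --windows-standalone-build --preview-method auto
```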
Excellent how-to videos. If you could make one on ADetailer it would be much appreciated. This is the only reason I jump to A1111, to fix faces and hands 🙏😊
Will do. I'll be starting on the Impact Pack soon; they have an equivalent.
thanks!
I want to learn to train a lora with my face.
Where would you recommend I train it? In the automatic1111 UI (which I'm not familiar with), or kohya_ss? Could it be done inside ComfyUI?
I was looking into this today, but I think it's best to use, say, automatic1111 for this process; I have not tried kohya_ss.
You can model merge in ComfyUI; I haven't found the LoRA training method there yet, but I'll probably mention it in future if I find it.
@@ferniclestix thanks
I tried to use one of the many pornographic pose LoRAs (for serious research purposes); it didn't trigger. I'll have another go now I've seen this, many thanks!
lol, there's no judgment here, art is still art >.> But yeah, if it's still being tricky, try using the name of the LoRA as a trigger word; sometimes that's worked for me.
Hello, thanks for your work. Could you make and explain a workflow that generates multiple images from different models (same seed, same prompt) and upscales each? Thanks.
The skills to do these things are in many of my tutorials; it's just a matter of adapting and combining the techniques.
How do you generate images THAT fast?? It's awesome.
Check the sampler settings and latent sizes; you will probably see they are small-res images with lowish step counts.
Most people just leave it at 30 steps, but 20, and sometimes 12 or even 10, is doable on some sampler types.
I'm using euler_a a lot, with 50 steps at 1024x1024 resolution. It takes like 20-30 minutes to generate a single image, but I like the results; the problem is I'm using a laptop GTX 1050 XD
Anyway, your basic LoRA implementation worked for me! Thank you so much.
DDIM is the fastest; a lot of the DPM ones work under 12 steps with Karras turned on.
It would make more sense if you actually explained what a LoRA is.
Well, this one is a quick tutorial designed to help people just get it running if that's what they're looking for. A LoRA is a small model designed to 'tune' the output of larger models. It's a good way of controlling the output and modifying larger models to do stuff they might normally struggle with, or to ensure more stable outputs for things like character creation or animation.
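As a sketch of how that tuning slots into the graph: in ComfyUI's API-prompt format, a LoraLoader takes the checkpoint's MODEL and CLIP outputs and passes patched versions downstream, and a second LoraLoader can be chained off the first to stack LoRAs. The class names (CheckpointLoaderSimple, LoraLoader) are stock ComfyUI nodes; the node IDs, filenames, and strengths below are placeholder assumptions:

```python
# Sketch: a LoRA loader sits between the checkpoint loader and everything
# downstream; chaining a second LoraLoader off the first stacks two LoRAs.
# IDs, filenames, and strengths are placeholders.
lora_chain = {
    "4": {"class_type": "CheckpointLoaderSimple",
          "inputs": {"ckpt_name": "some_checkpoint.safetensors"}},
    "20": {"class_type": "LoraLoader",
           "inputs": {"model": ["4", 0], "clip": ["4", 1],    # from the checkpoint
                      "lora_name": "style_a.safetensors",
                      "strength_model": 1.0, "strength_clip": 1.0}},
    "21": {"class_type": "LoraLoader",
           "inputs": {"model": ["20", 0], "clip": ["20", 1],  # from the first LoRA
                      "lora_name": "style_b.safetensors",
                      "strength_model": 0.7, "strength_clip": 0.7}},
}
# The KSampler's model input and the CLIP text encoders would then be wired to
# node 21's outputs, so both LoRAs tune the generation.
```

This chained wiring is also how you connect several LoRAs when each loader only accepts one model connection.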