Join the conversation on Discord discord.gg/gggpkVgBf3 or in our Facebook group facebook.com/groups/pixaromacommunity.
You can now support the channel and unlock exclusive perks by becoming a member:
ua-cam.com/channels/mMbwA-s3GZDKVzGZ-kPwaQ.htmljoin
Cheers 🥂 to Mohammad for your exceptional creation and contributions to the PIXAROMA community!
Yes, it is very helpful ☺️ thanks
Learning from your tutorial videos has been such a joy and incredibly fulfilling. I’m really thankful to you!
You are so welcome🙂
Another great tutorial!! Thank you! I didn't realize how much the iTools nodes could do! Shout out to Mohammad for the great Node suite! I think I will go kick a tip over his way as well!! 😀
☺️ thanks, yeah he has done a good job with the nodes
Thanks a lot for the work. Best tutorial series ever since I started with ComfyUI. You're a lifesaver!
Thank you so much for your support ☺️ glad it helps
Thank you Ioan for this wonderful tutorial and Thanks Mohammad for the amazing and powerful iTools which I use daily. As usual, I learned several new tips and tricks from this episode 🙏🏼
Thank you so much ☺️ I try to include new tips and tricks in each episode 😁
Hello, I am new to image generation via AI. I discovered your channel yesterday and I watched all 15 videos! Although I am not very comfortable in English, I loved all your tutorials and explanations! Everything is clear, simple and easy to reproduce on your side! A huge thank you for all your work! I can't wait to see the rest!
Thank you ☺️
This episode is great! By the way, you can create multiple prompt files, one for SD1.5, SDXL, Flux and the node will read them just fine.
Yeah, it can be any kind of prompts, customized for your own workflow ☺️
Thank you for a very inspiring tutorial. This is looking to be my number 1 workflow.
Outstanding! Thank you for this video!!!
Very useful nodes. Thanks to Pixaroma and Makadi!
Very nice. Thank you for the tutorial
Your content is excellent. Thank you so much for the help. I learned how to use ComfyUI from scratch with you.
Thanks!
I haven't seen anyone doing this yet, but you don't have to convert to widget with the menu anymore. You can just drag an output onto the textbox and you will be presented with a dot to convert it to an input. I'm not sure if this is from a custom node, though. It works for text widgets like CLIP prompts, height/width, etc. It's a lovely quality-of-life feature.
I think it was introduced in the new interface; I keep forgetting it has been added, so I don't have to right-click anymore.
It would be cool if you ran a tutorial series on how to use this to make cover images for books. Maybe if you did a different genre of cover for each video.
Thanks for the idea, I will see what I can do
If memory serves wasn't there a node that allows you to inpaint the areas you wanted different prompts to have influence over?
I think I saw something like that, with areas defined by different colors or masks, but I didn't try it yet
dope
Very nice video!! But I've got an issue: every time I try to install ComfyUI-iTools it gets stuck on "import failed". I updated everything, restarted, refreshed, etc., all done. Any tips?
I saw on Discord that it is not a local install; I will see if the creator of the node can help
@@pixaroma much much appreciated
Do you recommend this method or the CSV method?
This one is easier because we don't need that extra concatenate node, at least for me, but any method will work. It is also easier to edit than CSV files, which get corrupted easily.
@@pixaroma thx
Do you think it is possible to make consistent scenery with consistent characters and environments with Flux? I would love to do it but I have no idea how to start, or what process or procedure I should follow.
You can train a LoRA with people, objects, or styles if you have different images from different angles, but it depends on what you want to create. If you just want any person and landscape but need a certain composition, you can use ControlNet for that.
I'm not sure if you're aware of this, but the styles only seem to work with extremely short prompts. If I write "a young blonde woman with short hair leaning against a brick wall wearing a black t-shirt with a white skull on the front, ripped jeans, and red sneakers.", the style doesn't work at all. Is there any way to fix this? Or is this limited to very simple prompts?
You must understand that the style is not a LoRA, it is just simple text, a prompt, and with any prompt the words at the beginning are more important than the ones at the end, and those art style prompts are at the end. Also, depending on the model, there is a limit on how many words or tokens it can support. If you can do long prompts you probably don't need styles anyway, since you can describe what you want. Or you can copy that style prompt and add more weight to it by putting it between brackets, 2-3 round brackets: ((more important word))
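As a rough illustration (the style name here is just an example), you could keep your long description and boost the style terms at the end, either with double brackets, e.g. ((vintage comic book illustration)), or with an explicit weight like (vintage comic book illustration:1.3), a syntax the standard ComfyUI text encode should also understand.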
Hi. Is it possible to teach ComfyUI to save images to separate folders like Forge does? So it can create new folders every day by itself?
You can put a folder name in the Save Image node where it says ComfyUI. Set the folder path: in the "filename_prefix" field, enter the desired folder path where you want to save the image. For example, if you want to save it in a folder called custom_folder, you would enter something like:
custom_folder/
You can include subfolders if needed:
custom_folder/subfolder/
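As a rough example (assuming the default Save Image node behavior, and "portrait" is just an illustrative name), a prefix like custom_folder/portrait would save the files inside the ComfyUI output directory as output/custom_folder/portrait_00001_.png, output/custom_folder/portrait_00002_.png and so on.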
@@pixaroma But it still won't create a new folder on its own for each day, right? Forge does this on its own: a new folder with the day number and month every new day.
@Fayrus_Fuma I think you need a custom node for that; I am sure there are all kinds of nodes, I just didn't need one. Even in Forge I removed the daily folder, since I don't keep the generated images, I just select what I need and put it in a separate folder.
@@pixaroma You can use the date syntax found in the docs. It works often, though sometimes it has bugged for me. I can't paste it here, but google "ComfyUI SaveFileFormatting" and you will see how to format the strings. Basically you can then get a 20241001_my_images folder, and in there you can have 20241001_my_img_001.png, for example.
I'll try to post this here (hopefully it doesn't get removed by YT). This puts generated images in a folder, and the images themselves will have a date+time prefix:
%date:yyyy-MM-dd%_img/%date:yyyyMMdd_hhmmss%
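As a rough illustration (assuming the %date% formatting works as documented), on 1 October 2024 that prefix would expand to a folder like 2024-10-01_img, with each image name starting with the date and time of generation, for example something like 20241001_093015_00001_.png.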
hello, why in itools prompt styler my negative text have to input value, thanks?
Check the Discord post with the screenshot and explain what exactly you are trying to do; I think I saw you posting there.
Is there a way to save each prompted generation to a separate named folder?
It depends on how you want to save, I'm not sure exactly. I know there is a save image node that has more options, maybe try that; it is called Image Save and comes with the WAS Node Suite custom node.
Can one do batch processing even if we have ControlNets, so the workflow executes simultaneously for multiple different input images?
You can use the batch number from the queue to run it multiple times
@pixaroma What if we want to run it simultaneously to save time, given one has enough VRAM?
Not sure, since I didn't need it; maybe there are some custom nodes that let you do that, but I am not aware of any
Is it possible to get two different person LoRAs in one image?
Probably with inpainting, but I didn't try it, like inpainting a face and using the LoRA you want for that face
You can use VS Code instead of Notepad++ and you will see mistakes a little more clearly. It can also auto-format whatever language you want, e.g. YAML.
Thanks ☺️
I think it might be more convenient if the iTools prompt loader automatically ignored empty lines in text files, so you don't have to remove them manually :)
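A minimal sketch of the kind of filtering meant here (hypothetical code, not the actual iTools loader):
# hypothetical sketch: read prompts from a text file, skipping blank lines
def load_prompts(path):
    with open(path, "r", encoding="utf-8") as f:
        return [line.strip() for line in f if line.strip()]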
Waiting for an update on that 😁
Is this the same way to use it with the Flux GGUF model?
Yes, just connect it to any workflow; styles are just text prompts. Flux just doesn't know as many art styles as SDXL.
Thanks, but I think the Styles Selector from Easy-Use is much simpler to use
This is easier to edit and doesn't need a text concatenate node :) so use what works best for you
iTools won't install on macOS. The dev is using backslashes for Windows; macOS/Linux use forward slashes. Too many paths to correct in the code. Sorry, Mac users. And for those who say ComfyUI doesn't run on macOS, my 38-core GPU with 96 GB of shared memory says otherwise. Looks like a nice node. Linux and macOS sit this one out.
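For reference, the usual portable fix (a generic sketch, not the actual iTools code; the file names are just examples) is to build paths with os.path.join or pathlib instead of hardcoding separators:
import os
from pathlib import Path

# hardcoded Windows-style separator breaks on macOS/Linux
bad = "styles\\basic.yaml"

# portable alternatives pick the right separator for the current OS
good_os = os.path.join("styles", "basic.yaml")
good_path = Path("styles") / "basic.yaml"
print(bad, good_os, good_path)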
I will let the creator know, thanks
I just fixed that now, there were not too many paths to correct :) I have to test the code before I can update it; I will let you know when it is live in another comment. Thanks for reporting this.
The update is now live, please try to install it now and tell me if there is another problem.