OMG! Thank you so much for providing this guide; I was struggling for the past few days trying to get the right workflow and base model to produce the right image. This workflow is a beast, and I finally don't have to experiment with hundreds of workflows to see which one works best. It produces realistic images and works well with my character LoRAs.
Your description and workflow work perfectly, thank you so much 🙂
Thanks for the multiple-LoRA node, it lives in my workflow now. CivitAI is your friend for Flux LoRAs.
Bro, for free content you are one of the best on YouTube. Top!
💯❤️
Well done, excellent help and suggestions for me as a newbie. I follow you with interest. Keep it up. I like GGUF because I can use it in both ComfyUI and Forge. Thanks.
Thanks. You helped me already🙏
THANK YOU!!!!
Great video, thanks.
💯🔥
Hi, I have trouble queuing your workflow because I get the error: "`newbyteorder` was removed from the ndarray class in NumPy 2.0. Use `arr.view(arr.dtype.newbyteorder(order))` instead." Do you have any idea why? Are you using Python 2 or 3?
I have the same issue, did you find a fix?
@mouliksatija3345 Has anyone fixed that?
Did you fix it?
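For anyone else hitting this NumPy error: it is not about Python 2 vs 3. NumPy 2.0 removed the old `ndarray.newbyteorder()` method, which older versions of the gguf reader used by the GGUF nodes still call. Here is a minimal sketch of the API change (the array is just an illustration), plus the usual workaround:

```python
import numpy as np

arr = np.arange(4, dtype=np.uint32)

# NumPy < 2.0 (method removed in 2.0):
#   swapped = arr.newbyteorder()
# NumPy >= 2.0 equivalent, as the error message itself suggests:
swapped = arr.view(arr.dtype.newbyteorder())
print(swapped.dtype)  # byte-swapped view of the same data
```

If updating the ComfyUI-GGUF custom node (and with it the gguf package) doesn't help, downgrading NumPy inside the ComfyUI venv with `pip install "numpy<2"` is a common workaround.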
Amazing and helpful video. I'd love to understand what all those files you showed at the beginning are used for, and what part each plays in the whole process. Getting it to work is one thing, but I'm trying to understand what those files actually do :D
Congratulations on the video. I keep trying to figure out where to save the text encoder files in ComfyUI.
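On the two questions above (what the files are and where they go): a typical Flux GGUF setup puts the GGUF UNet in models/unet, the two text encoders (clip_l plus the large T5-XXL) in models/clip, and the Flux VAE in models/vae. Here is a small sketch that checks those locations; the install path and exact file names are assumptions, so adjust them to whatever you actually downloaded:

```python
from pathlib import Path

# Hypothetical ComfyUI install location; point this at your own folder.
comfy = Path("ComfyUI")

# Typical roles and locations for the files shown at the start of the video
# (file names are examples, not necessarily the exact ones you have):
expected = {
    "models/unet/flux1-dev-Q4_K_S.gguf": "Flux GGUF UNet - the main diffusion model",
    "models/clip/clip_l.safetensors": "CLIP-L text encoder (~246 MB)",
    "models/clip/t5xxl_fp8_e4m3fn.safetensors": "T5-XXL text encoder (several GB)",
    "models/vae/ae.safetensors": "Flux VAE - decodes latents into the final image",
}

for rel, role in expected.items():
    status = "found" if (comfy / rel).exists() else "missing"
    print(f"{status:7s} {rel}  ->  {role}")
```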
Could you please show how to add pose control to this workflow?
Thanks a lot for your help installing the ComfyUI Manager to solve my problem with missing node types.
I feel like I'm doing something wrong, but when I load this template into my instance of ComfyUI, the UNET Loader GGUF node errors out for me and nothing I do seems to fix it. Any suggestions on what I can try?
Can you mention what the error is?
@xclbrxtra I get the error:
"Warning: Missing Node Types
When loading the graph, the following node types were not found:
UnetLoaderGGUF
No selected item
Nodes that have failed to load will show as red on the graph."
Whenever I try installing/reinstalling this, I keep getting the same error with no way to fix it.
No matter which LoRA options I select or use, it makes no difference to the output image. Is there a reason why this might be? I have downloaded various LoRA models, switched from 'none' to a downloaded one, turned them on and off, and there is no difference to the output at all (using the same seed for comparison).
Hello, thank you very much for your work. For me the output is very different from my portrait input in "Load Image"; which parameters can I adjust to get a more similar face between input and output? FluxGuidance? Crop image? The denoise in the Basic Scheduler?
For me, with your workflow, it seems like the input image has ZERO influence on the OUTPUT image; maybe I just don't understand the purpose and possibilities of this workflow. I thought I would get an OUTPUT with the same face as the INPUT, in a different context given by the prompt, and that's not what happens at all (sorry for my English, I usually speak French).
Hi, make sure the switches are set to take the loaded image (not the empty latent and prompt), set the denoise to 0.2, and then start increasing it until you like the changes. If denoise is 1, the whole image is denoised, so the input image has no effect; 0 means no change to the input image. 0.3-0.4 should give the best results.
@xclbrxtra Thanks for your explanation of the switch.
Ah, ok. In concrete terms, does this mean I have to "unplug" (or delete) the pink wire between the EMPTY LATENT IMAGE node (in the SET PARAMETERS group) and input 2 of the SWITCH ANY node, so that the input image (top left) is taken into account and influences the final result?
For the denoise part, yes, I'll test it with 0.3-0.4.
I've just tried it; it doesn't work if I remove this link... I don't understand how to set the switch so that the INPUT image is taken into account. Sorry again.
Sorry, my mistake: the switch is the 'select' button!! Just set it to 1, 2 or 3! So stupid of me.
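To make the denoise advice in this thread concrete: in img2img, the denoise value roughly controls how much of the sampling schedule is re-run on top of the loaded image. This is only a conceptual sketch, not ComfyUI's actual scheduler code, and the function name is made up for illustration:

```python
def effective_img2img_steps(total_steps: int, denoise: float) -> int:
    """Rough intuition: how much of the schedule re-samples the input latent."""
    return max(0, min(total_steps, round(total_steps * denoise)))

# denoise = 1.0 -> everything is re-sampled, the input image has no effect
# denoise = 0.0 -> nothing is re-sampled, the input image comes back unchanged
# denoise = 0.3-0.4 -> keeps the face/composition while following the new prompt
for d in (1.0, 0.4, 0.3, 0.0):
    print(f"denoise {d}: ~{effective_img2img_steps(20, d)} of 20 steps re-sampled")
```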
Thank you for your content! When I load your workflow, it is missing some nodes... What can I do? Thanks.
You can open the ComfyUI Manager; there is an 'Install Missing Custom Nodes' option where you can install all the ones that are missing.
Thank you so much for this incredible video. I am new to AI image generation, but I've managed to run the workflow and tested different prompts to see if I could create a CONSISTENT AI character 🤔, but unfortunately I was unable to do so ☹; the face is really different every time I generate with a new prompt, even when I use a fixed seed. Can somebody give me some advice or ideas on how to achieve CONSISTENCY in the images, and especially in the faces of the generated characters? ⁉ Please, any advice or suggestion will help 😄
Thank you for the videos, they really helped me a lot. I had no idea how to implement Flux in ComfyUI. I have a question, if anyone could help me please: why do you use clip_l, which is only 246 MB, together with another encoder that is several GB? Why don't you use two heavy CLIPs, or only one? Thank you so much.
CLIP-L is good at understanding shorter, comma-separated keywords, while the other encoder (the T5) is good at understanding complex sentences. As they are trained differently, we use both. Also, if you want outputs containing text, there's a ViT-L variant with enhanced text handling, but it focuses more on generating great text and tends to mess up eyes and faces. It's all about usage.
Thanks for sharing. I'd like to know whether Flux GGUF supports ControlNet and IPAdapter. Could you do a workflow based on living room interiors? Right now I am using SDXL for creating different interior designs.
Actually, the Flux ControlNet and IPAdapter models are not stable; I couldn't make them give consistent results. It feels like every image needs different tuning. But I'll look into it 💯
city96 now has quantized GGUF text encoders too, supported by the same GGUF extension (new CLIP loader nodes).
[It seems that including the link makes the comment invisible.]
Yes, I've uploaded a video for an upscaler today and I have updated to the GGUF text encoders. The Q6_K is pretty good: it's smaller than fp8 but the quality is closer to fp16 🔥💯
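For anyone who wants to try those quantized text encoders: the same ComfyUI-GGUF extension adds GGUF versions of the CLIP loaders, so the workflow's DualCLIPLoader can be swapped for its GGUF counterpart. Below is a rough sketch of what that loader section might look like in API-format JSON (shown as a Python dict); the node class names follow the ComfyUI-GGUF extension as I understand it, and the file names are placeholders:

```python
# Sketch only: loader nodes from an API-format ComfyUI workflow.
loaders = {
    "1": {
        "class_type": "UnetLoaderGGUF",
        "inputs": {"unet_name": "flux1-dev-Q6_K.gguf"},  # quantized Flux UNet
    },
    "2": {
        "class_type": "DualCLIPLoaderGGUF",
        "inputs": {
            "clip_name1": "t5-v1_1-xxl-encoder-Q6_K.gguf",  # quantized T5-XXL (city96)
            "clip_name2": "clip_l.safetensors",             # regular CLIP-L still works
            "type": "flux",
        },
    },
}

for node in loaders.values():
    print(node["class_type"], node["inputs"])
```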
Good job! I don't understand what you use the image for? It seems the prompt is the only thing controlling the output.
Need a serious rig to be doing this locally 🙌…
Actually, you can try this out with just 6 GB of VRAM. This tutorial was made on a laptop with an RTX 4060 with 8 GB of VRAM. It takes around 1.5-2 minutes for an image with 1 LoRA, which is still not bad for a gaming laptop 💯🔥
😁🤗👋👋👋
HELP: I have ComfyUI installed through Stability Matrix. I copy-pasted all the files into their folders, then launched ComfyUI and dropped your workflow file onto the existing workflow area. It says that some modules are missing and they are marked in red. What am I doing wrong?
I got the same issue too, did you solve it?
getting this " Error occurred when executing UnetLoaderGGUF:
cannot mmap an empty file
File "C:\Users\jackp\Downloads\StabilityMatrix-win-x64\Data\Packages\ComfyUI\execution.py", line 317, in execute
output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
File "C:\Users\jackp\Downloads\StabilityMatrix-win-x64\Data\Packages\ComfyUI\execution.py", line 192, in get_output_data
return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
File "C:\Users\jackp\Downloads\StabilityMatrix-win-x64\Data\Packages\ComfyUI\execution.py", line 169, in _map_node_over_list
process_inputs(input_dict, i)
File "C:\Users\jackp\Downloads\StabilityMatrix-win-x64\Data\Packages\ComfyUI\execution.py", line 158, in process_inputs
results.append(getattr(obj, func)(**inputs))
File "C:\Users\jackp\Downloads\StabilityMatrix-win-x64\Data\Packages\ComfyUI\custom_nodes\ComfyUI-GGUF
odes.py", line 191, in load_unet
sd = gguf_sd_loader(unet_path)
File "C:\Users\jackp\Downloads\StabilityMatrix-win-x64\Data\Packages\ComfyUI\custom_nodes\ComfyUI-GGUF
odes.py", line 39, in gguf_sd_loader
reader = gguf.GGUFReader(path)
File "C:\Users\jackp\Downloads\StabilityMatrix-win-x64\Data\Packages\ComfyUI\venv\lib\site-packages\gguf\gguf_reader.py", line 90, in __init__
self.data = np.memmap(path, mode = mode)
File "C:\Users\jackp\Downloads\StabilityMatrix-win-x64\Data\Packages\ComfyUI\venv\lib\site-packages
umpy\core\memmap.py", line 268, in __new__
mm = mmap.mmap(fid.fileno(), bytes, access=acc, offset=start)
What custom nodes are you using? The workflow you shared won't work without them
Just go to the ComfyUI Manager and click on Install Missing Custom Nodes. You will get a list of all missing nodes and can install them directly from ComfyUI. Install them all and restart 💯
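On the 'cannot mmap an empty file' traceback above: that error generally means the GGUF file on disk is 0 bytes, i.e. the download was interrupted or never finished, so re-downloading the model and checking its size is the first thing to try. A quick hedged check; the path is only an example pieced together from the traceback, so adjust it to your own setup:

```python
import os

# Example path based on the traceback's install location (adjust as needed).
path = r"C:\Users\jackp\Downloads\StabilityMatrix-win-x64\Data\Packages\ComfyUI\models\unet\flux1-dev-Q4_K_S.gguf"

size = os.path.getsize(path) if os.path.exists(path) else 0
# A Q4_K_S quant of flux1-dev should be several GiB; 0.00 means a broken download.
print(f"{path}\n{size / 1024**3:.2f} GiB")
```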
Bro, please make videos on image-to-image using Flux and the Boreal LoRA.
Interesting, but Load Image doesn't do anything regardless of the switch.
I tested by fixing the seed and varying the switch between 1 and 2, and the result is always the same, while I expected a generated image inspired by the input image. Any idea?
Have other people managed to get a generated image that is inspired by the input?
When you are using img2img, you'll need to reduce the denoise in the Basic Scheduler. It is set to 1 by default; try adjusting it to 0.5-0.6.
Complete denoise means that even if you use a loaded image, it gets completely denoised.
@xclbrxtra Thanks, I missed that point in the explanations. That's perfect.
How do I find LoRA files?
Works fine, but the generated images are low resolution (1344x786) and look pixelated. How can I improve the image quality, at least to HD (1080p)?
How many minutes does it take you to generate one image, and what kind of graphics card are you using?
This tutorial was made on a gaming laptop with an RTX 4060 (8 GB VRAM). It takes around 2 minutes for a single image with 1 LoRA; without any LoRA it's around 1 minute 40 seconds. (You can reduce this time if you choose a smaller GGUF quant of the Flux and T5-XXL models.)
Can we use a photo as a "model" so that the AI knows what to take inspiration from?
You can try img2img with a high denoise to achieve that, or you can check out my Flux ControlNet video to use a depth map to guide the generation.
Bro, next time you create a video, please show each and every step or link the previous video where you did it; otherwise it is impossible to follow.
Hi, thanks for the great video. Any advice on speeding up image generation, apart from the obvious things such as smaller images and one image at a time, etc.?
I have 8 GB of VRAM and each image takes around 10-15 minutes, which is a little bit annoying.
Thanks again!
Which model are you using? The Q4_K_S? I am using an RTX 4060 with 8 GB VRAM and it takes around 1 minute 50 seconds without LoRAs and 2 minutes 30 seconds with 2-3 LoRAs. 10-15 minutes for a single image seems wrong 🤔
@xclbrxtra Thanks for the fast response. Yes, I'm using the Q4_K_S with 1 LoRA. In all fairness, I got this PC about 6 years ago, so things might be outdated.
👋👋👋
I don't know which folder to copy the Flux 1 Q8 file to??? Help me.
In the ComfyUI folder, go to models and then the unet folder. Paste the Flux GGUF there.
I want to learn it
I'm just getting a bunch of random pixels - running it on an M3 Air with 16 GB RAM.
You shouldn't use a Mac for Stable Diffusion, as it needs serious computing power and Macs don't have a dedicated GPU.
Can you please help me with this? When I try to queue the prompt it says this:
Prompt outputs failed validation
DualCLIPLoader:
- Required input is missing: clip_name1
- Required input is missing: clip_name2
UnetLoaderGGUF:
- Value not in list: unet_name: 'flux1-dev-Q4_K_S.gguf' not in []
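In case nobody replied to the validation error above: "Value not in list: unet_name: ... not in []" means ComfyUI is scanning an empty folder, i.e. the model files are not where it looks for them (or the dropdowns were never re-selected after copying them in). A small sketch that prints what ComfyUI would see; the install path is a placeholder:

```python
import os

# Placeholder install path; point this at your own ComfyUI folder.
comfy = "ComfyUI"

for sub in ("models/unet", "models/clip"):
    folder = os.path.join(comfy, *sub.split("/"))
    files = os.listdir(folder) if os.path.isdir(folder) else []
    print(f"{sub}: {files if files else 'EMPTY (loaders will fail validation)'}")
```

If the files are there, refreshing ComfyUI (or restarting it) and re-selecting them in the UnetLoaderGGUF and DualCLIPLoader dropdowns usually clears the error.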
It's not free unless it's the Schnell version of Flux.
We need to stop using versions of Flux that aren't actually free.
flux1-dev-Q8_0.gguf vs flux1-dev-Q6_K.gguf: which is best?
If you have a good GPU and enough VRAM, then Q8.
@xclbrxtra I have an RTX 3090.