I know that some people aren't happy with Flux's lack of control, but I appreciate you going into detail with this. I also still use SDXL and others because of the options. I use Flux like a refiner.
Thank you! I will probably do some SDXL videos too.
Fantastic video!! Super clear!! Thank you for this!
Thanks for watching!
Great tutorial. Very clear and concise. Many thanks for explaining every step and not going a million miles an hour. Just subscribed.
Thanks for the sub! Glad it was helpful!
Still use SDXL and SD1.5... just getting into FLUX thanks to videos like yours, so keep up the excellent work!!
Thank you!
As always, great videos. Nobody explains and shows how things work as well as you do.
Thank you!
I'd love the SDXL stuff as well. Your channel is so helpful, thanks for everything!
Thank you!
As always the best tutorials, thanks a lot very concise and at the same time great explanation without getting too technical. Thanks a lot
Glad you like them!
Amazing work! Short to the point and informative!
Glad it was helpful!
Thank you very much for your work. Good luck and good mood!
Thank you! You too!
Thank you for your clear English!💛💛
Thank you!
This Jasper Upscaler changes the face of my subject too much. If you don't care about that, then it's a good upscaler. I care, so I won't use it on humans. I think on animals it would be fine. Thanks for a good workflow.
Good point! Thanks for sharing.
An SDXL video should be interesting.
Thanks! Will see what I can do.
Thank you for the detailed explanation. Any idea why I get this error when I hit Queue? Error occurred when executing ControlNetLoader: MMDiT.__init__() got an unexpected keyword argument 'image_model'
Are you sure you are using the Comfy Core version of the Advanced ControlNet node? For now, only the default nodes will work. There is an issue here that may be of help: bit.ly/3Ums2If
@@CodeCraftersCorner Thank you, will check it out.
macOS M1 Max (32GB RAM) freezes when loading the UNet model. Is there a way to use a GGUF model instead?
Sorry, I do not own this system to test on. Can you try changing the weight_dtype in the Load Diffusion Model node to default and see if that helps?
I'm getting an error in the DualCLIPLoader: required input is missing. I don't have either of your clip_name1 and clip_name2 models. How do I get them, and in what folder should I put them? Thanks in advance.
Hello, I made a video on how to get all the models for Flux here: ua-cam.com/video/HzjHvdH5bE8/v-deo.html. This should help you get all missing models.
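In short, the two text encoders go in the clip models folder. Here is a typical layout, assuming the commonly distributed Flux text encoder filenames (yours may be named slightly differently):

```
ComfyUI/
└── models/
    └── clip/
        ├── clip_l.safetensors
        └── t5xxl_fp8_e4m3fn.safetensors   (or t5xxl_fp16.safetensors)
```

After copying them in, refresh or restart ComfyUI so the files show up in the DualCLIPLoader dropdowns.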
Hi, why does my 'Apply ControlNet (OLD Advanced)' not have a 'vae' input?
Hello, I am using the latest ComfyUI version. Try updating yours and see if the input appears.
You mentioned in the video that this is not supposed to be used for 4K upscaling. Could you make a video on a good upscaling method to get up to 4K images?
Yes, this model was trained to upscale low-resolution (320px) images to higher resolutions. It's not meant for 4K upscaling; most likely you will run into an Out Of Memory error. I'll see if I can make a video on 4K upscaling.
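As a rough sketch of why 4K is a stretch, here is some back-of-the-envelope pixel math (illustrative only; actual VRAM use also depends on precision, attention implementation, and whether you tile):

```python
# Illustrative pixel counts only, not a VRAM estimate.
train_input = 320 * 320      # ~0.1 MP: the low-res inputs this upscaler was trained on
typical_out = 1024 * 1024    # ~1.0 MP: a comfortable output target
uhd_4k = 3840 * 2160         # ~8.3 MP: 4K UHD

print(f"4K vs 1 MP output:     {uhd_4k / typical_out:.1f}x more pixels")  # ~7.9x
print(f"4K vs training inputs: {uhd_4k / train_input:.1f}x more pixels")  # ~81.0x
```

Every extra pixel costs memory and compute during sampling, which is why a single-pass 4K output tends to end in an Out Of Memory error.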
Thank you for your work. I downloaded diffusion_pytorch_model.safetensors for ComfyUI. Where do I paste that file? There are many different directories inside the models directory.
In the models folder, go inside the controlnet folder and paste the safetensors file there. You can rename it and try again.
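For reference, a sketch of the destination (the renamed filename below is just an example; any recognizable name works):

```
ComfyUI/
└── models/
    └── controlnet/
        └── flux_upscaler_controlnet.safetensors   (renamed from diffusion_pytorch_model.safetensors)
```

Renaming is optional but helpful, since many models ship under the generic name diffusion_pytorch_model.safetensors and become hard to tell apart later.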
Nice! Yeah, this is a good alternative to SUPIR, which is also a great upscaler. I have yet to find an upscaler that can enlarge small text; most results/outputs are made-up gibberish! lol
Thanks for the tips!
👍
Thank you so much!
Hi, why is the output just a black square? The upscaler is in the diffusion_models folder, t5xxl and clip_l are in the clip folder, and the VAE is in the vae folder.
Hello, are you perhaps using a Mac? If so, change the weight_dtype in the "Load Diffusion Model" node to default. In case you are using Windows and still getting black images, try bypassing the ControlNet nodes and see if you are able to generate images with the Flux model alone.
@@CodeCraftersCorner I'm on Windows, 12GB VRAM. What do I do to bypass ControlNet? Disconnecting it does not help.
Can this workflow be modified to use GGUF models?
Hello. Yes, you can.
@@CodeCraftersCorner I tried it, but the results are way off, almost like it's doing a creative upscale or something.
Don't know which part has the error. When I press Queue Prompt, I get a Python error.
Hello, usually the error message should be in the terminal (CMD). It will tell you if there is anything missing.
@@CodeCraftersCorner python.exe has quit.
The CMD only shows `Using split attention in VAE`; there is no error code left.
Rendered out after 20 minutes with an image size of 1024x1600, scale by 2.0, and 20 steps, on a 12GB graphics card with 32GB of RAM. This can be produced in seconds on a free online upscale app. 🤔
Thanks for sharing your findings. For me, it takes a little less than a minute (~40 seconds after loading the model) to go from 320x320 to 1024x1024. However, when I tried going from 1280x720 to 1920x1080, it took 22 minutes to complete.
When did Tuvok come to Earth and start doing videos about AI imagery?
Lol! First time someone has made that reference to me. Only OGs will know about this!
Tuvok? That's a stretch...
This is the worst upscaler.
Sorry to hear that!