#### Links from my Video ####
My Workflows + Images drive.google.com/file/d/1kM51XBuVYfq0RA_o5AtpnMXr6bdfEGNT/view?usp=sharing
huggingface.co/jasperai/Flux.1-dev-Controlnet-Upscaler/tree/main
huggingface.co/city96/FLUX.1-dev-gguf/tree/main
huggingface.co/XLabs-AI/flux-lora-collection/blob/main/realism_lora_comfy_converted.safetensors
civitai.com/models/689192?modelVersionId=805898
huggingface.co/black-forest-labs/FLUX.1-schnell/blob/main/ae.safetensors
👋
Standing ovation for that poem. 👏
Been testing out different upscale setups with flux for the past several days, and this setup is the best so far
Loved the poem and laugh brother
Made a switch to comfyui a year back. No regrets.
Gotta luv the small wicked laugh
For me, I've found that just using a simple Latent Upscale in ComfyUI with Flux works well. Feed your image into a VAE Encode, then into a Latent Upscale set to the new dimensions, then feed that latent to your sampler setup.
You mean from a normal low-res AI image, or from a low-res, high-compression image like I show in this video? I have reduced them to 200px with 40% compression.
I mean from a normal Flux result. For example, if I make a batch of 4 images at 1344x768 and find one I like, I'll upscale it to 1.5x, 2x, or 3x. I haven't tried 4x yet. If you set the sampler denoise to 0.55, the results stay close to the original image, no ControlNet needed. Even at a higher denoise like 0.8, it's still very similar to the original image.
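The idea in that comment can be sketched in plain Python (a minimal illustration, not ComfyUI's actual API; the 4-channel latent shape, the nearest-neighbor resize, and the step math are assumptions):

```python
import numpy as np

def latent_upscale(latent: np.ndarray, factor: float) -> np.ndarray:
    """Nearest-neighbor resize of a (C, H, W) latent.

    ComfyUI's Latent Upscale node offers better filters; nearest
    keeps this sketch dependency-free.
    """
    c, h, w = latent.shape
    new_h, new_w = int(h * factor), int(w * factor)
    rows = np.arange(new_h) * h // new_h  # map each output row to a source row
    cols = np.arange(new_w) * w // new_w
    return latent[:, rows][:, :, cols]

def effective_steps(total_steps: int, denoise: float) -> int:
    """With denoise < 1.0 the sampler only runs the tail of the schedule,
    which is why the upscaled result stays close to the original."""
    return round(total_steps * denoise)

# A 1344x768 image encodes to a latent at 1/8 scale (168x96 here)
latent = np.zeros((4, 96, 168))
up = latent_upscale(latent, 2.0)
print(up.shape)                   # (4, 192, 336)
print(effective_steps(20, 0.55))  # 11
```

So at denoise 0.55 with 20 scheduled steps, only about 11 denoising steps actually run on the enlarged latent, refining detail without repainting the composition.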
HEY, thank you thank you thank you for this comment. I had completely written off latent upscaling, but it's true that with FLUX it works incredibly well, adding so much detail. Sending hugs.
Needs way too many hardware resources. For example, I use an RTX 3090 (24GB VRAM) + 128GB DDR4 RAM. After loading everything in the workflow into VRAM and memory, I get a CUDA out-of-memory error. I have to use tiled decode, a tiled sampler, a smaller GGUF (Q5) model, etc. Although mine is a relatively strong configuration, it sits right at the limit with workflows like this. So I will stick with SUPIR or Ultimate SD Upscale with SDXL models. Today's local configuration requirements easily exceed 24GB VRAM for sure. Thank you for the effort and the video, greatly appreciated.
Olivio is running this on a 4080 16GB GPU FYI. So if he can do it, you should be able to.
@@tripleheadedmonkey420 I'm not saying it doesn't run. I tried to upscale from 1024 to 4096, and it's really painful... still too much hardware needed. That's what I mean.
Split the image into multiple parts and then process them.
@@satyamgaba For a really low-resolution image, this workflow and approach is amazing. Think of a 352x288 image from years past, taken with your grandpa's cell phone: iterative upscaling with Flux works well there. But for upscaling from 1024 to 4096 or more (what I need) it is really painful (mentioned @01:50). Tiled encode and decode work, but still take too much time compared with Ultimate SD Upscale workflows using SDXL or Lightning (or Turbo) models. So for that purpose I will not upscale with Flux, because I don't have images at 1024 or lower. And for comparison, my configuration with a 24GB GPU and 128GB RAM (we can call it high-end for AI work) is affordable for a local/personal setup without renting a server (RunPod, a dedicated server, Google Colab, etc.). So I will stick with SDXL, Ultimate SD Upscale, and SUPIR for my upscale workflows. But in theory, it is nice to know that Flux can upscale.
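The tiled encode/decode mentioned above boils down to splitting the large canvas into overlapping tiles so each one fits in VRAM, then blending the seams. A minimal sketch of just the tile layout (the tile size and overlap values are assumptions; real tiled samplers also feather the overlapping regions):

```python
def tile_coords(width, height, tile=1024, overlap=128):
    """Return (x, y, w, h) boxes covering the image with overlapping tiles."""
    step = tile - overlap
    boxes = []
    for y in range(0, max(height - overlap, 1), step):
        for x in range(0, max(width - overlap, 1), step):
            w = min(tile, width - x)  # clamp edge tiles to the canvas
            h = min(tile, height - y)
            boxes.append((x, y, w, h))
    return boxes

# Upscaling 1024 -> 4096 means sampling a 4096x4096 canvas tile by tile:
boxes = tile_coords(4096, 4096)
print(len(boxes))  # 25 tiles, each processed independently
```

This is also why tiling trades VRAM for time: peak memory is bounded by one tile, but a 4x upscale here costs 25 sampler passes instead of one.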
I tried several options for enlarging my photo. Both through an IP-Adapter and through an upscaler, I could not get an actual photo. In every case the output is an image, but not a photo. I have not been able to find a solution yet.
Today I think I will try to use this option from this video!
I watched halfway and realized that I have already tried this method. Unfortunately, it also did not work for my photo. It creates too much of a difference between the input photo and the output image.
I had to update ComfyUI along with all the nodes for the workflow to work; I recommend this to anyone getting the error.
thank you dude! that fixed my error!
I've tried the controlnet upscale and many variations of the settings. I didn't like the results compared to Ultimate SD Upscale.
Question: do you (or anyone else) notice lower-quality results using the GGUF-converted models?
I wonder if that could substitute for GPEN et al.
Hello Olivio! I'm getting to know and learning about ComfyUI. Where should I put the respective files that you left for download? In which folders should I put them? Can someone guide me?
Does it also work with video?
I keep getting this error: "Error occurred when executing ControlNetLoader:
MMDiT.__init__() got an unexpected keyword argument 'image_model'". I updated everything and still no luck. Anyone have any idea what's wrong?
Same!
same
I have an error when I use both of your templates:
Warning: Missing Node Types
When loading the graph, the following node types were not found:
UnetLoaderGGUF
No selected item
Nodes that have failed to load will show as red on the graph.
It will not even handle a 2x Lanczos upscale of a 1024 image on my 3060 12GB.
I can't run it even on a GeForce RTX 4070 Ti SUPER 16GB.
Can you show how to run fine-tuned Flux Dev safetensors in ComfyUI?
It's not working on a 4090
You mentioned the better quality version gave you some troubles but I didn't hear you mention what graphics card you're using.
He has a 3080.
So, why is it weaker than the online demo?
Yes, but this is a refiner, not truly an upscaler, because the original subject changes.
Nearly 12mins just for an upscaling 😅 Hello Topaz my old friend.
Thank you so much for all your informative videos. They have helped us a lot.
I have a special request.
Could you please create a workflow that can
(1) generate an image of a person using a Flux model,
(2) then send that image on to correct all deformities such as bad hands and bad eyes,
(3) after this, send the image for face enhancement,
(4) then add more detail to the skin and hair to make it a realistic, more natural human being, not a typical AI-generated image,
(5) and finally, upscale the processed image.
All of the above would be done in a single workflow. We could also do batch processing in it.
Also, we could add functionality for providing multiple images of a single character, including the whole body, to create a consistent character.
And this would be done without a LoRA, just using multiple images rather than a single image.
Thank you.
You can have 12GB of VRAM, but you must have 64GB+ of RAM to actually run the Q8 model. It will run slower, but the important thing is that it won't crash.
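A rough back-of-the-envelope check shows why the Q8 weights alone already overflow a 12GB card and spill into system RAM (assuming roughly 12 billion parameters for FLUX.1-dev and typical GGUF bits-per-weight figures; activations, the text encoders, and the VAE all come on top):

```python
def model_size_gb(n_params: float, bits_per_param: float) -> float:
    """Approximate weight size in GB for a given quantization level."""
    return n_params * bits_per_param / 8 / 1e9

FLUX_PARAMS = 12e9  # ~12B parameters (approximate)

# Q8_0 stores 8 bits per weight plus a per-block scale (~8.5 bits effective);
# the K-quant figures below are likewise approximate effective rates.
for name, bits in [("FP16", 16), ("Q8_0", 8.5), ("Q5_K", 5.5), ("Q4_K", 4.5)]:
    print(f"{name}: ~{model_size_gb(FLUX_PARAMS, bits):.1f} GB")
```

Q8 lands around ~12.8 GB of weights, just over a 12GB card's capacity, which is why a large RAM pool for offloading keeps it running instead of crashing.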
I have an RTX 3060 12GB and 32GB RAM, running Flux Dev FP16: no hangs, works great.
MISSING NODE TYPES: UnetLoaderGGUF - This node cannot be found in the manager repository and there is also no link to it in the video description. So where to get it? 😐
did you go to "manager" -> install missing nodes?
@@OlivioSarikas Sure, "install missing nodes" shows an empty list only. Also updated ComfyUI to the latest version. Is there a GitHub repository, so I could try to install it manually?
@@MikevomMars If the list is empty, you might already have it, but it's not loading. In the ComfyUI folder there is an update folder. I think you need to run update_comfyui_and_python_dependencies.bat; maybe that solves it.
🔥✌️
ComfyUI looks so complex, so far I've been sticking with just Fooocus...
I love new AI tech, but recently with all the Flux-related stuff I cry every time I want to do something... it's always either long to very long, or it crashes.
Sadly crying with my 3080 10GB in a corner haha 😅😢
As for Comfy, it's not that hard to wrap your head around once you get a bit into it; the most difficult thing is knowing which node to use for what you want, or keeping up with new and updated nodes.
How do you do it? I reproduce all this without problems on my RTX 3060. Yes, sometimes it takes longer than I would like and I have to edit the configuration myself, but it does not cause any special difficulties. Initially I used Flux Dev, then switched to the FP8 version. You will not particularly notice the difference in quality, and if you run the images through Ultimate Upscaler, then not at all.
Thanks for all your efforts, however with my 4060 I have zero luck getting those workflows to function.
@@sparks1943 I was thinking of getting that for AI workflows like this! And now you're saying it's not enough 😵💫.
Do I really have to sell a kidney after all...
the best rhyme EVER (from You)!
I think I'll wait for the Forge release. 😛
Comfy UI looks powerful but very involved. I think Automatic 1111 spoiled me.
I would still invite you to give it a shot. You learn a lot about AI image generation when using Comfy. A1111 is more like using a microwave instead of learning how to cook.
@@OlivioSarikas I have no doubt it's the most powerful option, but all this years of constantly having to learn new technologies have made me lazy. 😅
💫 Friends of digital and analog noodles like modular synths
hell yeah :)
What's wrong with Comfyui???
It's slow for FLUX... the fastest is Forge.
People get scared when they see real software
You should run that beginning rhyme thru suno and make a rap song about comfy ui out of it 😂
It works, but 28 steps with Flux is very long, even with 24GB VRAM and 64GB of RAM. I much prefer SUPIR.
😢 not A1111????
Forge people started working on ControlNet for FLUX yesterday and will deliver it by 7 October.
Comfy is the future
As far as I can see, this is only useful for upscaling very low-res pictures. What I and many other people are really looking for is a workflow for upscaling Flux output to, say, 4K without resorting to SDXL. The so-called Flux Upscale ControlNet is not a solution for that, because it does not do any tiling and consequently requires too much VRAM to be practical, even on a 4090.
Out of memory here, in a RTX 4090
Hip Hop Rhymes 😍
What if I want to reproduce and restore an old photo? Like, not too much AI; the people in the photo should still be recognizable as themselves and still look photorealistic. THAT is when I will be impressed.
IMO, ComfyUI is more of a reference development UI. That is, developers develop and test in ComfyUI and leave making things work in other UIs for those UIs to sort out.
There is nothing wrong with installing and using more than one UI.
ComfyUI is used to build complex workflows that no other UI can do. Even many AI startups run ComfyUI on their servers, and it is the preferred tool of professionals creating contract work, ads, entertainment visuals, and more.
Sooo slow! RTX 3080 12GB, not usable: at scale 1 (not 5!) it already takes 6 to 7 minutes.
I hate ComfyUI. I wish I could just put everything in there and have it work.
Rendered out after 20 minutes with an image size of 1024x1600, scale by 2.0 and 20 steps, on a 12GB graphics card and 32GB of RAM. This can be produced in seconds on a free online upscale app. 🤔
And it's a small L in the clip_l.safetensors name; it is also available as GGUF.