Looks like a great workflow, thanks for sharing. Unfortunately, when I run it with the latest version of ComfyUI, the "Tile Height" (Math Expression) nodes show red. The error returned is:
- Return type mismatch between linked nodes: a, FLOAT != INT,FLOAT,IMAGE,LATENT
- Return type mismatch between linked nodes: b, INT != INT,FLOAT,IMAGE,LATENT
Will try to figure out why, if anybody has any ideas please let me know.
Ok, I fixed it by replacing both nodes with "Simple Math" ones and reconnecting them.
@@puyobock had the same issue. Did what you did, but now the "simple math" nodes are giving the same error! Any ideas?
@@Alehantro Are you running comfyUI on a conda environment? Try to see if there're some errors on the command window and post them here.
@@puyobock i'm running it through Pinokio, but yeah, it's a conda environment. These are the errors:
Failed to validate prompt for output 117:
* SimpleMath+ 130:
- Return type mismatch between linked nodes: a, FLOAT != INT,FLOAT
- Return type mismatch between linked nodes: b, INT != INT,FLOAT
* SimpleMath+ 129:
- Return type mismatch between linked nodes: a, FLOAT != INT,FLOAT
- Return type mismatch between linked nodes: b, INT != INT,FLOAT
Output will be ignored
I have pinned this comment so anyone with this issue can solve it. Thanks for the fix 💯🔥
*Ugh! Finally, it worked on the third try! The first two times, some errors popped up halfway through, until I meticulously double-checked if all the models were installed correctly. These setups are so finicky! At last, what I had been searching for so long! Thanks!*
Best Flux upscale workflow that I have tried to date. Well done. A well-deserved subscription and liked. Looking forward to more great workflows from you.
Thank you for your service to the community! :)
Really appreciate your approach to explaining the workflow and the steps needed to get up and running.
I personally tend to get frustrated with long introductions and too much waffling between steps and am always skipping through the video to get to the info I need,
but the pace of your delivery was spot on with this for me :)
Liked and subscribed! Looking forward to future content!
Excellent video, thank you very much! It's a delight to have people from around the world sharing knowledge. Amazingly powerful, this rapid progress...
You're the best! And this is the best upscaler I've tried! Keep making these amazing videos!
Works very well! Liked and subbed! Thanks!
this is the best workflow so far!!! It works perfectly!
To save more VRAM, use the Force/Set CLIP Device node to force the text encoders to load in RAM instead of VRAM.
Agreed! This is by far the best! Thanks so so much!
It'd be awesome if you could show what steps you'd take after generating the image to refine it further. For instance, say you don't like the necklace part, it looks a bit weird: what would you do to fix that? What steps would you take? Great video, thanks!
I think it needs a way to add to the prompt. For example, I have a LoRA of my face and have no way of adding the trigger token I used.
Yes, I'll be adding that to the workflow too. For now, you can add any text concat node to append your token to the generated prompt so your LoRA can work 👍
@@xclbrxtra yeah I have no idea how to do that, I'll just wait for the update! thanks
Thanks for this, works great!
This is a spectacular video, thanks so much!
Interesting workflow and just running it now, although I'm using the original Flux Dev model so swapped out the GGUF nodes. In my upscaling tests I've found McBoaty Upscaler and Refiner nodes from the Mara Scott custom nodes to perform better than UltimateSD so will try swapping those tomorrow and see how it compares.
Bro that’s dope af thanks
Thank you for this. A great tutorial that gave me excellent results.
Very happy to see another Indian content creator in GenAI
This is an international platform. No need to mention that. You people think you guys own the world but you all are far from that
@@geekyprogrammer4831 you need a psychiatrist
@@geekyprogrammer4831 just say you're racist/xenophobic "you people" get a life
It's a really good "creative" upscaler, no problem if you're looking for a variation of the original image. But looking at the example, it's also quite destructive to skin details, as seen with the jewelry. Maybe test with a lower denoise; please do more testing to see if it can be fixed.
Yes, this is mainly for low-res AI images. When there are missing fingers or broken artifacts, this can help repair them according to the context. Lower denoising sharpens the edges and raises the resolution, but damaged parts of the image still mostly remain damaged. It really depends on the use case.
Thank you for this great workflow :) I have one issue. I wanted to scale up a 520x680 image by 2, but whatever those Math Expression nodes were doing, they weren't mathing correctly: it said the image size was 889x1180 and upscaled it to 1784x2360. I haven't touched the nodes at all other than bypassing the two LoRA nodes and, obviously, setting "upscale by" to 2. Any idea what's wrong here?
Very good tutorial and workflow. Subbed. I use a 4070 GPU with 12 GB VRAM. Which models would give the best results in terms of quality and performance with this card? An answer would be great. Thanks.
I would suggest going with the Flux dev Q8 model. I have seen that the Q8 results are almost the same as fp16 (the original). The text encoder is fine at Q6. This way you get both speed and quality.
@@xclbrxtra Thank you very much for the quick reply and help. I will test it accordingly. Thanks again.
excellent again!!!!!
Thank you! I used it as a basis to make my own variant and it works very well for me, even if parameters have to be tweaked depending on the source image for best results. May I ask why the tile width and height are set as a*b/2 + 32 (image width or height * upscale / 2 + 32)?
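For anyone else wondering about that expression, here is a minimal sketch of the arithmetic (plain Python; this is my own reading of the workflow, not a confirmed explanation from the author): `a` is the image width or height, `b` is the upscale factor, and the +32 looks like an overlap margin so the roughly 2x2 grid of tiles can blend at the seams.

```python
def tile_size(src_dim: int, upscale: float) -> int:
    """Tile dimension from the workflow's a*b/2 + 32 expression.

    src_dim: source image width or height (the `a` input)
    upscale: upscale factor (the `b` input)
    """
    # Half the upscaled dimension, plus a 32 px margin, presumably so
    # the 2x2 grid of tiles overlaps slightly and seams can be blended.
    return int(src_dim * upscale / 2 + 32)

# Example: a 1024 px dimension upscaled 2x gives 1056 px tiles
print(tile_size(1024, 2))  # -> 1056
```

So each tile is just over half the final image in each direction, which is why the tiled sampler ends up with four overlapping tiles per image.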
Can you do a similar workflow as this but using SUPIR?
Error occurred when executing DualCLIPLoaderGGUF:
module 'comfy.sd' has no attribute 'load_text_encoder_state_dicts'
Same here. I've updated everything twice and tried to recreate the nodes (fix nodes), same error.
@@motherindiafathersurf1568 In frustration I've deleted everything I added. If it's not stable enough to work for all of us, it's a waste of my time, so I gave up on this workflow.
Thanks for the video. Could you keep making videos, especially for low-resource computers? ❤
What setting do you have turned on, or what custom node are you using, to show the seconds on every node?
How did you make a better photo of Ana de Armas? Is it a LoRA?
Where did you learn all these ComfyUI things???
Finally! I was looking for exactly this. Thanks. Do you know why the final picture looks washed out compared to the input?
Very well described. Thank you very much.
Hi! Great video!
Is there any way to fix the banding issues of the upscaled image?
thanks for using your real voice also btw
I keep getting this error: Florence2Run
tuple index out of range. What should I do?
Hmm, unfortunately I only get a black image as output. Not using any LoRAs. Any tips?
This is some good upscaling!
I just had to disable the rgthree ComfyUI node; the viewport was unusable, my mouse movement zoomed everything out.
You're a genius. ❤❤
Great work, thanks.
How could I use the workflow with an empty image loader?
Thank you~ I'll subscribe right away.
Very very good!
I'm using it, but it changes the original faces too much. Which value do I need to lower to preserve the original faces: denoise and seam_fix_denoise, or only the first one?
Mainly denoise. If you see that lines are being generated, then change the seam_fix.
@@xclbrxtra Ok, thx. Do you think this method is better (in quality) than the *SUPIR* one?
@@xclbrxtra Thx. There is something strange with the image size: if I upload a 1920x1080 image, the *"Get Image Size & Count"* node doesn't give the correct size values, it just says 1365x768. Why is that?
Error occurred when executing ProPostFilmGrain:
'int' object is not subscriptable
If you are having problems with the node, you can right-click and bypass it. Film grain is something you can apply in any photo editing app, Photoshop or other free software as well 👍
Go to your ComfyUI Manager -> Install Missing Custom Nodes -> install ComfyUI-ProPost. It should work after that.
Why GGUF? I can't believe it, most of the interesting workflows I see use GGUF. I use the original dev model and it doesn't work. Any advice? Thanks.
Thanks. Does it make sense to use the realistic LoRA instead?
Hi! Check the link to the workflow; it won't let me download it, I get an error.
Please help me. I've installed it as you said, but there is no Manager button in the UI. How do I fix it?
Git clone the ComfyUI Manager repo (copy the ComfyUI Manager GitHub link, open cmd in the custom_nodes folder, and type git clone *link*). Restart and you will get the Manager ☺️
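Spelled out as concrete commands, in case anyone gets stuck (this assumes the widely used ltdrdata ComfyUI-Manager repository and a default install layout; adjust the path to wherever your ComfyUI lives):

```shell
# From your ComfyUI install folder:
cd custom_nodes
git clone https://github.com/ltdrdata/ComfyUI-Manager.git
# Restart ComfyUI; the Manager button should now appear in the menu.
```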
@@xclbrxtra Thank you. Do you know how to upscale all the files in a folder, and how to make Comfy save images to a folder I choose? Sorry for my strange English.
The gun looks too good, how? Some other LoRA addon?
Great video, bravo! The simple workflow looks to work great. It would be even better with ControlNet Union (tile) included, have you tried it? I cannot make it work with UltimateSDUpscale. If anyone has done it, please comment.
please help, on my Mac I'm getting this error-
view size is not compatible with input tensor's size and stride (at least one dimension spans across two contiguous subspaces). Use .reshape(...) instead.
IMPRESSIVE RESULT
Does it work on the Shakker AI ComfyUI mode??? Please reply.
If you are able to install the custom nodes or access ComfyUI manager then it will work anywhere 💯
Superbly explained and it works perfectly. Thanks for that. (Sub & Like)
3060 12GB or 4060 8GB, what should I get?
For AI purposes alone, it's always better to go with higher VRAM. But if you are getting a general PC and are into gaming and other stuff as well, the RTX 4060 is better; 8GB will work too.
@@xclbrxtra
okay thanks mate
I have an M1 and still can't use Flux, so sad.
You can try something like RunPod to run it online. RunPod will cost you somewhere between $0.30-0.60 per hour, which is pretty affordable if you just wanna try it for fun 💯
I am getting an error, and I have already checked and installed accelerate?
DownloadAndLoadFlorence2Model
Using `low_cpu_mem_usage=True` or a `device_map` requires Accelerate: `pip install 'accelerate>=0.26.0'`
Just 6GB? That's me out with my 3070 and its 4GB, then.
Hi, actually you can do it with 4GB too. For the GGUF models, download the lower-Q versions. I am using Q4 Flux; you can download a smaller one. The same goes for the t5xxl GGUFs. It will slightly reduce the quality, but it will still be local and workable.
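As a rough back-of-envelope check on why the lower quant levels fit in less VRAM (assuming Flux dev's roughly 12B parameters; the bits-per-weight figures for GGUF quant types are approximations, not exact):

```python
PARAMS = 12e9  # Flux dev parameter count (approximate)

# Approximate bits per weight for common GGUF quantization levels
BITS_PER_WEIGHT = {"Q4_K_S": 4.5, "Q6_K": 6.6, "Q8_0": 8.5, "fp16": 16.0}

for quant, bpw in BITS_PER_WEIGHT.items():
    size_gb = PARAMS * bpw / 8 / 1e9  # bits -> bytes -> GB
    print(f"{quant}: ~{size_gb:.1f} GB")
```

This lands in the right ballpark for the published GGUF file sizes, and shows why Q4 can squeeze onto small cards while fp16 cannot.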
Damn, a 970 in 2014 had 4GB of VRAM, and that was a mid-range card. I think Nvidia deliberately limited the VRAM on a lot of their recent cards so people have to upgrade faster.
Why add that awful film grain that completely destroys the image quality? I removed it and it's 100 times better.
Bro how can i contact you
Is anything for Comfy sold? Hehehe. What kind of clickbait is this...
This upscaler can work with around 6GB VRAM, and in less time too (5-6 minutes, which is quite fast for upscaling locally). "Free" is mentioned here because most people can run it on their laptops, contrary to VRAM-hungry workflows which need ComfyUI to run on a cloud service like RunPod. You'll see that for the latest video on using negative prompts, even I had to use RunPod, as most gaming laptops aren't enough.
Hi, I managed to install everything I asked about in my previous comment. Can you let me know what folder flux1-dev-Q4_K_S.gguf should go in? It's not clear from your video.
In ComfyUI, there is a 'models' folder which contains 'unet', so the path will be ComfyUI/models/unet.
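As a quick sketch of that layout (shell commands; this assumes you're in the folder containing your ComfyUI install and the downloaded file, adjust paths as needed):

```shell
# Create the expected folder if it doesn't exist yet
mkdir -p ComfyUI/models/unet
# Move the GGUF model into it (skipped harmlessly if the file isn't here)
[ -f flux1-dev-Q4_K_S.gguf ] && mv flux1-dev-Q4_K_S.gguf ComfyUI/models/unet/ || true
```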
@@xclbrxtra Yeah, I figured it out, but after all that work I'm not able to upscale; I get some errors. I posted them in another comment.