ComfyUI Tutorial Series: Ep12 - How to Upscale Your AI Images
- Published 9 Feb 2025
- In Episode 12 of the ComfyUI tutorial series, you'll learn how to upscale AI-generated images without losing quality. Using ComfyUI, you can increase the size of your images while enhancing their sharpness and detail.
We'll cover the process of installing the necessary nodes, choosing models like Siax and Anime Sharp for different image styles, and creating workflows that deliver quick, high-quality results. You’ll see how to compare upscaled images and fine-tune settings for the best output, whether you're working with portraits, landscapes, or illustrations.
This tutorial is perfect for anyone looking to improve their AI-generated art with sharper, larger images. Whether you’re using SDXL, Flux, or any other models, you’ll learn how to upscale efficiently.
Download all the workflows from Discord: look for the pixaroma-workflows channel
Go to Manager, then Model Manager
Sort by type: Upscale
Install 4x_NMKD-Siax_200k
4x-AnimeSharp
Refresh ComfyUI
Install these custom nodes if you don't have them
ControlAltAI Nodes
ComfyUI-PixelResolutionCalculator
ComfyUI Easy Use
rgthree's ComfyUI Nodes
Restart ComfyUI
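A quick way to double-check the setup: the sketch below just looks for the model files on disk. The root path and exact filenames are assumptions based on the default portable install mentioned later in the thread; adjust both to match your own setup.

```python
# Minimal sketch to verify the upscale models landed where ComfyUI looks for them.
# The root path and exact filenames are assumptions (default Windows portable
# layout); adjust both to match your own install.
from pathlib import Path

COMFY_ROOT = Path("ComfyUI_windows_portable/ComfyUI")  # assumed install root
UPSCALE_DIR = COMFY_ROOT / "models" / "upscale_models"

expected = ["4x_NMKD-Siax_200k.pth", "4x-AnimeSharp.pth"]  # filenames may differ

for name in expected:
    present = (UPSCALE_DIR / name).exists()
    print(f"{name}: {'found' if present else 'missing, install it from the Model Manager'}")
```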
Unlock exclusive perks by joining our channel: @pixaroma
A small token of my appreciation. Thank you for taking so much time to thoroughly test, select the best, and so clearly explain comfyUI to us. The workflows on your discord work like a charm 🙏🏽
Thank you so much ☺️ glad it helped
Amazing video. Great job on this and thanks for the workflows. 🙂
thank you for support 🙂
Great work! Appreciate your time and effort.
Thank you so much 😊
Thanks!
Thank you so much for your support😊
@@pixaroma Thank YOU! 💖
Man, I subbed to your channel after giving this a try. This is by far the best upscaling tutorial and workflow I've come across in the past year. I've seen about 15. No joke. A huge thank you!
Thank you so much🙂
this is like the only tutorial without attractive woman clickbait thumbnail
Thank you so much. Your work is amazing and highly appreciated. I usually find tutorials about this topic that don't show the details behind the process or the role of each node.
these upscalers are absolutely amazing, thank you
wow finally a good upscaler, thank you very much.
Amazing! Thank you so much for your special explanation!🤩🤩🤩
Very detailed tutorial. Congrats and thank you for the effort
Bro your tutorials and workflows are super useful, thank you!
Another banger, I love open source AI so much ❤
you really help to understand comfyui. love you
wow I got amazing results with your workflow!
Great to hear :)
Very detailed video & great information!
very good, precise explanation. Thank you.
thank you so much, this is an amazing workflow
Thank you!
Eeeeeeee boy! Really Thx man!
Thanks a lot ! you're the best !
Thank You
Awesome!
Thank you so much !!
Fantastic tutorial, Thank you.
Impressive 👍
Great tutorial, thank you very much
Glad I could help ☺️
Next video please make a tutorial on how to use Flux Controlnet and how to make good images with it. 👍
I will see what I can do ☺️
very good
I am very surprised this works so well. I have done pixel space upscaling using [euler/beta] with horrible results, and even with very low denoise (0.20-0.35) the composition changes too much.
Using dpmpp_2m/karras seems to be the trick.
Thank you.
Beautiful video and clear explanation
tnx alot :D
thank you so much angel. now tell me how you get those performance bars on the right above your settings? thank you
Install the crystools node from manager
Great video! (Question: 1.8) Where is the setting so you can see the CPU, GPU, etc. on the menu GUI?
It's a custom node; install crystools from the custom nodes manager
@@pixaroma thank you! I'll check that out tonight!!
@@pixaroma I would also love to see a PROPER video on text syntax, tips and tricks for "CLIP text encode prompt". Like what is the proper format? When should I use 'underscores' how does *{(option1|option2|option3):1.2} work in an actual flow. I would love to see a video on this! Great work keep it up.
Niice
i think you are awesome
i downloaded and tried out the workflow. You are a saint, an angel from above of workflow heaven. Thank you so much.
also, i modded the workflow a little bit to generate image to image. magnifique.
do you have a workflow on how to change clothing on character models?
I don't have, there is online something with "try on", but it didn't work for me as expected
@@pixaroma Right, many clothing swap videos out there, but they don't work. OK, we will wait.
please make a ComfyUI video on using and installing MimicMotion. I really appreciate your videos; they are very clear compared to other YouTubers. Can MimicMotion be used in ComfyUI or SwarmUI?
I saw there are some nodes for ComfyUI with MimicMotion, so I will check it out, but probably in later episodes; there is still more to cover in static images before I go to motion and video
Even with my 3080 Ti, I was having a lot of issues with freezing on this one. For some reason I haven't quite figured out yet, Comfy isn't clearing VRAM appropriately. My solution was just to put Clean VRAM nodes after most operations. It added a couple of seconds, but prevented freezing.
Not sure; you can try the tiled versions of VAE Encode and VAE Decode, the ones with "tiled" in the name
@@pixaroma I'll give that a try and let you know
@@CrunchyKnuts YOU NEVER GOT BACK TO HIM, SCANDALOUS!
👏
👏🏻💯🙏🏻
Personally I prefer to leave the model-upscaler step for last and have the latent img2img upscale as the second step; that way you make good use of your VRAM, speed up the process, and the result ends up the same. TL;DR: from your workflow, I would swap generations 2 and 3.
I didn't get the right settings with the latent img2img; the image had some artifacts with latent compared to the pixel method. Can you share how you did it on Discord? Thanks
Hi @AlexanderGarzon, so first you generate the image, then you bypass these nodes, then you start the upscaling process? Is that what you meant?
Great tutorial. One question: how could the upscaled results look similar to the first one since they go through a different seed in the second KSampler? Thanks.
You can reduce the denoise strength to make it look more similar
@@pixaroma Thanks for the quick response. Could feeding both KSamplers the same seed also work?
It works, but then it's using the same seed on the same image and gets super sharp or overcooked, like an HDR look; I avoid using the same seed.
Cool video! I'll definitely try out your approach. However, in AI Search's comfyUI tutorial, he says using tile upscaling yields far better results. Have you tried his method to compare?
I tried tiles and Ultimate SD Upscale with ControlNet, but for me it took longer and the results weren't as good; maybe I didn't find the right settings. I played around for a few days and found these settings by accident, and they just worked well enough for me. I wanted something fast. It's not perfect, but it's good enough for what I need. If I find a better way in the future, I will make a new video.
Cool. which ai voice are you using?
VoiceAir, and they have the voices from ElevenLabs. The voice is called Burt (US).
I made a 4608x3072 image with this method. My GPU (RTX 3080) and CPU were at their limit, and they are not happy with me, but I must say the image is really nice. I think it is way too much, but I found the limit of my PC. From now on I'll make them half size and upscale them without the sampler to get 4K 🤣
You can also try VAE Decode (Tiled) instead of VAE Decode; maybe that helps with low VRAM
With tiles I get some lines in the image, so I was looking for a new solution. I will give it a try. Thanks ✌🏻
Thank you so much for this video. I can't access the Discord invitation. Could I learn the reason why?
try different browsers maybe or mobile app, this should work discord.com/invite/gggpkVgBf3
@@pixaroma Thank you, it worked! :)
awesome! is there any way to reduce the grain applied after upscaling?
The sharpness comes from the model; you can maybe try a different upscale model that has different sharpness. I didn't find a solution for that yet. Other upscalers give different results: instead of Siax I tried RealESRGAN x4, which might work for some illustrations but smooths things out too much; 4x_foolhardy_Remacri might work in some cases. I also tried the Image Dragan Photography Filter from the WAS Node Suite custom node, which has a field for sharpness; reducing it to 0.7 or 0.5 softens things slightly and makes the image a bit more blurry, but I haven't found a permanent solution yet.
@@pixaroma got it, thanks!
Using the "upscale image using model" node, then the "upscale image by" node set to 0.5 (2x), results in the same workflow run time as running the "upscale image by" node at 1.0 (4x).
Is there any way to improve efficiency by forcing a 4x upscale model to run at 2x, instead of upscaling, and then downscaling the image. I tried to find a 2x version of NMKD-siax, but had no luck.
You can try other models that are 2x only, but that step doesn't take much power anyway. The most consuming part is the KSampler: the bigger the image, the more time it takes, and it cannot be bigger than 2 MP because it will start getting all kinds of lines.
I have a LoRA that I trained with Flux Dev for my beauty product. Can I incorporate the LoRA node into the t2i upscale workflow and change the diffusion model to Flux Dev?
Yes, you can add it between Load Checkpoint and CLIP Text Encode.
Thank you... When you add the KSampler at 14:33, is the upscaling now using Flux, and not just the Siax?
Yes. I make the image larger and sharper, then it runs through Flux again, just like you do on image to image; instead of uploading a new image, I take the image from the previous generation's VAE Decode, make it bigger, and it enters the KSampler again. So basically it's image to image, but with a bigger image instead of a small one.
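To make that order concrete, here is a rough sketch of the pass described in this reply. All three functions are stand-ins for ComfyUI nodes, not a real API; the real wiring happens in the node graph, and the denoise value is only an illustrative placeholder.

```python
# Rough sketch of the two-pass upscale described above. All three functions are
# stand-ins for ComfyUI nodes, not a real API; the real wiring happens in the
# node graph, and the denoise value here is only an illustrative placeholder.

def upscale_with_model(image):
    """Stand-in for the 'Upscale Image (using Model)' node: 4x in pixel space."""
    return image

def resize_by(image, factor):
    """Stand-in for the resize node that scales the 4x result back down."""
    return image

def ksampler_img2img(image, denoise):
    """Stand-in for VAE Encode -> KSampler -> VAE Decode at partial denoise."""
    return image

base = "image from the first generation's VAE Decode"
sharp = upscale_with_model(base)                # pixel upscale adds size and sharpness
resized = resize_by(sharp, 0.5)                 # bring it back down so the KSampler can cope
final = ksampler_img2img(resized, denoise=0.5)  # Flux re-renders detail, img2img style
```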
How can we share the .gguf file between the UNet nodes and the Serge nodes? They require placing the files in different folders, and I think it's not cool to copy/paste 14 gigabytes of Flux Q8 into both folders.
I only have it in one folder, like I did in episode 10
If I'm using the full large version of Flux, would I still use the Flux workflows from this vid? They say GGUF, so I'm just not sure.
No, I don't really use the full large one, because it's double the size, slower, and the quality is almost the same.
Good old trick with a second sampler works as expected... but how do you deal with those "flux lines" at the final step?
If the image is under 2 megapixels, so it's not too big, and the width and height are divisible by 64, you can usually get an OK result without lines. You could try different upscalers. I can't use huge images in the KSampler, so I need it to do a normal upscale for the last step; you can drag a Save Image before the final upscaler and use a different upscaler there if you want. If you start from 1024px you could get 2048, which is over 2 megapixels, so go smaller: make the initial image maybe 960 so the final image stays under 2 megapixels. Play with the settings.
@@pixaroma Thanks for the reply. I use 0.5 megapixels with that node, as in the video, with 16:9 AR, then the model upscale 4x scaled by a 0.5 downscale... so 0.5 MP becomes 2 MP (as it upscales in both x and y dimensions). All as usual, and still lines; that is why I'm asking 🙂
@@Dunc4n1d4h0 I only get it on some images, not sure what causes it, but most of the time I don't get any lines. Maybe the prompt influences it somehow, or some settings, but I didn't figure it out. I usually just run a few seeds and pick my favorite :)
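The advice above boils down to two rules of thumb: keep the image the KSampler sees under roughly 2 megapixels, and keep both dimensions divisible by 64. A small sketch of that arithmetic (my own helper, not part of the workflow):

```python
# Quick check of the two rules of thumb from the reply above: keep the image
# the KSampler sees under ~2 megapixels, with width and height divisible by 64.
def check_ksampler_size(width: int, height: int) -> None:
    mp = width * height / 1_000_000
    ok_mp = mp <= 2.0
    ok_64 = width % 64 == 0 and height % 64 == 0
    print(f"{width}x{height}: {mp:.2f} MP "
          f"({'ok' if ok_mp else 'over 2 MP'}, "
          f"{'divisible by 64' if ok_64 else 'not divisible by 64'})")

check_ksampler_size(2048, 1152)  # 16:9 after a net 2x upscale: 2.36 MP, over the limit
check_ksampler_size(1920, 1088)  # 2.09 MP, still slightly over
check_ksampler_size(1664, 960)   # 1.60 MP and divisible by 64: safe
```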
Can you please put links to the downloads on some public site, like GitHub?
Only the workflows are on Discord, and that is free; the rest is public, all the links to models and other stuff point to public sites. For me Discord is easier because I have them all in the same channel, and I can link them in the discussion channel when people need help, so they don't have to leave Discord and can find everything they need there.
Thank you. I can't help financially. I hope the likes and comments bring attention to your channel
Thank you, yes, those likes and comments really help 😊
2:40. Hello. Can you explain how to get the result image to be exactly the same as the original image? Whenever I use this workflow, the result is always different from the original.
Are you using the same settings I put in the workflow? Just download the workflow from Discord and test it. You can reduce the denoise on the KSampler, but if it's the same scheduler, sampler, and model, the result should be the same.
@@pixaroma I created a workflow with a different sampler but the same structure as yours. I noticed that it's basically an image-to-image process that adds an upscale after the sampler. I want to know which parameter determines the result image being similar to the original but with more details.
You can't always have both similar and more detail: either it's similar and you don't get more details, or it's less similar, so it's not constrained and can add more creativity and details. You can add a ControlNet like Depth or Canny to keep things more similar; that way the composition and the lines stay the same, so it can change more things between those lines. I used the settings in the video and needed high denoise; with other schedulers it needed less denoise.
The resulting image you created includes additional details but still retains the entire face of the character and the composition without using ControlNet. However, when I run my workflow, the result is a completely different image from the original.
@@AnNguyen-pd2xi Are you using the same workflow? I'm not sure what workflow you have there, but the workflow I use works like in the video; if you changed something, it can work differently. So get the same workflow and see if it works, then see what you did differently. Download the workflow from Discord and try it.
Why use gguf model instead of fp8 model? I'm curious
The quality of Q8 is similar to fp16, so fp8 is lower quality compared with GGUF Q8. Ranked:
1. fp16, the original Flux Dev
2. Q8 (GGUF)
3. fp8
I'm using an RTX 3090, but it runs out of VRAM, so the KSampler can't work
I have included some low VRAM workflows on Discord for this episode; try those if you don't have enough VRAM
why do you scale down before scaling up? that loses resolution before upscaling.
Because it's too big for the KSampler, and the pixels are replaced anyway when the new image is generated. You can increase it to see, if you have a good video card, but Flux has about a 2 megapixel limit; beyond that it doesn't get such good results.
Hi (:
Can you please tell me what other uses upscaling has besides Photoshop? I'm making art at 1280 by 720 resolution for visual novels. Even if the game runs at Full HD instead of this resolution, the difference is still almost zero. Thanks 🙂
I use Topaz Gigapixel AI; it's not free, but it does a good job for me when I need something fast
@pixaroma I meant that I'm a rookie. I've read that upscaling is used mostly by Photoshop users. I make art for VN games, where the resolution is 1280 by 720. So after upscaling, even 4 times, there's still no effect for visual novels. Or is it useless for my work? 🙂
Hi @КристинаБуняева-о3в, I'm also learning ComfyUI for my visual novels.
What genre do you write?
@@pixaroma Why do you need Topaz Gigapixel when the upscaler can create very good upscaling? What does it lack compared to Topaz Gigapixel?
I'm getting Bad Request errors when trying to install upscalers. What might I be doing wrong?
Can you post some screenshots on Discord with the workflow, the error you get, and the command window error? Mention pixaroma there.
hello i hope you can help me i keep getting this error: mat1 and mat2 shapes cannot be multiplied (5280x16 and 64x3072)
That error usually happens when you use models that don't share the same base. Make sure everything you use (models, ControlNet, VAE, LoRA) has the same base: either all SD, all SDXL, or all Flux; you cannot combine them.
@@pixaroma I'm using the same as you; it's just that in the VAE loader it forces me to choose a safetensors file instead of keeping the default
Can you post screenshots of the workflow and the error on Discord, so I can see what models you have there?
@@pixaroma I posted in the comfyui channel: the problem with the image
Replied on discord
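For anyone wondering what the mat1/mat2 error above actually means: matrix multiplication needs the inner dimensions to agree, and mixing model families feeds a layer tensors of the wrong width. A tiny illustration, using the shapes taken from the error message above:

```python
# Tiny illustration of the "mat1 and mat2 shapes cannot be multiplied" error
# above: matmul needs the inner dimensions to agree (here 16 vs 64), which is
# what happens when tensors from mismatched model families meet in one layer.
import torch

mat1 = torch.zeros(5280, 16)  # shapes taken from the error message above
mat2 = torch.zeros(64, 3072)

try:
    torch.matmul(mat1, mat2)
except RuntimeError as err:
    print(err)  # mat1 and mat2 shapes cannot be multiplied (5280x16 and 64x3072)
```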
Cannot execute because node UpscaleModelLoader does not exist.: Node ID '#136:6' - hmmm, could you please tell me, do you have any ideas?
Did you download and load the model? Can you post a screenshot with the workflow and the error on Discord, in the comfyui channel? discord.com/invite/gggpkVgBf3
Do you have any video that helps me install and set up Flux.1 and Comfy, like for noobs? I have a 4090 with 24 GB VRAM
Episodes 1, 8, and 10: ua-cam.com/play/PL-pohOSaL8P9kLZP8tQ1K1QWdZEgwiBM0.html
I have an NVIDIA 2060 Super graphics card; can I try Flux?
I have that one too, on an older PC, also with 64 GB of RAM. I use Flux Schnell; Flux Dev takes too much time for me. The Flux GGUF Q4 version.
I get an error: "Install failed: 4x-AnimeSharp Bad Request"
Try to download it manually and put it in the ComfyUI_windows_portable\ComfyUI\models\upscale_models folder. In the Model Manager, if you click on the model name, a page will open from where you can download it.
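If you want to script that manual fallback, something like the sketch below works. The URL is a placeholder, not the real link; use the download page that opens when you click the model name in the Model Manager.

```python
# Sketch of the manual fallback: download the upscaler yourself and drop it in
# the upscale_models folder. MODEL_URL is a placeholder, not the real link -
# use the page that opens from the model name in the Model Manager.
import urllib.request
from pathlib import Path

MODEL_URL = "https://example.com/4x-AnimeSharp.pth"  # placeholder URL
DEST = Path("ComfyUI_windows_portable/ComfyUI/models/upscale_models/4x-AnimeSharp.pth")

DEST.parent.mkdir(parents=True, exist_ok=True)
urllib.request.urlretrieve(MODEL_URL, DEST)
print(f"saved to {DEST}")
```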
I can't join Discord; it says invalid invitation or expired link.
thanks for letting me know, not sure what happened; here is the new link: discord.gg/gggpkVgBf3
@@pixaroma you are very welcome
Your discord link is unfortunately invalid
I changed it in the channel description yesterday, but in comments and descriptions some links remained unchanged; try discord.com/invite/gggpkVgBf3
@@pixaroma hiii, still invalid link :((
@@rezvansheho6430 I just tested it, and it works for me; click on it and then click on "go to site": discord.com/invite/gggpkVgBf3
@@pixaroma I used a VPN, and now it worked ♥️
Your videos are great, but it would help if you slowed down your voice, er... you talk very fast.
Sorry, but the AI voice I use doesn't have a speed option yet; it generates the voice from the text I give it, so I don't have a way to make it talk slower :(
I feel like I followed the steps closely and installed everything correctly, but when I try to queue the image I get the following error message. Can you help me figure out what I am missing? I downloaded your Flux Dev Q8 GGUF IMG2IMG with Upscaler workflow, and my screen looks exactly like yours in the YouTube video. Many thanks!
Prompt outputs failed validation
UnetLoaderGGUF:
- Value not in list: unet_name: 'flux1-dev-Q8_0.gguf' not in []
DualCLIPLoaderGGUF:
- Value not in list: clip_name1: 't5-v1_1-xxl-encoder-Q8_0.gguf' not in ['clip_l.safetensors', 't5xxl_fp16.safetensors']
Never mind. I started with this Ep12. I needed to go back to Ep 10 for the proper GGUF installation.
Glad you figured it out, just woke up. You can always mention me on Discord and send a screenshot ☺️
I'm so impressed with your work and all the effort you have put in here (and in your Discord); it really helps beginners like me a lot. I appreciate it. And your like button should be OVER 30K. For people who read this message, please give a LIKE!!!!! It doesn't cost you anything! Thank you, love and respect ❤❤❤
thank you 🙂
Thanks!
Thank you so much ☺️