🔴Follow me on twitter to stay updated on the latest trending AI Tech 👉 x.com/TheLocalLab_
👉 Want to reach out? Join my Discord by clicking here - discord.gg/5hmB4N4JFc
You're spoiling us. Haha ♥
@@dhrubajyotipaul8204 Plenty more to come, stay tuned.
I love your videos. Your guides are so detailed and thorough. Thank you for sharing!
WOW!!! i did it and the results are WOOOOOOOOW
thank youuuuuu
Thank you for sharing this with us! Will try it out with some old videogame screenshots from the 2000s :)
Enjoy brother.
I tried a portrait painting and it still came out as a painting. How do we make it realistic? How do we add a negative prompt?
You would have to make some adjustments to the workflow to add negatives. Unlike SD models, negatives aren't used very often with the flux models. Before doing anything else, try adding more noise to the image (0.75 and up) and providing a more accurate description of what you want the output image to look like rather than describing the base image.
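As a rough intuition for why "more noise" changes the output more: in img2img, the denoise value roughly controls what fraction of the sampler's steps actually run on your input image. The sketch below is a simplification I'm assuming for illustration, not ComfyUI's exact scheduler math:

```python
# Simplified sketch (an approximation, not ComfyUI's exact scheduler
# math): denoise scales how many of the sampler's steps are actually
# applied to the input image.
def effective_steps(total_steps: int, denoise: float) -> int:
    """Approximate number of diffusion steps run for a given denoise."""
    return max(1, round(total_steps * denoise))

# Denoise 0.75 on a 20-step run keeps most of the schedule, so the
# output can depart strongly from the source image:
print(effective_steps(20, 0.75))  # → 15

# Denoise 0.3 runs far fewer steps, staying close to the input:
print(effective_steps(20, 0.3))   # → 6
```

This is why 0.75+ is suggested when you want a painting to become a photo: at low denoise most of the original image survives the run.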
@@TheLocalLab Do you have a workflow which literally takes a portrait painting and makes it totally realistic... I've seen some impressive videos of historical paintings turned into photos with accurate resemblance, but no idea how it's done. I can pay someone to make a workflow like that...
@@TopFactology Unfortunately, I don't have such a workflow immediately available. If you can provide me with more details on what you're looking for and maybe some examples, I can look into finding such a workflow or creating a custom one that fits the bill. Join my Discord and DM me directly with the specifics. Link - discord.gg/5hmB4N4JFc.
Thanks for this tutorial!! 🙏🏻 I have a question: the dimension values "width" and "height" in the "Upscale Image" node have to be the same as the original image, right?
No, that node will automatically upscale the image you input to the dimensions set in that node. It's mainly for smaller images you would like upscaled to a specific size, or an image (like a banner) with custom dimensions you want to keep before it runs through the main img2img conversion process.
@@TheLocalLab Thanks!
What is this Google CLIP safetensor you use? Why not the flux one? Can you explain?
You can use any clip model compatible with the flux models. I just use the one you see in the video for ease of use. You can download it more easily through the ComfyUI manager.
Great video!!
I'm having an error on SamplerCustomAdvanced. I did follow the workflow but could not proceed because of the error. Can you tell me more about the SamplerCustomAdvanced node?
Make sure your ComfyUI is up to date and install any missing nodes through the ComfyUI manager if needed.
Can you add Florence or Ollama to generate text?
That's actually a great idea that I was already thinking about. I will look into it and see if I can provide an update.
SUPER! You can render images in UE5, at a minimum resolution for greater speed and pass them through this configuration with a denoise of about 0.3, and get the necessary renders, I have been looking for this for so long, *THANKS!*
Yup no problem. Enjoy!
What if I can't download any models from the model manager?
You can instead download each model manually from Hugging Face and drag each one into its respective models folder.
How do I upscale without using a LoRA? What if I didn't use a LoRA on my uploaded image?
It's OK if you didn't use a LoRA on the original image but would like to on the new one; it should work fine. But if you don't want to use the LoRA node at all, you will have to disconnect the purple line between the Unet model loader and the LoRA Loader node and connect the purple Unet model loader line directly to the purple slot on the KSampler node. You can look over the two GGUF workflows as a reference to see the difference in where the nodes are connected.
My reference image isn't square, it's 1080 x 1920. What size should I set in the upscale node?
I would advise you to set the inner upscale node to the original size of the image if it's larger than the initial upscale size of 1024 x 1024.
@@TheLocalLab this helped a lot!
Great video!! Could you also explain how to improve the image quality without upscaling?
Possibly in a future video, but as a quick tip, you can cheat a little bit: visit civit.ai and copy some of the prompts and settings people are using with their current workflows. LoRAs are also a major boost to quality if you can find the right ones to apply.
Also, the resulting image gets stretched if we don't set the ratio manually. It would be better to use a 'Get Image Size' node.
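The stretching happens when fixed width/height values ignore the source aspect ratio. A hedged sketch of the arithmetic a ratio-preserving resize needs; the function name and the multiple-of-8 rounding are my assumptions for illustration, not a node from the video:

```python
import math

# Hypothetical helper (not a ComfyUI node): scale an image to roughly
# target_mp megapixels while preserving aspect ratio, rounding each
# side to a multiple of 8 as latent diffusion models typically expect.
def fit_to_megapixels(width: int, height: int,
                      target_mp: float = 1.0, multiple: int = 8):
    scale = math.sqrt(target_mp * 1_000_000 / (width * height))
    new_w = max(multiple, round(width * scale / multiple) * multiple)
    new_h = max(multiple, round(height * scale / multiple) * multiple)
    return new_w, new_h

# A 1080 x 1920 portrait keeps its 9:16 shape instead of being
# squeezed into a 1024 x 1024 square:
print(fit_to_megapixels(1080, 1920))  # → (752, 1336)
```

A 'Get Image Size' node feeding the upscale node's width/height inputs achieves the same thing inside the graph without hand-entering numbers.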
😲
This application seems to only work on Windows.
Am I wrong?
Yes, the portable version of ComfyUI that I used in this video is for Windows. To use ComfyUI on other systems like Linux and Mac, you would have to install it manually.
I keep getting this error no matter what I do:
Prompt outputs failed validation: Exception when validating node: name 'full_type_name' is not defined
PreviewImage:
- Exception when validating node: name 'full_type_name' is not defined
After doing some research, it seems that you might have another custom node installed with the ComfyUI you're running that could be causing this problem. Try running the workflow with a fresh install of Comfy and see if you still get the same error.
@@TheLocalLab thank you, the real problem was a custom node called xyz. I deleted it and everything is fine now.
0:20 Nice image
Not much upscaling discussed here unfortunately 😢
Nowt wrong with the rest of the video. Title's a bit misleading.
Upscaling is relatively simple with this. Just keep the denoise value down and provide an accurate description of the image and what you're looking for in the output. You should be able to produce a variety of outputs at a higher quality relative to your initial image. The upscale nodes are built into the workflow; you just have to control the amount of change (noise) you want added to the image.
Oh no, another Comfy spaghetti tutorial... 😑
My bad, I forgot to change the links to straight for better visual appeal. I usually use that on my other setup anyway, but you can change the link render mode from spline to straight in the settings.
You don't have a problem with Numpy? I get this error using your workflow: "newbyteorder was removed from the ndarray class in NumPy 2.0. Use arr.view(arr.dtype.newbyteorder(order)) instead."
Have you tried updating ComfyUI along with the custom nodes? I don't remember having an issue with NumPy.
@@TheLocalLab Thx for the reply. Yes, I'm fully updated. Strange if you also had the latest NumPy; when googling for a solution I saw people recommend downgrading NumPy, and that solved it for me.
Well, I haven't updated NumPy in months and don't remember having to. I do, however, update ComfyUI constantly, which could still be using an older version of it. I'm glad you were able to find the solution though. Let me know how you like the workflow.
@TheLocalLab Numpy caught version 2.0 for me again when I updated today 🙄.. By the way the error only shows in the DOS (command prompt) screen and not on the webui.
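For anyone hitting the `newbyteorder` error quoted above: the replacement the error message suggests is a one-line change in whichever custom node still uses the removed ndarray method. A minimal sketch, assuming NumPy 2.x is installed:

```python
import numpy as np

arr = np.array([1, 2, 3], dtype="<i4")  # little-endian int32

# Pre-2.0 code could call arr.newbyteorder("big") directly on the
# array; NumPy 2.0 removed that ndarray method. The replacement the
# error message points to goes through the dtype instead:
swapped = arr.view(arr.dtype.newbyteorder(">"))

print(swapped.dtype.byteorder)  # → ">" (big-endian view, same buffer)
```

Downgrading NumPy below 2.0 (as mentioned in this thread) also silences the error, but patching the offending custom node this way lets you stay on NumPy 2.x.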
@TheLocalLab but yeah man, really good workflow.. first img2img I found for flux gguf! And better results than the 2nd one I tried. I followed you on Twitter actually 👍 and subscribed here 🙌 great content bro, wish you success into the future!