I got this error when it tried to compile the astronaut sample:
Requested to load Flux
Loading 1 new model
loaded partially 3889.2 5854.812986373901 0
0%|
\Users\Graphic\Documents\ComfyUI_windows_portable\ComfyUI\comfy\ldm\modules\attention.py:407: UserWarning: 1Torch was not compiled with flash attention. (Triggered internally at ..\aten\src\ATen\native\transformers\cuda\sdp_utils.cpp:455.)
out = torch.nn.functional.scaled_dot_product_attention(q, k, v, attn_mask=mask, dropout_p=0.0, is_causal=False)
got prompt
got prompt
That's not really an error. It's just a warning that your installed torch package wasn't compiled with flash attention. You don't need to worry about it; you should still be able to generate.
@@TheLocalLab thank you for your attention. If I update to PyTorch 1.12 or above, might that solve the problem? I realized that flash attention is a technique used to speed up certain types of neural network computations, particularly on GPUs. Would these installations damage the portable package settings?
@@martinmiciciday5235 Well, you would have to uninstall your current torch packages and install the compatible torch CUDA wheel package alongside flash attention, and hope you don't run into compilation issues. I'm assuming you have an Nvidia GPU as well?
Check your clip folder inside the models directory and make sure your clip models are inside. Also make sure you select the correct clip models in the DualCLIP node.
I get these errors shown in cmd:
ComfyUI-Manager: installing dependencies. (GitPython)
WARNING: The script pygmentize.exe is installed in 'C:\Users\Max\Desktop\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\python_embeded\Scripts' which is not on PATH. Consider adding this directory to PATH or, if you prefer to suppress this warning, use --no-warn-script-location
and this error shown in the ComfyUI page:
Prompt outputs failed validation
CheckpointLoaderSimple:
- Value not in list: ckpt_name: 'v1-5-pruned-emaonly.ckpt' not in ['flux1-dev-bnb-nf4.safetensors']
I downloaded the file and placed it in the checkpoint folder. Not sure why it isn't working, except for having to move it to the correct path, but it also sounds like an issue with the file not being in the correct folder (but it is). Do you know why?
The first message is just a warning message and for the second one related to the checkpoint, make sure you select the flux vae file you downloaded in the "Load Vae" node in the ComfyUI webpage. The vae you downloaded from huggingface, should look something like this "diffusion_pytorch_model.safetensors". I think you have "v1-5-pruned-emaonly.ckpt" selected instead.
@@TheLocalLab Hey, at which part in your video do you mention downloading the VAE? (Did I have to put that file in the VAE folder?) All I remember seeing was: download comfy -> manager files -> lllyasviel/flux1-dev-bnb-nf4 (checkpoint) -> workflow -> then run comfy
I actually didn't show it in that video but did in my GGUF tutorial video - ua-cam.com/video/nncY3dJLV78/v-deo.html. I also have the link to the vae models in my description of that video.
🔴 Created a New Fluxgym Runpod Template for Training Flux Loras Faster at Low Cost 👉 ua-cam.com/video/d9ZyvxZEkHY/v-deo.html
👉 Want to reach out? Join my Discord by clicking here - discord.gg/5hmB4N4JFc
What TTS Model/Service are you using for the voiceovers? I really like it.
@@MS-gn4gl I'm glad you do. It's a mixture of RVC and XTTS.
Thank you for this video! The barrier to entry with these free UIs and models is the complexity to get them working. This video helped me with each and every step!
I had an existing installation on my PC and I had to install clean to get things to run. Mentioning that in case anyone else is in the same boat!
You are my hero.
As a beginner I'd also say your videos are really friendly, thank you very much. Because of my professional needs and Flux's steep learning curve, I had been using mimicpc to run Flux; it can load the workflow directly, I just download the Flux model, and it handles the details wonderfully. But after watching your video, running Flux on mimicpc finally feels different. I feel like I'm starting to get the hang of it.
quite helpful video. Thanks for making the video from end to end
There was a joke in the 80's "this thing reads like stereo instructions." They said this because the manuals for stereos were so verbose that the avg person couldn't understand it and was mostly confused.
Gorgeous tutorial! Thank you! :D
I have held off trying to install ComfyUI specifically because I messed up my Python environment variables, which caused issues with a whole lot of other things. THIS is just the video I needed, THANKS!
Just installed and ran, but got this error when loading the dev workflow and hitting Queue Prompt:
Error occurred when executing CheckpointLoaderNF4:
load_checkpoint_guess_config() got an unexpected keyword argument 'model_options'
EDIT: Resolved by running the ComfyUI updater .bat file
where can i find it? dont see it inside
@@Markgen2024 D:\Firefox\ComfyUI_windows_portable>bitsandbytes command prompt - python_embeded\python.exe -m pip install -r ComfyUI\custom_nodes\ComfyUI_bitsandbytes_NF4\requirements.txt
'bitsandbytes' is not recognized as an internal or external command,
operable program or batch file.
It gave me this error.
@@Markgen2024 Right in the main folder you should see 3 folders: ComfyUI, python_embeded, and update. In the 'update' folder, just click "update_comfyui.bat". Then go back to the main folder and run "run_nvidia_gpu.bat" or "run_cpu.bat".
Thanks for the awesome tutorial 😀
Great job man ! Thanks to you i set it all up in 10 minutes and it works just fine (I output a 1024x1400 image in 90secs with a RTX3060Ti/8GB) ! 1 more subscriber ;)
how much ram needed? my 16 gb ram is getting full and taking about 4 mins on my rtx 4060
@@manoharry7988 its taking 21gb ram for me, 18sec for 512x512 image. specs: 40gb ram, 8gb rtx4060
Sure it doesn’t crash
I have been searching for the last few days, how to run NF4 on comfyui, you helped a lot, thanks
will this run on a Surface pro 8 intel gpu i7 16gigs
Any Flux img2vid that could work on 6gigs of VRAM?
I'm not sure about Flux img2vid but Cogvideo is the best open source img2vid we have available currently that can run on some low vram devices. It took a good while per generation but I was able to generate videos on my 6GB laptop. I have a video here to install via pinokio - ua-cam.com/video/wf-BiUN8fSY/v-deo.html.
Thanks for your videos. Very helpful for someone just starting on their locally hosted llm journey. I'm currently using Stability Matrix for managing Comfy UI and Stable Diffusion. What do you think of it?
Thanks for watching. Honestly I haven't used Stability Matrix yet since I only really use Comfy after switching from SD, but it seems useful if you're using multiple UIs to keep packages and models together.
I'm going to have to clone the repo, go through the steps, and see if it works for me. I've had the portable version and can't get it to run, and have seen at least one comment about that being an issue. The same checkpoint works fine in Forge but runs out of RAM in ComfyUI.
Thank you for your work, it works great with your instructions
awesome content man... why not use civitai to share workflows?
I currently do share workflows and content on civitai. I guess not this one, since it's easy to find everywhere; I like to share my more unique workflows there. Profile - civitai.com/user/TheLocalLab.
Please help, I have a problem at the bitsandbytes command step at 6:51. I got this error:
""bitsandbytes" is not recognized as an internal or external command, program, or executable batch file."
Yes, you need to make sure you're inside the ComfyUI portable Windows directory (the one with the python_embeded folder inside) in your terminal before running the command. The command should look like this: "python_embeded\python.exe -m pip install -r ComfyUI\custom_nodes\ComfyUI_bitsandbytes_NF4\requirements.txt"
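For anyone hitting the "not recognized" error, the check in the reply above can be scripted. This is a hypothetical helper, not something from the video; the folder names are taken from the portable ComfyUI layout described in this thread:

```python
from pathlib import Path

def in_portable_root(directory: str) -> bool:
    """Return True if `directory` looks like the ComfyUI portable root,
    i.e. it contains the embedded Python plus the NF4 node's requirements file."""
    root = Path(directory)
    embedded_python = root / "python_embeded" / "python.exe"
    requirements = (root / "ComfyUI" / "custom_nodes"
                    / "ComfyUI_bitsandbytes_NF4" / "requirements.txt")
    return embedded_python.is_file() and requirements.is_file()

if __name__ == "__main__":
    if in_portable_root("."):
        print("OK - run: python_embeded\\python.exe -m pip install -r "
              "ComfyUI\\custom_nodes\\ComfyUI_bitsandbytes_NF4\\requirements.txt")
    else:
        print("Wrong folder - cd into ComfyUI_windows_portable first.")
```

Running it from the wrong folder prints the second message, which is exactly the situation that produces the "'bitsandbytes' is not recognized" and "No such file or directory" errors quoted in this thread.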
@@TheLocalLab Thank you, my bad, I was copying the full text (I'm not good at coding), including the "bitsandbytes command prompt - " part
Error pop-up display "Unable to start the application correctly (0xc000012d). Click OK to close the application"
Can anyone help? I had most of comfyui installed but the 'run files' are missing from the folder...could this be a python issue?
Everything should be included after extracting the files. Try deleting the comfyUI portable folder and extract the files again with 7-zip from the zip file you downloaded from the repo.
@@TheLocalLab Thank you for the fast reply. I have it working now...seems to be an issue with my stupid Alienware pc and Windows. Great video and I look forward to your next tutorial.
@@TheLocalLab Sorry to bother, but I don't understand the workflow script part, where and how? Can you explain in more detail? Thx.
Thanks for the tutorial. I'm running NVIDIA GTX1650 with 4GB VRAM and my run on Schnell with 4 steps takes 12 minutes. There's actually no difference between dev and schnell for me. Both run at 3 minutes per step. Any advice on how to speed things up?
You can try running the quantized gguf versions instead of the nf4's. There's really small quants like the Q4_0 that still produce pretty decent quality images. I have a video tutorial here - ua-cam.com/video/nncY3dJLV78/v-deo.html.
For the ComfyUI workflow file you have here: do we extract it first (where the code is shown), or do we download it, then extract it and move it to our created workflow folder?
Once downloaded, load the json workflow into ComfyUI. It's in the right-side menu once you load Comfy in your browser.
Easy and simple tutorial, thank you.
Yes, I think what you said is great. I am using mimicpc, which can also achieve this effect; you can try it for free. In comparison, I think the mimicpc workflow is more streamlined and friendly.
Hello, thank you for this enlightening video.
As the owner of an AMD graphics card, do you know how to configure it with Zluda under ComfyUI? I've seen a few tutorials but they're hardly explicit.
Greetings from France.
Unfortunately I do not as I'm a Nvidia card holder myself.
I ran your workflow file through a virus scanner and noticed ArcSight Threat Intelligence flagged it as suspicious; all the others checked out. It's something for users to watch out for, or you could check with the vendor to fix that issue.
I saw a video that said that for greater performance it's good to remove extensions that aren't used, which gives more speed and fewer conflicts. I don't know how good that is, I'm just trying it. 🤔 To close, I also hear the correct way is to press Ctrl and the letter C.
I have a 3080 and a 2060; is it possible to use the VRAM of these 2 cards?
Last I read, Comfy doesn't support multiple GPUs unfortunately, though it possibly could in the future. You might be able to find a workaround on reddit.
Is it possible somehow to install from GitHub on an offline computer? My graphics computer isn't connected to the internet to keep it clean. Is there any offline GitHub package? How can I install offline everything you did with CMD?
If you use the portable version of ComfyUI, then yes. Use a computer with internet to extract the portable ComfyUI files. Follow the steps in the video to install the custom nodes and the bitsandbytes dependency. Once you have that complete and you know it works, transfer the entire ComfyUI folder using a USB drive or whatever storage device, and run Comfy how you normally would on the offline PC. All the dependencies should remain in the ComfyUI directory even after the transfer.
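The transfer step above boils down to copying the whole portable folder onto the storage device. A minimal sketch in Python, assuming the source and destination paths are placeholders you would replace with your own:

```python
import shutil
from pathlib import Path

def copy_portable_comfyui(source: str, destination: str) -> Path:
    """Copy the entire ComfyUI portable folder to `destination`.

    Because the portable build keeps its embedded Python and all installed
    dependencies inside one folder, copying that folder carries everything
    the offline machine needs."""
    src = Path(source)
    dst = Path(destination) / src.name
    # dirs_exist_ok=True lets you resume over a partial earlier copy.
    shutil.copytree(src, dst, dirs_exist_ok=True)
    return dst
```

A plain drag-and-drop in File Explorer does the same thing; the point is simply that the whole folder, not just the models, has to travel.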
I have a 3050 Ti 4GB, and yes, even the dev model can run on it thanks to NF4, but it ain't really worth it when each iteration takes about a minute and a half. You can download and run it for some tests and then free the disk space.
How are you doing this, bro? I'm literally using the same card with the same VRAM and it crashes after 10 steps when generating. Would appreciate some advice!
@@erickbarsa5433 latest version? Latest drivers? Is the video memory being shared with other apps? Btw I've moved on to GGUF models and the Q4 works way better, it went down to 9s/it which makes the model usable at least.
@@DarioToledo Can you share the video to do that, please.
@@Jadepulse-fx9jj it's nothing new, I've just followed other videos on this topic around here.
Your video is awesome! Could you please let me know if the Intel Arc 770 GPU can run FLUX?
It should be possible as long as you have enough RAM. I would think you would need more than 16GB to have it running smoothly. Someone posted a guide on reddit - www.reddit.com/r/comfyui/comments/1ev7ym8/howto_running_flux1_dev_on_a770_forge_comfyui/?rdt=35993
i got this error :
All input tensors need to be on the same GPU, but found some tensors to not be on a GPU:
[(torch.Size([4718592, 1]), device(type='cpu')), (torch.Size([1, 3072]), device(type='cuda', index=0)), (torch.Size([1, 3072]), device(type='cuda', index=0)), (torch.Size([147456]), device(type='cpu')), (torch.Size([16]), device(type='cpu'))]
same here
Error message: "All input tensors need to be on the same GPU, but found some tensors to not be on a GPU". Impossible to get images here.
I've been getting this error in other workflows as well and believe this is a comfyui bug that needs to be fixed.
What's the name of the music at the beginning of the video?
I followed step by step but got this error "ValueError(f"Expected a cuda device, but got: {device}")
ValueError: Expected a cuda device, but got: cpu" I use a 1070 ti. GPU. How can i fix this?
Yeah I've been getting this error with a variety of other workflows as well. I believe this might be a ComfyUI bug.
@@TheLocalLab Thanks for the reply. So this is a work flow error? With a different workflow it should work?
@@64z I believe it's a bug in a recent ComfyUI update that affects all workflows that utilize both the CPU and GPU during inference. If you're on Windows, you can try downloading an older release version of ComfyUI and see if the issue persists. ComfyUI release page - github.com/comfyanonymous/ComfyUI/releases.
@@TheLocalLab Thanks for the suggestions. Will see if i can do that.
Can you please help? I get an error "ComfyUI_windows_portable\python_embeded\VCRUNTIME140.dll cannot be executed or has an error" (something like that - my PC is not in English) when trying to execute your bitsandbytes command.
Gemini and ChatGPT-4o both say I need to install Visual C++ and Python, which I have done, but I'm still getting the same error.
Very much appreciate your help..
In what directory did you execute the command? You have to execute it in the ComfyUI_windows_portable directory. You will know you are in the right directory when you see the "python_embeded" folder. It's the same directory that holds the run_cpu.bat file.
Yeah I ran it in comfyui_windows_portable
What's the last couple of lines of the error code?
@@TheLocalLab Translation of the whole error message "C:\ComfyUI_windows_portable\python_embeded\VCRUNTIME140.dll is not executable on Windows or contains errors. Reinstall using the original installation media or contact your system administrator or the software manufacturer. Error status 0xc000012f."
Which version of Windows do you have, 32-bit or 64-bit?
@TheLocalLab Awesome tutorial! Finally Flux works on my local machine. Can you show us how to use a custom LoRA with this workflow?
Definitely, probably in a future video.
Thank you for this. I kept getting Python Has Stopped Working errors. This fixed it.
Thanks, very easy to follow. It worked for me, but yeah, in the end speed is everything, and it takes around 190 seconds with an Nvidia GeForce RTX 2060 Super for a 1080x1080.
I get the following error: [WinError 126] The specified module could not be found. Error loading "C:\Flux3\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\lib\fbgemm.dll" or one of its dependencies
At what point during the process did you get this message?
@@TheLocalLab, issue solved, I was missing the Microsoft Visual C++ install.
@@mandyregenboog Glad to hear, hope you're enjoying the models.
I got an error when I ran git clone in CMD. CMD cannot recognize it as a command, so I can't install from the link. What am I doing wrong? My computer is Windows.
You have to install "Git" in order to git clone github repos. Do a search for git downloads, click on the git-scm result and install git for windows.
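To check whether Git actually made it onto your PATH before retrying the clone, here is a small sketch; the helper names are mine, not from the video:

```python
import shutil
import subprocess

def git_available() -> bool:
    """True if a `git` executable can be found on the PATH."""
    return shutil.which("git") is not None

def clone_repo(repo_url: str, target_dir: str) -> None:
    """Clone `repo_url`, failing early with a readable message if Git is missing."""
    if not git_available():
        raise RuntimeError("Git is not on PATH - install Git for Windows "
                           "from git-scm.com, then reopen CMD.")
    subprocess.run(["git", "clone", repo_url, target_dir], check=True)
```

Note that after installing Git you have to open a new CMD window, since an already-open terminal keeps the old PATH and will still report "'git' is not recognized".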
@@TheLocalLab 🙏🙏🙏🙏🙏🙏
I'm using a 3050 Ti with 4GB VRAM and it's always getting an OOM error, any advice on it?
Yeah try using the Flux GGUF models instead. The nf4 models can take a lot longer to generate images anyways. The GGUF models come in a variety of smaller sizes that can still generate a decent quality image. I got a setup tutorial here - ua-cam.com/video/nncY3dJLV78/v-deo.html.
How to fix generation error "UserWarning: 1Torch was not compiled with flash attention. (Triggered internally at ..\aten\src\ATen\native\transformers\cuda\sdp_utils.cpp:455.)" ?
That's just a warning from Comfy, stating PyTorch wasn't compiled with flash attention (a separate package that can improve the efficiency of transformer models); you most likely do not have flash attention installed. It's no problem, you should still be able to use Comfy just fine.
@@TheLocalLab thank you so much
I don't have Nvidia GPU and this method doesn't work for me when running with CPU. I'm getting errors saying no Nvidia driver found on my system. Is there a way to fix that?
The NF4 models are a bit more computationally intensive. You will mostly need a GPU of some form to run those models. GGUFs, on the other hand, can run smoothly on CPU if you have enough RAM.
I have uninstalled the NF4 and added the GGUF clone in cmd commend. Do I need a new workflow to run the GGUF model?
@@weilinliang It's probably best to use a different workflow instead of configuring the current one. The GGUF model requires different custom nodes, you can follow the guide in my Flux GGUF video here - ua-cam.com/video/nncY3dJLV78/v-deo.html.
Got to this point but said no such file or directory???
C:\ComfyUI\ComfyUI_windows_portable>python_embeded\python.exe -m pip install -r ComfyUI\custom_nodes\ComfyUI_bitsandbytes_NF4\requirements.txt
ERROR: Could not open requirements file: [Errno 2] No such file or directory: 'ComfyUI\\custom_nodes\\ComfyUI_bitsandbytes_NF4\\requirements.txt'
Go into that folder using your file explorer. Check to make sure the python_embeded folder is in the directory you're running the command from. It should be in the same directory that has the run .bat files.
@@TheLocalLab
D:\ComfyUI\ComfyUI\custom_nodes\ComfyUI\custom_nodes>D:\AI\ComfyUI_windows_portable_nvidia (1)\ComfyUI_windows_portable\run_nvidia_gpu.bat
'D:\AI\ComfyUI_windows_portable_nvidia' is not recognized as an internal or external command, operable program or batch file.
D:\ComfyUI\ComfyUI\custom_nodes\ComfyUI\custom_nodes>
It wooorks like a charm on my 2070 Super Max-Q (8GB VRAM). Sure, it takes a while to render. But man, no errors.
thank you ❤
You're welcome, enjoy.😁
my advice, use Forge
Tampons toxic metals reference 4:36 lolz
I don't know how you managed to run that NF model on your 6GB VRAM device in 6 minutes. I have a 6GB VRAM GTX: I can run the FP8 version in 30 minutes on ComfyUI, and I have the NF Dev version, but it only runs on Forge webui (10 minutes). For some strange reason it doesn't run on ComfyUI; I get a "not enough allocated storage" error. I tried everything and couldn't make it work, despite the fact that the same model runs on Forge webui, and I can even run the heavier FP8 version on Comfy with no issue, even if it takes longer. Do you have any idea why I'm getting that error?
It could be because you have a GTX card instead of an RTX. I have an RTX 4050. RTX cards are known to be better equipped for running AI applications due to their more modern architecture. Check your GPU performance in Task Manager and make sure Comfy is using as much of your VRAM capacity as possible.
@@TheLocalLab I gave up running it on Comfy; I'm running the NF4 on Forge. I can hardly believe it's as fast as normal models, and I can run multiple images without killing my machine. I have an RTX 3070 with 8GB.
@@TheLocalLab I don't think it's my GPU, otherwise how come the same exact NF model runs smoothly on Forge? Plus I can run the FP8 dev version on the same GPU; it's slow but never overflows the VRAM. I believe it's memory mismanagement somewhere in the ComfyUI version.
@@Avalon1951 how much system RAM do you have? My 16GB gets full and the system hangs.
@@manoharry7988 Same, 16. Are you using the NF4? That's the one I'm using on Forge, not Comfy.
Once NF4 can take on LoRAs, it will be amazing!
thank you so much
Great until the "FLUX" version for ComfyUI came out🤩
Yes, I think what you said is great. I am using mimicpc, which can also achieve this effect; you can try it for free. In comparison, I think the mimicpc workflow is more streamlined and friendly.
That 3GB VRAM claim is not true. I have 4GB VRAM and still get out-of-memory with NF4.
Yeah, I don't really use NF4 models because of how slow they are on my 6GB. Better off with the Flux GGUF models + some upscaling.
Easier and cheaper to record yourself actually doing it?
I pay nothing to make my videos. The way it is now, makes it a lot easier for more people to understand.
Good guide! Unfortunately I'm on Arch (I use Arch BTW)
It's taking too much time because you are reaching 120% of VRAM.
AI showcasing Ai
I got this error when it tried to generate the astronaut sample:
Requested to load Flux
Loading 1 new model
loaded partially 3889.2 5854.812986373901 0
0%|
\Users\Graphic\Documents\ComfyUI_windows_portable\ComfyUI\comfy\ldm\modules\attention.py:407: UserWarning: 1Torch was not compiled with flash attention. (Triggered internally at ..\aten\src\ATen\native\transformers\cuda\sdp_utils.cpp:455.)
out = torch.nn.functional.scaled_dot_product_attention(q, k, v, attn_mask=mask, dropout_p=0.0, is_causal=False)
got prompt
got prompt
That's not really an error. It's just a warning that your installed torch package wasn't compiled with flash attention. You don't need to worry about it; you should still be able to generate.
@@TheLocalLab Thank you for your attention. If I update to PyTorch version 1.12 or above, might that solve the problem? Because I realized that flash attention is a technique used to speed up certain types of neural network computations, particularly on GPU runs. Would these installations damage the portable package settings?
@@martinmiciciday5235 Well, you would have to uninstall your current torch packages and install the compatible torch CUDA wheel package alongside flash attention, and hope you don't run into compilation issues. I'm assuming you have an Nvidia GPU as well?
How do I fix this?
clip missing: ['text_projection.weight']
Check your clip folder inside the models directory and make sure your CLIP models are inside. Also make sure you select the correct CLIP models in the DualCLIP node.
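A quick way to check which of the files the DualCLIP node expects are actually present in models/clip. This is a sketch; the expected filenames passed in are whatever your workflow's node lists, not fixed names:

```python
from pathlib import Path

def missing_clip_models(models_dir: Path, expected: list[str]) -> list[str]:
    """Return the expected CLIP filenames that are NOT present in models/clip."""
    clip_dir = models_dir / "clip"
    present = {p.name for p in clip_dir.iterdir()} if clip_dir.is_dir() else set()
    return [name for name in expected if name not in present]
```

An empty return list means the "clip missing" error is more likely a node-selection problem than a file-placement one.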
I get these errors, shown below in CMD:
ComfyUI-Manager: installing dependencies. (GitPython)
WARNING: The script pygmentize.exe is installed in 'C:\Users\Max\Desktop\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\python_embeded\Scripts' which is not on PATH.
Consider adding this directory to PATH or, if you prefer to suppress this warning, use --no-warn-script-location
and this error shown in comfy ui:
Prompt outputs failed validation
CheckpointLoaderSimple:
- Value not in list: ckpt_name: 'v1-5-pruned-emaonly.ckpt' not in ['flux1-dev-bnb-nf4.safetensors']
I downloaded the file and placed it in the checkpoints folder. Not sure why it isn't working, other than maybe having to move it to the correct path; it also sounds like an issue with the file not being in the correct folder (but it is). Do you know why?
The first message is just a warning. For the second one, related to the checkpoint, make sure you select the flux vae file you downloaded in the "Load Vae" node on the ComfyUI webpage. The vae you downloaded from huggingface should look something like "diffusion_pytorch_model.safetensors". I think you have "v1-5-pruned-emaonly.ckpt" selected instead.
@@TheLocalLab Hey, at which part of your video do you mention downloading the VAE? (Did I have to put that file in the VAE folder?) All I remember seeing was: download Comfy -> manager files -> lllyasviel/flux1-dev-bnb-nf4 {checkpoint} -> workflow -> then run Comfy.
I actually didn't show it in that video, but I did in my GGUF tutorial video - ua-cam.com/video/nncY3dJLV78/v-deo.html. I also have the link to the vae models in the description of that video.
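The "Value not in list" failure quoted above means the node's selected filename doesn't match any file the loader found in models/checkpoints. A small sketch of that validation step, mirroring the wording of the error; the function name is illustrative, not ComfyUI's actual API:

```python
def check_selection(selected: str, available: list[str]) -> str:
    """Mimic ComfyUI's prompt validation for a checkpoint dropdown value."""
    if selected in available:
        return "ok"
    return f"Value not in list: ckpt_name: '{selected}' not in {available}"
```

Practical takeaway: after copying a model into models/checkpoints, refresh the ComfyUI browser page so the dropdown re-scans the folder, then pick the new file instead of the stale default.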
not a fan of the ai script audio. just buy a mic and talk, you silly goose.