How to Run Flux NF4 Image Models In ComfyUI with Low VRAM

  • Published Dec 22, 2024

COMMENTS • 135

  • @TheLocalLab
    @TheLocalLab  4 months ago

    🔴 Created a New Fluxgym Runpod Template for Training Flux Loras Faster at Low Cost 👉 ua-cam.com/video/d9ZyvxZEkHY/v-deo.html
    👉 Want to reach out? Join my Discord by clicking here - discord.gg/5hmB4N4JFc

    • @MS-gn4gl
      @MS-gn4gl 4 months ago

      What TTS Model/Service are you using for the voiceovers? I really like it.

    • @TheLocalLab
      @TheLocalLab  4 months ago

      @@MS-gn4gl I'm glad you do. It's a mixture of RVC and XTTS.

  • @AbsolutelyJason
    @AbsolutelyJason 1 month ago

    Thank you for this video! The barrier to entry with these free UIs and models is the complexity to get them working. This video helped me with each and every step!
    I had an existing installation on my PC and I had to install clean to get things to run. Mentioning that in case anyone else is in the same boat!

  • @matthewanacleto7885
    @matthewanacleto7885 3 days ago

    You are my hero.

  • @Huang-uj9rt
    @Huang-uj9rt 4 months ago

    As a beginner I'll also say that your videos are really very friendly, thank you very much. Because of my professional needs and Flux's high learning threshold, I had been using mimicpc to run Flux; it can load the workflow directly, I just have to download the Flux model, and it handles the details wonderfully. But after watching your video and running Flux again, I finally had a different experience. I feel like I'm starting to get the hang of it.

  • @kashifrit
    @kashifrit 3 months ago

    Quite a helpful video. Thanks for covering the whole process from end to end.

  • @synthoelectro
    @synthoelectro 4 months ago +6

    There was a joke in the 80's: "this thing reads like stereo instructions." They said this because the manuals for stereos were so verbose that the average person couldn't understand them and was mostly confused.

  • @FlorinGN
    @FlorinGN 4 months ago

    Gorgeous tutorial! Thank you! :D

  • @hebercloward1695
    @hebercloward1695 3 months ago

    I have held off trying to install ComfyUI specifically because I messed up my Python environment variables, which caused issues with a whole lotta other things. THIS is just the video I needed, THANKS!

  • @shuntera
    @shuntera 4 months ago +4

    Just installed and ran it but got this error when loading the dev workflow and hitting Queue Prompt:
    Error occurred when executing CheckpointLoaderNF4:
    load_checkpoint_guess_config() got an unexpected keyword argument 'model_options'
    EDIT: Resolved by running the ComfyUI updater .bat file

    • @Markgen2024
      @Markgen2024 4 months ago

      Where can I find it? I don't see it inside.

    • @IamGhe
      @IamGhe 4 months ago

      @@Markgen2024 D:\Firefox\ComfyUI_windows_portable>bitsandbytes command prompt - python_embeded\python.exe -m pip install -r ComfyUI\custom_nodes\ComfyUI_bitsandbytes_NF4\requirements.txt
      'bitsandbytes' is not recognized as an internal or external command,
      operable program or batch file.
      It gave me this error.

    • @hebercloward1695
      @hebercloward1695 3 months ago

      @@Markgen2024 Right in the main folder you should see 3 folders: ComfyUI, python_embeded, and update. In the 'update' folder just run "update_comfyui.bat". Then go back to the main folder and run "run_nvidia_gpu.bat" or "run_cpu.bat".
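      For reference, those steps from a command prompt might look roughly like this (a sketch only; the C:\ path is an assumption, use wherever you extracted the portable build):
      cd C:\ComfyUI_windows_portable\update
      update_comfyui.bat
      cd ..
      run_nvidia_gpu.bat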

  • @aryadas1095
    @aryadas1095 4 months ago +1

    Thanks for the awesome tutorial 😀

  • @bonsaika65
    @bonsaika65 4 months ago +4

    Great job man! Thanks to you I set it all up in 10 minutes and it works just fine (I output a 1024x1400 image in 90 seconds with an RTX 3060 Ti / 8GB)! 1 more subscriber ;)

    • @manoharry7988
      @manoharry7988 4 months ago

      How much RAM is needed? My 16 GB of RAM is getting full and it takes about 4 minutes on my RTX 4060.

    • @gauravraj9328
      @gauravraj9328 1 month ago +1

      @@manoharry7988 It's taking 21 GB of RAM for me, 18 seconds for a 512x512 image. Specs: 40 GB RAM, 8 GB RTX 4060.

    • @ApexArtistX
      @ApexArtistX 1 month ago

      Sure it doesn’t crash

  • @myheyang
    @myheyang 4 months ago +2

    I have been searching for the last few days, how to run NF4 on comfyui, you helped a lot, thanks

  • @MedinaCliff
    @MedinaCliff 4 months ago +1

    Will this run on a Surface Pro 8, Intel GPU, i7, 16 gigs?

  • @vicentepallamare2608
    @vicentepallamare2608 2 months ago +1

    Any Flux img2vid that could work on 6gigs of VRAM?

    • @TheLocalLab
      @TheLocalLab  2 months ago

      I'm not sure about Flux img2vid but Cogvideo is the best open source img2vid we have available currently that can run on some low vram devices. It took a good while per generation but I was able to generate videos on my 6GB laptop. I have a video here to install via pinokio - ua-cam.com/video/wf-BiUN8fSY/v-deo.html.

  • @livinagoodlife
    @livinagoodlife 3 months ago

    Thanks for your videos. Very helpful for someone just starting on their locally hosted llm journey. I'm currently using Stability Matrix for managing Comfy UI and Stable Diffusion. What do you think of it?

    • @TheLocalLab
      @TheLocalLab  3 months ago

      Thanks for watching. Honestly I haven't used Stability Matrix yet since I only really use Comfy after switching from SD, but it seems useful if you're using multiple UIs to keep packages and models together.

  • @panzerswineflu
    @panzerswineflu 4 months ago

    I'm going to have to clone the repo and go through the steps and see if it works for me. I've had the portable version and can't get it to run, and I've seen at least one comment about that being an issue. The same checkpoint works fine in Forge but runs out of RAM in ComfyUI.

  • @NimmDir
    @NimmDir 4 months ago

    Thank you for your work, it works great with your instructions

  • @pktron
    @pktron 1 month ago

    awesome content man... why not use civitai to share workflows?

    • @TheLocalLab
      @TheLocalLab  1 month ago +1

      I currently do share workflows and content on Civitai. I guess not this one since it's easy to find everywhere. I like to share my more unique workflows there. Profile - civitai.com/user/TheLocalLab.

  • @Huguillon
    @Huguillon 3 months ago

    Please help, I have a problem at the bitsandbytes command step at 6:51. I got this error:
    ""bitsandbytes" is not recognized as an internal or external command, program, or executable batch file."

    • @TheLocalLab
      @TheLocalLab  3 months ago

      Yes, you need to make sure you're inside the ComfyUI portable Windows directory (the one with the python_embeded folder inside) in your terminal before running the command. The command should look like this: "python_embeded\python.exe -m pip install -r ComfyUI\custom_nodes\ComfyUI_bitsandbytes_NF4\requirements.txt"

    • @Huguillon
      @Huguillon 3 months ago

      @@TheLocalLab Thank you, my bad, I was copying the full text (I'm not good at coding), including the "bitsandbytes command prompt - " part

  • @yamamotosora1912
    @yamamotosora1912 3 months ago

    I get an error pop-up: "Unable to start the application correctly (0xc000012d). Click OK to close the application."

  • @philjones8815
    @philjones8815 4 months ago +2

    Can anyone help? I had most of comfyui installed but the 'run files' are missing from the folder...could this be a python issue?

    • @TheLocalLab
      @TheLocalLab  4 months ago

      Everything should be included after extracting the files. Try deleting the comfyUI portable folder and extract the files again with 7-zip from the zip file you downloaded from the repo.

    • @philjones8815
      @philjones8815 4 months ago

      @@TheLocalLab Thank you for the fast reply. I have it working now...seems to be an issue with my stupid Alienware pc and Windows. Great video and I look forward to your next tutorial.

    • @IamGhe
      @IamGhe 4 months ago

      @@TheLocalLab Sorry to bother you, but I don't understand the workflow script part: where and how? Can you explain in more detail? Thanks.

  • @Hanimlat
    @Hanimlat 3 months ago

    Thanks for the tutorial. I'm running NVIDIA GTX1650 with 4GB VRAM and my run on Schnell with 4 steps takes 12 minutes. There's actually no difference between dev and schnell for me. Both run at 3 minutes per step. Any advice on how to speed things up?

    • @TheLocalLab
      @TheLocalLab  3 months ago +1

      You can try running the quantized GGUF versions instead of the NF4's. There are really small quants like the Q4_0 that still produce pretty decent quality images. I have a video tutorial here - ua-cam.com/video/nncY3dJLV78/v-deo.html.

  • @seanknowles7987
    @seanknowles7987 4 months ago

    For the ComfyUI workflow file you have here, do we extract it first (where the code is shown), or do we download it, then extract it and move it to our created workflow folder?

    • @TheLocalLab
      @TheLocalLab  4 months ago

      Once downloaded, load the JSON workflow into ComfyUI. It's in the right-side menu once you load Comfy in your browser.

  • @fatfrank22
    @fatfrank22 4 months ago

    Easy and simple tutorial, thank you.

    • @Huang-uj9rt
      @Huang-uj9rt 4 months ago +1

      yes, I think what you said is great. I am using mimicpc which can also achieve such effect. You can try it for free. In comparison, I think the use process of mimicpc is more streamlined and friendly.

  • @stephnocean1095
    @stephnocean1095 4 months ago

    Hello, thank you for this enlightening video.
    As the owner of an AMD graphics card, do you know how to configure it with Zluda under ComfyUI? I've seen a few tutorials but they're hardly explicit.
    Greetings from France.

    • @TheLocalLab
      @TheLocalLab  4 months ago +1

      Unfortunately I do not as I'm a Nvidia card holder myself.

  • @seanknowles7987
    @seanknowles7987 4 months ago

    I ran your workflow file through a virus scanner and noticed ArcSight Threat Intelligence flagged it as suspicious; everything else checked out. It's just something for users to watch out for, or you could check with the vendor to fix that issue.

  • @marcoantonionunezcosinga7828
    @marcoantonionunezcosinga7828 3 months ago

    I saw a video that said that for better performance it's good to remove extensions that aren't used, which gives more speed and fewer conflicts. I don't know how true that is, I'm just trying it. 🤔 To close it, I also hear the correct way is to press Ctrl and the letter C.

  • @newsector
    @newsector 3 months ago

    I have a 3080 and a 2060. Is it possible to use the VRAM of these 2 cards?

    • @TheLocalLab
      @TheLocalLab  3 months ago

      Last I read, Comfy doesn't support multiple GPUs unfortunately. It possibly could in the future. You might be able to find a workaround on Reddit.

  • @martinmiciciday5235
    @martinmiciciday5235 3 months ago

    Is it possible to somehow install from GitHub on an offline computer? My graphics computer isn't connected to the internet to keep it clean. Is there any offline GitHub package? How can I do all those CMD installs you did offline?

    • @TheLocalLab
      @TheLocalLab  3 months ago +1

      If you use the portable version of ComfyUI then yes. Use a computer with internet to extract the portable ComfyUI files. Follow the steps in the video to install the custom nodes and the bitsandbytes dependency. Once you have that complete and you know it works, transfer the entire ComfyUI folder using a USB drive or whatever storage device and run Comfy how you normally would on the offline PC. All the dependencies should remain in the ComfyUI directory even after the transfer.
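      A rough sketch of that transfer from the online machine, assuming the portable folder sits at C:\ComfyUI_windows_portable and the USB drive shows up as E: (both paths are just examples):
      robocopy C:\ComfyUI_windows_portable E:\ComfyUI_windows_portable /E
      rem /E copies all subfolders, including empty ones. On the offline PC, copy the folder back off the drive and launch run_nvidia_gpu.bat or run_cpu.bat as usual.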

  • @DarioToledo
    @DarioToledo 4 months ago +3

    I have a 3050 Ti 4GB and yes, even the dev model can run on it thanks to NF4, but it ain't really worth it. Not a big deal, if you don't mind each iteration taking like a minute and a half. You can just download and run it for some tests and then free the disk space.

    • @erickbarsa5433
      @erickbarsa5433 3 months ago

      How are you doing this bro? I'm literally using the same graphics card with the same VRAM and it crashes after 10 steps when generating. Would appreciate some advice!

    • @DarioToledo
      @DarioToledo 3 months ago +2

      @@erickbarsa5433 latest version? Latest drivers? Is the video memory being shared with other apps? Btw I've moved on to GGUF models and the Q4 works way better, it went down to 9s/it which makes the model usable at least.

    • @Jadepulse-fx9jj
      @Jadepulse-fx9jj 2 months ago +1

      @@DarioToledo Can you share a video on how to do that, please?

    • @DarioToledo
      @DarioToledo 2 months ago

      @@Jadepulse-fx9jj it's nothing new, I've just followed other videos on this topic around here.

  • @AiMeowAi
    @AiMeowAi 3 months ago

    Your video is awesome! Could you please let me know if the Intel Arc 770 GPU can run FLUX?

    • @TheLocalLab
      @TheLocalLab  3 months ago

      It should be possible as long as you have enough RAM. I would think you would need more than 16GB to have it running smoothly. Someone posted a guide on Reddit - www.reddit.com/r/comfyui/comments/1ev7ym8/howto_running_flux1_dev_on_a770_forge_comfyui/?rdt=35993

  • @buanadaruokta8766
    @buanadaruokta8766 16 days ago

    I got this error:
    All input tensors need to be on the same GPU, but found some tensors to not be on a GPU:
    [(torch.Size([4718592, 1]), device(type='cpu')), (torch.Size([1, 3072]), device(type='cuda', index=0)), (torch.Size([1, 3072]), device(type='cuda', index=0)), (torch.Size([147456]), device(type='cpu')), (torch.Size([16]), device(type='cpu'))]

  • @MisterSoul-Immortal
    @MisterSoul-Immortal 28 days ago

    Sorry, I get the message: "All input tensors need to be on the same GPU, but found some tensors to not be on a GPU". Impossible to get images here.

    • @TheLocalLab
      @TheLocalLab  24 days ago

      I've been getting this error in other workflows as well and believe this is a comfyui bug that needs to be fixed.

  • @李云-f1b
    @李云-f1b 4 months ago

    What's the name of the music at the beginning of the video?

  • @64z
    @64z 21 days ago

    I followed it step by step but got this error: "ValueError(f"Expected a cuda device, but got: {device}")
    ValueError: Expected a cuda device, but got: cpu". I use a 1070 Ti GPU. How can I fix this?

    • @TheLocalLab
      @TheLocalLab  21 days ago

      Yeah I've been getting this error with a variety of other workflows as well. I believe this might be a ComfyUI bug.

    • @64z
      @64z 20 days ago

      @@TheLocalLab Thanks for the reply. So this is a work flow error? With a different workflow it should work?

    • @TheLocalLab
      @TheLocalLab  20 days ago

      @@64z I believe it's a bug in a recent ComfyUI update that affects all workflows that utilize both the CPU and GPU during inference. If you're on Windows, you can try downloading and using an older release version of ComfyUI and seeing if the issue persists. ComfyUI releases page - github.com/comfyanonymous/ComfyUI/releases.

    • @64z
      @64z 19 days ago

      @@TheLocalLab Thanks for the suggestions. Will see if i can do that.

  • @vj5qj3qb7d
    @vj5qj3qb7d 4 months ago +1

    Can you please help? I get an error "ComfyUI_windows_portable\python_embeded\VCRUNTIME140.dll cannot be executed or has an error" (something like that, my PC is not in English) when trying to execute your bitsandbytes command.
    Gemini and ChatGPT-4o both say I need to install Visual C++ and Python, which I have done, but I'm still getting the same error.
    Very much appreciate your help.

    • @TheLocalLab
      @TheLocalLab  4 months ago

      In what directory did you execute the command? You have to execute it in the ComfyUI_windows_portable directory. You will know you are in the right directory when you see the "python_embeded" folder. It's the same directory that holds the run_cpu.bat file.
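      A quick way to confirm you're in the right place (the C:\ path is just an example; use wherever you extracted the portable build):
      cd C:\ComfyUI_windows_portable
      dir
      rem The listing should include ComfyUI, python_embeded, update, run_cpu.bat and run_nvidia_gpu.bat.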

    • @vj5qj3qb7d
      @vj5qj3qb7d 4 months ago

      Yeah, I ran it in ComfyUI_windows_portable.

    • @TheLocalLab
      @TheLocalLab  4 months ago

      What's the last couple of lines of the error code?

    • @vj5qj3qb7d
      @vj5qj3qb7d 4 months ago

      @@TheLocalLab Translation of the whole error message "C:\ComfyUI_windows_portable\python_embeded\VCRUNTIME140.dll is not executable on Windows or contains errors. Reinstall using the original installation media or contact your system administrator or the software manufacturer. Error status 0xc000012f."

    • @TheLocalLab
      @TheLocalLab  4 months ago

      Which version of Windows do you have, 32-bit or 64-bit?

  • @Qizarr
    @Qizarr 4 months ago

    @TheLocalLab Awesome tutorial! Finally Flux works on my local machine. Can you show us how to use a custom LoRA with this workflow?

    • @TheLocalLab
      @TheLocalLab  4 months ago

      Definitely, probably in a future video.

  • @therookiesplaybook
    @therookiesplaybook 2 months ago

    Thank you for this. I kept getting Python Has Stopped Working errors. This fixed it.

  • @easyjapaneseforall
    @easyjapaneseforall 4 months ago

    Thanks, very easy to follow. It worked for me, but yeah at the end speed is everything, and it takes around 190 seconds with a Nvidia geforce RTX 2060 Super for a 1080x1080

  • @mandyregenboog
    @mandyregenboog 4 months ago

    I get the following error: [WinError 126] The specified module could not be found. Error loading "C:\Flux3\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\lib\fbgemm.dll" or one of its dependencies

    • @TheLocalLab
      @TheLocalLab  4 months ago

      At what point during the process did you get this message?

    • @mandyregenboog
      @mandyregenboog 3 months ago

      @@TheLocalLab Issue solved, I was missing the Microsoft Visual C++ install.

    • @TheLocalLab
      @TheLocalLab  3 months ago

      @@mandyregenboog Glad to hear it, hope you're enjoying the models.

  • @martinmiciciday5235
    @martinmiciciday5235 3 months ago

    I got an error when I ran git clone in CMD. CMD cannot recognize it as a command, so I can't install from the link. What am I doing wrong? My computer is Windows.

    • @TheLocalLab
      @TheLocalLab  3 months ago +1

      You have to install Git in order to git clone GitHub repos. Do a search for Git downloads, click on the git-scm result and install Git for Windows.
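      If you prefer installing it from the command line, winget can also do it (a sketch, assuming Windows 10/11 with winget available):
      winget install --id Git.Git -e --source winget
      rem Reopen CMD afterwards so PATH picks up git, then verify with:
      git --version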

    • @martinmiciciday5235
      @martinmiciciday5235 3 months ago

      @@TheLocalLab 🙏🙏🙏🙏🙏🙏

  • @erickbarsa5433
    @erickbarsa5433 3 months ago

    I'm using a 3050 Ti with 4GB VRAM and I'm always getting an OOM error. Any advice on it?

    • @TheLocalLab
      @TheLocalLab  3 months ago +1

      Yeah try using the Flux GGUF models instead. The nf4 models can take a lot longer to generate images anyways. The GGUF models come in a variety of smaller sizes that can still generate a decent quality image. I got a setup tutorial here - ua-cam.com/video/nncY3dJLV78/v-deo.html.

  • @bigcraft2069
    @bigcraft2069 4 months ago

    How to fix generation error "UserWarning: 1Torch was not compiled with flash attention. (Triggered internally at ..\aten\src\ATen\native\transformers\cuda\sdp_utils.cpp:455.)" ?

    • @TheLocalLab
      @TheLocalLab  4 months ago +1

      That's just a warning from comfy, stating pytorch wasn't compiled with flash attention( a separate package that can improve the efficiency of transformer models) and you most likely do not have flash attention installed. It's no problem, you should still be able to use comfy just fine.

    • @bigcraft2069
      @bigcraft2069 4 months ago +1

      @@TheLocalLab thank you so much

  • @weilinliang
    @weilinliang 4 months ago

    I don't have Nvidia GPU and this method doesn't work for me when running with CPU. I'm getting errors saying no Nvidia driver found on my system. Is there a way to fix that?

    • @TheLocalLab
      @TheLocalLab  4 months ago +1

      The NF4 models are a bit more computationally intensive. You will mostly need a GPU of some form to run those models. GGUFs, on the other hand, can run smoothly on CPU if you have enough RAM.

    • @weilinliang
      @weilinliang 4 months ago

      I have uninstalled the NF4 node and cloned the GGUF one with a cmd command. Do I need a new workflow to run the GGUF model?

    • @TheLocalLab
      @TheLocalLab  4 months ago +1

      @@weilinliang It's probably best to use a different workflow instead of configuring the current one. The GGUF model requires different custom nodes, you can follow the guide in my Flux GGUF video here - ua-cam.com/video/nncY3dJLV78/v-deo.html.

  • @MedinaCliff
    @MedinaCliff 4 months ago

    Got to this point but said no such file or directory???
    C:\ComfyUI\ComfyUI_windows_portable>python_embeded\python.exe -m pip install -r ComfyUI\custom_nodes\ComfyUI_bitsandbytes_NF4\requirements.txt
    ERROR: Could not open requirements file: [Errno 2] No such file or directory: 'ComfyUI\\custom_nodes\\ComfyUI_bitsandbytes_NF4\\requirements.txt'

    • @TheLocalLab
      @TheLocalLab  4 months ago

      Go into that folder using your file explorer. Check to make sure the python_embeded folder is in the directory you're running the command from. It should be in the same directory that has the run .bat files.

    • @MedinaCliff
      @MedinaCliff 4 months ago

      @@TheLocalLab
      D:\ComfyUI\ComfyUI\custom_nodes\ComfyUI\custom_nodes>D:\AI\ComfyUI_windows_portable_nvidia (1)\ComfyUI_windows_portable\run_nvidia_gpu.bat
      'D:\AI\ComfyUI_windows_portable_nvidia' is not recognized as an internal or external command,
      operable program or batch file.
      D:\ComfyUI\ComfyUI\custom_nodes\ComfyUI\custom_nodes>

  • @adonisserghini8420
    @adonisserghini8420 2 months ago

    It wooorks like a charm on my 2070 Super Max-Q (8GB VRAM).
    Sure, it takes a while to render. But man, no errors.

  • @Hairsaver.
    @Hairsaver. 3 months ago

    thank you ❤

  • @Avalon1951
    @Avalon1951 4 months ago +2

    my advice, use Forge

  • @ChaudhryWaqasAfzal
    @ChaudhryWaqasAfzal 2 months ago

    Tampons toxic metals reference 4:36 lolz

  • @mohmmedalbihany5723
    @mohmmedalbihany5723 4 months ago

    I don't know how you managed to run that NF4 model on your 6GB VRAM device in 6 minutes. I have a 6GB VRAM GTX: I can run the FP8 version in 30 minutes on ComfyUI, and I have the NF4 Dev version, but it only runs on Forge WebUI, taking 10 minutes. For some strange reason it doesn't run on ComfyUI; I get a "not enough allocated storage" error. I tried everything and couldn't make it work, despite the fact that the same model runs on Forge WebUI and I can even run the heavier FP8 version on Comfy with no issue, even if it takes longer. Do you have any idea why I'm getting that error?

    • @TheLocalLab
      @TheLocalLab  4 months ago +1

      It could be because you have a GTX card instead of an RTX. I have an RTX 4050. RTX is known to be better equipped for running AI applications due to its modern architecture. Check your GPU performance in Task Manager and make sure Comfy is using as much of your VRAM capacity as possible.

    • @Avalon1951
      @Avalon1951 4 months ago +1

      @@TheLocalLab I gave up running it on Comfy, I'm running the NF4 on Forge. I swear it's as fast as normal models and I can run multiple images without killing my machine. I have an RTX 3070 with 8GB.

    • @mohmmedalbihany5723
      @mohmmedalbihany5723 4 months ago +1

      @@TheLocalLab I don't think it's my GPU, otherwise how come the same exact NF4 model runs smoothly on Forge? Plus I can run the FP8 dev version on the same GPU; it's slow but never overflows the VRAM. I believe it's memory mismanagement somewhere in the ComfyUI version.

    • @manoharry7988
      @manoharry7988 4 months ago

      @@Avalon1951 How much system RAM do you have? My 16GB gets full and the system hangs.

    • @Avalon1951
      @Avalon1951 4 months ago

      @@manoharry7988 Same, 16. Are you using the NF4? That's the one I'm using on Forge, not Comfy.

  • @Tom_Neverwinter
    @Tom_Neverwinter 4 months ago

    Once NF4 can take LoRAs it will be amazing!

  • @masterprog48
    @masterprog48 4 months ago

    thank you so much

  • @marcoantonionunezcosinga7828
    @marcoantonionunezcosinga7828 4 months ago +3

    Great until the "FLUX" version for ComfyUI came out🤩

    • @Huang-uj9rt
      @Huang-uj9rt 4 months ago

      Yes, I think what you said is great. I am using mimicpc which can also achieve such effect. You can try it for free. In comparison, I think the use process of mimicpc is more streamlined and friendly.

  • @bryanbondoc-y4s
    @bryanbondoc-y4s 1 month ago

    That 3GB RAM claim is not true. I have 4GB VRAM and am still getting out of memory with NF4.

    • @TheLocalLab
      @TheLocalLab  1 month ago

      Yeah, I don't really use NF4 models because of how slow they are on my 6 GB. You're better off with the Flux GGUF models + some upscaling.

  • @strong8705
    @strong8705 4 months ago

    Wouldn't it be easier and cheaper to record yourself actually doing it?

    • @TheLocalLab
      @TheLocalLab  4 months ago +1

      I pay nothing to make my videos. The way it is now, makes it a lot easier for more people to understand.

  • @gazik0mamedov
    @gazik0mamedov 4 months ago +1

    Good guide! Unfortunately I'm on Arch (I use Arch BTW)

  • @hgato
    @hgato 3 months ago

    It's taking too much time because you are hitting 120% of your VRAM.

  • @synthoelectro
    @synthoelectro 4 months ago

    AI showcasing Ai

  • @martinmiciciday5235
    @martinmiciciday5235 3 months ago

    I got this error when I tried to generate the astronaut sample:
    Requested to load Flux
    Loading 1 new model
    loaded partially 3889.2 5854.812986373901 0
    0%|
    \Users\Graphic\Documents\ComfyUI_windows_portable\ComfyUI\comfy\ldm\modules\attention.py:407: UserWarning: 1Torch was not compiled with flash attention. (Triggered internally at ..\aten\src\ATen\native\transformers\cuda\sdp_utils.cpp:455.) out = torch.nn.functional.scaled_dot_product_attention(q, k, v, attn_mask=mask, dropout_p=0.0, is_causal=False)
    got prompt
    got prompt

    • @TheLocalLab
      @TheLocalLab  3 months ago

      That's not really an error. It's just a warning that your installed torch package wasn't compiled with flash attention. You don't need to worry about it; you should still be able to generate.

    • @martinmiciciday5235
      @martinmiciciday5235 3 months ago

      @@TheLocalLab Thank you for your attention. If I update to PyTorch version 1.12 or above, might that solve the problem? I realized that flash attention is a technique used to speed up certain types of neural network computations, particularly on GPU. Would that installation damage the portable package settings?

    • @TheLocalLab
      @TheLocalLab  3 months ago

      @@martinmiciciday5235 Well, you would have to uninstall your current torch packages and install the compatible torch CUDA wheel package alongside flash attention, and hope you don't run into compilation issues. I'm assuming you have an Nvidia GPU as well?
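      Roughly, from inside the portable folder, that swap might look like this (a sketch only; cu121 is an example index URL, match it to your installed CUDA version, and flash-attn usually needs a C++ build toolchain to compile):
      python_embeded\python.exe -m pip uninstall -y torch torchvision torchaudio
      python_embeded\python.exe -m pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu121
      python_embeded\python.exe -m pip install flash-attn --no-build-isolation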

  • @Hecbertgg
    @Hecbertgg 3 months ago

    How do I fix this?
    clip missing: ['text_projection.weight']

    • @TheLocalLab
      @TheLocalLab  3 months ago

      Check your clip folder inside the models directory and make sure your CLIP models are inside. Also make sure you select the correct CLIP models in the DualCLIP node.

  • @seanknowles7987
    @seanknowles7987 4 months ago

    I get these errors shown in cmd below:
    ComfyUI-Manager: installing dependencies. (GitPython)
    WARNING: The script pygmentize.exe is installed in 'C:\Users\Max\Desktop\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\python_embeded\Scripts' which is not on PATH.
    Consider adding this directory to PATH or, if you prefer to suppress this warning, use --no-warn-script-location
    And this error is shown in the ComfyUI web UI:
    Prompt outputs failed validation
    CheckpointLoaderSimple:
    - Value not in list: ckpt_name: 'v1-5-pruned-emaonly.ckpt' not in ['flux1-dev-bnb-nf4.safetensors']
    I downloaded the file and placed it in the checkpoint folder. Not sure why it isn't working, except maybe having to move it to the correct path, but it also sounds like an issue with the file not being in the correct folder (but it is). Do you know why?

    • @TheLocalLab
      @TheLocalLab  4 months ago

      The first message is just a warning. For the second one, related to the checkpoint, make sure you select the Flux VAE file you downloaded in the "Load VAE" node on the ComfyUI webpage. The VAE you downloaded from Hugging Face should look something like this: "diffusion_pytorch_model.safetensors". I think you have "v1-5-pruned-emaonly.ckpt" selected instead.

    • @seanknowles7987
      @seanknowles7987 4 months ago

      @@TheLocalLab Hey, at which part in your video do you mention downloading the VAE? (Did I have to put that file in the VAE folder?) All I remember seeing was: download Comfy -> manager files -> lllyasviel/flux1-dev-bnb-nf4 (checkpoint) -> workflow -> then run Comfy.

    • @TheLocalLab
      @TheLocalLab  4 months ago

      I actually didn't show it in that video but did in my GGUF tutorial video - ua-cam.com/video/nncY3dJLV78/v-deo.html. I also have the link to the vae models in my description of that video.

  • @cheezeebred
    @cheezeebred 1 month ago

    not a fan of the ai script audio. just buy a mic and talk, you silly goose.