How To Run Flux Dev & Schnell GGUF Image Models With LoRAs Using ComfyUI - Workflow Included

  • Published Dec 22, 2024

COMMENTS •

  • @TheLocalLab
    @TheLocalLab  4 months ago +1

    🔴 Stable Diffusion 3.5 Large & Turbo Models Just Released - ComfyUI Support 👉 ua-cam.com/video/PMxpmYp3N58/v-deo.html
    👉 Want to reach out? Join my Discord by clicking here - discord.gg/5hmB4N4JFc

    • @dhrubajyotipaul8204
      @dhrubajyotipaul8204 3 months ago

      This is amazing. Thanks! 🙂

    • @rabbit1259
      @rabbit1259 3 months ago

      I have fine-tuned the model and now I need to check how well it generates given a related prompt. Can we use ComfyUI for this? I have the LoRA safetensors and config.json downloaded after fine-tuning it.

    • @TheLocalLab
      @TheLocalLab  3 months ago +1

      @@rabbit1259 If you created a Flux LoRA, it should work fine with this ComfyUI setup. Just make sure you download the workflow with the LoRA node connected from the description. Don't forget to place your LoRA in the LoRA folder inside the models directory.
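
      For reference, a default ComfyUI install expects the files in these folders (a sketch of the usual layout, assuming you haven't relocated the models directory):

        ComfyUI/models/unet/    <- the .gguf checkpoints (e.g. flux1-dev-Q4_0.gguf)
        ComfyUI/models/clip/    <- the t5xxl and clip-vit-large-patch14 text encoders
        ComfyUI/models/vae/     <- flux_vae.safetensors
        ComfyUI/models/loras/   <- your LoRA .safetensors files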

  • @cyanideshep7288
    @cyanideshep7288 4 months ago +4

    THANK YOU!!!! This is the first tutorial that worked after searching for so long. Very clear and well put together :)

  • @zoo6062
    @zoo6062 4 months ago +6

    After updating ComfyUI, the LoRA worked! I have a GTX 960 4GB.
    Model: schnell Q8_0.gguf. Results:
    512x512, 4 steps: generated in 2 minutes!
    768x768, 4 steps: generated in 2.40 minutes!
    1024x1024, 4 steps: generated in 4.20 minutes!
    On the processor alone (2x 2630v2) it would be 16 minutes; now it's 12.
    I didn't think it would work on such an old video card.

  • @TrevorSullivan
    @TrevorSullivan 4 months ago +3

    Which Text-to-Speech model are you using to generate these videos? Sounds really similar to some others I've heard.

  • @derekwang5982
    @derekwang5982 2 months ago

    Thank you! Great tutorial.

  • @Hardstylez7132
    @Hardstylez7132 1 month ago

    Thank you so so much for your video!

  • @DaniDani-zb4wd
    @DaniDani-zb4wd 4 months ago +2

    I wanna see a comparison… what is the drop in quality between versions?

  • @lennoyl
    @lennoyl 4 months ago +1

    Thanks for the video. I haven't tried those GGUF models yet (still downloading... ^^), but being able to choose the model that suits your PC config and your needs is very nice.
    PS: the GGUF nodes are now available in the ComfyUI Manager; there's no more need to use the git clone command in PowerShell.

  • @UCs6ktlulE5BEeb3vBBOu6DQ
    @UCs6ktlulE5BEeb3vBBOu6DQ 1 month ago

    Amazing, it worked on the first try. The Q6 quant gives amazing quality.

  • @FirstLast-ye7qo
    @FirstLast-ye7qo 3 months ago

    5 stars!!
    It works for me with an RTX 2050 4GB. Takes around 2 minutes with Schnell, which is a lot better than not working at all. Image quality is great as well.

  • @DarioToledo
    @DarioToledo 4 months ago

    Thank you for the guide. I've actually seen an improvement on Schnell from about 20 s/it to 15 s/it with GGUF on my 3050 Ti 4GB.

  • @Huang-uj9rt
    @Huang-uj9rt 4 months ago +1

    Because of my professional needs and Flux's steep learning curve, I had been using mimicpc to run Flux: it can load the workflow directly, I just have to download the Flux model, and it handles the details wonderfully. But after watching your video and running Flux on mimicpc again, I finally had a different experience; I feel like I'm starting to get the hang of it.

  • @weilinliang
    @weilinliang 4 months ago

    Exactly what I'm looking for. You're the best! 👍 This is super helpful.

  • @自學成才
    @自學成才 4 months ago

    Thanks a lot!! This video really saved me. I've been stuck on this question for a few days! Thank you very much!

  • @Kapharnaum92
    @Kapharnaum92 4 months ago +2

    Hi, thanks a lot for your video. Very clear.
    However, when I start ComfyUI, I have the following error: Missing Node Types > LoraLoader|pysssss
    Any idea how to solve this?

    • @TheLocalLab
      @TheLocalLab  4 months ago

      Try updating your ComfyUI.

    • @Kapharnaum92
      @Kapharnaum92 4 months ago +3

      @@TheLocalLab I updated and it still didn't work.
      I then modified your JSON file and removed the "|pysssss" part, and it worked.

    • @TheLocalLab
      @TheLocalLab  4 months ago

      Interesting, you're the only one who has told me this, but I'm glad it's working for you now. Enjoy.

  • @kirubeladamu4760
    @kirubeladamu4760 4 months ago +9

    Fixes for the issues not mentioned in the video:
    - remove the '|pysssss' string on line 143 of the workflow JSON file (a scripted version of this edit is sketched below)
    - rename the 'diffusion_pytorch_model.safetensors' file you downloaded to 'flux_vae' before adding it to the vae folder
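
    If you'd rather script that first fix than hand-edit the file, here is a minimal Python sketch (the workflow file name below is a placeholder for whichever workflow JSON you downloaded):

      import json
      from pathlib import Path

      path = Path("flux_gguf_workflow.json")  # placeholder; point at your downloaded workflow file
      workflow = json.loads(path.read_text(encoding="utf-8"))

      # The stock LoraLoader node works fine; strip the pysssss custom-node suffix.
      for node in workflow.get("nodes", []):
          if node.get("type") == "LoraLoader|pysssss":
              node["type"] = "LoraLoader"

      path.write_text(json.dumps(workflow, indent=2), encoding="utf-8")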

    • @super_kenil
      @super_kenil 4 months ago +1

      The reason for renaming?

    • @cristianoazevedo8386
      @cristianoazevedo8386 3 months ago

      @@super_kenil I think it's because there are two different VAEs, one for dev and another for schnell.

    • @kawa9694
      @kawa9694 3 months ago

      Doesn't matter @@super_kenil

    • @henrismith7472
      @henrismith7472 2 months ago

      I renamed it but this is still happening: Prompt outputs failed validation
      VAELoader:
      - Value not in list: vae_name: 'flux_vae.safetensors' not in ['taesd', 'taesdxl', 'taesd3', 'taef1']

  • @SebAnt
    @SebAnt 4 months ago

    WOW - Great intro to the latest!!

  • @liborbatek8938
    @liborbatek8938 3 months ago

    Thanks for a great tutorial!! Just curious: I'm using the Schnell version (Q4_0) as suggested in your vid, but I really can't generate at just 4 steps as it's quite blurry... so I'd like to know if it's possible to get nice sharp results with just 4 steps in this setup. Thanks for any advice... p.s. at 20 steps it works really nicely!

    • @TheLocalLab
      @TheLocalLab  3 months ago +1

      I actually used the dev model in this video as it provides better quality, but it indeed needs more than 4 steps to get good results. I tried improving my image quality with the schnell model but always ended up going back to dev. You can try using a merged dev and schnell model from Civitai to generate better 4-step images, or maybe add some schnell LoRAs to the workflow.

  • @dongleo-zk4cd
    @dongleo-zk4cd 3 months ago

    I completed it step by step following your instructions. The moment the image came out, I was surprised! I feel very accomplished, thank you! Since my graphics card is a GTX 1060 6GB, when I use this GGUF Q4 model it takes about 3 minutes to generate a picture, which is still very slow. If I want to generate a picture within 1 minute, do you have any graphics card to recommend?

    • @TheLocalLab
      @TheLocalLab  3 months ago +3

      I'm happy you successfully installed the workflow. If you're looking to get a graphics card, I would recommend a used RTX 3090 (24GB VRAM), if you can find one, which would not only get you down under a minute but ensure you're set to run bigger image models faster, as well as other resource-intensive AI models like open-source LLMs. GTX cards are known to be quite a bit slower than RTX cards. If you're just looking for a cheaper upgrade from your 1060, you can go with an RTX 3060 (12GB VRAM) instead, which is a couple hundred dollars cheaper than the 3090 but has its limitations too.

    • @dongleo-zk4cd
      @dongleo-zk4cd 3 months ago

      @@TheLocalLab Very helpful to me, thanks.😊

  • @schuss303
    @schuss303 4 months ago

    Thank you for the video. Does anyone know the best way to use a 155H with 8GB integrated graphics, 16GB 7600MHz RAM, an NPU, and a very fast drive? It's the 14th-gen Zenbook. Thank you for any info.

  • @expaintz
    @expaintz 4 months ago

    Very cool intro to GGUF!

  • @Xammblu_Games
    @Xammblu_Games 3 months ago

    Is the NO LORA workflow correct? I'm seeing a connected Lora node in the workflow. If I did want to use a Lora with the Schnell Model, can I only use Loras that have Schnell in their name?

    • @TheLocalLab
      @TheLocalLab  3 months ago +1

      Yes, in the no-LoRA workflow the LoRA node's model output isn't connected to the KSampler's model input, so the LoRA won't have any effect on the generated image. If you want, you can disconnect the clip links from the LoRA node and reconnect the DualCLIPLoader's clip outputs directly to the CLIP text encode nodes. I believe you can use almost any LoRA with these Flux models; I've been using some SD LoRAs with the Flux dev GGUF model and they've worked with no issue. You can try it out and see if any errors pop up.

    • @Xammblu_Games
      @Xammblu_Games 3 months ago

      @@TheLocalLab Thank you so much!! My power supply can only support a GTX 1050 Ti 4GB, but it's working! All thanks to you!

    • @TheLocalLab
      @TheLocalLab  3 months ago +1

      No problem man, have fun with it!

  • @AInfectados
    @AInfectados 3 months ago

    How do you control the strength of the LoRA? Should I modify the CLIP or MODEL value?

    • @TheLocalLab
      @TheLocalLab  3 months ago

      You can modify both, but I would focus more on the strength_model value.

    • @AInfectados
      @AInfectados 3 months ago

      @@TheLocalLab Thx.

  • @KITFC
    @KITFC 4 months ago +1

    Thanks but I got an error:
    Warning: Missing Node Types
    When loading the graph, the following node types were not found:
    LoraLoader|pysssss
    Do you know how to fix this?

    • @TheLocalLab
      @TheLocalLab  4 months ago +1

      Another commenter had this same issue, and his solution was to modify the workflow JSON file and remove the "|pysssss" in the model loader section. You can open the file in Notepad or VS Code and see if that works for you as well.

    • @KITFC
      @KITFC 4 months ago +1

      @@TheLocalLab Thanks, it worked!

  • @LinusBuyerTips
    @LinusBuyerTips 4 months ago

    Thank you for the video. Which graphics card do you have, and which Flux model works fast with an RTX 4060 8GB?

    • @TheLocalLab
      @TheLocalLab  4 months ago

      I have an RTX 4050 6GB and I run the Q4_0 dev model, which pumps out images in less than 1:30. With LoRAs the quality is even better.

    • @LinusBuyerTips
      @LinusBuyerTips 4 months ago

      @@TheLocalLab I appreciate your response. Keep up the good work, and I can't wait to see more content.

    • @LinusBuyerTips
      @LinusBuyerTips 4 months ago

      @@TheLocalLab Hi, when I try to use GGUF, I get this error:
      "Error occurred when executing UnetLoaderGGUF: module 'comfy.sd' has no attribute 'load_diffusion_model_state_dict'."
      Any idea how to solve this? Thx

    • @TheLocalLab
      @TheLocalLab  4 months ago

      @@LinusBuyerTips Update your ComfyUI to the latest version.

    • @LinusBuyerTips
      @LinusBuyerTips 4 months ago

      @@TheLocalLab Yes, I did that but I still get the same error.

  • @casper508
    @casper508 4 months ago +2

    Can't get past this error. I've got all requirements installed correctly.
    Error occurred when executing UnetLoaderGGUF:
    module 'comfy.sd' has no attribute 'load_diffusion_model_state_dict'
    File "/content/drive/MyDrive/AI/ComfyUI/execution.py", line 152, in recursive_execute
    output_data, output_ui = get_output_data(obj, input_data_all)
    File "/content/drive/MyDrive/AI/ComfyUI/execution.py", line 82, in get_output_data
    return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
    File "/content/drive/MyDrive/AI/ComfyUI/execution.py", line 75, in map_node_over_list
    results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
    File "/content/drive/MyDrive/AI/ComfyUI/custom_nodes/ComfyUI-GGUF/nodes.py", line 130, in load_unet
    model = comfy.sd.load_diffusion_model_state_dict(

    • @TheLocalLab
      @TheLocalLab  4 months ago +1

      Update your ComfyUI to the latest version.

  • @DealingWithAB
    @DealingWithAB 8 hours ago

    The Flux dev VAE link isn't downloading; it says the file wasn't available on the site.

    • @TheLocalLab
      @TheLocalLab  2 hours ago

      The link isn't broken. If it's your first time downloading from the Flux Dev HF repo, you have to accept the license agreement before the model files become available for download.

  • @wildflower401
    @wildflower401 2 months ago

    It says I'm missing UnetLoaderGGUF? How do I get that? Please help!

    • @TheLocalLab
      @TheLocalLab  2 months ago

      If you have the ComfyUI Manager installed, you can simply install the missing nodes through there. Just click and install the ones you're missing.

  • @Vanity7k
    @Vanity7k 4 months ago

    Hi, mine's just giving a black screen on the output. I have a 4080. Has anyone had this issue?

  • @rogersnelson7483
    @rogersnelson7483 4 months ago

    What types of LoRAs can you use with the GGUF workflow? The Flux LoRAs I tried from Civitai (Flux1 D) have no effect even when the trigger words are used.

    • @TheLocalLab
      @TheLocalLab  4 months ago +1

      Just an FYI: if you used my workflow, be aware that you have to connect the GGUF model loader node to the LoRA node, then connect the LoRA node to the KSampler node, for the LoRAs to actually take effect.

  • @alexhaba7503
    @alexhaba7503 3 months ago

    How could we add an image as a parameter to the prompt? By the way, fantastic video. RTX 4060 Ti 16GB, Q8 model: around 20 seconds at 512x512 and 20 steps.

    • @TheLocalLab
      @TheLocalLab  3 months ago

      Thanks. If you're talking about image-to-image instead of text-to-image, you can use my recently dropped image-to-image workflow, which you can find in the description here - ua-cam.com/video/sbnMn8nMQgk/v-deo.html. It comes with some upscaling nodes as well.

  • @EricTyler-b3v
    @EricTyler-b3v 3 months ago

    Is this only for Windows? I'm stuck at this command: ".\python_embeded\python.exe -s -m pip install -r .\ComfyUI\custom_nodes\ComfyUI-GGUF\requirements.txt". Trying to run on Google Colab.

    • @TheLocalLab
      @TheLocalLab  3 months ago

      The portable ComfyUI package is only for Windows; you'll need to install ComfyUI manually on Linux. On Colab, if you can navigate back to the main directory that hosts the "python_embeded" folder and run the command there, it could possibly work.
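
      For a manual install in a Colab notebook, something along these lines should pull in the GGUF nodes (a sketch; the clone URL is the commonly used city96 repo, so verify it against the links in the video description):

        # In a Colab cell, after cloning ComfyUI itself:
        !git clone https://github.com/city96/ComfyUI-GGUF ComfyUI/custom_nodes/ComfyUI-GGUF
        !pip install -r ComfyUI/custom_nodes/ComfyUI-GGUF/requirements.txt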

  • @rickytamta87
    @rickytamta87 4 months ago

    It works! Thank you!!

  • @FrostyDelights
    @FrostyDelights 3 months ago

    It seems like the drop in quality isn't worth it to me. Will the quality get better if I use the 23GB model?

    • @TheLocalLab
      @TheLocalLab  3 months ago +1

      You can try the NF4 and fp8 models and see how you like them, or use the full-precision model if you have the compute to run it locally.

    • @FrostyDelights
      @FrostyDelights 3 months ago

      @@TheLocalLab I tried the full model and my PC exploded.

    • @FrostyDelights
      @FrostyDelights 3 months ago

      @@TheLocalLab Ty for the reply, really helped me!

    • @TheLocalLab
      @TheLocalLab  3 months ago +1

      @@FrostyDelights hahahaha. I think the next best step down would be the fp8 model version. Give it a shot.

  • @gsudhanshu3342
    @gsudhanshu3342 4 months ago +1

    Can you do a similar type of video for Forge?

    • @TheLocalLab
      @TheLocalLab  4 months ago +1

      Could be a possibility in a future video.

    • @GenoG
      @GenoG 4 months ago

      @@TheLocalLab Me too please!! 😘

  • @johnnyapbeats
    @johnnyapbeats 3 months ago

    I have a LoRA of myself and I want to render myself as a Studio Ghibli character. I've tried everything in my power but I still can't achieve it. I'm trying to combine my LoRA with a Studio Ghibli one I found on Civitai, both trained on the Flux dev base model, and I still can't generate myself as one! Any help?

    • @TheLocalLab
      @TheLocalLab  3 months ago

      Did you check to make sure the LoRA node in the workflow is connected correctly?

    • @johnnyapbeats
      @johnnyapbeats 3 months ago

      @@TheLocalLab Yeah, I have them daisy-chained. I really don't know what's left to do...

    • @TheLocalLab
      @TheLocalLab  3 months ago

      @@johnnyapbeats Have you tested your LoRA with a different workflow?

  • @darajan6
    @darajan6 4 months ago

    Hi, I wonder if a 3070 8GB card + 64GB RAM could run this workflow?

    • @TheLocalLab
      @TheLocalLab  4 months ago

      My friend, you can for sure run this and more with those specs. You should have no issue.

  • @anirudhsays1534
    @anirudhsays1534 3 months ago

    The VAE file shown in your video is different from the one in the link; also, the VAE file names in both links are the same, and we can't put two files with the same name in the same folder. Can you please advise?

    • @TheLocalLab
      @TheLocalLab  3 months ago +1

      Yes, I changed the names of the original VAE files after I downloaded them. You can just rename them to something like flux_vae_schnell or flux_vae_dev; they should still work fine.

  • @leonv_photographySG
    @leonv_photographySG 3 months ago

    Hi, I am getting a blurred image when I generate, not too sure what the issue is though.

    • @TheLocalLab
      @TheLocalLab  3 months ago

      Yeah, try not to change the CFG. The CFG should be 1; higher causes blurred images.

  • @antoniojoaocastrocostajuni8558
    @antoniojoaocastrocostajuni8558 4 months ago

    Can I use Python and diffusers to run this model from code, instead of ComfyUI?

    • @TheLocalLab
      @TheLocalLab  4 months ago +1

      Well, the only two Python dependencies for the ComfyUI-GGUF extension node are gguf>=0.9.1 and numpy.
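
      Recent diffusers releases can also load these GGUF files directly, so a pure-Python route is possible. A minimal sketch, assuming a diffusers version with GGUF support plus the gguf package are installed, and using the commonly mirrored city96 quant repo (an assumption; swap in your own file or link):

        import torch
        from diffusers import FluxPipeline, FluxTransformer2DModel, GGUFQuantizationConfig

        # Load the quantized transformer straight from a GGUF file.
        transformer = FluxTransformer2DModel.from_single_file(
            "https://huggingface.co/city96/FLUX.1-dev-gguf/blob/main/flux1-dev-Q4_0.gguf",
            quantization_config=GGUFQuantizationConfig(compute_dtype=torch.bfloat16),
            torch_dtype=torch.bfloat16,
        )

        # Text encoders and VAE come from the base repo (HF license acceptance required).
        pipe = FluxPipeline.from_pretrained(
            "black-forest-labs/FLUX.1-dev",
            transformer=transformer,
            torch_dtype=torch.bfloat16,
        )
        pipe.enable_model_cpu_offload()  # helps on low-VRAM cards

        image = pipe("a photo of a cat", num_inference_steps=20).images[0]
        image.save("flux_gguf_test.png")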

  • @HIMARS-M124
    @HIMARS-M124 2 months ago

    The best instructions, but it should be said that before that you still need to install the ComfyUI Manager.

    • @TheLocalLab
      @TheLocalLab  2 months ago

      Yeah, I was pretty early when I first got this installed. I don't even think it was available in the ComfyUI Manager at the time I made this video; if it was, I didn't install it through there. But I believe you can now, which should be a lot easier.

  • @HaiNguyen-qt6vc
    @HaiNguyen-qt6vc 3 months ago

    Hello, can I use a LoRA externally?

    • @TheLocalLab
      @TheLocalLab  3 months ago

      What do you mean? Are you asking if you can use a different LoRA from a different source?

  • @nkalra0123
    @nkalra0123 4 months ago

    Why did you delete the workflow.json from Google Drive?

    • @TheLocalLab
      @TheLocalLab  4 months ago

      You must still be using the old link. I added a new link due to some issues with the previous workflow. The link is in my description.

  • @bikgrow
    @bikgrow 3 months ago

    Clear video

  • @JamesPound
    @JamesPound 4 months ago +3

    The fp8 t5xxl model gives less coherence and detail. Try it on a fixed seed against fp16.

  • @antiplouc
    @antiplouc 4 months ago

    Unfortunately this has no effect on a Mac. No speed increase at all, and I tried all the GGUF models. Any idea why? Or is it simply not designed to work on Macs?

    • @TheLocalLab
      @TheLocalLab  4 months ago

      No, GGUFs are also compatible with macOS, but there could be a variety of reasons why you're not seeing speed increases, especially with the lower quants. There's just not enough information to really tell.

    • @antiplouc
      @antiplouc 4 months ago

      @@TheLocalLab What information do you need? I have a Mac Studio M2.

  • @SiddharthSingh-oy5bc
    @SiddharthSingh-oy5bc 3 months ago

    The Flux VAE file won't download from Hugging Face (file not available on the site). Can someone please help with the file?

    • @TheLocalLab
      @TheLocalLab  3 months ago

      I just checked; the VAE file is still available. Just go into the vae folder in the Files and versions section for either the dev or schnell model and download "diffusion_pytorch_model.safetensors". I renamed it to flux_vae when I downloaded it.

    • @iBluSky
      @iBluSky 3 months ago

      You need to sign in and agree to the terms popup, then you can download.

  • @didichung4377
    @didichung4377 4 months ago +1

    LoRA not working with this Flux GGUF version...

    • @TheLocalLab
      @TheLocalLab  4 months ago

      Connect the GGUF model loader node to the LoRA node, then connect the LoRA node to the KSampler node. Be advised that you will also need to make sure there's always a LoRA loaded to use the workflow; if you no longer want to use the LoRA, revert back to the default workflow.

  • @alifrahman9447
    @alifrahman9447 4 months ago

    Hey man... I've installed everything accordingly, but UNET LOADER (GGUF) gives me this error every time:
    Error occurred when executing UnetLoaderGGUF:
    module 'comfy.sd' has no attribute 'load_diffusion_model_state_dict'
    I'm using the flux1-dev-Q6_K.gguf file.
    Tried a different workflow, same error... everything is updated.

    • @superfeel1275
      @superfeel1275 4 months ago +1

      You don't have the latest Comfy version. While in the Comfy folder, open cmd and run "git pull".

    • @TheLocalLab
      @TheLocalLab  4 months ago

      Yeah, I think you need to update your ComfyUI. I would also look into installing the ComfyUI Manager to make updating and installing new nodes a breeze.
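
      For the portable Windows build specifically, the zip ships with update scripts (assuming the default portable layout), so updating can be as simple as running:

        ComfyUI_windows_portable\update\update_comfyui.bat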

    • @alifrahman9447
      @alifrahman9447 4 months ago

      @@TheLocalLab Already done it man, still the same error. Can't find a solution; there's a pink border on the UnetLoader.
      Update: Thanks, it worked!

    • @alifrahman9447
      @alifrahman9447 4 months ago +1

      @@superfeel1275 Thanks man, it worked. I had updated through the Manager, but when I updated using cmd, it worked 😊😊

  • @cgdtb
    @cgdtb 3 months ago

    Thanks

  • @mashrurmollick
    @mashrurmollick 3 months ago

    Prompt outputs failed validation
    DualCLIPLoader:
    - Value not in list: clip_name1: 'clip-vit-large-patch14 .safetensors' not in ['model.safetensors', 't5xxl_fp8_e4m3fn.safetensors']
    LoraLoader:
    - Value not in list: lora_name: '{'content': 'flux_realism_lora.safetensors', 'image': None, 'title': 'flux_realism_lora.safetensors'}' not in ['lora.safetensors']
    I am getting this error. I used the schnell model; what should I do?

    • @TheLocalLab
      @TheLocalLab  3 months ago

      Check your clip folder inside the models directory to make sure your clip-vit-large-patch14 .safetensors is inside.

    • @mashrurmollick
      @mashrurmollick 3 months ago

      It is there with the name "model.safetensors"

    • @TheLocalLab
      @TheLocalLab  3 months ago

      @@mashrurmollick In the DualCLIPLoader node in the workflow, use the arrows to select 'model.safetensors'. Or you can rename the file to 'clip-vit-large-patch14 .safetensors' if you like.

    • @mashrurmollick
      @mashrurmollick 3 months ago

      Thanks man, I managed to overcome all the error messages. One thing I'm facing: when I press the "Queue Prompt" option, the green border that highlights the nodes jumps from one node to the next up until the "KSampler" node, where it stays stuck. I reduced the number of steps from 20 to 4 and it's still stuck. Can you help me out?

    • @TheLocalLab
      @TheLocalLab  3 months ago

      @@mashrurmollick That's actually normal. The KSampler is the longest step in the process; that's where the model actually starts generating the image. Check your terminal when it reaches the KSampler node and watch it for a few minutes; you should see the progress bar advance as each step executes. Depending on your PC specs and the model you use, this can be fast or take a while.

  • @defidigest9
    @defidigest9 3 months ago

    I'm getting errors bro...
    When I press Queue Prompt:
    C:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-GGUF\nodes.py:79: UserWarning: The given NumPy array is not writable, and PyTorch does not support non-writable tensors. This means writing to this tensor will result in undefined behavior. You may want to copy the array to protect its data or make it writable before converting it to a tensor. This type of warning will be suppressed for the rest of this program. (Triggered internally at ..\torch\csrc\utils\tensor_numpy.cpp:212.)
    torch_tensor = torch.from_numpy(tensor.data) # mmap
    How do I fix this?
    Thanks

    • @TheLocalLab
      @TheLocalLab  3 months ago

      Are you still able to generate images? I believe this is just a warning that can be safely ignored.

  • @AmerikaMeraklisi-yr2xe
    @AmerikaMeraklisi-yr2xe 4 months ago

    How much GPU RAM do I need for 1024x1024 px?

    • @TheLocalLab
      @TheLocalLab  4 months ago

      Well, it depends on the quant you use and the system RAM you have, but anything over 3GB of VRAM can produce a 1024x1024 image with the right quant. You'd probably just wait longer if you're using less VRAM.

  • @bishwarupbiswas4234
    @bishwarupbiswas4234 3 months ago +1

    Prompt outputs failed validation
    DualCLIPLoader:
    - Value not in list: clip_name1: 'clip-vit-large-patch14 .safetensors' not in ['model.safetensors', 't5xxl_fp8_e4m3fn.safetensors']
    LoraLoader:
    - Value not in list: lora_name: '{'content': 'flux_realism_lora.safetensors', 'image': None, 'title': 'flux_realism_lora.safetensors'}' not in ['lora.safetensors']
    Can anyone help me with the error? :)

    • @TheLocalLab
      @TheLocalLab  3 months ago +1

      Did you download the correct clip models and place them in the clip folder? Click the arrows in the DualCLIP node to make sure you can see them. If you can't select the models in the selector fields, then the models may not be in the clip folder.

    • @slavazarkeov4600
      @slavazarkeov4600 2 months ago

      @@TheLocalLab Works very well once I selected the correct models, thank you for the tutorial!

  • @haon2205
    @haon2205 3 months ago

    The opening music sounded like Think About The Way by Ice MC.

  • @alifrahman9447
    @alifrahman9447 4 months ago

    It's just... um... I'm confused by all the model versions! I have a 2060 12GB and I'm using the NF4 model; it takes 90 sec to generate a 1024x1024 image. I prefer quality over speed, but a little faster generation would definitely help, so which model should I choose bro?
    AND, please make videos with your own voice man 🙂🙂👌👌

    • @TheLocalLab
      @TheLocalLab  4 months ago +1

      You can try the Q6_K and Q8_0 quants and see how the output quality compares with the NF4. It's best to experiment to really find the sweet spot, especially since you can improve results with LoRAs, which is why I like the lower quants (Q4_0).

    • @alifrahman9447
      @alifrahman9447 4 months ago

      @@TheLocalLab Thanks man, gonna try both.

  • @newreleaseproductions9150
    @newreleaseproductions9150 4 months ago

    Can you make a tutorial on training your own models using AI Toolkit?

  • @zdrive8692
    @zdrive8692 4 months ago

    On Apple Silicon (M1 Pro), no matter what, it always outputs green/blue/black boxes or small triangular shapes, like a '90s TV with no signal. Tested all the Q models, tried with both CPU and GPU, and it does not work... it's for Windows with an Nvidia GPU only.

  • @Ccbysa-f2i
    @Ccbysa-f2i 4 months ago

    I have an Nvidia 3050 4GB, will it run?

    • @TheLocalLab
      @TheLocalLab  4 months ago +1

      You should be able to run one of the GGUF quants for sure.

  • @cgdtb
    @cgdtb 3 months ago

    Thanks...

  • @Blucee-w3k
    @Blucee-w3k 4 months ago

    One job and it fails... The flux_vae you have is not in the description, or you renamed it, but in any case it doesn't work.

    • @droidJV
      @droidJV 4 months ago

      It's in the description, he just renamed it on his computer. It's the file called "diffusion_pytorch_model.safetensors".

  • @alex.nolasco
    @alex.nolasco 4 months ago

    I assume the XLabs ControlNet is incompatible?

    • @lennoyl
      @lennoyl 4 months ago

      With the Q4_0 schnell model it works, but not very well: I had to reduce the ControlNet strength below 0.55 to make it look correct; above 0.65, a photo looked like an illustration with plastic faces.
      I will download the F16 dev model and test whether it works correctly with that.
      Edit: I tested some non-schnell models (Q6_K, Q5_K_S, Q8_0, F16) and it works very, very well with all of them. The issue I had was only with the schnell versions.

  • @xD3NN15x
    @xD3NN15x 4 months ago

    Thx!
    But I get an error when trying to use it:
    Error occurred when executing UnetLoaderGGUF: module 'comfy.sd' has no attribute 'load_diffusion_model_state_dict'
    File "C:\Users\anden\Downloads\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\execution.py", line 152, in recursive_execute
    output_data, output_ui = get_output_data(obj, input_data_all)
    File "C:\Users\anden\Downloads\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\execution.py", line 82, in get_output_data
    return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
    File "C:\Users\anden\Downloads\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\execution.py", line 75, in map_node_over_list
    results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
    File "C:\Users\anden\Downloads\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-GGUF\nodes.py", line 130, in load_unet
    model = comfy.sd.load_diffusion_model_state_dict(

    • @TheLocalLab
      @TheLocalLab  4 months ago

      You have to update your ComfyUI: either through the ComfyUI Manager and restart (recommended), via git pull on the command line, or just install the latest version.

  • @AcamBash
    @AcamBash 4 months ago +2

    There is a pretty important error in your workflow. You have to manually link the "Load Lora" node to the "KSampler" via the model link, otherwise the LoRA won't be applied.

    • @TheLocalLab
      @TheLocalLab  4 months ago +1

      I do understand what you mean, but honestly I'd rather keep the use of LoRAs optional. Maybe I should've mentioned this in the video. If I'd attached the LoRA node in the workflow, you would have to use it in order to generate, or manually detach the node if you don't.

    • @AcamBash
      @AcamBash 4 months ago

      @@TheLocalLab Okay, it's all good. Watching the video, I thought you used a LoRA and wondered why it didn't work for me. I see you've included a hint now.

    • @TheLocalLab
      @TheLocalLab  4 months ago +1

      Yes yes, I'll be sure to mention that again in the future. Hope you're enjoying these GGUFs.

    • @Hood_History_Club
      @Hood_History_Club 4 months ago

      @@AcamBash We all did. Not sure how to 'connect' the LoRA node to whatever, because the nodes don't match.

  • @henrismith7472
    @henrismith7472 2 months ago

    Prompt outputs failed validation
    VAELoader:
    - Value not in list: vae_name: 'flux_vae.safetensors' not in ['diffusion_pytorch_model.safetensors', 'taesd', 'taesdxl', 'taesd3', 'taef1'] man so close...

    • @TheLocalLab
      @TheLocalLab  2 months ago

      Ok, two things. 1. Did you download the VAE file from Hugging Face? 2. If you did, rename the VAE file you downloaded to "flux_vae.safetensors".
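
      A minimal sketch of that rename, assuming the default install layout (adjust the path if your ComfyUI lives elsewhere):

        from pathlib import Path

        vae_dir = Path("ComfyUI/models/vae")  # assumed default location
        src = vae_dir / "diffusion_pytorch_model.safetensors"
        # The new name must match the entry the workflow's VAELoader node expects.
        src.rename(vae_dir / "flux_vae.safetensors")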

    • @henrismith7472
      @henrismith7472 2 months ago

      @@TheLocalLab Would you mind telling me the URL to the correct one please? Pretty sure I have the right one; trying the renaming thing now. I think it's working, I hear my GPU fans firing up lol. Yep, it's working, thank you so much.

    • @TheLocalLab
      @TheLocalLab  2 months ago

      @@henrismith7472 You should find all the links in the description.

  • @didichung4377
    @didichung4377 4 months ago +1

    Missing LoRA loader nodes right here...

    • @TheLocalLab
      @TheLocalLab  4 months ago

      Look closer, the LoRA node is included in the workflow towards the bottom left.

    • @Enigmo1
      @Enigmo1 4 months ago

      @@TheLocalLab It's not connected to the KSampler, so you're not getting any results out of it.

  • @mrsam2822
    @mrsam2822 3 months ago

    On a Mac M2 16GB, 512x512 generated in 7 minutes! 😅

    • @TheLocalLab
      @TheLocalLab  3 months ago

      No GPU? Also, which GGUF quant did you use?

  • @Blucee-w3k
    @Blucee-w3k 4 months ago

    Where is the VAE???

  • @spiritform111
    @spiritform111 3 months ago

    Hm... for some reason it generates white noise.

    • @spiritform111
      @spiritform111 3 months ago

      Working now... idk what I did. lol

  • @oszi7058
    @oszi7058 4 months ago

    I only get blue pixels.

  • @TrevorSullivan
    @TrevorSullivan 4 months ago +1

    The photo of President Trump with a rifle is awesome! Nice one! 😉

  • @brunozarrabe7122
    @brunozarrabe7122 4 months ago

    I'm just getting a black box instead of an image.

  • @erans
    @erans 4 months ago +1

    1.70 it/sec (around 30 seconds) per 512x512 generation on an RTX 3060 Ti.

  • @falconbmstutorials6496
    @falconbmstutorials6496 4 months ago +1

    Would love to see a list of wget commands instead of a list of websites... for a beginner this is too confusing.

    • @TheLocalLab
      @TheLocalLab  4 months ago

      If a user can't download a model from HF and drag it into their models folder once it's complete, I'm not sure I would trust them running wget commands in the correct directories. I wouldn't be surprised if the VAE model somehow ended up in the Python packages folder lol.
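
      For anyone who does want the scripted route, a hedged sketch using huggingface_hub; the repo IDs and filenames below are assumptions based on commonly used mirrors, so substitute the exact links from the video description:

        from huggingface_hub import hf_hub_download

        # GGUF checkpoint into ComfyUI's unet folder (repo/file names are assumptions).
        hf_hub_download("city96/FLUX.1-dev-gguf", "flux1-dev-Q4_0.gguf",
                        local_dir="ComfyUI/models/unet")

        # A text encoder into the clip folder (gated files require accepting the license and logging in first).
        hf_hub_download("comfyanonymous/flux_text_encoders", "t5xxl_fp8_e4m3fn.safetensors",
                        local_dir="ComfyUI/models/clip")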

  • @Hood_History_Club
    @Hood_History_Club 4 months ago

    "You have to manually link the "Load Lora" Node to the "KSampler" via the model-link"
    I don't have telepathy. How do I do this? The nodes don't match. And should I use the Lora Loader with the snake or without the snake? Remember, I can't read your minds.

    • @TheLocalLab
      @TheLocalLab  4 months ago

      It's all a learning process, my guy. I don't know what you meant by "the snake", but to link the nodes, simply click and drag the link line from the purple "model" output on the Unet model loader to the purple "model" input on the left of the LoRA loader node, then connect the purple "model" output on the right of the LoRA node to the purple "model" input on the left of the KSampler. It's easy as cake.

  • @KlausMingo
    @KlausMingo 4 months ago +4

    AI is moving so fast, every day there's something new, it's hard to keep up and try everything.

    • @1lllllllll1
      @1lllllllll1 4 months ago +2

      There’s an AI that’ll keep up with progress and distill it all for you to consume once a week.

  • @PunxTV123
    @PunxTV123 3 months ago

    I got the same result as your image without using a LoRA.

    • @TheLocalLab
      @TheLocalLab  3 months ago

      Yeah, during the video I don't think I used a LoRA. If you look at the Unet loader node, it was connected directly to the KSampler, skipping the LoRA Loader node.

    • @PunxTV123
      @PunxTV123 3 months ago

      @@TheLocalLab Do you have the new workflow? Sorry, I'm new to ComfyUI; I don't know what to connect and where.

    • @TheLocalLab
      @TheLocalLab  3 months ago

      @@PunxTV123 Yeah, I just updated the workflow with the LoRA node connected. Here's the Google Drive link - drive.google.com/file/d/1zznjgT4zvE9PTNitHPAXmd5N_oFArhn-/view?usp=sharing. I will also add this link to the description for later if needed. Let me know if it's good.

    • @PunxTV123
      @PunxTV123 3 months ago

      @@TheLocalLab It works now, thanks.

  • @LowellMerle-f8p
    @LowellMerle-f8p 3 months ago

    Windler Locks

  • @RuthannPerzanowski-h6t
    @RuthannPerzanowski-h6t 3 months ago

    Irwin Mission

  • @ButlerGrover-z6d
    @ButlerGrover-z6d 3 months ago

    Denesik Spur

  • @BrowningHazlitt-f1y
    @BrowningHazlitt-f1y 3 months ago

    Clay Crossing

  • @casper508
    @casper508 4 months ago

    They just won't let us stick to one setup... lol

  • @KINGLIFERISM
    @KINGLIFERISM 3 months ago

    pee pee pee

  • @zinxd_b
    @zinxd_b 4 months ago

    19 hours straight with no sleep, food, or breaks, and not a single image, or a glimpse of a UI for that matter. It's just internet dumpster diving for convoluted code snippets that don't work. Even the official code on AMD's website is broken, and forget troubleshooting, that's not possible. I'm so hungry, tired, and tilted that this text is like walking barefoot on Legos to my eyes. This 3090 is just sitting there looking at me like I would ever consider putting it into one of my systems; I don't care whether it would work if I did. Actually, I may just go give it the Office Space treatment with an Estwing hammer. That would make me feel much better, because this sadomasochistic Linux entropy is easily the most irritating thing I've ever dealt with in my life. Sorry I'm exploding on your page, I'm delirious, but no food or sleep until it works or I die trying. Well, time to wipe the partition and try again.

    • @TheLocalLab
      @TheLocalLab  4 months ago

      I'm rooting for you buddy!

    • @Ashutoshprusty
      @Ashutoshprusty 4 months ago +1

      Use miniconda to make your life simple

    • @zinxd_b
      @zinxd_b 4 months ago

      @@TheLocalLab I got it! And, as per standard operating procedure in my lovely life as a die-hard AMD fan, the road less traveled sucked really badly there for a bit, but I was able to teach myself (with some help from forums and my GPT2's impeccable knack for web searches) a very large chunk of Linux terminal commands, Python, and this lovely utility called Docker. It's been like that since Y2K for me: every issue that comes up leads to a challenge, and by overcoming those challenges you get juicy XP gains. And yeah, AMD is a company at the end of the day, but at least they don't cross the line like Skynet and Intel (who is getting a nice slice of what-they-deserve pie). Either way, apologies for stumbling in here grumpy and delirious, and thank you, sincerely, for the vote of confidence. May your code never throw errors. Cheers!

  • @mashrurmollick
    @mashrurmollick 3 months ago

    diffusion_pytorch_model.txt "file wasn't available on site" - can someone please help me?

    • @TheLocalLab
      @TheLocalLab  3 months ago

      The file is diffusion_pytorch_model.safetensors, if you're talking about the Flux VAE. It's not a text file; I just used that for demonstration purposes.

    • @mashrurmollick
      @mashrurmollick 3 months ago

      @@TheLocalLab Yes, I'm talking about the VAE file of the Flux dev model.

    • @TheLocalLab
      @TheLocalLab  3 months ago

      @@mashrurmollick Yes, then you can simply download the .safetensors file from either the dev or schnell model page.

    • @mashrurmollick
      @mashrurmollick 3 months ago

      When I press the download button for the Flux dev VAE file "diffusion_pytorch_model.safetensors", my browser's download progress bar says "diffusion_pytorch_model.txt file wasn't available on site".

    • @VirtualDarKness
      @VirtualDarKness 2 months ago

      @@mashrurmollick You need to sign in and accept the terms.