Flux Ai Realistic Guide - How To Make Realistic Images With Flux Ai in ComfyUI (With FREE Workflow)

  • Published 24 Dec 2024

COMMENTS •

  • @ragemax8852
    @ragemax8852 1 month ago +1

    OMG! Thank you so much for providing this guide. I was struggling for the past few days trying to get the right workflow and base model to produce the right image. This workflow is a beast; I finally don't have to experiment with hundreds of workflows to see which one works best. It produces realistic images and works well with my character LoRAs.

  • @jakkalsvibes
    @jakkalsvibes 2 months ago +1

    Your description and workflow work perfectly, thank you so much 🙂

  • @rmeta3391
    @rmeta3391 4 months ago +3

    Thanks for the multiple-LoRA node, it lives in my workflow now. CivitAI is your friend for Flux LoRAs.

  • @WasamiKirua
    @WasamiKirua 3 months ago +3

    Bro, for free content you are one of the best on YouTube. TOP

  • @Giorgio_Venturini
    @Giorgio_Venturini 4 months ago +1

    Well done, excellent help and suggestions for me as a newbie. I follow you with interest; keep it up. I like GGUF because I can use it in ComfyUI and also in Forge. Thanks

  • @switzerland
    @switzerland 2 months ago

    Thanks, you've already helped me 🙏

  • @cr_cryptic
    @cr_cryptic 28 days ago +1

    THANK YOU!!!!

  • @jamesdenny1131
    @jamesdenny1131 4 months ago +1

    Great video, thanks.

  • @bnthsacklayr3263
    @bnthsacklayr3263 1 month ago +1

    Hi, I'm having trouble running your workflow because I get this error: "`newbyteorder` was removed from the ndarray class in NumPy 2.0. Use `arr.view(arr.dtype.newbyteorder(order))` instead." Do you have any idea why? Are you using Python 2 or 3?

    • @mouliksatija3345
      @mouliksatija3345 1 month ago

      I have the same issue, did you find a fix?

    • @dejuak
      @dejuak 1 month ago

      @@mouliksatija3345 Has anyone fixed that?

    • @dejuak
      @dejuak 1 month ago

      Did you fix it?

  • @juliana.2120
    @juliana.2120 3 months ago

    Amazing and helpful video. I'd love to understand what all those files you showed at the beginning are used for, i.e. what part each of them plays in the whole process. Getting it to work is one thing, but I'm trying to understand what those files do :D

  • @0x524c
    @0x524c 1 month ago

    Congratulations on the video. I keep trying to figure out where to save the text encoder files in ComfyUI.

  • @ModRebelMockups
    @ModRebelMockups 14 days ago

    Could you please show how to add pose control to this workflow?

  • @powerfalcon2329
    @powerfalcon2329 3 months ago

    Thanks a lot for your help installing the ComfyUI Manager to solve my problem of Missing Node Types.

  • @triodine
    @triodine 3 months ago +1

    I feel like I'm doing something wrong, but when I load this template into my instance of ComfyUI, the UNET Loader GGUF node errors out and nothing I do seems to fix it. Any suggestions on what I can try?

    • @xclbrxtra
      @xclbrxtra  3 months ago

      Can you mention what the error is?

    • @triodine
      @triodine 2 months ago

      @@xclbrxtra I get the error:
      "Warning: Missing Node Types
      When loading the graph, the following node types were not found:
      UnetLoaderGGUF
      No selected item
      Nodes that have failed to load will show as red on the graph."
      Whenever I try installing/reinstalling this, I keep getting the same error with no way to fix it.

  • @TheSd1cko
    @TheSd1cko 6 days ago

    No matter what LoRA options I select/use, it makes no difference to the output image. Is there a reason why this might be? I have downloaded various LoRA models, changed them from None to a downloaded one, and turned them on and off, with no difference in the output at all (using the same seed for comparison).

  • @alexandreb.8350
    @alexandreb.8350 1 month ago

    Hello, thank you very much for your work. For me the output is very different from my portrait input in "Load Image"; which parameters can I adjust to get a more similar face between input and output? FluxGuidance? Crop image? The denoise in Basic Scheduler?

    • @alexandreb.8350
      @alexandreb.8350 1 month ago

      For me, with your workflow, it seems like the input image has ZERO influence on the OUTPUT image; maybe I just don't understand the possibilities and purpose of this workflow. I thought I would get an OUTPUT with the same face as the INPUT, in a different context given by the prompt, and that's not it at all (sorry for my English, I usually speak French).

    • @xclbrxtra
      @xclbrxtra  1 month ago

      Hi, make sure the switches are set to take the loaded image (not the empty latent and prompt), set the denoise to 0.2, and then start increasing it until you like the changes. If denoise is 1, the whole image is denoised, so the input image has no effect; 0 means no change to the input image. 0.3-0.4 should give the best results.

    • @alexandreb.8350
      @alexandreb.8350 1 month ago

      @@xclbrxtra Thanks for your explanation of the switch.
      Ah, OK. In concrete terms, does this mean I have to "unplug" (or delete) the pink wire between the EMPTY LATENT IMAGE node (in the SET PARAMETERS group) and input 2 of the SWITCH ANY node, so that the input image (top left) is taken into account and influences the final result?
      For the denoise part, yes, I'll test it with 0.3-0.4.

    • @alexandreb.8350
      @alexandreb.8350 1 month ago

      I've just tried it; it doesn't work if I remove this link... I don't understand how to manage the switch so that the INPUT image is taken into account. Sorry again.

    • @alexandreb.8350
      @alexandreb.8350 1 month ago

      My apologies, the switch is the select button!! Set it to 1, 2, or 3! So stupid of me.
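The denoise behaviour explained in this thread can be sketched in a few lines of toy code. This is not ComfyUI's actual sampler, only an illustration of why denoise = 1 erases the input image while low values preserve it:

```python
import random

def img2img_sketch(latent, denoise, total_steps=20):
    """Toy img2img: `denoise` picks how much of the schedule is re-run.

    denoise = 1.0 -> start from (almost) pure noise: the input has no effect
    denoise = 0.0 -> zero sampling steps: the input comes back unchanged
    """
    steps_to_run = round(total_steps * denoise)
    # Noise the input latent in proportion to the denoise strength...
    noised = [x + random.gauss(0.0, denoise) for x in latent]
    # ...then a real sampler would run only `steps_to_run` denoising steps,
    # so a low denoise keeps most of the input image's structure.
    return noised, steps_to_run

_, steps = img2img_sketch([0.0] * 4, denoise=0.3)
print(f"denoise=0.3 re-runs {steps} of 20 steps")  # 6 of 20
```

This is why the recommended 0.3-0.4 keeps the input face while still letting the prompt change the context.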

  • @ElMauilefotodelmaui
    @ElMauilefotodelmaui 1 month ago

    Thank you for your content! When I start your workflow, some nodes are missing... What can I do? Thanks

    • @xclbrxtra
      @xclbrxtra  1 month ago

      You can open the ComfyUI Manager; there is an 'Install missing custom nodes' option where you can install all the ones that are missing.

  • @viktorsimeonov8498
    @viktorsimeonov8498 3 months ago

    Thank you so much for this incredible video. I am new to AI image generation, but I've managed to run the workflow and tested different prompts to see if I could create a CONSISTENT AI character 🤔, but unfortunately I was unable to do so ☹. The face is really different every time I generate with a new prompt, even when I use a fixed seed. Can somebody give me some advice on how to achieve CONSISTENCY in the images, and especially in the faces of the generated characters? ⁉ Please, any advice or suggestion will help 😄

  • @ginesparracaballero7879
    @ginesparracaballero7879 2 months ago

    Thank you for the videos, they really helped me a lot. I had no idea how to implement Flux in ComfyUI. I have a question, if anyone could help me please: why do you use clip_l, which is only 246 MB, together with another encoder of several GB? Why not use both heavy clips, or only one? Thank you so much

    • @xclbrxtra
      @xclbrxtra  2 months ago

      CLIP-L is good at understanding short, comma-separated keywords, while the other one is good at understanding complex sentences. As they are trained differently, we use both. Also, if you want outputs with text, there's a ViT-L variant with enhanced text handling, but it focuses on generating great text and messes up eyes and faces more. It's all about usage.

  • @SuvethaGurusamyMohanram
    @SuvethaGurusamyMohanram 4 months ago

    Thanks for sharing. I'd like to know whether Flux GGUF supports ControlNet and IPAdapter. Could you do a workflow based on living-room interiors? Right now I am using SDXL for creating different interior designs.

    • @xclbrxtra
      @xclbrxtra  4 months ago

      Actually, the Flux ControlNet and IPAdapter models are not stable; I couldn't make them give consistent results. It feels like every image needs different tuning. But I'll look into it 💯

  • @RoguishlyHandsome
    @RoguishlyHandsome 4 months ago

    city96 now has quantized GGUF text encoders as well, supported by the same GGUF extension (new CLIP loader nodes).
    [seems like providing the link makes the comment invisible]

    • @xclbrxtra
      @xclbrxtra  4 months ago +1

      Yes, I uploaded a video for the upscaler today and I have updated the GGUF text encoders. The Q6_K is pretty good: it's smaller than fp8 but the quality is closer to fp16 🔥💯

  • @massibob2004
    @massibob2004 3 months ago

    Good job! I don't understand what you use the image for. It seems the prompt is the only controller.

  • @edwardferry8247
    @edwardferry8247 4 months ago +1

    Need a serious rig to be doing this locally 🙌…

    • @xclbrxtra
      @xclbrxtra  4 months ago

      Actually, you can try this out with just 6 GB of VRAM. This tutorial was made on a laptop with an RTX 4060 with 8 GB of VRAM. It takes around 1.5-2 minutes per image with 1 LoRA, but that's still not bad for a gaming laptop 💯🔥

  • @0x524c
    @0x524c 1 month ago

    😁🤗👋👋👋

  • @SK-S2N
    @SK-S2N 1 month ago

    HELP: I have ComfyUI installed through Stability Matrix. I copied all the files into their folders, then launched ComfyUI and dropped your workflow file onto the existing workflow area. It says that some modules are missing and they are marked in red. What am I doing wrong?

    • @Ram-j2h3b
      @Ram-j2h3b 1 day ago

      I got the same issue too, did you solve it?

  • @adastra231
    @adastra231 3 months ago +1

    Getting this: "Error occurred when executing UnetLoaderGGUF:
    cannot mmap an empty file
    File "C:\Users\jackp\Downloads\StabilityMatrix-win-x64\Data\Packages\ComfyUI\execution.py", line 317, in execute
    output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
    File "C:\Users\jackp\Downloads\StabilityMatrix-win-x64\Data\Packages\ComfyUI\execution.py", line 192, in get_output_data
    return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
    File "C:\Users\jackp\Downloads\StabilityMatrix-win-x64\Data\Packages\ComfyUI\execution.py", line 169, in _map_node_over_list
    process_inputs(input_dict, i)
    File "C:\Users\jackp\Downloads\StabilityMatrix-win-x64\Data\Packages\ComfyUI\execution.py", line 158, in process_inputs
    results.append(getattr(obj, func)(**inputs))
    File "C:\Users\jackp\Downloads\StabilityMatrix-win-x64\Data\Packages\ComfyUI\custom_nodes\ComfyUI-GGUF\nodes.py", line 191, in load_unet
    sd = gguf_sd_loader(unet_path)
    File "C:\Users\jackp\Downloads\StabilityMatrix-win-x64\Data\Packages\ComfyUI\custom_nodes\ComfyUI-GGUF\nodes.py", line 39, in gguf_sd_loader
    reader = gguf.GGUFReader(path)
    File "C:\Users\jackp\Downloads\StabilityMatrix-win-x64\Data\Packages\ComfyUI\venv\lib\site-packages\gguf\gguf_reader.py", line 90, in __init__
    self.data = np.memmap(path, mode = mode)
    File "C:\Users\jackp\Downloads\StabilityMatrix-win-x64\Data\Packages\ComfyUI\venv\lib\site-packages\numpy\core\memmap.py", line 268, in __new__
    mm = mmap.mmap(fid.fileno(), bytes, access=acc, offset=start)"
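On the `cannot mmap an empty file` error above: `np.memmap` cannot map a zero-byte file, so the usual culprit is a `.gguf` download that was interrupted and saved as 0 bytes. A quick sanity check before loading (the model path below is hypothetical; point it at your own file):

```python
from pathlib import Path

# Hypothetical location -- adjust to wherever your GGUF model actually lives.
model_path = Path("ComfyUI/models/unet/flux1-dev-Q4_K_S.gguf")

# A healthy Flux GGUF is several GiB; zero bytes means a broken download.
size = model_path.stat().st_size if model_path.exists() else 0
if size == 0:
    # np.memmap (and thus UnetLoaderGGUF) cannot map a 0-byte file:
    # delete the file and re-download it from the original source.
    print(f"{model_path.name}: missing or empty, re-download it")
else:
    print(f"{model_path.name}: {size / 2**30:.2f} GiB on disk")
```

If the size check passes, compare it against the size listed on the download page; a partially downloaded file can also fail to parse even though it is non-empty.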

  • @Flazum
    @Flazum 4 months ago

    What custom nodes are you using? The workflow you shared won't work without them

    • @xclbrxtra
      @xclbrxtra  4 months ago

      Just go to the ComfyUI Manager and click on Install Missing Custom Nodes. You will get a list of all the missing nodes and can install them directly from Comfy. Install them all and restart 💯

  • @shreyashkumar5940
    @shreyashkumar5940 3 months ago

    Bro, please make videos on image-to-image using Flux and the Boreal LoRA.

  • @yapasphotoCom
    @yapasphotoCom 3 months ago

    Interesting, but Load Image doesn't do anything despite the switch.
    I tested by fixing the seed and varying the switch between 1 and 2, and the result is always the same, while I expected a generated image inspired by the input image. Any idea?
    Have other people managed to get an image generated that is inspired by the input?

    • @xclbrxtra
      @xclbrxtra  3 months ago +1

      When you are using img2img, you'll need to reduce the denoise in the Basic Scheduler. It is set to 1 by default; try adjusting it to 0.5-0.6.
      Complete denoise means that even if you use a loaded image, it gets completely denoised.

    • @yapasphotoCom
      @yapasphotoCom 3 months ago

      @@xclbrxtra Thanks, I missed that point in the explanations. That's perfect.

  • @Onits29
    @Onits29 2 months ago +1

    How do I find LoRA files?

  • @thevfxguy_tvg
    @thevfxguy_tvg 22 days ago

    Works fine, but the generated images are low resolution (1344/786) and look pixelated. How do I improve image quality, at least to HD (1080p)?

  • @paytowin8468
    @paytowin8468 4 months ago

    How many minutes does it take you to generate one image, and what kind of graphics card are you using?

    • @xclbrxtra
      @xclbrxtra  4 months ago

      This tutorial was made on a gaming laptop with an RTX 4060 (8 GB VRAM). It takes around 2 minutes for a single image with 1 LoRA; without any LoRA it's around 1 min 40 sec. (You can reduce this time by choosing a smaller GGUF of Flux and the T5-XXL model.)

  • @The.Fake.Guruuu
    @The.Fake.Guruuu 2 months ago

    Can we use a photo as a "model" so that the AI knows what to take inspiration from?

    • @xclbrxtra
      @xclbrxtra  2 months ago

      You can try img2img with a high denoise to achieve that, or you can check out my Flux ControlNet video to use a depth map to guide the generation.

  • @Ram-j2h3b
    @Ram-j2h3b 1 day ago

    Bro, next time you create a video, show each and every step or link the previous video where you did it; it is impossible to follow.

  • @lechad9232
    @lechad9232 3 months ago

    Hi, thanks for the great video. Any advice on speeding up image generation, apart from the obvious things like smaller images and one image at a time?
    I have 8 GB VRAM and each image takes around 10-15 minutes, which is a little annoying.
    Thanks again!

    • @xclbrxtra
      @xclbrxtra  3 months ago

      Which model are you using? The Q4_K_S? I am using an RTX 4060 with 8 GB VRAM and it takes around 1 min 50 sec without LoRAs and 2 min 30 sec with 2-3 LoRAs. 10-15 minutes for a single image seems wrong 🤔

    • @lechad9232
      @lechad9232 3 months ago

      @@xclbrxtra Thanks for the fast response. Yes, I'm using the Q4_K_S with 1 LoRA. In all fairness, I got this PC about 6 years ago; things might be outdated.

  • @carlosrodrigues705
    @carlosrodrigues705 3 months ago

    👋👋👋

  • @LinhLe-ib9gi
    @LinhLe-ib9gi 3 months ago

    I don't know which folder to copy Flux 1 Q8 to??? Help me

    • @xclbrxtra
      @xclbrxtra  3 months ago

      In the ComfyUI folder, go to models and then the unet folder. Paste the Flux GGUF there.
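The file locations asked about in several comments here can be summarised with a small check script. The folder names are the defaults of a standard ComfyUI install (assumed, not taken from the video); adjust `root` if yours differs, e.g. for a Stability Matrix install:

```python
from pathlib import Path

# Assumed default ComfyUI layout; point `root` at your own install.
root = Path("ComfyUI")
expected = {
    "models/unet":  "the Flux GGUF file (what UnetLoaderGGUF reads)",
    "models/clip":  "the text encoders, e.g. clip_l and t5xxl (DualCLIPLoader)",
    "models/vae":   "the Flux VAE (ae.safetensors)",
    "models/loras": "any LoRA files you download",
}

# Report which of the expected model folders exist under `root`.
for rel, contents in expected.items():
    status = "ok" if (root / rel).is_dir() else "missing"
    print(f"{rel:13} [{status:7}] {contents}")
```

If a loader node shows an empty dropdown (e.g. `'flux1-dev-Q4_K_S.gguf' not in []`, as one commenter reports below), the file is almost always sitting in the wrong one of these folders, or ComfyUI was not restarted after copying it.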

  • @pandami1982
    @pandami1982 4 months ago

    I want to learn it

  • @chandrachudgowda22
    @chandrachudgowda22 4 months ago

    I'm just getting a bunch of random pixels, running it on an M3 Air with 16 GB RAM.

    • @taucalm
      @taucalm 4 months ago +1

      You shouldn't use a Mac for Stable Diffusion; it needs serious computing power and Macs have no dedicated GPU.

  • @ZekeTheReal
    @ZekeTheReal 1 month ago

    Can you please help me with this? When I try to queue the prompt it says:
    Prompt outputs failed validation
    DualCLIPLoader:
    - Required input is missing: clip_name1
    - Required input is missing: clip_name2
    UnetLoaderGGUF:
    - Value not in list: unet_name: 'flux1-dev-Q4_K_S.gguf' not in []

  • @ALDUIINN
    @ALDUIINN 3 months ago

    It's not free unless it's the Schnell version of Flux.
    We need to stop using versions of Flux that aren't actually free.

  • @romanioamd5319
    @romanioamd5319 3 months ago

    flux1-dev-Q8_0.gguf vs flux1-dev-Q6_K.gguf: which is better?

    • @xclbrxtra
      @xclbrxtra  3 months ago

      If you have a good GPU and enough VRAM, then Q8.

    • @romanioamd5319
      @romanioamd5319 3 months ago

      @@xclbrxtra I have a 3090.

    • @romanioamd5319
      @romanioamd5319 3 місяці тому

      @@xclbrxtra i have gpu rtx 3090.