The SIMPLEST workflow for FLUX ComfyUI

  • Published 25 Dec 2024

COMMENTS • 110

  • @mpwelton · 4 months ago · +9

    Just wanted to say thanks for the video. Straightforward and easy to understand. I finally managed to get up and running by following this, having previously been struggling.

    • @goshniiAI · 4 months ago

      You're very welcome! I appreciate you sharing your feedback, and I'm glad the video helped you get up and running with FLUX.

    • @ctrlartdel · 4 months ago · +1

      Right! After 20 seconds I felt this is a non-bu||s#|+ video that makes me hate nodes and the word "comfortable". Thank Christ! I think I will finally understand this crap! So used to A1111.

    • @goshniiAI · 4 months ago · +1

      @@ctrlartdel The shift from A1111 to ComfyUI can feel like a huge leap, but once you get past the initial hurdles, it opens up a whole new world of possibilities. You're on the right path, and soon enough those nodes will feel like second nature.

  • @JimGaleForce · 4 months ago · +11

    Although I have it running already, a 'here's the basic start from scratch' video is SO hard to find. Thanks for doing it!

    • @goshniiAI · 4 months ago · +1

      Thank you for the awesome feedback!

  • @rendraco · 4 months ago · +1

    Just an amazingly simple video with fantastic results! Thanks for this short, precise, to-the-point tutorial. Very much appreciated.

    • @goshniiAI · 4 months ago · +1

      It's great to hear you found it meaningful. Thank you for the kind words!

  • @memoryhero · 4 months ago · +2

    Great tutorial, my man. The speed is a great middle-ground approach: not too fast, not too slow. One love.

    • @goshniiAI · 4 months ago

      Thanks for the awesome feedback! I'm glad you found the pacing just right. Happy creations, much love to you too.

  • @Sonofmanstudio · 4 months ago · +1

    I don't usually comment on videos, but as our friend said, thank you for explaining the process from scratch ❤.

    • @goshniiAI · 4 months ago · +2

      I'm really glad you made an exception to comment; your feedback means a lot! ♥

  • @g4p5l6 · 4 months ago · +2

    This workflow runs much faster on the initial run-through than the flow included on the HF page. Thanks for posting.

    • @goshniiAI · 4 months ago · +2

      Thanks for the feedback, and you're welcome! I'm glad to know the workflow is speeding things up for you!

  • @gaidjin · 4 months ago · +1

    Superbly well explained, one step at a time. Well done 👍

    • @goshniiAI · 4 months ago · +1

      Thank you for the compliments!

  • @Giuliano_Photo · 2 months ago

    Thanks, man.

    • @goshniiAI · 2 months ago · +1

      Thanks for the love! You are welcome.

  • @abktestingtesting · 3 months ago

    Great video!

    • @goshniiAI · 3 months ago

      I am glad you enjoyed it.

  • @gothbynoche-p2r · 2 months ago

    Thank you! Liked and subscribed :D

    • @goshniiAI · 2 months ago

      Welcome aboard and thank you as well. :)

  • @armauploads1034 · 2 months ago

    Thank you!

    • @goshniiAI · 2 months ago

      You are welcome, and thank you for the love.

  • @kewlorsolutions · 3 months ago

    Thanks for this, major help.

    • @goshniiAI · 3 months ago

      Glad to hear it! You are most welcome.

  • @MaximusProxi · 4 months ago · +2

    Hey, what was the prompt with the half-tree, half-human face from the start of the video? I really liked that one :)

    • @goshniiAI · 4 months ago · +4

      Hello there, thank you, and the prompt is right here:
      "A hyper-realistic human face, left side erupting with a gnarled oak tree, bark fusing seamlessly with flesh. Right side crumbling granite, veins of quartz glinting. Roots intertwine with stone fragments. Dappled sunlight, moss-covered ground. Renaissance-inspired composition, evoking nature's reclamation of humanity"

    • @MaximusProxi · 4 months ago · +1

      @@goshniiAI Thank you very much for the prompt! I will play around with it once I'm back from vacation :)

  • @alexandrafejesnebanhidi6396 · 2 months ago

    What are you using that shows the hardware performance? :)
    Great tutorial!

    • @goshniiAI · 2 months ago · +1

      Thank you! It is the resource monitor.
      To install it, you can watch the video here: ua-cam.com/video/tIfr_duWyZQ/v-deo.htmlsi=Pwz-i9EXQwVOhLQw

    • @alexandrafejesnebanhidi6396 · 2 months ago

      @@goshniiAI Thank you very much 😊

  • @BenSnoil · 4 months ago · +4

    Edit: when I used 'Load Diffusion Model' it detected the model type, so I used that instead, but I'm still confused as to why 'Load Checkpoint' won't detect it. *shrugs*
    ---
    ComfyUI can't seem to detect the type of model for FLUX:
    Error occurred when executing CheckpointLoaderSimple:
    ERROR: Could not detect model type of: E:\ComfyUI\ComfyUI\models\checkpoints\flux1Dev_v10.safetensors
    If I swap in any one of my SD or SDXL models, it renders fine.
    Any way to tell ComfyUI what type of model I'm using?
    Thanks.

    • @goshniiAI · 4 months ago · +1

      Hello there. If you intend to use 'Load Checkpoint', ensure that all model files are in the correct format and directory. You should also make sure that your ComfyUI is up to date, as upgrades can sometimes fix integration problems.

  • @gilgamesh..... · 4 months ago · +2

    I run FLUX as the checkpoint version. It's been working great for me. Hands, though, haven't lived up to the hype. They still end up looking like they just got pulled out of a garbage disposal. I'm trying to see if I get better results with different samplers and schedulers.

    • @sorijin · 4 months ago

      Have you tried the realism LoRA? I've had incredible hands with it.

    • @goshniiAI · 4 months ago · +2

      Experimenting with different samplers and schedulers is a smart move. I'd suggest really focusing on refining your prompts too; it can sometimes make a world of difference.

  • @RafaelKluender · 4 months ago · +1

    Exactly what I was looking for! Any ideas on putting all this into a Docker container for easy deployment?

    • @goshniiAI · 4 months ago

      I'm glad you found exactly what you were looking for. While I haven't put together a Docker container for this workflow myself, it's definitely an interesting idea to look into. Thank you for the great suggestion!

    • @RafaelKluender · 4 months ago

      @@goshniiAI I will try to do so and let you know. It would allow people to deploy it on online services without a hassle. Most of us do not have adequate GPUs to run it locally.

    • @goshniiAI · 4 months ago

      @@RafaelKluender Please do, and I will be glad to look into any information you provide. What online service are you currently using?

  • @KoganeNoYubi · 2 months ago

    When I use the FLUX 1 Dev standard flow, I can't use a KSampler; it seems this only works with checkpoints?
    I've been trying for a while.

    • @goshniiAI · 1 month ago

      Hello there. You are correct. The FP8 model works in conjunction with the checkpoint node and the KSampler; however, in order to use FLUX 1 Dev, you must also have the Load Diffusion Model and SamplerCustomAdvanced nodes.

  • @EfekanOrtanci · 3 months ago · +1

    I have trained my own face as a LoRA model and I can use it on a site called "Replicate", but I have limitations on creating photos there. Is it possible to generate images using my own face LoRA in FLUX with ComfyUI?

    • @goshniiAI · 3 months ago

      YES! You can definitely use your custom LoRA model with FLUX in ComfyUI. However, make sure your LoRA training is compatible with FLUX.

  • @LeoXavierFuchs · 4 months ago · +1

    HOW did you load that FLUX model with Load Checkpoint? It doesn't appear in my Load Checkpoint, so already this doesn't work...

    • @goshniiAI · 4 months ago · +2

      The model might not be placed in the correct folder, or there could be a naming issue.
      Make sure the FLUX model file is in the checkpoints directory within your ComfyUI setup and that it's named correctly.
      Also, double-check that the file has the ".safetensors" extension that ComfyUI recognises.
      If it still doesn't show up, try restarting ComfyUI to refresh the list. Sometimes a simple reboot can resolve these issues.

  • @cemguney2 · 4 months ago · +1

    Is there a way to use FLUX with AnimateDiff for creating videos?

    • @goshniiAI · 4 months ago · +1

      FLUX with AnimateDiff to create videos would be super exciting, and I'm hoping the nodes to make that happen will be available soon.

  • @machnaut · 4 months ago · +2

    Can you make a video, or reply in a comment, on how to get ControlNet settings like "prompt is more important", "balanced", or "ControlNet is more important" in ComfyUI? These settings are available in Automatic1111 and Stable Forge, but I don't see this kind of setting for ComfyUI anywhere in my internet searches... Thank you... I want this setting because it is important for my workflow. Ty.

    • @goshniiAI · 4 months ago · +1

      That's a good point! ComfyUI does not have a direct setting for this like Automatic1111 or Stable Forge, but you may get a similar effect by tweaking the weights on the Apply ControlNet node connections in your workflow:
      - Lower weight gives the checkpoint model more flexibility.
      - Higher weight gives more influence to the ControlNet processor.
      Try playing with node weights; you might find the perfect balance for you!
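The weight advice above can be sketched as a small preset table. This is purely illustrative: ComfyUI has no built-in "control mode" switch, the knob names below refer to the inputs of the Apply ControlNet (Advanced) node (strength, start_percent, end_percent), and the numbers are assumed starting points to tweak rather than canonical values:

```python
def controlnet_preset(mode):
    """Map A1111-style control modes onto the knobs exposed by ComfyUI's
    Apply ControlNet (Advanced) node. Values are illustrative guesses:
    lowering strength (and ending guidance early) favours the prompt;
    raising strength favours the ControlNet."""
    presets = {
        "balanced": {"strength": 1.0, "start_percent": 0.0, "end_percent": 1.0},
        "prompt_important": {"strength": 0.6, "start_percent": 0.0, "end_percent": 0.6},
        "controlnet_important": {"strength": 1.2, "start_percent": 0.0, "end_percent": 1.0},
    }
    return presets[mode]
```

Ending the guidance early (end_percent below 1.0) lets the later denoising steps follow the prompt freely, which is roughly what A1111's "prompt is more important" mode does.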

  • @user-ck8ec7pj1l · 2 months ago

    You can't select a VAE with this workflow?

    • @goshniiAI · 2 months ago

      You are right. The VAE for this workflow is provided by the same checkpoint you select from the checkpoint node.

  • @samehshams3323 · 1 month ago

    First of all, thanks for this video. How can I add the color indicator that shows CPU, RAM, etc.?

    • @goshniiAI · 1 month ago · +1

      You are most welcome. You can install the resource monitor by viewing this video: ua-cam.com/video/tIfr_duWyZQ/v-deo.htmlsi=C9HuqHnAx3IC2XLU

  • @joshc7656 · 4 months ago · +1

    RTX 3050 with 16GB of VRAM, am I boned?? I keep trying but get errors each time.

    • @goshniiAI · 4 months ago · +1

      I have a 3060 and am running on 12GB, so it is doable.
      Save the workflow and restart your PC to ensure that no intensive programs are running.
      Run ComfyUI first thing after restarting,
      load the saved workflow, and then queue your prompt again.
      The first generation will take longer, so remain patient until it completes.

  • @ThierryPianoSolo · 2 months ago

    Hi, how do you show VRAM use? I use crystals but it only shows CPU and RAM usage. :)

    • @goshniiAI · 2 months ago

      Update all of your nodes and run a ComfyUI update. I am sure that will take care of it.

    • @ThierryPianoSolo · 2 months ago

      @@goshniiAI Nope. Maybe it's because I run ComfyUI via ZLUDA (because I have an AMD GPU)...

  • @x9v8k · 4 months ago · +1

    24GB GPU, but for some reason the FLUX image queue is stuck processing. Is this a known issue?

    • @goshniiAI · 4 months ago · +1

      I also got stuck in my process. Save the workflow and restart your PC to ensure that no intensive applications are running. Run ComfyUI first thing after restarting, then queue your prompt again. Remain patient until the first generation completes.

  • @theandroids · 3 months ago · +1

    What about using LoRAs with this?

    • @goshniiAI · 3 months ago · +1

      Hello there. Yes, that is possible; however, the LoRAs used must be trained for FLUX.

  • @hpsbath · 4 months ago · +1

    How do you get the [CPU RAM GPU VRAM Temp HDD] monitor a little bit down from the Queue Prompt menu? Thanks for the video, easy to understand.

    • @goshniiAI · 4 months ago · +1

      You are very welcome, and I am glad to hear that. You can find the link to install the CPU and GPU monitors here: ua-cam.com/video/tIfr_duWyZQ/v-deo.htmlsi=VHNZvuuaBuI6v23D

  • @ismgroov4094 · 4 months ago · +1

    ControlNet, IPAdapter possible, sir? 😊

    • @goshniiAI · 4 months ago · +3

      Yes, for ControlNet. At the time of writing, we only have Canny detection for FLUX: tinyurl.com/yefcavna
      However, I feel it is only a matter of time until we get the other versions.
      I'm not sure about the IPAdapter at the time of writing, but possibly it will also be available soon.

    • @ismgroov4094 · 4 months ago · +1

      @@goshniiAI Sounds great, sir 🙏🏻🥹❤️

  • @CHARLESMANSFIELD · 4 months ago · +1

    I get an error that says "failed to fetch".

    • @goshniiAI · 4 months ago · +1

      "Failed to fetch" is often a browser-related glitch. Try clearing your browser cache and refreshing ComfyUI, or updating any dependencies.

  • @ismgroov4094 · 4 months ago · +1

    ❤🙏thx

  • @DerGrozen · 4 months ago

    Thank you!

    • @goshniiAI · 4 months ago

      You're welcome! Thank you for your response.

  • @belenseoane2075 · 2 months ago

    This error happens to me:
    KSampler
    Allocation on device

    • @goshniiAI · 2 months ago

      The GPU is running out of memory while sampling. Begin with a lower resolution and gradually increase it as you fine-tune the settings. Alternatively, close any background apps that are using GPU resources.

  • @BrainBoyle · 4 months ago

    cheers... 👍

    • @goshniiAI · 4 months ago · +1

      Thank you, mate!

  • @RudmerCR · 3 months ago

    It crashes with a "reconnecting..." dialog.

    • @goshniiAI · 3 months ago

      Hello there, you can follow these steps:
      - Save the workflow and restart your computer to make sure no intensive programmes are running.
      - When you resume, run ComfyUI first, and then queue your prompt once more.
      - Wait patiently for the first generation to finish.

  • @iamarto · 3 months ago

    🙏

  • @artzi27 · 4 months ago · +1

    5:55 you forgot to change the seed to 50 for comparison ;)

    • @goshniiAI · 4 months ago

      Yes, you are correct. I forgot to change the seed for the comparison, so I set it to fixed when I noticed. I appreciate your observation.

  • @TailspinMedia · 4 months ago · +1

    Why is everyone using the FP8... why not the FP16? I get amazing results with the FP16.

    • @goshniiAI · 4 months ago · +1

      Awesome! I believe FP8 is gaining traction because of its efficiency and decreased VRAM usage.
      However, FP16 can also produce excellent results, especially if you desire higher accuracy in your outputs.
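The VRAM argument for FP8 is easy to put rough numbers on. Assuming the FLUX.1-dev transformer is around 12 billion parameters (an approximation from public descriptions of the model, not an exact figure), halving the bytes per weight roughly halves the memory needed just to hold the weights:

```python
def weight_footprint_gib(n_params, bytes_per_param):
    """Approximate memory to hold the weights alone; activations, the
    text encoders and the VAE all come on top of this."""
    return n_params * bytes_per_param / 1024**3

FLUX_DEV_PARAMS = 12e9  # assumed ~12B parameters for the FLUX.1-dev transformer

fp16_gib = weight_footprint_gib(FLUX_DEV_PARAMS, 2)  # roughly 22 GiB
fp8_gib = weight_footprint_gib(FLUX_DEV_PARAMS, 1)   # roughly 11 GiB
```

That difference is why FP8 variants fit on 12-16GB consumer cards where FP16 weights alone would already exceed the VRAM, at the cost of some precision.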

  • @Avalon1951 · 4 months ago

    Followed your workflow exactly and I still get "AttributeError: module 'torch' has no attribute 'float8_e4m3fn'". I have an RTX 3070 w/8GB?

    • @EmeranceLN13 · 4 months ago · +2

      I saw on some other videos that 12 GB VRAM is the minimum needed right now for the model. Not sure how true that is or if there is a work around.

    • @Avalon1951 · 4 months ago · +1

      @@EmeranceLN13 Well, I guess I'll just wait till they optimize it.

    • @goshniiAI · 4 months ago · +1

      The "float8_e4m3fn" error could be due to a problem with the PyTorch or CUDA version installed. I suggest you double-check that you have the most recent versions of PyTorch and the NVIDIA drivers. Sometimes a quick update resolves compatibility issues.

    • @Avalon1951 · 4 months ago · +1

      @@goshniiAI OK, will do.

    • @Avalon1951 · 4 months ago

      @@goshniiAI Well, I have the latest NVIDIA drivers and downloaded and installed the latest version of PyTorch, and here is the curious thing: when I do the "pip show torch" command in either the ComfyUI cmd or my Windows cmd, it still shows version 2.0.1, but the latest is 2.4.0. Any thoughts??
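The thread above is consistent with a version problem: `torch.float8_e4m3fn` only exists in sufficiently new PyTorch builds (it appeared around the 2.1 release, to the best of my knowledge; verify against the release notes), and pip still reporting 2.0.1 after "installing the latest" usually means the upgrade went into a different Python environment than the one ComfyUI actually launches with. A small sketch of the version gate:

```python
import sys

def parse_version(v):
    """'2.0.1+cu118' -> (2, 0, 1); enough for a major.minor gate."""
    core = v.split("+")[0]
    return tuple(int(p) for p in core.split(".")[:3])

def supports_float8(torch_version):
    """torch.float8_e4m3fn is assumed to require PyTorch >= 2.1
    (based on release notes; double-check against your install)."""
    return parse_version(torch_version) >= (2, 1, 0)

# Run the real check with the SAME interpreter ComfyUI uses (for the
# portable build, its bundled python.exe), otherwise you may be
# inspecting the wrong environment:
# print(sys.executable)
# import torch; print(torch.__version__, hasattr(torch, "float8_e4m3fn"))
```

If `sys.executable` printed from inside ComfyUI differs from the Python your `pip` belongs to, upgrade with that interpreter's pip (`<that python> -m pip install --upgrade torch`).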

  • @nemesisone8927 · 4 months ago

    Why do it the difficult way? There are already simple workflows out there to download, lol, on Hugging Face etc.

    • @goshniiAI · 4 months ago · +2

      I totally get where you're coming from! But for those who want to understand the nuts and bolts of how it all works, and maybe even customise things to their liking, mastering the process gives you full control.

  • @WillFalcon-b7l · 2 months ago

    size mismatch for double_blocks.0.img_mod.lin.weight: copying a param with shape torch.Size([28311552, 1])

    • @goshniiAI · 2 months ago

      Hello, there. Your node settings or input may be incompatible. Double-check your node connections, or if you are using older nodes, updating to the latest version may solve the size mismatch problem.

  • @cosymedia2257 · 1 month ago

    Thank you!

    • @goshniiAI · 1 month ago

      You are most welcome.