Install Flux 1.0 Dev 23G in ComfyUI - The Fastest AI for 15s Image Generation!

  • Published Feb 5, 2025

COMMENTS • 219

  • @Jockerai
    @Jockerai  3 months ago +9

    Don’t forget to like and subscribe for more tutorials about AI!👇🏻
    www.youtube.com/@Jockerai?sub_confirmation=1

    • @TheGuitarnob
      @TheGuitarnob 3 months ago

      Can I do it with 11 GB of VRAM?

    • @Jockerai
      @Jockerai  3 months ago

      @@TheGuitarnob yes you can

    • @rolandschimpf4502
      @rolandschimpf4502 3 months ago

      Could I get the prompt of the picture in the middle at 0:08? I'm asking for a friend.

  • @CosmicCreek
    @CosmicCreek 23 days ago

    THANK YOU!! I have watched so many videos by others and felt lost. This is an amazing resource for me as a ComfyUI and Flux beginner.

    • @Jockerai
      @Jockerai  23 days ago +1

      I'm really happy to hear that! Welcome, bro, glad to have you here✨

  • @BoatRocker619
    @BoatRocker619 2 months ago +3

    Many channels don't show the stuff properly, some just sell paid stuff, some don't have the pro tips. Glad I found your channel.

    • @Jockerai
      @Jockerai  2 months ago

      you're very welcome my friend💚

  • @pixelzen007
    @pixelzen007 3 months ago +3

    Exceptionally done! I mean the video. This is what content creators should do. Well done mate!

    • @Jockerai
      @Jockerai  3 months ago

      Thanks a ton! 🙌 I really appreciate your kind words. It means a lot coming from someone who knows the effort that goes into creating content. Glad you enjoyed the video, and I’ll keep working hard to bring more valuable stuff. Cheers, mate!

  • @bluedynno
    @bluedynno 1 month ago

    I made it on my GTX 1060 6GB in about 5 minutes, using Flux fp8. The previous workflow took about 14 minutes. Thanks for the tutorial!!

    • @omarei
      @omarei 17 days ago

      Use wave speed node to shave off a minute using block caching.

    • @juliandiaz9902
      @juliandiaz9902 12 days ago

      @@omarei Hi! Can you explain how to do it please, or give me a link or the name so I can learn? On my PC the generation time is very slow. Thanks!
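
For context on the block-caching tip above: the idea is to reuse a block's cached output when its input has barely changed between denoising steps. A toy single-value sketch (names are illustrative; this is not the wave speed node's actual API):

```python
def make_cached_block(block_fn, threshold=1e-3):
    """Wrap a compute step so near-identical inputs reuse the last result."""
    state = {"last_in": None, "last_out": None}

    def wrapped(x):
        # Cache hit: the input moved less than the threshold, skip the compute.
        if state["last_in"] is not None and abs(x - state["last_in"]) < threshold:
            return state["last_out"]
        state["last_in"], state["last_out"] = x, block_fn(x)
        return state["last_out"]

    return wrapped
```

Real implementations compare latent residuals per transformer block; too loose a threshold trades quality for speed.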

  • @cryyc
    @cryyc 2 months ago

    Thank you for the video. It helped me quite a lot, and because of it I can now generate images in 35-40 s on an NVIDIA RTX 3060 12GB.

  • @CalladsEssence
    @CalladsEssence 1 month ago

    Thank you for this amazing workflow! I ran it with 'lumiere_flux_alpha-fp8.safetensors' and all of your settings. It produces stunning 1344x1024 images in just 7.5 seconds. (Had 2 additional LoRAs loaded and ran it on an RTX 4090.) Thanks again!

  • @Oxes
    @Oxes 3 months ago +9

    I am sometimes very skeptical about these things, but this worked! One generation at extremely high quality took 32 seconds on my Nvidia 3060 with 12 gigs of VRAM. Bro, thank you so much! This is crazy!

    • @Jockerai
      @Jockerai  3 months ago

      I’m really glad it was useful for you, bro! It’s awesome to hear that it worked so well and so fast for you. Makes all the effort worth it!

    • @Oxes
      @Oxes 3 months ago +1

      @@Jockerai Can you make a tutorial with this formula including ControlNet? Tried it yesterday but I get an out-of-memory error hahah
      Keep doing the awesome work!

    • @Jockerai
      @Jockerai  3 months ago +2

      @@Oxes Yes, probably the next video will be about Flux ControlNet.

    • @monah62rus
      @monah62rus 3 months ago

      You're lucky; I have a 3060 Ti video card with 8 gigs of memory, and FLUX itself takes 20 minutes to load, which is just too long.

    • @amritbanerjee
      @amritbanerjee 2 months ago

      @@monah62rus Please update ComfyUI, or there is some other issue; 2070S here with 8GB and this takes around 2 mins.

  • @SamBeera
    @SamBeera 3 months ago

    YouTube surfaced your video in my feed, glad it did. You are an amazing creator. Thanks much, mate. Subscribed.

  • @biggreg100
    @biggreg100 3 months ago +1

    I run an i7 7700K, BIOS-overclocked, 64 gigs of RAM and a 4080 12-gig Nvidia, and I generate in under a minute with your settings. Thanks a ton, Jockerai...

    • @Jockerai
      @Jockerai  3 months ago +1

      @@biggreg100 you're welcome bro ✨

  • @sephirothcloud3953
    @sephirothcloud3953 3 months ago +5

    With this turbo mode I render an image in 37 s with a 3060 12GB and a 10-year-old CPU with 16GB RAM. Insane, I can finally use Flux! Thank you!

    • @Jockerai
      @Jockerai  3 months ago

      @@sephirothcloud3953 Happy to hear that 🤩
      You're welcome, mate✨

  • @thisnametaken3735
    @thisnametaken3735 3 months ago +1

    I tried this and I have to say what a revelation! I use a regular prompt of my own devising with a specific seed for testing and comparison.
    At 20 steps it produces vastly superior lighting, shadows, and details of landscape, for a second or so more processing time. The skin tones are much better detailed.
    At 8 steps it was a lot faster of course, but mildly disappointing. Nothing wrong with the result or any errors, just a little flat.
    But a compromise of 12 steps gave a better result than 8 with the new LoRA, and a slightly better one than 20 without it.
    Test and iterate at 8, again at 12, and produce at 20 for a great result.
    Thanks for the heads-up man. I love it.

    • @Jockerai
      @Jockerai  3 months ago +1

      Thanks for sharing your experience! I appreciate it, but I have to disagree. The images I've generated with the LoRA at 8 steps, especially with the weight type set to default, turned out amazing, both in depth and quality. And honestly, the difference in processing time between 20 steps and 8 steps is definitely more than just a second. I mentioned that in the video, my friend!

    • @thisnametaken3735
      @thisnametaken3735 3 months ago

      @Jockerai My testing was without any changes other than the number of steps. Weighting and other tweaks will always change the end results. From what I tried, the speed increase was in line with your results. 12 iterations with no other variations gave a better result, and 20 is massively better. The LoRA gives beautiful results with minimal effort, and having the choice of anything between 8 and 20 steps, with equal and better results, gives people a hell of a lot of options. Changing the weighting refines to potentially even better output, as is normal.

  • @0x0abb
    @0x0abb 2 months ago

    Great tutorial, thank you!

  • @coffeepod1
    @coffeepod1 3 months ago +1

    The fastest and great quality, no joke. I was getting 60-90 s to generate with flux1dev, and now I just need 25 s on my 3080 Ti laptop.

  • @Martin_Level47
    @Martin_Level47 2 months ago

    Really amazing stuff 🙂

  • @RDUBTutorial
    @RDUBTutorial 2 months ago

    Great video... if you want a niche audience, start making videos for the Mac community, as most of the film community lives on Macs and that is where a lot of this is destined for eventually. Either way, your video will eventually get redone for us. Keep up the good work.

    • @Jockerai
      @Jockerai  2 months ago

      @@RDUBTutorial Thank you so much for your suggestion! I'll definitely try to look into it. However, the downside is that Mac's GPUs and overall hardware compatibility with AI image generation tools can be quite problematic. There are a lot of bugs that might make the tutorial process significantly more challenging.

  • @r.gregoirefogliami8981
    @r.gregoirefogliami8981 7 days ago

    Magnificent!!!

  • @funnymataleao
    @funnymataleao 3 months ago +1

    My god, bro, you are my savior! Thank you for this life hack; now I get amazing results in only 12 seconds on my RTX 4080 Super in Forge with the large 23GB model. I'm really happy!

  • @999hamstein
    @999hamstein 3 months ago

    Wow, awesome!!! 13 minutes on the GTX 1650 haha... thank you, bro!

    • @Jockerai
      @Jockerai  3 months ago

      @@999hamstein you're welcome bro ✨

  • @Jcool721
    @Jcool721 2 months ago

    Thanks, I tested it. Not natural-looking (digital-art-looking indeed), and the AI makes a lot of mistakes: too many hands, fingers, legs, etc. Great video!

  • @CrustyHero
    @CrustyHero 2 months ago

    great work thank you

    • @Jockerai
      @Jockerai  2 months ago

      @@CrustyHero you're welcome my friend

  • @Zanroff
    @Zanroff 3 months ago +1

    Game changer. This works very well.

    • @Jockerai
      @Jockerai  3 months ago

      @@Zanroff 🤘🏻🔥

  • @ParvathyKapoor
    @ParvathyKapoor 3 months ago +1

    Ali mama

    • @Jockerai
      @Jockerai  3 months ago

      Ali mama is Ali's mom and Ali baba's wife probably 😁😁

  • @cesareric9435
    @cesareric9435 3 months ago

    Thank you so much, great work!

  • @Vnull-x2z
    @Vnull-x2z 3 months ago

    perfect ❤❤

  • @freneticfilms7220
    @freneticfilms7220 3 months ago +2

    Sure, that can work. The problem is when you want to use other LoRAs on top of the Turbo LoRA. Then you run into problems, since you can't have too many LoRAs active at the same time and expect great results.

    • @Jockerai
      @Jockerai  3 months ago

      @@freneticfilms7220 I had amazing results even with 4 LoRAs at the same time.

    • @Jockerai
      @Jockerai  3 months ago +1

      @@freneticfilms7220 The trick is, to use multiple LoRAs with this method (the 8-step LoRA), it's better to use the default weight type.

  • @JabyerDesigner
    @JabyerDesigner 28 days ago

    thanks

  • @epelfeld
    @epelfeld 6 days ago

    How can I add a negative prompt here?

  • @RDUBTutorial
    @RDUBTutorial 20 days ago

    What happens if you add more LoRAs to the Power Lora Loader? How should the weight be divided up?

  • @researchandbuild1751
    @researchandbuild1751 2 months ago +2

    Does not speed up anything on my RTX 3090; actually it seems to slow it down. It takes fewer steps, yes, but each step takes longer, so it doesn't go any faster.

  • @parastoomohammadzadeh
    @parastoomohammadzadeh 3 months ago

    Very good thanks ❤

    • @Jockerai
      @Jockerai  3 months ago

      You're welcome eshgh😍❤️

  • @EricJamesHurley
    @EricJamesHurley 3 months ago

    Very cool! Try Valhalla if you want an easy start!

    • @Jockerai
      @Jockerai  3 months ago

      What's that? Could you explain?

  • @corwinblack4072
    @corwinblack4072 3 months ago +2

    Yeah, it's because HYPER has the Hyper LoRA inside it, which is a ByteDance creation; if you want that, you can simply add it as a LoRA, it's on their Hugging Face site. Thing is, it has a fairly detrimental impact on image quality. IMHO, all their HYPERs do.
    Q8 GGUF: well, Q8 is simply reduced precision inside the model. Imagine it like JPEG compression and you get an idea how it works. Another issue with GGUF is that it's "dumb" compression, so the reason it's slow is that it basically needs to be unpacked (or unzipped :D) before use, which costs some extra processing time.
    A solution outside this? Well, NF4 obviously; NF4 v2 to be precise. The only downside is that to use it with a LoRA in ComfyUI you need a metric ton of VRAM, as it needs to load everything into VRAM.
    This could be solved if someone wrote something to "simply" convert a LoRA to a QLoRA (NF4), so it wouldn't need to be done on the fly.
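
To illustrate the Q8 point above: 8-bit quantization stores each weight as a small integer plus a scale, and every use pays a dequantize ("unpack") step. A toy sketch of symmetric 8-bit quantization in pure Python (real GGUF Q8_0 works in blocks of 32 weights with a per-block scale, so this is only the gist):

```python
def quantize_q8(weights):
    """Map floats to int8 codes plus one shared scale (lossy, like JPEG)."""
    scale = max(abs(w) for w in weights) / 127.0
    return [round(w / scale) for w in weights], scale

def dequantize_q8(codes, scale):
    """The 'unpacking' step every inference pass has to pay."""
    return [c * scale for c in codes]

codes, scale = quantize_q8([0.5, -1.2, 0.03, 2.4])
restored = dequantize_q8(codes, scale)
# restored is close to the input, but each value can be off by up to scale/2
```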

  • @sephirothcloud3953
    @sephirothcloud3953 3 months ago

    glad i subscribed

    • @Jockerai
      @Jockerai  3 months ago +1

      Welcome bro happy to hear that✨😉

  • @silentage6310
    @silentage6310 1 month ago

    With this turbo LoRA, can I add any other LoRAs?

  • @vidokk77
    @vidokk77 1 month ago

    Hey, I'm working with your old workflow and it's great: it takes 30 s for 1 image on a 3070 Ti. But I tried this workflow and set everything the same except changing the name of the LoRA, which I think isn't important, and it takes 108 s. I get the same image as you, but I don't know where the problem is.

  • @岩田邦夫-v8k
    @岩田邦夫-v8k 3 months ago

    Hmmm, it is very interesting, but it is a hassle to change my working environment because there are so many updates every month, so I will wait until the lightweight version of Flux 1.1 is released.🤔

  • @Sergei_CG
    @Sergei_CG 3 months ago

    Thank you for such a detailed video! The only question is: where can I get the DualCLIPLoader files?

    • @Jockerai
      @Jockerai  3 months ago

      @@Sergei_CG you're welcome my friend. You can find the download link in the description of this video : ua-cam.com/video/QmYoGPHdQfA/v-deo.htmlsi=kcgrTfd_o9miAkHs

  • @phenix5609
    @phenix5609 3 months ago +1

    Did you know about the node called "Flux Sampler Parameters" from the ComfyUI Essentials pack? It combines the seed, sampler, scheduler, steps, guidance, max shift, base shift, and denoise. And the way it's designed even lets you do plot comparisons easily between any of those parameters: instead of a drop-down menu, you just write whatever you want to compare (e.g. "euler, deis" for the sampler, or "8,12,20" for the steps), and it will generate the pictures consecutively, applying each parameter you entered. I find it really great; surprised not a lot of Flux workflows have it.

    • @Jockerai
      @Jockerai  3 months ago

      Very interesting, mate. I haven't used that yet. Send me a workflow or link and I will try it out. Thank you for sharing your experience💚

    • @phenix5609
      @phenix5609 3 months ago

      @@Jockerai Hi, I don't know if it's YouTube or something else, but I replied to your comment a few hours ago and now that I check I don't see it anymore... I can try sending it (the workflow, I mean) back to you in the comments, but I'm not sure that won't happen again. If you prefer, tell me another way I can send it to you, because I don't think there are DMs on YouTube, right?

    • @Jockerai
      @Jockerai  3 months ago

      @@phenix5609 Yes. Please send it to me on Telegram: T.me/graphixm
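
The comma-separated comparison syntax described above amounts to parsing each field into a list and sweeping the cross product. A minimal sketch (`parse_list` and `sweep` are hypothetical names, not the node's real code):

```python
from itertools import product

def parse_list(text):
    """Split an override string like '8, 12, 20' into trimmed items."""
    return [item.strip() for item in text.split(",") if item.strip()]

def sweep(samplers, steps):
    """Every sampler/step combination, like an XY comparison plot."""
    return [(s, int(n)) for s, n in product(parse_list(samplers), parse_list(steps))]

# sweep("euler, deis", "8,12,20") yields six (sampler, steps) pairs,
# one image per pair when run through the sampler in sequence.
```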

  • @TheCpWinters
    @TheCpWinters 2 months ago

    Thanks a lot for your guide, mate! The strange thing is it works fine with ComfyUI but absolutely doesn't work with Forge. It just crashes: "Connection errored out" in the browser and no errors in the console. Both installed via StabilityMatrix. 4070, 32GB, SSD.

  • @vinusharma2009
    @vinusharma2009 3 months ago

    Hello, greetings from India.
    I had a question.
    Is there a way to create images with a shirt (or any other clothing) image as an input?
    Like an avatar wearing a specific piece of clothing created separately in a different picture, to keep the input clothing consistent across different characters.
    It's for a personal styling POC project.
    Thank you for the effort and time you give back to the community!!

  • @rogercabo5545
    @rogercabo5545 12 days ago

    Just wondering what this AI is interesting for...? A single picture...? Tell me, please...

  • @Kostya10111981
    @Kostya10111981 2 months ago

    10:50 - an Nvidia 3060 faster than a 6900 XT. What a joke for AMD!

  • @primostone806
    @primostone806 3 months ago

    Absolutely awesome! Can it be further extended with ControlNet etc.?

    • @Jockerai
      @Jockerai  3 months ago

      Yes, it works with everything. 😎😎
      I have tested it with: Canny, inpainting, outpainting, multiple LoRAs, depth ControlNet and...

    • @genAIration
      @genAIration 3 months ago

      Sir, hi again, did you try the fp8 versions?

    • @Jockerai
      @Jockerai  3 months ago

      @@genAIration yes it works easily

    • @genAIration
      @genAIration 3 months ago +1

      @@Jockerai Yeah, but the generation speed is also the same in my experience. So I guess we don't need the fp8 versions either.

  • @камскоеустье
    @камскоеустье 25 days ago

    4070 Ti Super - 12 s. Thx bro.

  • @kaptainkraken
    @kaptainkraken 3 months ago +1

    Am I missing something here? I don't understand how my video card with 16GB can load a 23GB model; in fact, when I try your workflow with everything exactly like yours, it crashes when trying to load Flux1-dev???

    • @youdig-detection
      @youdig-detection 2 months ago

      That's probably the size of the model on your HDD, not in your VRAM.

  • @yomi0ne
    @yomi0ne 3 months ago

    Thank you very much. I can't find the Load Diffusion Model anywhere; could you let me know where it is? Thank you!

  • @myta6op402
    @myta6op402 2 months ago

    Thank you very much! It really works! RX 5700 XT and 32GB of memory: 278 s. How can this setup be connected to Flux inpainting and image-to-image?

    • @Jockerai
      @Jockerai  2 months ago

      @@myta6op402 You're welcome, bro. It is simple, and you can use it in all workflows. Just keep the first and second nodes (Load Diffusion Model node, DualCLIPLoader node); the rest can be any workflow, such as inpainting, img2img, ControlNet, etc.
      Check this video for a ControlNet example: ua-cam.com/video/pvU5fkBVHwI/v-deo.html

    • @myta6op402
      @myta6op402 2 months ago

      @@Jockerai Thanks, buddy! That's about how I imagined it! I got carried away with all this quite recently, so there are a lot of difficulties associated with it!

    • @Jockerai
      @Jockerai  2 months ago

      @myta6op402 Glad to hear it’s working out for you! Totally get the excitement, it’s easy to dive deep into this stuff once you start. Don’t worry about the challenges; everyone goes through a learning curve with these setups. If you ever run into specific issues or need help, feel free to reach out. Keep going, you’re doing great!

    • @myta6op402
      @myta6op402 2 months ago

      @@Jockerai You can't even imagine how you help people! I've spent so many hours on non-working, outdated and crooked builds! Your presentation style is just great! Everything is simple, accessible and the main thing is clear! You save people time and keep them motivated! Keep it up ! I hope the universe will reimburse you for your expenses! Thanks! I've been struggling with Flux inpaint for three days! I've tried many Flux models! The generation sometimes took 40 minutes and sometimes 30 seconds, but it didn't work properly! I needed to add a black cat sitting on the floor to the image! Anything but a CAT appeared in my image! And so, literally now, after reading your message and comparing everything that you described, I have a beautiful black cat in the picture!!!! 150 seconds, which is great for me regarding my PC and the output quality! It's epic, a delight and a sea of emotions!

    • @Jockerai
      @Jockerai  2 months ago

      @@myta6op402 Wow! Your message just made my day! Seriously, it's awesome to see all your hard work and persistence paying off, and that black cat story had me smiling! Thank you so much for the wonderful wish; it really means a lot. It's messages like yours that keep me motivated to keep sharing and helping out. Keep experimenting and having fun with Flux; you're doing amazing, and I'm here if you ever need anything else! Let's keep that creative energy going! 🚀✨

  • @StrikerTVFang
    @StrikerTVFang 3 months ago

    Thank you for this! I'm wondering, does this negatively affect stacking custom LoRAs? Will they behave oddly with only 8 steps?

    • @Jockerai
      @Jockerai  3 months ago +1

      @@StrikerTVFang Actually, I tested it with multiple LoRAs and the results varied. With personal LoRAs, which I taught in my previous videos, the "default" weight type had much better results.
      For other LoRAs you have to test, because every LoRA can give different results. But in general there are no issues with stacking LoRAs.

  • @idoshor163
    @idoshor163 2 months ago

    I am trying to add a ControlNet to this workflow but with no success; do you have a workflow in which you did so already?

    • @Jockerai
      @Jockerai  2 months ago

      I have already uploaded 2 videos including ControlNet with this 8-step turbo; watch them on the channel: OpenPose and depth map.

  • @InfoRanker
    @InfoRanker 3 months ago +1

    The Flux1 Dev main model download doesn't work. It says "File wasn't available on site".

    • @Jockerai
      @Jockerai  3 months ago

      Change your internet, VPN, or browser and test again.

    • @Sergei_CG
      @Sergei_CG 3 months ago +1

      @@Jockerai In order to download this file you need to log in and accept the license.

    • @InfoRanker
      @InfoRanker 3 months ago

      @@Sergei_CG Didn't see that originally, thanks

  • @Zuluknob
    @Zuluknob 3 months ago

    Image quality goes even higher if you use the turbo LoRA but with 20 steps.

    • @Jockerai
      @Jockerai  3 months ago

      @@Zuluknob very good idea thanks for sharing bro

  • @karansonkar2434
    @karansonkar2434 3 months ago

    It's working but taking some time. And after the image is generated, it shows all black, as if the image is not loading. After the image is generated, the Save Image node is still black; there is nothing in there. Am I doing something wrong?

  • @monah62rus
    @monah62rus 3 months ago

    And where can I get these "weight dtypes" in Load Diffusion Model?

  • @FiejaMatteo
    @FiejaMatteo 3 months ago

    I get a "does not accept copy argument" error on the KSampler every time I try to use Flux NF4 with a LoRA.

  • @TeemuKetola
    @TeemuKetola 3 months ago

    Mac doesn't support FP8. Do you have any secondary recommendations to replace the FP8 model that would still give good-quality images fast? Mac is optimized for FP16. And another question: is there a reason why you didn't pair the GGUF models with the turbo LoRA?

  • @marsonal
    @marsonal 3 months ago

    Please make a workflow like this where inpainting is supported. Thanks.

    • @Jockerai
      @Jockerai  3 months ago

      @@marsonal It's the next video; stay tuned, my friend.

    • @marsonal
      @marsonal 3 months ago

      @@Jockerai Thanks for the great work.

  • @InfoRanker
    @InfoRanker 3 months ago +2

    Dunno what I'm doing wrong, but I get the error listed below; I noticed that the DualCLIPLoader boxes had a red ring around them:
    Prompt outputs failed validation
    DualCLIPLoader:
    - Value not in list: clip_name1: 'ViT-L-14-TEXT-detail-improved-hiT-GmP-TE-only-HF.safetensors' not in []
    - Value not in list: clip_name2: 't5xxl_fp16.safetensors' not in []
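
The empty lists (`not in []`) in the error above mean ComfyUI found no files at all in its `models/clip` folder, so the two text encoders still need to be downloaded there. A quick sanity check, assuming the default folder layout (`missing_clip_files` is a hypothetical helper, not part of ComfyUI):

```python
from pathlib import Path

def missing_clip_files(models_dir, expected):
    """Return the expected text-encoder files not present in models/clip."""
    clip_dir = Path(models_dir) / "clip"
    present = {p.name for p in clip_dir.glob("*.safetensors")} if clip_dir.is_dir() else set()
    return [name for name in expected if name not in present]

expected = [
    "ViT-L-14-TEXT-detail-improved-hiT-GmP-TE-only-HF.safetensors",
    "t5xxl_fp16.safetensors",
]
# missing_clip_files("ComfyUI/models", expected) lists what still needs downloading.
```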

  • @am0oma
    @am0oma 3 months ago

    what about show text phrases right?

  • @kovanjola
    @kovanjola 3 months ago

    I can't download because this model doesn't support loading: "File wasn't available on site".

  • @ThorIA-s1b
    @ThorIA-s1b 3 months ago

    Excellent, it creates them in 15 seconds with my RTX 4080.

  • @cyberbol
    @cyberbol 3 months ago +1

    When I load your settings into ComfyUI I get this error: Power Lora Loader (rgthree)

    • @Jockerai
      @Jockerai  3 months ago +3

      Click Manager, then click "Install missing custom nodes", then restart ComfyUI and you are done.

  • @yomi0ne
    @yomi0ne 3 months ago

    Excuse me, where is the Load Diffusion Model?

  • @Sujal-ow7cj
    @Sujal-ow7cj 3 months ago

    Will the original version work on 16 GB VRAM?

  • @CarlosDiaz-t2j
    @CarlosDiaz-t2j 3 months ago

    Do you have a GGUF AI model workflow where it can generate random images using an LLM?

  • @Mircarin
    @Mircarin 3 months ago

    I don't know why ComfyUI is using my RAM instead of my GPU, which makes the image-generation process very slow :(

  • @ALEXDIAMONDX_X
    @ALEXDIAMONDX_X 21 days ago

    I have 32 GB of 3200 MHz RAM and an RTX 4070 Super. Why does my workflow run slow?

    • @Jockerai
      @Jockerai  21 days ago +1

      @@ALEXDIAMONDX_X Use fp8-e4m3fn for the weight type in the first node.
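
For background on the two fp8 weight types ComfyUI offers: e4m3fn keeps more mantissa bits (better precision, max ≈ 448), while e5m2 keeps more exponent bits (better range, max = 57344). A small sketch deriving those limits from the bit layouts:

```python
def fp8_e4m3fn_max():
    """Largest finite e4m3fn value: the all-ones exponent is usable,
    but mantissa 0b111 there encodes NaN, so the top mantissa is 0b110."""
    return (1 + 6 / 8) * 2 ** (15 - 7)    # exponent bias 7 -> 448.0

def fp8_e5m2_max():
    """Largest finite e5m2 value: the all-ones exponent is reserved
    for inf/NaN, so the top usable exponent field is 0b11110."""
    return (1 + 3 / 4) * 2 ** (30 - 15)   # exponent bias 15 -> 57344.0
```

e4m3fn's tighter range is usually fine for network weights, which is why it is the common fp8 choice for the Load Diffusion Model weight dtype.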

  • @imagine_84
    @imagine_84 1 month ago

    Does it work with other LoRAs?

    • @Jockerai
      @Jockerai  1 month ago

      @@imagine_84 yes it does

  • @ausbildungbew2189
    @ausbildungbew2189 3 months ago

    So I tried it; the overall quality looks really good, but in almost every picture containing hands they look awful. Almost every hand was garbage.

  • @mohsen1208
    @mohsen1208 3 months ago

    I have a problem: whenever I change the prompt, my system loads the model all over again. Is there a fix for this?
    Other than that, it's all good; with a 3060 Ti I get around 30 s gen time.

    • @Jockerai
      @Jockerai  3 months ago +1

      @@mohsen1208 If you watch the video, I used a trick: when you change the prompt and the green bar reaches the SamplerCustomAdvanced node, cancel the process and click Queue Prompt again.

    • @CrustyHero
      @CrustyHero 2 months ago

      @@Jockerai I always have this problem: when I click cancel, it doesn't cancel immediately; it waits for the loading to end and then cancels, which takes the same time.

    • @Jockerai
      @Jockerai  2 months ago

      It's probably because you are using the "default" weight type in the Load Diffusion Model node. Change it to one of the fp8 versions and you won't have that issue anymore.

  • @lousymask
    @lousymask 3 months ago

    My laptop keeps crashing; why doesn't this work on my device?

  • @gregpin1840
    @gregpin1840 3 months ago +1

    Thanks for the research. But I still wonder how you can run Dev 23G locally. I have a 24GB card, with let's say 20GB of available VRAM, and always run out of memory. What system do you have?

    • @equilibrium964
      @equilibrium964 3 months ago

      I run it with a 12 GB card; the model gets partially loaded and then shuffled around (which increases generation time, of course; the same goes for LoRA training). In ComfyUI there is an option to activate this, but I have no idea where, because for my card Comfy activated it by default.

    • @Jockerai
      @Jockerai  3 months ago

      You should be able to use the main Flux dev easily; something is wrong with your settings. Please send me the full error or log and I will help you out.

    • @prevailz1
      @prevailz1 3 months ago

      Run dev with --fast on a 4090 and I'm generating in 12 s with 25 steps at 1920x1080.

    • @Jockerai
      @Jockerai  3 months ago +1

      or you can try this :
      1. Open System Properties:
      - Right-click on the Start icon and select System.
      - In the window that opens, under Related settings, click on Advanced system settings.
      2. Access Performance Options:
      - In the System Properties window that appears, go to the Advanced tab.
      - Under the Performance section, click on the Settings button.
      3. Virtual Memory Settings:
      - In the Performance Options window, go to the Advanced tab.
      - In this tab, under the Virtual Memory section, click on the Change button.
      4. Select the drive where you have ComfyUI installed for example Drive C. Once it's selected, set the Initial size and Maximum size to the highest values you can. I put 35000 for initial size and 38000 for Maximum size.

    • @starbuck1002
      @starbuck1002 3 months ago

      @@equilibrium964 Just use a quantized (GGUF or NF4) version in combination with the 8-step LoRA, or even a merged checkpoint.

  • @ehabeltorky88
    @ehabeltorky88 3 months ago

    Hi, thanks. Does it work with Forge? And where can I get fp8_e5m2?

    • @Jockerai
      @Jockerai  3 months ago

      @@ehabeltorky88 Yes, it works, and on the top menu bar there is a section (I forget its name) where you can see options for fp8-e5m2 when you open it.

    • @ehabeltorky88
      @ehabeltorky88 3 months ago

      @@Jockerai I think you mean Diffusion in Low Bits; thanks, I'll try it.

  • @maxlux-xj9nh
    @maxlux-xj9nh 3 months ago

    1: What GPU do you recommend for the price? An entry-level GPU? 2: With this Turbo Flux LoRA, how can I use another LoRA? I need to create images with a LoRA while also using the Turbo LoRA.

    • @Jockerai
      @Jockerai  3 months ago +1

      @@maxlux-xj9nh I recommend only Nvidia GPUs, 3060 or above, and 12GB of VRAM or more.
      For multiple LoRAs, just click Add Lora as many times as you want and load more LoRAs to use them all at the same time. You can watch the full tutorial here:
      ua-cam.com/video/-Xf0CggToLM/v-deo.html

  • @Uday_अK
    @Uday_अK 3 months ago

    I have tried it, but ComfyUI crashes while loading the model. Any solution? I have an NVIDIA card with 12GB VRAM.

  • @b.a.5428
    @b.a.5428 3 months ago

    Great tutorial. I am trying to do this on a MacBook Pro M1; however, I get this error: "Trying to convert Float8_e5m2 to the MPS backend but it does not have support for that dtype." Is there a way to achieve this? Thanks!

    • @Jockerai
      @Jockerai  3 months ago +1

      Thank you, glad you enjoyed the tutorial! 🙌 Regarding your error: The issue you’re seeing comes from the MPS (Metal Performance Shaders) backend on Mac, which currently doesn’t support the Float8_e5m2 datatype. Unfortunately, Apple's M1/M2 GPUs don't have full support for all data types like some other GPUs do.

    • @b.a.5428
      @b.a.5428 3 months ago

      @@Jockerai Thank you for the reply! I hope it will be supported very soon, or there will be a workaround! Again, thanks for the great content; keep them coming!

  • @vkthakur7839
    @vkthakur7839 3 months ago +1

    There are no links in description...

    • @Jockerai
      @Jockerai  3 months ago

      The links are in the description now; please check again. The video was uploaded and I hadn't noticed 😬

  • @AIBizarroTheater
    @AIBizarroTheater 3 months ago

    running extremely slow on 3060 12gbram, | 4/8 [04:43

    • @Jockerai
      @Jockerai  3 months ago

      @@AIBizarroTheater Which weight type did you use?

    • @AIBizarroTheater
      @AIBizarroTheater 3 months ago

      @@Jockerai Thanks for answering: flux dev fp8.

    • @Jockerai
      @Jockerai  3 months ago

      @@AIBizarroTheater try this :
      1. Open System Properties:
      - Right-click on the Start icon and select System.
      - In the window that opens, under Related settings, click on Advanced system settings.
      2. Access Performance Options:
      - In the System Properties window that appears, go to the Advanced tab.
      - Under the Performance section, click on the Settings button.
      3. Virtual Memory Settings:
      - In the Performance Options window, go to the Advanced tab.
      - In this tab, under the Virtual Memory section, click on the Change button.
      4. Select the drive where you have ComfyUI installed for example Drive C. Once it's selected, set the Initial size and Maximum size to the highest values you can. I put 35000 for initial size and 38000 for Maximum size.

  • @elgodric
    @elgodric 3 months ago

    Does the default 23 GB Flux model work on 8GB VRAM? Or should I go with the GGUF model?

    • @Jockerai
      @Jockerai  3 months ago +1

      If you have a 3000-series Nvidia GPU or above, yes it does.

  • @Redemptionz2
    @Redemptionz2 3 months ago

    What kind of workflow do you upscale with?

    • @Jockerai
      @Jockerai  3 months ago

      These two workflows:
      ua-cam.com/video/oVnTZLRgUC0/v-deo.html
      ua-cam.com/video/NKwXV5kgwD0/v-deo.html

  • @varyonalquar2977
    @varyonalquar2977 3 months ago

    Why does it take 10 minutes to render 1 image on an RTX 4070 SUPER?? And how can a 23 GB model fit into 12GB VRAM?

    • @Jockerai
      @Jockerai  3 months ago +1

      try this :
      1. Open System Properties:
      - Right-click on the Start icon and select System.
      - In the window that opens, under Related settings, click on Advanced system settings.
      2. Access Performance Options:
      - In the System Properties window that appears, go to the Advanced tab.
      - Under the Performance section, click on the Settings button.
      3. Virtual Memory Settings:
      - In the Performance Options window, go to the Advanced tab.
      - In this tab, under the Virtual Memory section, click on the Change button.
      4. Select the drive where you have ComfyUI installed for example Drive C. Once it's selected, set the Initial size and Maximum size to the highest values you can. I put 35000 for initial size and 38000 for Maximum size.

  • @JointyTv
    @JointyTv 3 місяці тому

    Does this work with other lora aswell? FOr example a character lora?

    • @Jockerai
      @Jockerai  3 місяці тому

      Yes, sure, you can add more LoRAs by pressing the Add Lora button. But the point is, in my tests the default weight type works better with character LoRAs. However, you can test it yourself.

    • @JointyTv
      @JointyTv 3 місяці тому

      @@Jockerai Working, yes, but does it still hold the quality? Most of the time when using multiple LoRAs the characters get merged.

    • @Jockerai
      @Jockerai  3 місяці тому

      @@JointyTv In my tests, yes, it works with the best quality, but you should also keep in mind that the quality of the LoRA itself is important, especially if it's a custom LoRA.

  • @jtjames79
    @jtjames79 3 місяці тому

    Has anyone tried training with stereo images?

  • @sephirothcloud3953
    @sephirothcloud3953 3 місяці тому

    My problem with Flux is that it has to load the model again at every image generation, while SDXL doesn't. I tried even with GGUF Q4, which is smaller than the SDXL model, and it's the same. Do you have any advice?

    • @Jockerai
      @Jockerai  3 місяці тому +1

      It loads the models for the first generation, but it shouldn't reload them for the next ones.
      Anyway, tell me your PC specs please

    • @sephirothcloud3953
      @sephirothcloud3953 3 місяці тому

      @@Jockerai 3060 12GB. OK, then it's 100% my RAM, I have 16GB. But it's weird that it happens only with Flux, even with the 6GB GGUF model, while it doesn't happen with 12GB SDXL models, idk...

    • @Jockerai
      @Jockerai  3 місяці тому +2

      @@sephirothcloud3953 you can try this :
      1. Open System Properties:
      - Right-click on the Start icon and select System.
      - In the window that opens, under Related settings, click on Advanced system settings.
      2. Access Performance Options:
      - In the System Properties window that appears, go to the Advanced tab.
      - Under the Performance section, click on the Settings button.
      3. Virtual Memory Settings:
      - In the Performance Options window, go to the Advanced tab.
      - In this tab, under the Virtual Memory section, click on the Change button.
      4. Select the drive where ComfyUI is installed, for example Drive C. Once it's selected, set the Initial size and Maximum size to the highest values you can. I put 35000 for Initial size and 38000 for Maximum size.

    • @sephirothcloud3953
      @sephirothcloud3953 3 місяці тому

      @@Jockerai Yes, I already have 40000. I just boosted it by moving the models from HDD to SSD, 10x speed. With your turbo mode I render at 37s, or 111s with reloading; GGUF Q5 at 48s, or 78s with reloading. I'll buy more RAM. I can finally use Flux with your turbo mode, insane, thank you! :)

  • @pink_fluffy_sky
    @pink_fluffy_sky 3 місяці тому

    Is it fp16? I heard fp8 is better for 12GB?

    • @Jockerai
      @Jockerai  3 місяці тому

      Before this LoRA was released, yes, but now if you set the weight type to default it will be fp16, and at 8 steps it will generate faster. It's up to you: fp16 is better in quality and a bit worse in speed.
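      To see why the weight type matters, here is a quick illustration of the precision trade-off using Python's built-in half-precision (fp16) packing. fp8 isn't in the standard library, so this only shows the general idea: fewer bits per weight means smaller, faster models with a small rounding error.

```python
import struct

x = 0.1
# Round-trip the value through a 16-bit (fp16) representation:
half = struct.unpack('e', struct.pack('e', x))[0]

print(half)           # 0.0999755859375
print(abs(half - x))  # small rounding error, about 2.4e-05
```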

  • @cachitown
    @cachitown 3 місяці тому

    What does the 23G mean?
    Seeing that Dev is a 12B parameter model

    • @starbuck1002
      @starbuck1002 3 місяці тому

      23GB is the file size of the base model. It's misleading lol

    • @voiceofreason9780
      @voiceofreason9780 3 місяці тому

      Pretty sure 23G is the total file size.

    • @cachitown
      @cachitown 3 місяці тому

      @@voiceofreason9780 Yes, was just trying to discern your intention. Thx!

    • @Jockerai
      @Jockerai  3 місяці тому

      In the video, I used the model's file size of 23 GB to make it clear which version of Flux I was referring to, as that's the most noticeable difference compared to the other versions. :)

    • @starbuck1002
      @starbuck1002 3 місяці тому

      ​@@Jockerai This is referred to as the base model. I'm not sure what you mean by comparing it to other versions. Fine-tuned models typically have different names or may come in entirely different formats, like GGUF. Let's avoid comparing models based solely on file size.

  • @ankylosis751
    @ankylosis751 3 місяці тому

    On low VRAM (4GB), the GGUF version gives the highest quality. Yes, the speed is slow, but this one is slower with less quality...

  • @nideshmane5995
    @nideshmane5995 3 місяці тому

    Does this work in forge?

    • @Jockerai
      @Jockerai  3 місяці тому

      @@nideshmane5995 yes of course

  • @KoiAquaponics
    @KoiAquaponics 3 місяці тому

    Does this work in Flux ForgeUI?

    • @Jockerai
      @Jockerai  3 місяці тому

      It should, but I didn't test it. Tell me if you test it, thank you

  • @ronbere
    @ronbere 3 місяці тому

    where is the seed? I always get the same images

    • @Jockerai
      @Jockerai  3 місяці тому +1

      In the "Random noise" node. You can set it to randomize or give it a number manually
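      The reason a fixed seed always produces the same image is that the seed fully determines the starting noise. A tiny, hypothetical Python illustration of the principle (not actual ComfyUI code):

```python
import random

def starting_noise(seed=None, n=4):
    """Same seed -> same noise -> same image; seed=None gives fresh noise each run."""
    rng = random.Random(seed)
    return [rng.random() for _ in range(n)]

# A fixed seed is reproducible:
assert starting_noise(42) == starting_noise(42)
# Different seeds give different noise, hence different images:
assert starting_noise(1) != starting_noise(2)
```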

    • @ronbere
      @ronbere 3 місяці тому

      @@Jockerai omg i'm stupid😄

  • @cgpixol
    @cgpixol 3 місяці тому

    where can I get that fp8-e5m2 file? Thanks in Advance

    • @Jockerai
      @Jockerai  3 місяці тому +1

      No need to download it. Just install the latest version of ComfyUI, download all the necessary files, put them in the proper directories, then load my workflow and you are done!

    • @cgpixol
      @cgpixol 3 місяці тому

      @@Jockerai thank you. It works 😁

  • @robertaopd2182
    @robertaopd2182 3 місяці тому

    Add Hyper 8 and 16 steps... it gives me the best images in my tests so far: more details, better quality. I generate 1 image in 45 seconds at 20 steps with Hyper 16 on an RTX 3080 with 10GB VRAM. I tested both Hyper and Turbo Alpha, and Hyper gives better results.

    • @robertaopd2182
      @robertaopd2182 3 місяці тому

      I'm talking about the Dev Hyper, not any GGUF Hyper

    • @Jockerai
      @Jockerai  3 місяці тому

      how long did it take to generate?

  • @animation-nation-1
    @animation-nation-1 3 місяці тому

    RTX 4070 Ti 12GB. Generated a black empty image :P

    • @Jockerai
      @Jockerai  3 місяці тому

      Something is not set correctly in your workflow

  • @I1Say2The3Truth4All
    @I1Say2The3Truth4All 3 місяці тому

    So can you place the diffusion_pytorch_model.safetensors (LoRA) inside ComfyUI\models\loras or inside ComfyUI\models\xlabs\loras?
    In your previous guide ua-cam.com/video/txDFK-RcUq4/v-deo.html on installing ComfyUI you mentioned you should not put the Flux LoRA inside ComfyUI\models\loras, but in this video you are placing the LoRA inside ComfyUI\models\loras?? :(

    • @Jockerai
      @Jockerai  3 місяці тому

      @@I1Say2The3Truth4All Good question. LoRAs that belong to the XLabs team need to go into the xlabs loras folder. The rest go into the main loras folder, and you can use them via the Power Lora Loader node, including personal LoRAs like the Flux personal LoRA I covered in one of my videos.
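      As a sketch, the folder layout described above looks like this (paths assumed from a default ComfyUI install; adjust to yours):

```shell
# General LoRAs, loaded via the Power Lora Loader node:
mkdir -p ComfyUI/models/loras
# LoRAs released by the XLabs team go in their own folder:
mkdir -p ComfyUI/models/xlabs/loras
ls ComfyUI/models
```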

  • @zombieploios
    @zombieploios 3 місяці тому

    Am I the only one observing quality loss, especially with human anatomy?

  • @0A01amir
    @0A01amir 3 місяці тому

    With how bad skin textures look in Flux, it had better generate 15 images a second on 4GB instead of this.

  • @StargateMax
    @StargateMax 3 місяці тому

    Can't even download the model, it throws an error 401 and that's it. I tried with another Flux model and it didn't work either. When I try to generate, it just says:
    Prompt outputs failed validation
    VAELoader:
    - Value not in list: vae_name: 'ae.safetensors' not in ['sdxl_vae.safetensors', 'taesd', 'taesdxl']
    DualCLIPLoader:
    - Value not in list: clip_name1: 'ViT-L-14-TEXT-detail-improved-hiT-GmP-TE-only-HF.safetensors' not in []
    - Value not in list: clip_name2: 't5xxl_fp16.safetensors' not in []
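    That validation error means ComfyUI can't find the files in its model folders, rather than anything being wrong with the workflow itself. Assuming a standard ComfyUI layout (these are the usual default locations, not paths confirmed in the video), the files would be expected at:

```
ComfyUI/models/vae/ae.safetensors
ComfyUI/models/clip/ViT-L-14-TEXT-detail-improved-hiT-GmP-TE-only-HF.safetensors
ComfyUI/models/clip/t5xxl_fp16.safetensors
```

    (A 401 error on download usually means the model page requires you to be logged in and to have accepted the license first.)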

  • @kristian5747
    @kristian5747 3 місяці тому

    Second, but no links 😢

    • @Jockerai
      @Jockerai  3 місяці тому

      The links are in the description now, please check again. The video was uploaded and I hadn't noticed 😬

    • @kristian5747
      @kristian5747 3 місяці тому

      ​@@Jockerai Thank you!, you just got a new subscriber

  • @rednoi3e
    @rednoi3e 3 місяці тому +1

    Just another LoRA, like Hyper... And for Comfy, not for Forge.
    Nothing special

  • @Novalis2009
    @Novalis2009 3 місяці тому

    The first image you present as exceptional: do you really believe what you are saying? The image is unfinished and broken. Look at the nose, there is a clear cut. A lot of the rest is blurry. So if you really believe this is a good image, then you have no idea what you are talking about. Sorry, mate. There is no way to trick your way around quality. You get good images in 10 steps from the base models as well; it always depends on the parameters.