Let me know what GPU you are using and what kind of speeds you are getting for 1024x1024. I'm getting better speeds than ComfyUI at the moment.
I upgraded earlier this summer to a 4070 Ti Super (16GB) and get 1920x1080 images (no high-res fix needed!) in ~58 seconds on average. Just tried 1024x1024 with your settings (including your prompt from 3:41, except for GPU weight in the 15k-ish) and get 25.9 seconds on average (batch of 9 images in 3 minutes 53.1 seconds). I still use 1080p screens, so it's pretty cool to be able to create screen-size images in one go. Also, if you prompt for 1 character, they do not duplicate on wide screens like they do in SD models, and everything is in good proportions (no stretched limbs).
@@EskaronVokonen Nice! I'm hoping to get the same card, or I might be able to squeeze in a 4080 soon, but yeah, must be nice to have more VRAM. 8GB isn't going to cut it when you add LoRAs, ControlNet, etc. And it doesn't look like these models are getting any smaller, even if we have quantized ones. Appreciate you sharing the info, very helpful not only for me but for others that might be wondering.
NF4 v2 with a 3060 Ti 8GB VRAM = 52 sec
AMD Ryzen 7 3700X
64GB 3700MHz
@@MonzonMedia Wait to see the RTX 50 series. Oddly, they have lower RAM, but the claim is they'll be much faster at AI across the board. I'll live with my RTX 3060 till I see what the RTX 5060 Ti offers and if they do a 16GB version like the RTX 4060 Ti.
After reading everyone's responses, I think I'm running faster on mimicpc. I've loved it from the very beginning of the free online trial.
Great tutorial, thank you. Using a 3080 Ti (12GB) I was getting an average creation time of 32-34 seconds after the 3rd image, using flux1-dev-bnb-nf4-v2 at 1152 x 896 with the default installation settings.
Hats off to the Web Forge team! Finally, my 64GB RAM has meaning with my 4080 16GB VRAM card. Model swapping between different Flux models is so fast now, under a minute (SD 1.5 and SDXL even quicker, almost instant), compared to ComfyUI, which takes over 3 minutes on a SATA drive before a render starts. Hopefully, the Forge team and Comfy can share optimization tips!
Yeah, Forge for SD1.5 and SDXL is a joy to use. I'm curious as to what size you were referring to that is taking you 3 min on ComfyUI; that seems awfully long for your GPU.
@MonzonMedia With ComfyUI and Flux models only (which are on my SATA drive), on a fresh first run the model takes around 2-3 minutes to load, then under a minute (38 secs) on second runs, but with Web Forge I don't get those issues; it's a lot quicker on the first run.
@@RamonGuthrie that's goooood to hear!!
Finally! A non-ComfyUI, less-than-16GB version of Flux! Thanks for the update, I'll do a fresh install of Forge soon.
There is a 12GB version of Flux Dev but it's only compatible with ComfyUI. I go over it in this video ua-cam.com/video/chfUGCE0AVY/v-deo.htmlsi=EBR1SpTqI0mocd4_ But yeah, I was more excited to see support for Forge!
I was frustrated seeing so many tutorial videos on comfy and its workflow for so long until Forge dropped an update. lllyasviel is working on god mode nowadays with all the amazing updates and features. Forge forever🤩
Great video, thanks. Will look into Forge again.
Good to see a new video! Around 80 sec. on a 4060 for me.
Great video, valuable info! Thank you for sharing! 🙌🙌
Yes, it's a great video. But I think it's the images generated by the current Flux that surprised me the most; the detail is really almost close to the real thing. I've tried running Flux on mimicpc, that was my free experience of running Flux, and the images generated are just perfect. This makes me willing to pay to support such a great invention!
@@SouthbayCreations appreciate you bud!
Great coverage!
great, thank you! Nice video!
Legend
very cool ui
nice info!
Yes, I think Flux is awesome. I tried Stable Diffusion on mimicpc, and of course this product also includes popular AI tools such as RVC, Fooocus, and others. I think it handles detail quite well too; I can't get away from detailing images in my profession, and this fulfills exactly what I need for my career.
Nice, and the images start generating waaaaaaay faster. I'm using FP8, and in ComfyUI it takes ages to generate images.
I've had the same trouble before; this operation really is a bit new for newbies. So much so that I was a bit disappointed with this so-called new AI before I used mimicpc, but after I experienced it for free on mimicpc online, I started to fall madly in love with this technology, and it has huge potential.
I would love to run it on Fooocus.
I'd love to see that too, but Fooocus was built strictly around the SDXL architecture; implementing Flux would be quite a bit of work. They might as well start from scratch with something like a Fooocus Flux, but the developer already has Forge to maintain as well. What's preventing you from using Forge? It has more of a learning curve, but it's not that bad.
Time to reinstall @@MonzonMedia
Same
Hi, does the installation work for Mac? Thanks.
I'm missing the VAE dropdown; how did you get that in the UI?
You might be on an older version of Forge, as that seems to be added as a default now. If not, go into Settings > User Interface and find the box for the Quicksettings list. Type sd_vae into the box, apply and restart, and it should be there. 🙂
As @Elwaves mentions, make sure you have the latest update, or find it in the Quicksettings list. 👍
Thank you!
There is a Flux IP-Adapter by XLabs on Hugging Face, but the setup is for Comfy. I am trying to get it to work in Forge; do you have any insight?
Did you ever get this to work? I would much rather use Forge as well, but ControlNet does not work for it.
Not available for Forge yet.
@@MonzonMedia Well, that does explain why it's not working, LOL. Thank you.
@@robertmiller32 I'm working on a video on how to use LLMs as an alternative to something like IP-Adapter.
Runs on Forge, but they killed off SVD - back to Comfy I guess.
I noticed that as well. I was browsing through the discussion page on the Forge Github to see if there is any info on it.
@@MonzonMedia Did you say they were upgrading Gradio? Hopefully, if this is a Gradio upgrade issue, they just need some time to make it work.
I think it was not intentional, just some mess while coding and merging.
Is there a Flux Schnell that works with this Forge?
I just always get a black screen, no matter if I use NF4 or NF8... (RTX 2070). Does anyone know a solution?
Make sure you are using the VAE, but for your card NF4 isn't compatible. You should be able to run the FP8 version or the new GGUF models.
any reason to not install this through pinokio?
Should be fine 👍🏼
The main problem with NF4 is that it doesn't work with all LoRAs :(((
I'm sure that will change soon. There are certain LoRAs that do work.
Comfy seems to hate the NF4 version; it runs close to 1.5 hours.
Does anyone know how to get ControlNet working with this?
Unfortunately it's still not available for Forge. I've been checking for updates frequently, still nothing.
I'm working on a video on how to use LLMs as an alternative to something like IP-Adapter.
@@MonzonMedia sweet cannot wait to see it
Whenever you get a chance, super simple and effective!
ua-cam.com/video/VWeku4lO9tc/v-deo.htmlsi=cqYVOahlx-0Aj4gX
You forgot to mention these LoRAs work only with Flux Dev.
He didn't forget, as the LoRAs should work with Schnell as well, except for training. It's only NF4 they don't work with right now, which he mentions.
@@Elwaves2925 LoRAs are currently trained for Dev only and don't work with Schnell models.
@@RamonGuthrie They are trained on Dev but work on both Dev and Schnell. I've now tried it myself, and to quote directly from the training GitHub page: "Training a LoRA on Dev will however, run just fine on Schnell." If yours aren't working, there's something wrong. 🙂
True but I was only talking about Dev throughout the video.
I did download 2 LoRAs from Civitai, but neither of them is working. Can you point me to a working LoRA? Also, is there a Schnell model that works with Forge?
I have an M1 and still can't download Forge, so sad.
Forge is trash, it kept deleting my checkpoints
Never had that issue
seems like a you problem my guy, maybe ask for help instead of calling it trash