This FLUX has restored my faith and excitement for local AI after the tragedy that was SD3.
How was SD3 tragedy?
What "tragedy"?
@geekyprogrammer4831 It was half baked and has built-in censorship that prevents it from rendering human anatomy correctly.
Yeah, and it had to be paid for.
@Eisenbison What were they thinking?
Was waiting for this video as soon as I read about Flux!
Took a little bit longer than expected, but now the video is here at least :)
2:48 - Don't forget to mention that you have to log into Huggingface and agree to the terms in order to be authenticated to use the Flux1 dev model.
Oh my days, thank you so much for this! Almost went crazy looking for a solution lol
Always top-quality videos 👌
I find your videos really helpful! thank you for the content you post!
Latest update actually breaks this. Flux was running great on my 4090 until this morning. Now the CLIP models don't load properly and generation time is insane. From 1.5 it/s to 100 s/it.
Oof! The dangers of new tech.
I didn't notice any slowdown on my Radeon RX 6800 XT... but then, I only get between 15 s/it and 20 s/it, about 5 minutes to generate a 1024x1024 image.
Damn, I was using it last night. I'll have to see what it's like today.
I followed the vid exactly and mine says it's entering low VRAM mode and sits there for over 10 mins with no results. No clue how to get it to work. (Using a 4090 as well.)
@WizardofOlde As Sebastian commented, it's just very new and has lots of issues. I was getting the low VRAM warning for the last week, except when running 1024x1024 and below. You could also use the fp8 CLIP model to help it. I haven't tried today though, as it broke for me yesterday.
You can now add the models to the Checkpoints folder in the latest version of ComfyUI. Change the .sft extension to .safetensors
Can we use it with A1111 or reForge?
@Heldn100 Erm, no one knows yet. Maybe you can try ComfyUI now, it's not that hard to use and it's very flexible. I haven't used A1111 for a long time.
They already renamed it properly on the huggingface page.
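For anyone who grabbed the files before the rename on Hugging Face, the bulk rename mentioned above can be scripted. A minimal sketch, assuming the usual ComfyUI folder layout (the path is yours to adjust):

```python
# Rename downloaded .sft files to .safetensors so ComfyUI's checkpoint
# loader recognizes them. The folder argument is an assumption -- point
# it at your own ComfyUI/models/checkpoints directory.
from pathlib import Path

def rename_sft(folder):
    renamed = []
    for f in Path(folder).glob("*.sft"):
        target = f.with_suffix(".safetensors")  # same name, new extension
        f.rename(target)
        renamed.append(target.name)
    return renamed
```

Call it as e.g. `rename_sft("ComfyUI/models/checkpoints")`; it returns the list of files it renamed, so an empty list means there was nothing to do.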
Don't forget that Swarm runs Flux. I got it working today with the Schnell version and I only have an NVIDIA GeForce RTX 3050, which is 8GB, plus I have 32GB of system RAM. It does 1024x1024 in about 50 secs or thereabouts and the results are amazing. It will do a native size at just below 2048x2048. Hugging Face and Reddit have instructions. It's dead easy with Swarm.
Love swarm!
Used it on an NVIDIA 3060 12GB and Ryzen 3100 with 16GB RAM. At first it loads for like 20 min, and then 5 min per generation. Quality outstanding.
Is that Dev or Schnell version?
@@gkeNz dev. but schnell has similar generation time for me
I have an NVIDIA 3050 Ti and 16GB DDR4, do you think mine can also do it?
On my 3080 12GB on SwarmUI it loads in like 2 mins and generations are between 30-90 seconds... the 12GB version of the 3080 has very high bandwidth, maybe that's why it's so fast?? I'm using Flux dev.
I don't know much about PCs, but I am on a 12th generation Core i5 with 16GB RAM and an NVIDIA 3050 with 4GB of VRAM. I am thinking about getting an external GPU like an NVIDIA 4070, but I am wondering if that might be useless for running a model like Flux because of my CPU. Does anyone know if a CPU like mine will be limited for this?
Hey Sebastian,
I really enjoy your videos; they’ve taught me so much! :) I have a quick question. How can I change the style of an image (for example, to anime or another style) while keeping the original person (I'm using my own character Lora), pose and background? When I add "anime, anime style," etc., to the prompt, it changes the output image. I’d like to keep the output I had but "paint" it in a different style. Is there maybe a special node to add just before "save image" or something? I’m not sure...
Sebastian Kamph, nice video keep up the good content
I've been eagerly anticipating your video on Flux... greetings from Italy! I've been experimenting with Flux for a few days now, and it's been a game changer! 😂😂😂😂
Great to hear! Wanting to visit Italy someday, have never been 💫
@@sebastiankamph Well. I'm glad to hear that... anyway, I always enjoy watching your videos!!!
Are they going to have to use a separate model, or some other method, that is specialized in getting text correct and placing it on the image?
Do i need to pay for any of this?
Sebastian Kamph, nice video it was really entertaining
Need to watch this later, got Flux set up but no idea how to make LoRAs for it, didn't think that was even possible!
LoRA training is almost there with Kohya; they are almost finished getting it working. I would give them a week or 2 and we should be able to train LoRAs for it. And then YouTube will blow up with tutorial videos lol
Great tutorial! What kind of computer are you running that's so fast :)?
Thank you! RTX 4090
Thanks for the vid buddy will try this later after work 👍
No problem, hope you get it to run!
no dad-joke? 😔
He said "Oh flux it" when he realised he'd missed it.
LoL
@@RetroHenni to make up for it, there are 2 in the Flux NF4 video.
@@sebastiankamph ❤️
Are these models for commercial usage? And is that a Schnell model you're downloading for Flux?
The outputs can be used for commercial usage. I use Flux dev primarily.
Can I run the model in Stable Diffusion? BTW I've just started learning this AI world.
You're the best, thanks!
Hi Sebastian, a fan and a follower of yours. Great videos, learnt a lot from you. I have a question: there's a VAE encoder in the vae folder called "diffusion_pytorch_model.safetensors" in Flux, so which one to use? ae.safetensors or this one? Thank you.
Hi! Thank you! Use ae.safetensors for your VAE.
Nice tutorial, but I'm using a Mac (M3 Max, if you must know), and Flux won't work because of incompatibilities with MPS. What workarounds do you suggest in the meantime?
I saw Pieter Levels using the Flux model too on X. And he is using a MacBook. Must be some way, I need it too.
Hi, great vid! I wanted to ask you how you got that activity monitor on the menu?
You said text adding is easy. With what software do you normally add the text on top of it, maybe with effects and (pre-selected?) styles
Photoshop, Indesign or Illustrator depending on usecase. For stylized text Flux is very convenient.
Do you have a video showing how to also install checkpoints along with Lora in a separate node? Will checkpoints work?
Hands, color bleeding and extra limbs are things I run into often in the programs/sites I use. As long as it's free I may use this program.
Honestly, considering AI, you would think most of the GPU manufacturers would add more VRAM to their cards.
This is free, yes.
I haven't touched ComfyUI in months but eventually I began to question... where is the negative prompt node, or is that not commonly used in Comfy? I forgot so much about it D;
Will begin updating Comfy and brushing up sometime after watching this a few times lol, thanks for the video :D Subscribed
Welcome aboard! No negative prompts initially for Flux.
How many workflows are there and which one/s are best for flux? How do you know which one to choose?
Right now it's very limited so the base workflows are actually enough if you want to use pure Flux. I've seen workflow experiments but nothing revolutionary yet.
@@sebastiankamph Thanks
Can FLUX be used with other tools, such as Fooocus or Invoke?
What happened to Playground AI? It had it all - now it's reduced to a shadow. Very concerned. Nobody in that community has commented. They just stopped publishing anything. What happened??
Great VIDEOS! One issue: if you divert ComfyUI to use Automatic1111 model folders, FLUX will not work. :-( For example, there is no "unet" folder in Automatic1111 to actually put the Flux model in. At least not in my installation. Is there a solution? Or do I just have to clean install ComfyUI and merely use it with Flux and NOT make the change to the extra_model_paths.yaml file?
Does that LORA really make a difference? Have you tried without?
Yes, I think it is helpful, but not required.
Unfortunately I get a "list index out of range" error... any tips?
Where did you save your comfy
@@keisaboru1155 c\users\user\stablediffusion\ComfyUI
What is the difference between the single-file checkpoint ComfyUI suggests on their Flux examples page and the multiple-files approach from this vid?
It censors NSFW images, and if you're trying to upscale an NSFW image it'll just replace it with a completely different SFW image.
No offense to the artsy types here but absent being able to make attractive women this is useless.
What a waste of such a good Ai. Hopefully people will find a way around it.
Hey, can I use this for completely realistic images of myself doing various activities in different locations?
I'm not 100% positive if there is a working controlnet img to img or adetailer or ReActor available yet.
lllyasviel also released a Flux1 NF4 model for Forge. Great quality and it works a lot faster.
Great tip! Just dropped a video on the NF4
@sebastiankamph Awesome! Your videos are great.
@sebastiankamph What will happen if I try it on a GTX 1650 with 4GB GDDR6 VRAM?
Hey Sebastian, nice tut! Thank you very much!
Could you share your pc spec? I'm looking for a build to run Comfy as smoothly as you do. Thanx!
Ryzen 9 7900X, 64GB RAM, RTX 4090. It's mostly about the GPU. But in Flux's case, a minimum of 32GB RAM is nice to have too.
@@sebastiankamph Nice, thank you!
Will Flux be available on A1111? I'm afraid to try ComfyUI bc I'm sooooo used to A1111.
I avoided ComfyUI as long as I could and used A1111.
Until I took the jump and tried ComfyUI. Start simple, use example pictures and don't let the spaghetti madness overwhelm you. In fact, ComfyUI is sooooo much faster and nicer to work with than A1111. I now only use ComfyUI.
As someone on the spectrum, I had a hard time changing from A1111 to Comfy, but it is worth it. It gets easier with time, and much faster.
@2008spoonman Hi there.. will ComfyUI work on an RTX with 12GB and 16GB RAM? Thanks.
@@giuseppedaizzole7025 you can try the “download and click ready” version of comfyui. No need to install, just unzip and run. I run it on a rtx 3060 with 12GB vram and 32GB main memory. No problem at all. Runs smooth.
It looks frightening but seriously, it's as simple as A1111 if you do simple gens. The only thing is that ComfyUI allows you to do more if you want to. Pick a simple tutorial and follow it to the letter. When you're comfortable with how the interface works, download someone's workflow and try it out. It's really good.
Is there a way to plug in reference images to use in combination with a text prompt?
So which one is best if my GPU is 8GB but my RAM is 32GB? Would using the fp16 cause any issues/errors or is it better to stick with fp8?
Fp8 schnell
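Some back-of-the-envelope arithmetic behind that advice. The ~12 billion parameter count for Flux is an assumption here, and activations, the text encoders and the VAE need memory on top, so treat these as lower bounds for weight storage alone:

```python
# Rough weight-storage math, assuming ~12B parameters for Flux.
def weights_gib(params_billions: float, bytes_per_param: int) -> float:
    """GiB needed just to hold the weights at a given precision."""
    return params_billions * 1e9 * bytes_per_param / 1024**3

fp16 = weights_gib(12, 2)   # ~22.4 GiB: far beyond an 8 GB card
fp8  = weights_gib(12, 1)   # ~11.2 GiB: still spills into system RAM on 8 GB
print(f"fp16: {fp16:.1f} GiB, fp8: {fp8:.1f} GiB")
```

Which is why fp8 plus plenty of system RAM (for the offloaded remainder) is the usual recommendation on 8GB cards.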
Is ComfyUi necessary for Flux?
Getting this error. I've checked and re-checked, but no success...
"Error occurred when executing DualCLIPLoader: Error while deserializing header: HeaderTooLarge"
Any idea what the issue is?
Solved my own issue... I had to re-download the 3 files for the CLIP folder. I noticed their size was only 40mb each. I did a "save link as" and seems that didn't work right. FYI, use the direct download link arrow.
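For anyone hitting the same HeaderTooLarge error, a small check can tell a real safetensors file apart from an HTML error page saved by "save link as". A rough sketch (the file path is whatever you downloaded; the function name is made up for illustration):

```python
import json
import struct
from pathlib import Path

def check_safetensors(path):
    """Diagnose whether `path` looks like a valid safetensors file.

    A safetensors file begins with an 8-byte little-endian header length,
    followed by that many bytes of JSON metadata. An HTML page saved in
    its place decodes those first bytes into a nonsense length, which is
    exactly what produces the HeaderTooLarge error."""
    data = Path(path).read_bytes()
    if len(data) < 8:
        return "too small to be a safetensors file"
    (header_len,) = struct.unpack("<Q", data[:8])
    if header_len > len(data) - 8:
        return "HeaderTooLarge: likely a bad download"
    try:
        json.loads(data[8:8 + header_len])
    except ValueError:
        return "header is not valid JSON: likely a bad download"
    return "header OK"
```

A 40MB "model" that fails this check is almost certainly a saved web page; re-download with the direct download link, as the commenter above found.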
thanks great video
Glad you enjoyed it!
Man, I can't see the ComfyUI Manager button :( I did everything you did but it doesn't work. Please help me! :D
And I can't see the 'Load Lora' part
I don't know why but my queue is stuck at the "load diffusion model" step. In your tutorial it takes just a minute to render a picture, any clue how to solve this?
Use schnell, lowest settings, or get a faster machine. But I don't know your setup.
Thanks, great video. I have it set up but am wondering where the images save to (a folder in the Comfy files somewhere, or somewhere else?), or if you have to right-click to save every image?
ComfyUI/output
My Comfy does not show CPU, RAM % and time, how can I get this?
I need it too plz
it is called "crystools", by Crystian
Hmm any idea why my generated viking-woman with your setup looks cartoonish.. or painted (like a blockbuster movie poster)
Thank you. How did you get to see the system resources in ComfyUI below the Queue Prompt Button?
Crystools
Can I try with Flux with my Geforce 1660 Super 6 GB or is it too low?
When downloading the VAE it says "file wasn't available on site".
I have an RTX 3080 12GB and 32GB 3600MHz RAM, could I use the high spec safetensors?
Can i run this with i9 10850k and RTX 4060 8GB?
Yes, but check my new video on Flux NF4. You want that.
It is asking for authorization when downloading the Flux dev model.
On MimicPC, I can't select a ControlNet model for preprocessing (it appears as "none"). Can you explain why?
Does it work with fooocus?
I just tried FLUX based on the strength of this video, and the results were absolute rubbish.
I gave it a removed-background photo of my face, and told it to generate a video from it, image to image, and text to image. All three results were absolute rubbish that didn't even look remotely like me at all. The video result was the most bizarre, mutating my face into various amorphous rainbow blobs. I'm like, come on, even Pika or Hedra at their worst blows the doors off this rubbish.
I had that issue as well. Try with a CFG of 1 and 20 steps.
I followed all instructions (ComfyUI newbie) and just get crappy pixellated images back. Using Pinokio, Flux-dev runs well, but with fewer options. My machine is a MacBook Pro M3 Max with 128 GB RAM, so it shouldn't be a problem, and it isn't with Pinokio. Any ideas what I may have done wrong?
Wow! Thank you! Wonderful video.
Happy to help :)
ComfyUI keeps pausing for me once I try to run the workflow. No errors noted. When I press any key to continue, it closes. Anyone have any insight into what to do?
Getting a black image with nothing on it. Any fixes?
Your workflow doesn't look like the default ComfyUI workflow, and those connections result in errors on the default one. Would be nice if you provided some explanation of what you changed. For one, the "ModelSamplingFlux" node is not there by default.
Did you download the default FLUX workflow from the links? Not the default comfy workflow.
@sebastiankamph Thanks for the reply mate, didn't know you could download ready workflows from there. Many AI tutorials gloss over steps assuming the users are not cavemen 😌
@@m.a6416 see link in description for workflow links
When I generate the image, the output is always black. What could be the reason?
Tried it; took me 2000 sec for it to finish loading the anime girl and then it didn't even display her. Nah, I'm good, I'll just use it through Twitter with the restrictions. This ain't for me.
I definitely did something wrong. I get no errors when I press "Queue Prompt" but also nothing happens. I can see it in the queue, but my graphics card is doing nothing, same as the CPU.
Is it possible to upload selfie and use it as a foundation to generate nice photo for CV? Cheers.
I would recommend SD 1.5 for that right now. Flux needs controlnets, ipadapters and more first.
did everything as described - ERROR: Could not detect model type of: C:\..........\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\models\checkpoints\clip_l.safetensors
Nice 🎉
Very nice.
Hey Sebastian, at 4:57 I can't select those; there are no choices for those on my computer to fix the errors.
This tutorial is to generate on our own device, right? Not via API on another dedicated CPU? If not, does it require some tokens for each image generation, or is it unlimited? Thanks for all!
To generate these images like the author, you can install ComfyUI yourself, but there's actually another way. As a newbie who finds the installation process a bit cumbersome, I'm using MimicPC, an online AI tool that includes popular AI tools such as SD3 and Fooocus, which has been a great help in my career!
This is locally and unlimited.
I'm new to all this, how can I enable this search bar in ComfyUI?
When loading the graph, the following node types were not found:
FluxGuidance
Nodes that have failed to load will show as red on the graph.
Update comfy
Will this work with comfy UI in automatic 1111?
If it says in cmd while queuing a prompt:
"Requested to load Flux
Loading 1 new model
loaded partially 5697.2 39"
Is it because my VRAM is not big enough to load it completely, and thus my results are going to be worse?
I do not want to work with nodes - I have it in my 3d software. A1111 is the one. I wish ComfyUI would disappear..
And I don’t like a1111 or foocus due to the limited interface, so our opinions cancel and the world can continue on as it was.
You just use what you need to use; there are plenty of tutorials on how to get things running. It's not that hard. I used to use A1111 too, but since using Comfy I find it more flexible and faster: no lags, no crashes, it just works (or it doesn't if you have the wrong settings).
So if you don't like it, it should not exist? What a world that would be... Variety is the spice of life!
SwarmUI is a much nicer UI for Comfy that's similar to Auto1111.
Does it support face swap?
Is there a service like Phygital Plus where I can deploy this model online?
There is on Glif (limited free runs daily). You can create your own, or use the pro-version landscape or portrait glifs I created on there for free (prompt only); search my username Lorddryst. I'll be adding more advanced ones when I learn more about them. You can also use the pro model if you build your own glifs.
Hi, I'm new to this. Does this let you generate for free? What specs do I need to accomplish this?
Yes, all free. GPU 8gb+ minimum. More is better.
Thank you! Runs great :)
Great to hear!
5-10 minutes generation time on a 4090...
And me trying with a gtx 1650
When I install Flux, can I still use Comfy with SD1.5 and SDXL??
Yes!
test other scheduler + sampler combos - some of them are really nice
What is the minimum amount of VRAM to run this? Is 6GB enough? Is 8GB enough? Or does it need 12GB upwards?
I have seen people claiming to run it well on 8gb. I go over some of the vram options in the video.
16GB minimum was preferred by the creators, anything below will have long generation times when trying to generate high-res.
I've seen people writing they can run it on 6GB, but even with a small resolution and using the schnell model they take minutes to generate one image. It just doesn't seem worth it to me. Also, you really want to have at least 32GB of RAM.
The system specs you should have for smooth running is probably something like 24GB VRAM (more preferred) and 64GB RAM. The hardware hunger is real.
I don't know much about PCs, but I am on a 12th generation Core i5 with 16GB RAM and an NVIDIA 3050 with 4GB of VRAM. I am thinking about getting an external GPU like an NVIDIA 4070, but I am wondering if that might be useless for running a model like Flux because of my CPU. Does anyone know if a CPU like mine will be limited for this?
@@inglesconpelis270 Your CPU will be fine. Your 16GB RAM will be a much bigger problem.
I have no clue why yours has "manager" and mine doesn't
does this work in Fooocus?
Thanks for the tutorial Sebastian as always great work.
BTW I am using FLUX in a totally different UI, called SwarmUI. This tool supported the FLUX model immediately, and I can generate an image with the Schnell model in only about 40 seconds, and with the dev model in 3 minutes. I have an RTX 3070 Ti with 8GB VRAM. While in ComfyUI it takes 3 minutes for the Schnell model.
SwarmUI is the UI that will replace A1111; it is so optimized and well organized. I just hope you try it, Sebastian, and if you find it any good, cover it in your next video. I found very few channels that mentioned it.
Great tip! I have 2 videos on guides on Swarm. It's great!
@@sebastiankamph Oh I didn't know that, I will check them right away, thanks!
I hope you show FLUX in SwarmUI as well, it is much faster than ComfyUI when generating images using FLUX model.
step 1 : buy an actual high end computer T-T...
Are you an AI generated demonstrator?
Haven't used ComfyUI since the beginning of the year. Can't for the life of me get it to work now, even with a fresh install. I wish we'd move on from this unnecessarily complicated and tedious program.
I've had similar problems with Comfy too, especially with video workflows. The whole "install missing nodes" advice only goes so far, and YouTubers rarely address errors after that. However, I have a couple of Flux workflows working great and without any real issues getting them running.
If that still doesn't interest you, apparently SwarmUI works with Flux.
Where should you add the controlnet node?
It seems as of right now, the ControlNet can only be loaded from a separate branch. But once it's on main, you'll load it like any other ControlNet, I would assume: a Load ControlNet node into Apply ControlNet (with a Canny preprocessor).
Video is missing an example on how to use the new canny node...
I'm on an AMD GPU, didn't get it running yet.
Hi, how do I set up the % bar to monitor the CPU, etc.?
can you make a video about the flux controlnet canny from xlabs? i can't get it to work
Hey, I get this error: "Could not detect model type of ..models\checkpoints\flux1-schnell.safetensors". Maybe anyone has a solution to this?
Hey Sebastian is any of this possible with RTX 3050 4gb??
I don't think so, no. Use SD 1.5 instead
Whoa, I had no idea that it could be run with an 8GB GPU. I heard stories of people saying it took them over 2 mins per gen with a 3090, so I just completely dismissed ever trying since I only have a 3070. Maybe I need to give it a try myself and see what happens.
After Comfy fixed the VAE eating large amounts of additional memory (when using FLUX), it works!
Maybe I am using an old workflow; it does not work for me on a 3070. Will follow this tutorial to see if there's a difference.