Stable Diffusion is FINISHED! How to Run Flux.1 on ComfyUI
- Published Nov 29, 2024
- 📢 Ultimate Guide to AI Influencer Model on ComfyUI (for Beginners):
🎓 Start Learning Today: rebrand.ly/AI-...
⭐ Get All Commands & ComfyUI Workflow: rebrand.ly/Aic...
------------------------------------------------------------------
🌟 1 - Clip: huggingface.co...
🌟 2 - Vae: huggingface.co...
🌟 3 - Unet: huggingface.co...
🌟 4 - Workflow: openart.ai/wor...
Learn how to install and utilize the groundbreaking Flux AI on ComfyUI in this step-by-step tutorial! We'll cover everything from downloading the necessary Flux.1 weights and CLIP models to navigating the ComfyUI interface and generating stunning, high-quality images from text prompts. Discover why Flux is being hailed as the Stable Diffusion killer, surpassing even Stable Diffusion 3 in image detail, prompt adherence, and artistic style range. Whether you're a seasoned AI artist or new to AI image generation, this guide will equip you with the knowledge to unlock the full potential of Flux AI and elevate your creative workflow. Don't miss out on the future of AI-powered art!
------------------------------------------------------------------
➜ Join our new Discord Server: / discord
𝕏 : X.com/Aiconomist1
Ai Models:
/ elara.ravenna
/ bonnie.vargova
For Business:
➜ aiconomistbusiness@gmail.com
📢 Last chance to get 40% OFF my AI Digital Model for Beginners COURSE: aiconomist.gumroad.com/l/ai-model-course
@1:43
* Requires Nvidia graphics card
* Minimum 12GB VRAM
* At least 32GB system RAM
Not so fast.
* Can be run on CPUs and AMD GPUs too (though it's painfully slow)
* GPUs with 8GB VRAM can also be used (tested with Nvidia 3060)
* Even 16GB system RAM is enough (but make sure you're using FP8 mode).
Not the most ideal setup, but it gets the job done.
Thanks for the heads up, I was about to wait until I bought a 3090 lol
Slower than a turtle... lol, takes like 30 min to load. Of course, I don't have the necessary specs on my PC... 🙂
I have RTX 3080 12GB + 32GB. I'm gonna fire this up because FLUX is damn good. I'm just wondering if we can make images with our own character references. Maybe not yet?
@@bigbrotherr ya it works so well
@@bigbrotherr I'm not sure, but if you use your Flux gens on a ComfyUI node for inpainting or face swapping, would it work? I barely tried Comfy a few weeks ago, still on a1111 :)
It works on a 3050 4GB with 32GB RAM; I had to use the schnell fp8 model with a smaller resolution (768px) to get it to around 1 min 40 sec. The big benefit of this model is that it gives good results, no deformed humans. So the long time to generate a picture is worth it.
Hi brother, I'm using the same config. Did you also download the 23GB file?
@@ayyappanaga7982
The model I'm using is around 12GB. It's called flux1-dev-bnb-nf4-v2.
You realize it's a game-changer after generating a couple of images. The prompt is understood very well in Flux, which didn't happen before: hands with 5 fingers, natural poses, crazy stuff. Can't wait for more updates.
Great video! With the same settings, on my 16GB RTX 4060 Ti, it takes 45 seconds to generate a batch of 4 images.
So blessed you are to have bought a 16GB RTX 4060 Ti. I envy people who have it
The quality is great, but I don't see it becoming very popular until it can be used with ControlNets and IP-Adapter. It doesn't recognize negative prompts either. Until then, SDXL and SD1.5 will continue to dominate.
Very true. I was thinking the same.
What about consistent styles?
Please help, I always get a black image in the output. ComfyUI runs without any error, but I get a black image no matter what.
0:13 => "...another company was working in silence." Good one, bro. Same people, but they needed to start a new company because they are being sued by Getty and it's looking like that lawsuit will end Stable Diffusion. Flux.1 is basically Stable Diffusion 3.1 with more model data.
I use the Flux schnell model with my RTX 3060 12GB and it generates just fine. The team behind Black Forest worked on Stable Diffusion and Midjourney, so they really know their stuff.
For weight dtype, if you're on a 24GB VRAM card you should be on default instead of fp8 for higher quality; you didn't say anything there, so I thought I'd bring it up. Good tutorial vid for newbies otherwise. Sizes up to 2 megapixels (1920x1080, for instance) work.
It's crashing on my setup. I have a 3090 with 64GB RAM, and it only works with fp8.
@@neoneil9377 idk what to tell ya... close all other programs? I literally have nothing open but Edge and ComfyUI and it works flawlessly on my 3090; everything properly loads into VRAM on fp16.
Second link doesn't work : )
confirmed, it 404s
Just for the record: I've tried this on my home build with just a GTX 1070 (8GB VRAM) and 32GB system RAM, and I'm getting a 1024x1024 image in about a minute WITH Photoshop running in the background and two browsers open.
Why are you giving me hope with the RTX 3050 4GB GPU in my laptop!!!😮
Crashes on ComfyUI, but 5 minutes on ForgeUI for GGUF with an RTX 4060 8GB.
Any way to fix "module 'torch' has no attribute 'float8_e4m3fn'"?
Using the RunPod setup you covered in your previous video.
Pretty steep hardware requirements, need a pruned version with MPS support (Metal 3).
It will come I'm sure.
Agree.
Not really, I run the schnell model on my home PC with only an RTX 3060 12GB.
@@TPCDAZ MPS is macOS support.
@@rmeta3391 I know but macs are not really made for any of this. That doesn't mean that the hardware requirements are steep.
Hello, when are you going to launch the "Ultimate Guide to AI Digital Model/Influencer on Stable Diffusion ComfyUI (Beginner Friendly)" course? I'm waiting for it. Can you tell me the exact launch date?
Followed the steps, but it doesn't work for me on a MacBook Pro M3.
How do you get to the Manager Menu?
I was watching "How to Create Coloring Book For KDP using AI for FREE (2024)". Where did it go?
Is there any lite version of the model, under 7GB like SDXL? 😣
The second link, for the VAE, does not exist anymore.
Do you know how to install it in Stability Matrix? The Models folder, which is shared with other models in Stability Matrix, does not have a Unet folder like regular ComfyUI.
I have a 3080 Ti with 12GB VRAM and 32GB system RAM. For some reason it crashes at "got prompt"; I tried FP8. Other Flux checkpoints give me the same error, but other models work. I downloaded the latest standalone ComfyUI.
Are you using the Flux-specific nodes?
Thanks, can we add LoRAs and checkpoints for anime style? What do you recommend?
It runs nice on my RTX 3060 12GB. Generating one image 1024x1024 in 35 seconds. 4x batch 92 seconds.
How much ram are you using? ddr4 or 5?
@@sandeepsaurabh8181 32GB DDR4 3600mhz
I have one AI model already made with RenderNet. I'm using that face on ComfyUI with the Epic Realism model and ReActor face swap to create different images: same face, different places. Is there any way to use Flux Dev to do the same thing? I searched but couldn't find a way to use my model's face to create pictures. I'm starting to think SD is still the better option.
The workflow does not open with a "Load Diffusion Model" node. It has a UNETLoader node, which doesn't work if I'm using one of the safetensors models...
I use the Dev fp16 on a maxed-out M1 Max and it runs pretty well, though slower than SDXL on Fooocus.
Are you guys able to control the camera angle in Flux.1? Whatever instruction I give it, it just ignores it completely.
I have, using standard photography lingo. E.g. "Photo is a high-angle mid-shot".
@@bentontramell I've tried wide-angle shot and I've tried to give it a focal length, but it just won't work. Something else I've never been able to do is get someone lit only by a single light source, like a flame. Flux just adds lights everywhere. And without negative prompts it's even harder. But it's a V1 and it's quite amazing. Looking forward to v2.
I don't understand. When I try to queue the prompt after doing all the other steps, the terminal goes (env) (base) H:\Research\pino\api\comfyui.git\app> and the browser disconnects. Help.
Thanks to the author for the video and links.
I have a GTX 1080 Ti with 11GB VRAM. I am running the Flux models with the fp8 T5 at 1024x1024 resolution, at around 25 s/it. Quite slow (SDXL/SD3/Kolors by comparison are around 2-3 s/it), but it isn't unreasonable tbh, given that I am using the full 23GB Flux models (tried both schnell and dev) with only 11GB VRAM; it is to be expected.
Edit: The point of my comment was that even on 11GB VRAM I am able to run the models. I also only have 32GB system RAM, so it is probably pushing the limits, as it completely saturates my VRAM, but it works with less than 12GB.
When I open ComfyUI there are no Manager and Share buttons?
I have a 4060 Ti 8GB unfortunately; can it run this well?
where to get working ComfyUI for PC?
I cannot download ae.sft, but I can find ae.safetensors, so I renamed it to ae.stf, but I'm getting an error:
Prompt outputs failed validation
VAELoader:
Value not in list: vae_name: 'ae.sft' not in ['taesd', 'taesdxl', 'taesd3']
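For anyone hitting this: the "Value not in list" message just means ComfyUI could not find a file with the exact name the workflow asks for in its models/vae/ folder, so the stored choice fails validation before anything runs. A rough sketch of the check (hypothetical code, not ComfyUI's actual implementation):

```python
def validate_vae_choice(vae_name, available):
    """Mimic the VAELoader's 'Value not in list' validation (a sketch).

    ComfyUI builds `available` by scanning models/vae/ on disk, so a
    file that is missing, misplaced, or misspelled (e.g. ae.stf vs
    ae.sft) never appears in the list and the queued prompt fails.
    """
    if vae_name not in available:
        raise ValueError(
            f"Value not in list: vae_name: {vae_name!r} not in {available}"
        )
    return vae_name
```

So the fix is to make the on-disk filename match the name the workflow node expects (or re-pick the file in the node's dropdown), then hit Refresh.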
Wonderful, truly wonderful!
I wish it could run on cloud GPU like sagemaker, need help for the notebook
the second link does not work
Thank you so much! What do you do with the workflow file that has been downloaded?
Please follow the steps; you need to download the weights to get the workflow running.
@@Aiconomist Thank you so much for your help. What do I do with this file? its the last step folder download. workflow-flux-simple-workflow-schnell-40OkdaB23J2TMTXHmxxu-reverentelusarca-openart
@@InterestingLifeTravels On the official "Flux Examples" webpage you have two pre-generated images (an anime girl and some bottle on the table). Just drag and drop one of these images on top of your ComfyUI, depending on whether you're using Schnell or Dev model. Then the default ComfyUI workflow for your model will show up.
@@InterestingLifeTravels I figured it out: in the ComfyUI interface click on Load, then select that workflow file (.json) and the UI will look like in this video.
Nothing happens: after clicking Queue Prompt, the green circle stays on the first box where you enter the model.
The CMD says:
got prompt
model_type FLOW
model weight dtype torch.float8_e4m3fn, manual cast: torch.bfloat16
RTX 3060 Ti (8GB, I think?)
I don't mind waiting a bit longer, but nothing happens.
Same for me, did you fix it?
It works on my 2070 Super with 8GB VRAM and 32GB system RAM; it only takes 45 seconds.
Which one is the developer model?
Followed this video, got the results. Thank you for sharing!
Doesn't work: "Error occurred when executing DualCLIPLoader". HELP
Error occurred when executing SamplerCustomAdvanced:
FP64 data type is unsupported on current platform.
How do I turn off low VRAM mode? I added the --normalvram flag but it's still in low VRAM mode.
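For reference, ComfyUI's VRAM strategy is chosen with mutually exclusive launch flags (flag names as of recent builds; check `python main.py --help` on your install to confirm). If `--normalvram` seems ignored, ComfyUI may still be falling back to offloading because the model genuinely does not fit in VRAM:

```shell
# ComfyUI launch flags controlling the VRAM strategy (pick one):
python main.py --normalvram    # force the default strategy
# python main.py --lowvram     # aggressively offload weights to system RAM
# python main.py --highvram    # keep models resident in VRAM
# python main.py --cpu         # no GPU at all (very slow)
```

On the Windows portable build, add the flag to the line inside run_nvidia_gpu.bat instead of invoking main.py directly.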
The default female model is Asian; you have to specify the desired race in your prompt. Additionally, hands are bigger than they need to be.
Reconnecting error...
Help me, when I try to queue the prompt it says reconnecting forever.
Does this work on Mac?
You downloaded the workflow script and then immediately showed how to launch ComfyUI. So where did you put the script? Be consistent please!
ahh
nvm I found the solution
@@OpenTo-rc2pg Any chance you could help a guy out?
@@Beastly477 I found the solution. don't worry.
@@OpenTo-rc2pg Yeah me too, Timeshift
Can you please make a tutorial to run it on the cloud?
I always get this bug! :( TypeError: Failed to fetch.... someone please help me!
1:44 is not true, I ran it on my AMD RX 6800 16GB with ROCm. It took 460 seconds for one 1024x1024 image with Flux.1 dev. It's slow, but it works :)
You're right, I run everything with the 6800 XT: SDXL, Pony, 1.5, FaceID, InstantID etc. Good to know it works with Flux as well.
Use Nvidia GPU for faster generations
@@bigbrotherr big brain answer
lower the steps.
I have a 3070 Ti laptop with 8GB VRAM, 16GB RAM, and a 6800H.
Will it work?
Pls help
I see that the schnell model is under the Apache 2.0 license. Isn't that completely free for commercial use? I'm confused because they wrote that schnell is tailored for local development and personal use.
Do you know what these actually mean?
Is the Dev one better?
Can it use negative prompts to get rid of extra limbs?
No, it doesn't seem to take negative prompts, only positive. I tried many times.
It says reconnecting every time I try to run the prompt.
Thanks for the video, but I don't know why I get this error "module 'torch' has no attribute 'float8_e4m3fn'"
I updated Comfy and even updated my torch version after getting this error, but still no go. Comfy works for everything else, but not Flux :(
128gb ram 24gb vram windows 10, python 3.10
It says you didn't download the float file or didn't place it in the correct save path.
@@MrGenius2 thanks, but I did. The only thing I can think of is that some of my models are in the automatic1111 folders and I added them through the extra_model_paths.yaml, but I placed all the Flux models in Comfy. Which file specifically do you mean? I placed them all.
Whenever I have issues with something new in ComfyUI, I just create an additional ComfyUI portable. I tried installing Kolors on my first ComfyUI and had issues, so I went to my third iteration of ComfyUI portable, which had way less stuff installed. It worked.
@@Artishtic yes, I guess that's what I'll have to do. I wanted to avoid that. Thanks.
I got the same error. I then tried using the fp16 version of the CLIP and that worked (though it took like 25 minutes on my rig, which doesn't have as much RAM as yours).
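On the "module 'torch' has no attribute 'float8_e4m3fn'" errors in this thread: that attribute was added in PyTorch 2.1, so the message usually means the Python environment ComfyUI actually runs from (portable builds ship their own embedded Python, separate from your system install) has an older torch build. A quick self-contained check, written so it also runs where torch isn't installed:

```python
import importlib.util

def torch_supports_fp8() -> bool:
    """Return True if the installed PyTorch exposes float8_e4m3fn.

    The dtype was added in PyTorch 2.1; fp8 Flux checkpoints need it.
    """
    if importlib.util.find_spec("torch") is None:
        return False  # torch not installed in this environment
    import torch
    return hasattr(torch, "float8_e4m3fn")

print("fp8 dtype available:", torch_supports_fp8())
```

If this prints False inside the same environment ComfyUI launches from, upgrading torch in that environment (not the system one) is the fix.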
Thank you for making the tutorial video. This is my first time installing ComfyUI to test Flux1. I followed all the instructions and installed everything, but when I click on 'run_cpu.bat' (or 'nvidia'), sometimes the ComfyUI screen appears, and sometimes it doesn't. When it doesn't appear, the last command line shown is as below, and when I press any key, it just disappears and nothing happens. I've tried many times but still can't open it. I don't know what this error is. Could you please guide me on how to fix it? Thank you very much.
D:\ComfyUI\ComfyUI_windows_portable>pause
Press any key to continue...
Same problem for me too... did you find the solution?
Off-topic question, please don't laugh :) : Would you know any top-notch 2D image to 3D mesh AI method by any chance?
How is this model for img2vid models?
It's too slow even on a 4090 right now. Looks promising but needs unhobbling bigtime.
is it as slow or slower than Midjourney?
I use the Schnell version of the Flux model and FP16 mode on my 4080, and the 1024px square image is generated in 8 seconds only. I don't consider that too slow, especially for the first release. But on a Dev model, it needs about 28 seconds, and that one might be slow for some. But the difference is not huge, so I'm using the Schnell model in the end. Even the difference between FP8 and FP16 modes is not so great, so systems with less than 32GB RAM are also fine.
@@polystormstudio I don't run Midjourney on my machine.
Probably doing something wrong cause here on 3090, it takes less than 10 seconds per image at 1280x1024
@@context_eidolon_music just asking because when I used MJ, it was incredibly slow at peak times. I cancelled it because I can't rely on an online generator.
As slow as Flux is for you, do you think the image quality is worth it?
Huggingface links aren't working at all.
Amazing!! Love you
What about the CPU mode?
flux is mind blowing!
I have downloaded all the files and put them in their right places (IMHO), but when I start the ComfyUI server I just get my usual SD nodes, nothing about Flux.1?? I have reset Chrome etc. too. When I try to Load, nothing is shown to select. Wonder what I have done wrong, please??
Hmmm, then I (re-)installed everything using the Manager (models 311++). Refreshed, and now I get something when I try to load (flux 1-dev.... and flux 1-schnell...), but trying to load one of these I just get: "Unable to find workflow in flux-schnell" etc.??
Did you drop one of the three sample images from the hugging face website that has the flux-specific nodes into the workspace to get started?
@@bentontramell Yes - after a couple of days I found out that the workflow is stored inside the .png files. But Thank you.
*ae.SFT, not SLT. Just a quick note - Amazing video!
will this work on 6GB Vram laptop?
Nooo, you'd need a cloud GPU service like RunPod, Massed Compute... steep VRAM requirements: 12GB+ VRAM / 32GB system RAM.
It may run on an RDNA 3 APU laptop or mini PC if it has 32 or 64GB system RAM; Windows allocates RAM as VRAM.
can we use this on a1111?
I'm not sure, please visit the Stable Diffusion subreddit; you can find updates there.
No you can't yet
As long as Flux doesn't allow NSFW generation, Stable Diffusion will still be the bread and butter.
How about on Mac
Get a real pc
You call the file ae.slt, but it is ae.sft
The model is too big for my PC.
What about negative prompts to avoid constantly getting supermodels with tons of make-up on their faces? Also, only 1 out of 10 pictures for me has correct limbs and hands. Stable Diffusion won't be buried for a very long time. I tried this prompt and all I get is garbage: "an astronaut sitting on a rock on moon and playing guitar. in the background the rising earth wich is in an atomic world war."
Isn't the prompt a bit crazy? AI seems to suffer when you give it too many details (sitting on a rock + playing guitar). I would try only sitting on a rock, or only playing guitar, for better results.
Can't get this to run using any online tutorials. Very weird.
SAI is definitely finished, but SDXL is going to stay for a while. The problem with Flux.1 is that the dev and schnell versions are distilled from their API-only pro model, meaning traditional fine-tuning methods are not going to work. So at best it's a Midjourney alternative with few community fine-tunes and LoRAs, unlike SD1.5 and SDXL.
Why is some generic pose such a great mark of this model, just wondering? Yes, the model can understand the meaning of what you are looking for quite well. But most models can often generate these super generic stock-photo poses, while failing at less common poses, or worse, being very limited with view angles and different FOVs, let alone open mouths with teeth, crossed hands and such, because those images weren't in the training data set or the model simply can't handle such small details. But this model definitely doesn't mangle the poses and seems to be way better with hands; I saw some pretty consistent and varied yoga poses on Reddit, not sure from which of these models, but anyway way more consistent than any of the SD models.
To say RIP to SD is quite an overstatement... unless Flux implements ControlNet, it will never match up to SD.
Controlnet and lora waiting room
right, I'm off to buy a new pc
Well, isn't that group basically, at least partially, former Stability AI developers? You talk like this was some unknown group of devs. And I wouldn't care about their licenses as long as they are not transparent about their data sets and how they acquired those images (though I have no idea if there is info on that).
Confirmed working with
1660 non super 6GB
16gb RAM
😮
Are you sure about this?
@@GamingLegend-uq3on Yes, it's my own machine.
How long does it take to generate an image?
It doesn't do celebs. It is very weak on artists' styles. It's slow. It takes a lot of resources.
Let me know when it can run fast on a user pc that doesn't require your life savings. I'm sure Flux is great. It will not be popular in this form.
The image quality is amazing, but the hands are still not perfect and look disproportionate.
Can flux make NSFW pictures?
Unfortunately, as long as ControlNets don't work, it's just an interesting toy.
It launched yesterday.
@@ritpoptrue but still no ControlNet; not usable for me.
You wanna maybe give them a second... it came out yesterday.
@@generichuman_ pretty exciting times! An SD model equal to MJ? WOW!
Give me a time bro 😮❤
can it do NSFW? (asking for an AI friend XD)
Booooooo.
Can you run Flux on ComfyUI through RunPod? I don't have a machine that meets the requirements for fast gens.
I love this model. Fantastic quality!
Prompt outputs failed validation
VAELoader:
- Value not in list: vae_name: 'vaeae.sft' not in ['FluxVAE.safetensors', 'ae.sft', 'taesd', 'taesdxl']
DualCLIPLoader:
- Value not in list: clip_name1: 't5xxl_fp8_e4m3fn.safetensors' not in ['clip_l.safetensors', 't5xxl_fp16.safetensors']
Mine errors saying "flux" isn't a type in DualCLIPLoader. =(