Thanks for testing the cutting edge models and keeping us up to date. 🙏🏽
Thank you very much for the support, Sebastian!
Excellent work mate! Clean, concise and right to the point. I rarely comment on youtube videos. But you deserve a comment. Well done and keep up!
Glad you liked it!
Well explained, and it covers all the latest info as usual. Thanks a lot!
Thank you.
Just what I needed, thanks so much!
Happy to help!
Awesome video, thanks!
I got an error about multiplying a floating point number (time_factor) by a NoneType (zero for time), which does not compute. Python 3.10 allows you to write the annotation as int | None, but it should be coded as Union[A, B] on the return so that actual numbers are returned.
Simply adding a ConditioningSetTimestepRange of course didn't work.
Thank you for the comment! I haven’t encountered this error myself, but if I find a solution, I’ll be sure to post it here.
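For anyone hitting the same TypeError, here is a minimal sketch of the kind of guard that would avoid it; the function and parameter names are illustrative, not ComfyUI's actual code:

from typing import Optional

def apply_time_factor(time: Optional[float], time_factor: float) -> float:
    # Optional[float] is the same as Union[float, None] and, on Python
    # 3.10+, float | None. Whichever annotation you pick, the value still
    # has to be checked at runtime: float * None raises a TypeError.
    if time is None:
        time = 0.0
    return time * time_factor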
Thank you very much!
Thank you for watching!
Nice Job! 👍
Thank you!
Excellent video :) Bravo.
Thank you!
Great video! Could you upload your workflows? Thank you very much.
Thank you! In the description of the video, there is a section "Resources". Click on the updated workflows link. You can then drag either the dev or schnell image into ComfyUI to load the workflow.
I'm using a guidance scale of 2.5 for realism. I find that the default 3.5 scale gives the skin a plastic HDR sheen. The higher the scale, the less realism. But like you said, a lower scale sacrifices prompt adherence.
Yes, there's a sacrifice there. I'm thinking of a way to compensate for this. My idea is: what if you leave the guidance scale at 3.5 and use the DynamicThresholdingFull node with mimic_scale set to 2? Have you tried this?
Thanks for sharing, both of you! I will give these numbers a try.
Could you please write a tutorial on using PULID with FLUX🙏🙏🙏
Hello, currently PuLID is only for SDXL models. Once we get support for FLUX, I will do one.
Can you please explain: if we have 2 LoRAs and they are different persons, how can we use them without affecting each other? Think of a couple picture, for example.
Hello, maybe try a LoRA stack node. There are plenty in the Manager. You will have to test out which settings work. It's still early days for Flux and LoRAs.
Keep up the good work, man! Does the Flux dev version work on low VRAM at 512 size?
Thank you! Yes, I am using it with 4GB of VRAM generating images at 1024 size.
You don't need to download again if you already have the fp16 models. You can use the Flux model merger node to create an fp8 checkpoint.
Thanks for sharing.
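For the curious, going from fp16 to fp8 on disk is essentially a dtype cast. A rough sketch with the safetensors library (filenames are illustrative, torch.float8_e4m3fn needs PyTorch 2.1+, and this is not necessarily what the merger node does internally):

import torch
from safetensors.torch import load_file, save_file

# Load the fp16 checkpoint, down-cast its fp16 tensors to fp8,
# and save the smaller file. Filenames are placeholders.
state = load_file("flux1-dev-fp16.safetensors")
state = {k: v.to(torch.float8_e4m3fn) if v.dtype == torch.float16 else v
         for k, v in state.items()}
save_file(state, "flux1-dev-fp8.safetensors")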
Hello sir,
Your videos are great and very informative. I have a question, and after reading in your bio that you are a Python guy, I'm pretty sure you can help me out. I want to migrate my Python installation from my C drive to another drive. Can you please explain how to do that without breaking everything, like my locally installed ComfyUI?
Waiting for your comment
Thank you! If you only want to move ComfyUI, you can move the entire ComfyUI folder if you have the portable version. If you installed ComfyUI manually, I think you will have to reinstall it on the other drive.
@@CodeCraftersCorner I'll give you an idea of why I want this: I want to move Python to another drive, not ComfyUI. When I run ComfyUI on my laptop, the problem is that Python is on my C drive, and when I load models, the C drive starts doing reads and writes and gets hot. My Wi-Fi card is right below it, and the heat causes the Wi-Fi to randomly shut itself down. So I'm wondering: if I move Python to another drive so that the other drive gets used instead, would that avoid the problem? Let me know if it's possible.
@GamingLegend-uq3on In this case, the problem might still persist. Python itself doesn't generate much heat, but image generation and heavy processing can cause your CPU/GPU to heat up. Moving Python to another drive won't change the fact that ComfyUI will still use your CPU/GPU for processing, which might lead to the same heating issue. But if you want to give it a try, you will have to reinstall Python on the new drive and update the Python entry in your PATH environment variable.
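If you do try the reinstall, a quick way to confirm which interpreter is actually being picked up after changing PATH (a small sketch; paths will differ on your machine):

import shutil
import sys

# The interpreter running this script, and the first "python" found on
# PATH; both should point at the new drive after the reinstall.
print(sys.executable)
print(shutil.which("python"))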
thanks again man
Thanks for the support!
Change the values in ModelSamplingFlux to Max Shift 0.5 and Base Shift 0.3; this will also give you more photorealism. And of course, you can also lower FluxGuidance to 1.8-2.3.
Thanks for sharing, I will give it a try!
I tried these settings, and it just made the image blurry. Any help greatly appreciated. Thank you!
@chiptaylor1124 If it helps, I am using the default values with more emphasis on the prompt.
@@CodeCraftersCorner Thank you!
Hey, nice! What about the ControlNet?
Hello. For now, there seem to be more issues with the ControlNet than good results. You can monitor the progress by going into the description: under the resources section, there is a "controlnet issues" link.
Great video -- thanks!
I'm having an issue running this workflow; I get this error: "ERROR: Could not detect model type of: C:\comfyui\ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\models\checkpoints\flux1-dev-fp8.safetensors"
Any idea how to fix this?
Thank you. For the model error, you will need to download it and place it in the folder ComfyUI > models > checkpoints.
What extension are you using to preview nodes?
It's a ComfyUI update. I made a video on it and explain how to set it up here: ua-cam.com/video/8an9mkpDS2o/v-deo.html
Can you use it with schnell?
Hello, for now the LoRAs are only for the dev model.
Hello,
Can you tell me why my Flux schnell is a .sft file? I put it in models/unet, but I don't understand why my schnell is .sft and yours is not. Will it also be different from the dev one?
Hello, the .sft file only has the UNET component. To get the safetensors one, go into the description of the video and click on the "Updated workflows" link. This will take you to a post. On the first line, where it says "checkpoint for the Flux dev here", click on the "here" link and download the safetensors file. Place it in the models > checkpoints folder.
@@CodeCraftersCorner Thank you for the detailed answer. Please tell me, what does it mean that I only have the UNET component? I got the files from a Flux workflow in the first days of release, and I don't know very well what I'm doing. I'd really appreciate understanding a bit more.
Thank you!
@@Atenea-u4x There are two ways of using the Flux models. The .sft files you have require a specific workflow. I made a video explaining them; I think it would be easier to follow the video than to explain it in text. You can watch it here: ua-cam.com/video/OaS9hFD_xz8/v-deo.html
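For anyone who wants to check what a file contains themselves, here is a small sketch using the safetensors library: a UNET-only file like the .sft carries just the diffusion model weights, while an all-in-one checkpoint also packs the VAE and text encoders. The filename and the "vae." prefix below are assumptions; exact tensor names vary by packaging.

from safetensors import safe_open

# List the tensor names stored in the file and look for VAE weights;
# a UNET-only file will have none. (The "vae." prefix is an assumption.)
with safe_open("flux1-schnell.sft", framework="pt", device="cpu") as f:
    keys = list(f.keys())
print(sum(k.startswith("vae.") for k in keys), "VAE tensors found")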
What is that IDE you use with the icons on the left? Can you please tell me where I can download it?
It is the Edge browser. I changed the setting to show tabs on the left side instead of at the top.
Can I simply use a custom-made LoRA inside ComfyUI with Flux? I'm a noob.
Yes, add your LoRA to the ComfyUI > models > loras folder.
The new models that are half the size and supposed to be faster are taking much longer than the original 22 GB version (dual CLIP workflow). The original version takes 2 minutes per image on my PC, but the new, supposedly faster version takes about 15-20 minutes. Something is really off here, but I'm not sure what. My ComfyUI is up to date, and I used the same workflow you showed in the video. Why do you think this is?
Hello, not sure! Anyone else with this issue?
If you mean the GGUF versions, it's the same for me, and I don't understand why. I'm on a Mac M3, how about you? I was wondering if it's a Mac + GGUF issue.
Is this only working on FP8? I'm using FP16 with a 3090, and it's throwing errors that I don't understand.
It should work for both dtypes. Not sure!
Can you please make a video on how to make a professional profile pic using Flux? You could replace the face in a professional AI image.
I will try.
When can we use more than one LoRA?
Not sure!
nice, but bro pls use night mode, those white site scenes are blinding me! ;)
Thanks for letting me know! My system automatically switches to light mode during the day and dark mode at night, so I didn’t realize this. I'll make sure to adjust it in future videos.
The workflow is not working; it's showing me the error "failed to fetch". How do I solve this problem?
Hello, you can try to update your ComfyUI. If you have the portable version, go into the updates folder and open the update_comfyui_and_dependencies.bat file. Once completed, start ComfyUI and try again. Updating through the Manager alone seems to not work well.
I have the same mouse arrow color :)
Nice!
whats your pc specs?
Yes I also want to know the specs.
I have a GTX 1650 4GB VRAM and 32 GB of RAM.
There's a negative prompt workflow on Civitai now. 🙂
Thanks for letting me know.
I don't know why, but my generated image looks grainy. I am following the exact workflow given on the website. Anyone facing the same issue?
Not sure! Have not seen this one before.
LoRA key not loaded?
Hello, can you try this one: flux_RealismLora_converted_comfyui. There is a link in the description of the video.
Still requires insane VRAM?
Yes, for fast generation. I am using a GTX 1650 with 4GB VRAM and 32GB of RAM. It takes about 10 minutes with the schnell model. The dev model takes way too long, but both work.
I get a slightly more realistic image at denoise 0.9 instead of 1.
Interesting! Thanks for sharing. I will give this a try.
Anyone else got an error message when trying to use the ControlNet?
AttributeError: 'NoneType' object has no attribute 'keys'
Yes, it seems it is broken. They made a code branch separate from the main branch; it seems more experimental, for developers, for now. You can see the progress and how to get the ControlNet branch here: bit.ly/4cqA1dt
No way of fine-tuning yet?
There's a model merge of schnell and dev for now.
Hi
Hello
I think Flux is not available for commercial activity, right? And this is from a private entity: we are training their model, debugging it, and getting used to it without a free and stable future of development, and we don't even have commercial rights, right? So is our time not valuable, or are some YouTubers just making money by spreading this?
Flux schnell is available for commercial activity. The dev model is under a non-commercial, research-purposes-only license. Fun fact: the output (generated images) from the dev model can be used independently for commercial purposes.
Dev is better, but it can't be used commercially.
While the model itself is strictly for non-commercial research only, the images you generate with it are free to use for commercial purposes.
So many bad Flux tutorials on UA-cam. Why not just include a link to the ComfyUI workflow in the description?
Thanks for watching! You may have missed it but there is a link in the video description under the [RESOURCES] section that says "Updated Workflows".
Probably the main criticism of Flux as of now is that it can't do realistic images very well. I tend to disagree, but I've got mixed results. Anyway, judging from your screens, it looks like that LoRA is either undertrained or just garbage.
Hehe, give it a chance, it's very impressive so far, miles better than the crud of SD3, good blooming riddance! XD
Yes, although it's not an all-in-one perfect model, flux is better than any previous models so far.
@CodeCraftersCorner "better" at text, and better at adhering to certain prompts, but simply can't do certain things that would seem basic. I don't want to go into it right now, but Flux is a good start but I'll take SDXL with all the tools we have until Flux has more lora and the like.
I disagree with those people as well; the images have come out looking very realistic. This is the most realistic I've seen images look since I started generating AI images. I can't see myself going back to anything outside of Flux.
@@AIImagePlaymates-r7h Definitely, the images are awesome, and when it comes to realism, I don't really need to adjust any particular settings; I just add more details to the prompt. It's great! :)
DoRAs, I read, are better than LoRAs. Maybe a topic for a next video. Cheers!
Thanks for letting me know! I will look into it.