HELLO HUMANS! Thank you for watching & do NOT forget to LIKE and SUBSCRIBE For More Ai Updates. Thx
❤❤❤
👋
Should I buy a 4090 or wait for a 5090?
@@marcshawn wait my boy. 4090 is cool but the 5090 is coming, just wait
I think you would get a lot of use out of the "fast group bypasser" node, from rgthree, in your workflows. It lists all the group nodes in a single node with a slider to turn them on and off individually.
I see a lot of people talk smack in your comments. You are the best content creator for beginners to intermediate on all of UA-cam. I have watched lots of content from the top 8 or 10 content creators and most are just fine, but you are the GOAT for beginners hands down IMO, and top 3 for the intermediate crowd. Keep on keepin' on, sir.
Thank you, I really appreciate the kind words :)
Warning: Missing Node Types
When loading the graph, the following node types were not found:
workflowFlorence
workflowFlux Controlnet
workflowNodes 3
workflowXLabs Sampler
No selected item
Nodes that have failed to load will show as red on the graph.
same, did you fix it?
@@miguelarce6489 I couldn't, but it still allows me to do basic text to image.
This video deserves 3 times as many likes! Thank you for making this available to the community for free!
Perhaps I missed it, but I don't see how this is uncensored? Flux has always been pretty bad for NSFW.
Plenty of loras and new trained models for that now
there are also "unchained" flux models
People said the same thing of SDXL. And then PDXL (Pony) happened. And then people said pony couldn't do as well as SDXL for anything other than porn. And now Pony surpassed that even.
Flux will be so much better very soon. I'd say in less than 6 months it will have enough people who trained, modified, and hacked it to the point that it'll do whatever you want. But it takes time - Flux just released and we're already seeing some amazing progress.
@@ripleyhrgiger4669 I'm confused, that model doesn't seem to be a realism model, so why are you talking about it?
@@xalzor740 Pony Realism has entered the chat...
I did everything as instructed, I believe, even refreshed after the white screen part:
Prompt outputs failed validation
DualCLIPLoaderGGUF:
- Value not in list: clip_name2: 't5-v1_1-xxl-encoder-Q8_0.gguf' not in ['ViT-L-14-TEXT-detail-improved-hiT-GmP-TE-only-HF.safetensors', 't5-v1_1-xxl-encoder-Q5_K_M.gguf', 't5xxl_fp16.safetensors']
sometimes it changes to this issue instead: 'NoneType' object has no attribute 'device'
Did you find a solution? I have the same problem with missing clips.
It works! Flux's LoRAs are SO much better than previous models'. Before, I had to post-process the image so much it wasn't making sense anymore. Now it's exactly my subject, with perfect pupils and the right amount of fingers 🤣 Thank you so much! It would be nice to have some insights on Flux LoRA training next, to make them even better and fine-tuned.
It's not the prompt in the upscale that is "wrong", it's the high denoise value :)
Hey, that's a great compilation of handy Flux workflows and models. Over the last couple of weeks, I've spent countless hours with Flux, and I'm glad that you've decided to clear up some of the confusion surrounding the release of Flux and its various models. It would be great if, in the future, there were better recall functions for different Comfy UI presets and workflows, as well as a better folder structure in Comfy UI. It would be awesome not only for first-time users but also for semi-professionals. Have Fun! 😉
For a 4GB card or lower, use the NF4 version that has HyperSD 8 steps merged. Don't use the GGUF version, it's slower.
I don't see where the workflow is to download?
It's locked behind his Patreon paywall. It's not free basically.
it's on his patreon
It's in the description now, in case he didn't update it previously.
@olivierniclausse1791 the workflow is on his patreon for free
it is located on his patreon for free
Thank you this is very cool! I would like to see an update on the state of AI music or image to video
It is slow, but Q8 will run with 12GB VRAM. I'm using an RTX 3060 with 12GB and Forge.
Same GPU. I use Q8 and Q6 for the text encoder. 20 steps takes almost 2 minutes.
With flux schnell I generate images on a 3060 in 18secs
Thanks for all the work! As a true master you managed to simplify ComfyUI 👍
I appreciate that! ;)
What an amazing work. The best workflow for ComfyUI I have ever seen, thank you!
Thanks, took me a while to get it right
💯
OK, will give this a shot. Flux has always given me issues; I never could get the hang of it, and it's resource intensive. This looks like you put a lot of work in to make it as simple as possible. They are BIG models, so it looks like this will take some time to install. Appreciate the way that you have modeled the interface on A1111; this will make a huge difference in engaging workflows and help guide clarity and understanding of the process. Great video and explainer!!
I think I recognized a slight French accent, so I'm commenting in that language :-) Fabulous video: the presentation is clear, educational, and humorous, which doesn't hurt. The links are all present in the presentation or on the Patreon page. In short, a success. With a video like this, it's impossible to fail at installing this new Flux Q8. Surprise: it is much faster than the previous Flux versions (with an NVIDIA GeForce RTX 3070), and the result is magnificent. A big thank you to Aitrepreneur.
It would be fantastic if you added an 'SDXL aspect ratio' module to choose the format you need, so that the height and width update with the correct values.
it's not uncensored at all
I've run "Install Missing Custom Nodes" and I still get:
Warning: Missing Node Types
When loading the graph, the following node types were not found:
workflowFlorence
workflowFlux Controlnet
workflowNodes 3
workflowXLabs Sampler
thank you for the content. Always learning something new :D
I love your work so much, it's all I want. I feel so happy using Comfy now.
well done, this time you have outdone yourself. thank you very much!
I'm running 8Q with fp16 t5xxl in Forge on my 12GB VRAM 3060. It's almost magic. You might want to look into it.
What is BQ?
@@jitgo I believe it's the 8Q model, the one that supposedly only works with 16 gb cards
@@jitgo It's 8Q. It's the best quantized version of the fp16 Dev model. It's almost exactly the same but takes less memory and runs a lot faster.
@@jitgo FLUX.1-dev-Q8_0 i assume
I run the same with a 3080 10GB without any problem... and can use it even with other applications using VRAM, like some YouTube video, by tweaking the GPU weights... I always liked the flexibility of Comfy, but until they add some way to run Flux with less VRAM like Forge, I'll only be using Forge.
You are amazing, sir. This is so amazing and extremely well done. It must have taken a lot of work! Thank you for your efforts.
Glad you enjoyed it!
I remember trying ComfyUI a while ago and found it confusing and a bit overwhelming, but this video is tempting me to give it another try. The workflow which you have created simplifies it to the point where new users such as myself can practically jump right in. Thank you for all of the hard work you have put into making all of this.
Duude!!! I love you! That's why I have stayed a Patreon supporter even though I've been out of the AI game for a while. It's all because the focus has only been on ComfyUI, and I hate it even though I know how much more powerful it is! This will work for me, thank you!
It is a very clean and intuitive workflow design. 👍
So, the Q8 could be faster depending on whether the GPU has native support for those sizes, or whether it has to use more advanced math and code to achieve the same space savings.
Older GPUs do not have basic 2-bit units to calculate on, nor 4-, 5-, or 6-bit, and really old ones not even 8-bit. So the way the space is compressed takes more compute time as a tradeoff.
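To make that tradeoff concrete, here's a minimal NumPy sketch of symmetric 8-bit quantization (a simplification, not the actual GGUF block format): the weights sit in memory as int8 plus a float scale, and a GPU without native low-bit arithmetic has to convert them back to float before every matmul, which is the extra work described above.

```python
# Simplified illustration only -- real GGUF uses block-wise quantization with
# per-block scales, and Q4/Q5/Q6 pack sub-byte values that need extra unpacking.
import numpy as np

def quantize_q8(weights: np.ndarray):
    """Symmetric 8-bit quantization: int8 values plus a single float scale."""
    scale = np.abs(weights).max() / 127.0
    q = np.round(weights / scale).astype(np.int8)
    return q, scale

def dequantize_q8(q: np.ndarray, scale: float) -> np.ndarray:
    """The extra step a GPU without native int8 math performs before each matmul."""
    return q.astype(np.float32) * scale

w = np.random.randn(64, 64).astype(np.float32)
q, s = quantize_q8(w)
w_back = dequantize_q8(q, s)                   # ~4x smaller at rest than fp32...
print("max error:", np.abs(w - w_back).max())  # ...recovered with a small rounding error
```

The lower the bit width, the more unpacking and rescaling is needed per layer, which is presumably why Q4/Q5 can end up slower than Q8 on hardware that handles 8-bit natively.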
You can also add a switch to enable or disable groups by a click, much faster
Off-topic question: for the SDXL LoRA style training video you did in 2023, has anything changed if I wanted to train an architect's style, a fashion designer's style, or a photographer's style? I never came across any that were really that great... and your video from 2023 did a cartoon style, but not architects or fashion designers... it seems to be a different process and parameters, right?
"Oh boy, oh boy" you are back!
Ok you came back, better than before, that's why i respect your job.
longtime fan! love your videos! can you do an updated uncensored roleplay or txt ai king??? what is the best model for this that you recommend ?
Thank you, very helpful for understanding GGUF models. If I go away for 1 or 2 months and then come back, there are so many new things that it's easy to get lost...
I get this error: `newbyteorder` was removed from the ndarray class in NumPy 2.0. Use `arr.view(arr.dtype.newbyteorder(order))` instead.
You are truly one of the best in the AI community.
Why I'm not seeing the "Download as an archive" option when right-clicking on a folder in Jupyter Lab, even though the archive extension is installed and enabled in Aitrepreneur Comfyui pod?
good to see you back!
Man you're really good at making these one click installs. I'm sure this was a lot of work. I hope it brings in enough patreon income to make it worth it because you deserve it.
Thanks for all the links, very useful (I didn't update my text encoder files).
I haven't tried your workflow yet (I will certainly do it later), but it looks very useful.
This looks great man, well done and thanks
Is the Workflow only available for patrons?
No, it's available using the join for free option
yup....
@@yiluwididreaming6732 guess his videos aren't actually worth watching then = unsubscribed
@@holylord666 ohhhkayyy. peace out
It’s available under the free tier. You don’t need to pay.
Do you have a comfyui workflow similar to this for SDXL/Pony? Flux isn't quite there for me yet, but this is the best workflow I've seen for comfy!
thank you for the videos and keep it up :)
What would you suggest for an RTX A4500 with 20GB of Vram? I am currently using the FP8 model, but if I can get some speed improvement with a Q8...that would be nice
It doesn't work:
`newbyteorder` was removed from the ndarray class in NumPy 2.0. Use `arr.view(arr.dtype.newbyteorder(order))` instead.
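For anyone hitting this, a minimal sketch of what the error message itself suggests: the ndarray-level `newbyteorder()` method was removed in NumPy 2.0, and the dtype-level call is the replacement. This only illustrates the API change; the actual fix is usually updating the offending custom node, or pinning `numpy<2` in the ComfyUI environment as a stopgap.

```python
import numpy as np

arr = np.arange(4, dtype=np.int32)

# NumPy < 2.0 (removed in 2.0):
# swapped = arr.newbyteorder()

# NumPy >= 2.0 replacement, as the error message suggests:
swapped = arr.view(arr.dtype.newbyteorder())
print(swapped.dtype.byteorder)  # byte order of the swapped view
```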
Hey! Thank you for all this precious info! One question though... when I load the workflow I get a warning (all white): "Missing Node Types - when loading the graph the following nodes types were not found - "workflownodes 3" "workflowFlorence" "workflowFlux Controlnet" "workflowXLabs Sampler". I can't find them anywhere... please help!
by checking in detail, I can see I'm missing the "florence" node on the GGUF UPSCALER. But Model Manager doesn't find any missing nodes to install...
This is awesome. Would love to see additional fields for SD1.5 and SDXL/Pony so it could be the ultimate All-In-One prompt interface.
Many workflows combine some or all of these models due to specific strengths/weaknesses, and availability of loras and specific prompts.
LoRA version mismatch for KModel: - my SDXL-lora was not accepted, with flux1-dev-Q_K_S.gguf and t5-v1_1-xxl-encoder-Q3_K_L.gguf with a GTX 1060 6 GB VRAM
Nice one. Please add a workflow for restoring old images (detail restoration / 4K upscale).
I have 11GB VRAM, I just use the full fp16 model, with `--lowvram --cuda-device 0 --reserve-vram 1.0` commandline arguments.
Can you add outpainting to your ultimate all in one as well?
Thanks Aitrepreneur, it's a great workflow for FLUX beginners, amazing.👍
1) I started testing your workflows, beginning with image generation. And I don't know what the default Flux settings are for "Other Nodes" (with the VAE): max_shift and base_shift. I mean, I used a simpler VAE loader without such options. It is cool to have some possibilities, but until I know what they do I would like to have the default values :).
2) What about negative prompts in Flux? (Sometimes I saw such workflows, but when I tried to do something like this I had some errors.)
Actually, I'm now testing Q5_K_M again (loaded partially) + ViT-L-14_Text_improved details + t5xxl_Q8 - and so far it works similarly to Q4_K_M (which is loaded fully). But I think it also depends on what you generate (need to test more).
PS: I think this shift from your workflow is probably the default; at least it gives me the same results as my workflow.
Thanks! Will this have image training?
This is dope. Thanks very much!!!
Thanks for your time and efforts 👌🏻
Where's the actual workflow? I see where to download models but I don't see the workflow.
You have to be on his Patreon.
it's on his patreon
It's locked behind his Patreon paywall. It's not free basically.
Same
Same
I would like to see some info about how to train a LoRA on Flux-based safetensors.
use flux gym, 2-4 images, trigger word+basic caption, train. Very easy.
@@quercus3290 thank you for the advice, i will check it
@@quercus3290 thx for your help👍
@@quercus3290 thank you
@@quercus3290 hm... Somehow the system keeps removing my answers...
Any automatic install for Zluda for AMD or OpenML?
That would be next level.
I really like comfyUI but my biggest issue comes with its lack of backward compatibility which makes workflows obsolete in just 2 weeks.
awesome job, thank you!!!
This is amazing. Thank you so much for doing this. There's a question on your Patreon I'm following that's asking if this install works if you've already got Comfy/Flux installed. Cheers.
I can't see the "Manager" Button in Comfy UI. You used v0.2.2. I am using v0.2.3 😞
really great work! thanks for sharing!!
I don't have the queue button; I restarted already. How can I get this button? I can't generate without it.
All I want to know is which model or lora makes proper lady parts.
I'm getting this error:
workflowoutpaintsize1
workflowoutpaintflor
workflowfaceswap2
workflowfaceswap3
workflowREACTORFACESWAP
workflowpullflo
workflowpulidtxt
workflowvidiupsl
Optimized isn't really the right word. More space efficient is better, because these GGUFs are optimized for size.
The non-GGUF versions are also optimized, but they are optimized for quality.
The problem is not communicating what they are optimized for, and distorting the idea of optimization.
It could also be optimized for speed, but we can't know that if you only use the word "optimized".
Why does dragging an image to another node work so differently from what you're showing here?
I'm getting the error about missing workflowFlorence. How can I install it? The Manager isn't working for that, and the GitHub link isn't working. Thank you!
fire as always!!!
Got red boxes and no Manager button. I followed the "manual install". I wish the video included support for the free link stuff too...
Me too
Me too, the ComfyUI Manager button cannot be found anywhere.
Is the inpainting workflow using the newly released flux inpainting weights by alimama-creative?
you forgot an Outpainting Workflow. Just kidding, nice work, thank you very much!
I got stuck after loading the JSON workflow, as the Manager button seems to be missing from my ComfyUI GUI...
(edit never mind I've figured it out)
Mine is missing and i can't find out how to solve it... any advice please? 😣
yeah maybe help others and tell them what you did to solve it :D
wtf? how you solved it?
Hey, how did u fix it? having the same issues here.
How is the quality compared to NF4 and FP8? I don't care about speed.
Thanks!
Damm
Thanks so much man ;)
@@Aitrepreneur I’m sitting up a site with image generation among other things. This is very helpful.
Hey, other samplers aside from basic 'euler' don't seem to work. Do you know why?
Amazing setup, runs great. I'm new to Flux, literally just installed it, but is there an easy way to get older LoRAs to work with Flux? The one I brought over from 1111 doesn't work in this (Flux-compatible LoRAs do).
Does your installer work on Mac?
Would the option for under 12gb vram work if I have 4gb vram?
This is awesome! Thnx!!
Q8 would work faster than Q4 because these models are not so much optimized as compressed. GGUF quantization compresses the less important layers, and they have to be decompressed when they are needed. So it causes extra workload and is accordingly slower, but in return the model occupies less space. For LLMs, GGUF can be run on both CPU and GPU and can even be split between them, so this should work on CPUs too. And there are way more GGUF/matrix quantization schemes than these, so there might be more Flux models soon.
Yeah, I checked Hugging Face, and the guy who made the GGUF quantization is working on more Flux versions. And you can run Flux on CPUs too, but only on Linux for a bizarre reason. Because I have a tiny GPU I could never follow the text2image side well, but it follows developments so slowly. On the text2text side, when there is a new model, people make GGUF versions of it within a few hours, while it took so long for Flux. But we are getting there; a few more months and it will be possible to run Flux etc. split between CPU and GPU.
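As a rough, hypothetical sketch of the CPU/GPU split being described (not how llama.cpp or ComfyUI actually implement offloading; the block count, layer sizes, and VRAM budget here are made up): keep as many blocks on the GPU as fit, run the rest on the CPU, and move activations between devices as they pass through.

```python
import torch
import torch.nn as nn

# Stand-in blocks; a real Flux/LLM model has far larger (and quantized) layers.
blocks = nn.ModuleList([nn.Linear(256, 256) for _ in range(8)])
gpu_budget = 4  # hypothetical number of blocks that fit in VRAM

for i, block in enumerate(blocks):
    device = "cuda" if torch.cuda.is_available() and i < gpu_budget else "cpu"
    block.to(device)

def forward(x: torch.Tensor) -> torch.Tensor:
    for block in blocks:
        x = x.to(next(block.parameters()).device)  # hop to whichever device holds this block
        x = block(x)
    return x

print(forward(torch.randn(1, 256)).shape)
```

Shuttling activations back and forth is exactly the overhead that makes a naive split slow, which is why real offloading implementations try to keep the most frequently used layers resident on the GPU.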
I have never felt so 'heard' as with the "most people hate ComfyUI", "what the hell is that", and "need a PhD" bits :) THANK YOU!
wat
Noob question, but when you close it, where do I find it? Which file do I run to get it up again?
Does this work offline or do you need internet?
You need internet to install everything, but then it works offline. It fully runs on your machine only.
If I want only the workflow, I wonder how I can download it; I haven't seen where :(
Im getting error:
"Prompt outputs failed validation
DualCLIPLoaderGGUF:
- Required input is missing: clip_name2
- Required input is missing: clip_name1"
Also discord link is dead
It is awesome, thank you very much! Just one question: I have the settings exactly like you, but in the image-to-image, it doesn't generate an anime picture like yours and I don't know why D:
same
When I have a LoRA for dev, will it also run on the fp8 versions or the GGUF models?
yes it does!
Thanks for your workflow.👍
Greetings, I tried to use the Flux prompter to generate a nude image but the LLM returned a notice "I cannot create explicit content, but I’d be happy to help with other creative ideas. How about a story or poem?"
haha.....
Hey guys, a quick question: does this Flux only work with NVIDIA GPUs, or does it also work with AMD?
Thaaanks
you need Comfyui Zluda in order to run this on AMD GPUs
hey thx for the amazing guide! ur discord invite says 'invalid' can u post the updated one?
Do Pony LoRAs work in this?
Thanks, but how can I use it on Lightning AI?
I'm still never going to use comfy I think, can you just show us how to make flux work in webforge or automatic?
So, hopefully training and fine-tuning next? Because it's definitely very needed. While Flux is really good at prompt following, the quality of the image, especially if you want to generate people or characters, is still kinda meh. For generating people, I had SD1.5 workflows with better image quality.
This is an absolutely fantastic resource! The installer worked perfectly for me and your UA-cam instructions were flawless. You've made the Comfy interface manageable, which is no easy task. Outstanding work on all fronts, good sir!