I convert all of my models to FP8 instead of FP16 to get even faster times! Works for Flux, Pony, Illustrious, SDXL, etc. Also, I've been using ComfyUI for over a year (from A1111) as it's just faster than A1111 or Forge - but many people don't enjoy the spaghetti-like node structure. Also, I know the audio is TTS, but it's pronounced "Civit-eye"
Hey, nice video. How about the other versions, NF4 and GGUF? Can you show a comparison between those models? It would be nice to see their speed, because I am planning to buy a 4060.
NF4 is faster and only requires low VRAM. GGUF has many variants; if you want to generate faster with this model, try Q3 or Q4. From my own testing and what I've gathered from friends, both VRAM and system RAM play a significant role in image generation speed. Also, the 'Ti' version of a video card is faster, like the 3060 Ti. Thanks for watching.
I wouldn't recommend a 4060. A used 4070 Ti or 3090 is better, or you will run into issues with ControlNet. If you just want to generate images, no worries, NF4 or GGUF is fine.
@@animation-nation-1 Can you elaborate on what kind of problems I would face? I'm really on a budget, because in my currency even the 4070 and the 16 GB version of the 4060 are very expensive.
Very good video, it clears up quite a few confusing terms. It would be interesting if you posted the prompts or links to the generated images so we could copy the parameters and run our own comparisons. Thanks.
I got a 4060 and I want to learn about AI. Where do I start??
The faster generation test was also using the integrated GPU; you can see it running at over 20%.
I can run Flux Schnell GGUF Q4 on a 2060 with 6 GB VRAM and 16 GB RAM in less than 20 seconds after the initial model loading.
BNB (bitsandbytes) sucks! Only use it if you've exhausted every other quantization option!