Next Tech and AI
FAST SD3.5 GGUF for low VRAM GPUs with Highest Quality. Stable Diffusion 3.5 Large, Turbo & Medium.
For Stable Diffusion 3.5 Large, Turbo & Medium, we install GGUF models for low-VRAM GPUs in ComfyUI locally. Workflows are provided and explained, followed by image comparisons, including against FLUX.
Additionally, you will get the best parameter settings and details on performance.
Videos:
XY-Plot with ComfyUI: ua-cam.com/video/GCKkn0YN6Us/v-deo.html
Flux GGUF for low VRAM GPUs with ComfyUI: ua-cam.com/video/B-Sx_XCAqzk/v-deo.html
Flux Installation on ComfyUI: ua-cam.com/video/52YAQZ-1nOA/v-deo.html
Workflows (for free):
www.patreon.com/posts/fast-sd3-5-gguf-115039268
The GGUF Models:
github.com/city96/ComfyUI-GGUF
huggingface.co/stabilityai/stable-diffusion-3.5-large
huggingface.co/stabilityai/stable-diffusion-3.5-large-turbo
huggingface.co/stabilityai/stable-diffusion-3.5-medium
comfyanonymous.github.io/ComfyUI_examples/sd3/
UPDATE: huggingface.co/city96/stable-diffusion-3.5-medium-gguf
huggingface.co/ND911/stable-diffusion-3.5-medium-GGUF/tree/main
PLEASE CHECK THE PINNED COMMENT FOR UPDATES!
Chapters:
0:00 About SD3.5 and GGUF
1:20 GGUF Installation
1:57 GGUF Model Files
5:15 SD3.5 Large Workflow
8:03 Result Comparison
9:35 SD3.5 Turbo Workflow
10:34 SD3.5 Medium Workflow
11:55 More GGUF Models and the Future
#comfyui #gguf #stablediffusion
Views: 2,316

Videos

How to ControlNet FLUX & SDXL with ComfyUI (Including XY-Plot Tutorial & free Workflows)
Views: 1.3K · 1 month ago
You will learn how to use ComfyUI Workflows for ControlNet with FLUX DEV, FLUX SCHNELL and SDXL. Additionally you will learn how to XY-Plot different Parameters including ControlNet Parameters. We will use the ControlNet Models Union-Pro for Flux.1 and UnionProMax for SDXL. Videos: Flux Installation on ComfyUI: ua-cam.com/video/52YAQZ-1nOA/v-deo.html Inpainting and ComfyUI-Manager: ua-cam.com/v...
How to Inpaint FLUX with ComfyUI. BEST Workflows including Flux-Fill, ControlNet and LoRA.
Views: 5K · 1 month ago
You will learn how to inpaint with ComfyUI and Flux.1 in 4 different ways including ControlNet and you can find an update in the description for the new Flux.1-Fill-Dev. Additionally you will learn quick modifications of the workflows in order to do Outpainting and to use a LoRA with the Inpainting ControlNet. We compare the results of the workflows and you will get suggestions when to use whic...
How to Prompt FLUX. The BEST ways for prompting FLUX.1 SCHNELL and DEV including T5 and CLIP.
Views: 9K · 2 months ago
We compare prompting the FLUX T5 text encoder with Stable Diffusion 1.5's CLIP, using ComfyUI and the FLUX GGUF models. Comparing styles with FLUX SCHNELL and FLUX DEV results in a surprise. Natural language can improve the prompting results with Flux.1. Learn how to create your own prompts for FLUX and how to enhance and improve them with LLMs like ChatGPT. Videos: Flux GGUF for low VRAM GPUs...
FAST Flux GGUF for low VRAM GPUs with Highest Quality. Installation, Tips & Performance Comparison.
Views: 20K · 3 months ago
We install the new GGUF node on ComfyUI locally for NVIDIA or AMD GPUs. The image generation examples show both the great quality as well as the detailed performance, followed by tips & tricks including Flux.1 DEV and Flux.1 SCHNELL. Videos: Flux Installation on ComfyUI: ua-cam.com/video/52YAQZ-1nOA/v-deo.html ComfyUI with ZLUDA on Windows: ua-cam.com/video/X4V3ppyb3zs/v-deo.html ComfyUI with R...
How to ComfyUI with Flux.1. Detailed Installation. Workflows, Tips & Performance. AMD and NVIDIA.
Views: 19K · 3 months ago
We install ComfyUI locally on AMD or NVIDIA GPUs, optionally use DirectML or CPU. The image generation examples include Stable Diffusion and Flux.1, followed by tips & tricks, performance considerations and comparing results especially regarding Flux.1 DEV and Flux.1 SCHNELL. Videos: ComfyUI with ZLUDA on Windows: ua-cam.com/video/X4V3ppyb3zs/v-deo.html ComfyUI with ROCm on Linux: ua-cam.com/vi...
OUTPAINTING that works. Impressive results with Automatic1111 Stable Diffusion WebUI.
Views: 2K · 3 months ago
We use Stable Diffusion Automatic1111 and the inpaint model to outpaint the surrounding of a generated image and extend it this way. Learn about the installation, the parameters and the best approach as well as possible pitfalls. This solution works locally with AMD and NVIDIA GPUs. Mentioned Videos: ControlNet: ua-cam.com/video/NqTBV_vR-iM/v-deo.html Dreambooth: ua-cam.com/video/_tYcL9ePkU0/v-...
How to DREAMBOOTH your Face in Stable Diffusion. Detailed Tutorial. Best Results.
Views: 6K · 4 months ago
We use Stable Diffusion Automatic1111 and the DreamBooth extension to fine-tune a custom model using a set of photos. Learn about the installation, the parameters and the best approach as well as possible pitfalls. This solution works locally with AMD and NVIDIA GPUs. The DreamBooth extension for Automatic1111: github.com/d8ahazard/sd_dreambooth_extension 1500 class images for 'person': github....
Queue your tasks in Automatic1111 and let your PC do the work.
Views: 655 · 6 months ago
Tired of waiting for Automatic1111 to finish generation? You want to queue several prompts with different parameters, checkpoints, ControlNet processing? Reuse and edit queued items? Learn how to do this with the SD WebUI Agent Scheduler extension. Git Repository of SD WebUI Agent Scheduler: github.com/ArtVentureX/sd-webui-agent-scheduler Video about Upscaling: ua-cam.com/video/eV-ZQfIqFfQ/v-de...
How to use ControlNet with SDXL. Including perfect hands and compositions.
Views: 7K · 6 months ago
We use Stable Diffusion Automatic1111 to repair and generate perfect hands. Learn about ControlNet SDXL Openpose, Canny, Depth and their use cases. This includes keeping compositions and using good hands as templates. Advertisement / sponsor note Try FaceMod AI Face Swap Online: bit.ly/4aha0fm Advertisement / sponsor note ControlNet: github.com/Mikubill/sd-webui-controlnet SDXL Models: huggingfac...
ComfyUI with ZLUDA on Windows for AMD GPUs (Tutorial).
Views: 17K · 7 months ago
AMD ROCm under WINDOWS Status Update. ZLUDA with SD.next as the best alternative (Tutorial).
Views: 18K · 8 months ago
How to add Online Access to a GPT. For AMD and NVIDIA GPUs. With CrewAI & Ollama.
Views: 738 · 9 months ago
How to use ANIMATEDIFF in Stable Diffusion with CONTROLNET. BUG FIX! Control-Video and Custom Models
Views: 6K · 9 months ago
GPT4All 5x FASTER. Runs LLAMA 3 and supports AMD, NVIDIA, Intel ARC GPUs.
Views: 8K · 10 months ago
How to FACE-SWAP with Stable Diffusion and ControlNet. Simple and flexible.
Views: 44K · 11 months ago
How to UPSCALE with Stable Diffusion. The BEST approaches.
Views: 40K · 11 months ago
AMD ROCm on WINDOWS for STABLE DIFFUSION released SOON? 7x faster STABLE DIFFUSION on AMD/WINDOWS.
Views: 19K · 1 year ago
How to create a bootable USB flash drive with Ubuntu Linux for GPT/UEFI. The NEW, easier way.
Views: 8K · 1 year ago
AIs reveal Top 5 UFO Facts.
Views: 77 · 1 year ago
How to use Stable Diffusion XL locally with AMD ROCm. With AUTOMATIC1111 WebUI and ComfyUI on Linux.
Views: 28K · 1 year ago

COMMENTS

  • @az3848 · 1 day ago

    I keep getting errors with Flux: mat1 and mat2 shapes cannot be multiplied

    • @NextTechandAI · 1 day ago

      Download my workflows and update your ComfyUI.
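
      For reference, if the install is git-based, "update" usually means a pull in both the ComfyUI folder and the GGUF custom node (folder names assumed; the portable build ships update scripts instead):

          git -C ComfyUI pull
          git -C ComfyUI/custom_nodes/ComfyUI-GGUF pull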

  • @kaleb51 · 2 days ago

    Hello, you have a nice channel. I had a quick question: how can I use TensorFlow on my AMD RX 6800 XT?

  • @eledah9098 · 4 days ago

    Great vid! Are you planning to make one with the new Flux tools as well?

    • @NextTechandAI · 4 days ago

      Thanks a lot! I've already updated the description for this video and added a new workflow for Flux Fill. Regarding Depth and Canny I'm not sure, as we already have several good solutions, including Union Pro for Flux, which I've covered in the Flux ControlNet video. I'm very keen on the new Redux model, but it doesn't seem to work the way I had hoped. Anyhow, that's currently the best candidate for a video about the Flux tools.

  • @generalawareness101 · 4 days ago

    Text eludes me with inpainting in Flux.

    • @NextTechandAI · 4 days ago

      What do you mean?

    • @generalawareness101 · 4 days ago

      @@NextTechandAI I mean, I have spent over 2 days trying to get it to work and it will not. I have gone through various YT creator workflows and forget it. Ironically, I actually had XL almost do it, while the Flux one next to it (from a creator), running Dev, could not. I even tried your workflow to no avail.

    • @NextTechandAI · 4 days ago

      @@generalawareness101 I still don't know what exactly didn't work for you, but in general you have to give the text enough space. Similar to finger inpainting, the new area to be inpainted needs to be large enough to actually accommodate 5 fingers.

    • @generalawareness101 · 4 days ago

      @@NextTechandAI I gave it 1/4, 1/2, 3/4 of the images. I tried everything.

  • @NextGenGames0 · 4 days ago

    I have an AMD RX 6600, can I use it?

    • @NextTechandAI · 4 days ago

      Sure, it just depends on the model and resolution you want to use. For low VRAM you can check my videos about GGUF for Flux and SD3.5. Both work best on AMD with Linux ROCm or Windows ZLUDA; see my related videos.

  • @kemicalyemster · 6 days ago

    Best outpainting on YouTube. Thanks!

    • @NextTechandAI · 6 days ago

      Thank you very much for the motivating feedback. I'm glad you found the video useful.

  • @euve8421 · 7 days ago

    For me the ControlNet extension is not enabled in the Extensions tab, and if I enable it and press Apply and restart UI, it auto-disables again. I get the errors "Error running postprocess_batch_list" and "Error running postprocess", and a warning "No motion module detected, falling back to the original forward". I installed both extensions from the URL and put both models in the appropriate directories. I guess because of that I only have ControlNet Integrated on my generation tab. I did the fix from the end section of the video and now it says on startup: TypeError: HEAD is a detached symbolic reference as it points to '10bd9b25f62deab9acb256301bbf3363c42645e7'

  • @sgl3163 · 7 days ago

    Does anyone think of updating these tutorials? Does this still work in 2024? Have the files changed? Have the names changed? I'm betting yes.

  • @Kostya10111981 · 8 days ago

    Hello Patrick! Have you tried the Amuse generator for AMD? If so, what can you say about it?

    • @NextTechandAI · 8 days ago

      Hello Kostya! Although I haven't used it yet, I think Amuse is a reasonable option to try out image generation. However, I think AMD should have put the resources into ROCm for Windows; ComfyUI, Forge and A1111 are proven open-source tools that offer significantly more options. In my opinion, we don't need another proprietary tool that is also based on ONNX.

  • @ancientlord5697 · 9 days ago

    Hello sir, I run the DirectML bat and it says "ImportError: DLL load failed while importing torch_directml_native". Can you help me solve it? I have an RX 6800 graphics card and 32 GB system memory on Windows.

    • @NextTechandAI · 9 days ago

      Please post the contents of your directml bat. I assume you are calling the wrong bat or it is missing something.

    • @ancientlord5697 · 8 days ago

      @@NextTechandAI I reached the interface without any problems by doing the setup again. I checked the unet, clip and ae files and put them in their places. When I added the prompt to the queue and ran it, I now get the error "[F1119 23:44:35.000000000 dml_util.cc:118] Invalid or unsupported data type Float8_e4m3fn."

    • @NextTechandAI · 8 days ago

      @@ancientlord5697 As I said in the video, Flux is currently not supported by DirectML.

    • @ancientlord5697 · 8 days ago

      @@NextTechandAI Is there something I did wrong? Doesn't it work with DirectML in the video?

    • @NextTechandAI · 8 days ago

      @@ancientlord5697 No, I said in the video it doesn't work with Flux. I used ZLUDA after generating the example with SD15. You can use SD15, SD3.5 and SDXL.
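
      For reference, a minimal manual DirectML launch is ComfyUI's main.py with the --directml flag (interpreter path assumed; as noted above, Flux's Float8 weights then fail, so stick to SD15, SDXL or SD3.5):

          python main.py --directml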

  • @EdWingfield · 13 days ago

    Not working; the video source stays black and empty (0:00) whichever extension I use for the video source.

  • @takiparilimpossivel · 13 days ago

    Can we get an updated version of this tutorial please? I'm struggling to make this work. I follow it to a T and it still tells me I don't have an NVIDIA GPU installed.

    • @aemsu1617 · 14 hours ago

      Restarting the PC helped for me, because the ZLUDA DLL set via the environment variable didn't get recognized immediately.
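
      If a reboot alone doesn't help, a session-local workaround might be to extend PATH in the console before launching (C:\zluda is only a placeholder for wherever your ZLUDA DLLs actually live):

          set PATH=%PATH%;C:\zluda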

  • @riyan8432 · 15 days ago

    Thanks a lot.

    • @NextTechandAI · 15 days ago

      I'm glad you liked the video, thanks for the feedback.

  • @rodopil1161 · 15 days ago

    So, so USEFUL and essential! Thank you very much :)

    • @NextTechandAI · 15 days ago

      Thank you for your feedback, I'm glad the video was useful for you 😀

  • @RodrigoAGJ · 16 days ago

    I’m really eager to try out this interesting workflow! Where can I find it?

    • @NextTechandAI · 16 days ago

      I'm glad the video is useful. Which workflow do you mean?

  • @LesCalvin3 · 18 days ago

    That's what I'm looking for! With the ability to export and import, I can get AI's help writing the files and make different queues. Thank you!

    • @NextTechandAI · 18 days ago

      Thank you very much, I'm glad the tutorial is useful!

  • @folkeroRGC · 18 days ago

    Great tutorial, thanks. How can we use inpainting with two LoRAs for different characters?

    • @NextTechandAI · 18 days ago

      Thanks a lot. First inpaint the left character, in a second step inpaint the right one.

  • @hotnikq · 18 days ago

    I'm a ROCm guy, so no Windows at all....

  • @as-ng5ln · 18 days ago

    DEV has its own VAE.

    • @NextTechandAI · 18 days ago

      What do you mean? There is one VAE for Flux, but some checkpoints have it included directly.

    • @as-ng5ln · 18 days ago

      @NextTechandAI DEV has a special VAE that can be downloaded on Hugging Face; maybe that is why the images turned out so poorly.

    • @NextTechandAI · 18 days ago

      @@as-ng5ln No, there is one VAE for Flux. This has absolutely nothing to do with the fact that Schnell follows prompts better than DEV. Try it yourself and generate the same image with both VAE files. By the way, you can try this with SD3.5 Large and Turbo, too.

    • @as-ng5ln · 18 days ago

      @@NextTechandAI I'm telling you... I have the two files "ae.safetensors" and "flux1DevVAE_safetensors.safetensors". ae comes from schnell, while the other one is from the dev directory

    • @NextTechandAI · 18 days ago

      @as-ng5ln Yes, and they have the same effect on Flux image generation. As I said, try yourself.

  • @paddyhaig101 · 19 days ago

    This doesn't work, certainly when using 23.10-ubuntu-studio Mantic Minotaur. The problem I have experienced repeatedly is that EFI/GRUB does not install on the selected USB regardless of the correct partition being chosen via the installation interface. It might work if you're in a position to remove any other hard drives connected to your system. However, I didn't want to dismantle my laptop. I really couldn't find a way around it.

    • @NextTechandAI · 19 days ago

      I followed my own video tutorial two days ago in order to install Ubuntu 24.04.01 on a USB drive. There haven't been any problems and it's working as expected. Are you choosing the installation target for the boot manager as described in the video?

  • @PenkWRK · 22 days ago

    Hello, I got a problem like this: "the size of tensor a (1536) must match the size of tensor b (2304) at non-singleton dimension 2". I've tried everything but still get this.

    • @NextTechandAI · 22 days ago

      Hi, which of my workflows and which model files do you use?

    • @PenkWRK · 21 days ago

      @NextTechandAI I use the SD3.5 Medium FP16 GGUF models and manually created the same workflow as in this video of yours, but I'm still getting the error. I asked ChatGPT and updated torch, transformers and diffusers, and also set torch float16 in my Python; that didn't work either.

    • @NextTechandAI · 21 days ago

      @@PenkWRK Could you please use the suggested models and my workflows? You can download them from my Patreon for free. If this doesn't help, try the original SD3.5 Medium model without GGUF.

  • @Thimb012 · 23 days ago

    I have followed the tutorial to the letter and retried multiple times, but SD.Next is still using the CPU. I am not seeing any errors; however, I do not get the "Torch allowed..." line when starting webui.bat. Also, when I run a generation, I see "Torch generator: device=cpu". Finally, I see "No ROCm runtime is found, using ROCM_HOME='C:\Program Files\AMD\ROCm\6.1'". What am I doing wrong? I have an AMD 6750 XT and I am bad at this.

  • @Med2402 · 23 days ago

    Thanks

  • 23 days ago

    Since when is 12 GB "low VRAM"? 😅 I always considered 4-6 GB low, 8-12 GB medium and 16+ GB high VRAM.

    • @NextTechandAI · 23 days ago

      VRAM refers to the GPU's memory, not the file size 😉 Some have already gotten FLUX with GGUF to work with 4-6 GB VRAM; I expect the same for SD3.5. With 8 GB or less GGUF definitely makes sense, and I also use it with 16 GB.

    • @Gaming_Legend2 · 21 days ago

      It's a good amount, but not for AI generation. I keep crashing the card when running SDXL models on an RX 6600 lol. 12 GB is definitely on the edge of low VRAM for this type of stuff.

  • @shreyaspapinwar2745 · 24 days ago

    Hey, how do I get all the options you have in the Source Checkpoint dropdown? I only got one.

    • @NextTechandAI · 24 days ago

      These are checkpoints that I have downloaded over time and others that I have generated with Dreambooth.

  • @jaiderariza1292 · 24 days ago

    I wonder how to do this on WSL2 with an RX 7700 XT? Or is the only path Windows directly?

    • @NextTechandAI · 24 days ago

      By now WSL2 should be possible with an RX 7x00; my RX 6800 is still not supported.

  • @jaiderariza1292 · 24 days ago

    What AMD GPU are you using?

  • @trelogiatros21 · 25 days ago

    I tried the method, all good. But at the start, right after the class images, I get this error: Exception training model: 'Using `low_cpu_mem_usage=True` or a `device_map` requires Accelerate: `pip install 'accelerate>=0.26.0'`'
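
    The quoted error message itself names the fix; a minimal sketch, assuming the default A1111 venv layout on Windows, is to activate the venv and install the requested package:

        venv\Scripts\activate
        pip install "accelerate>=0.26.0"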

  • @foreropa · 25 days ago

    I'm going to swap my AMD card for an NVIDIA card soon; AMD is doing a terrible job at this. I don't want to have to use Linux to run Stable Diffusion, for example. Why do NVIDIA users get a simple way of using it, while AMD users have to rely on hard workarounds? I did once install Stable Diffusion on Windows with my AMD card and it worked fine, but I don't remember how I did it, and now I'm tired of trying.

    • @NextTechandAI · 25 days ago

      I can understand that very well, I am very disappointed with AMD's support of AI, too.

  • @foreropa · 26 days ago

    If you haven't installed conda, check this video... What video???

    • @NextTechandAI · 26 days ago

      It's the one mentioned in the description with "AMD ROCm on Windows Status Details and GIT & MiniConda-Installation".

    • @foreropa · 26 days ago

      @@NextTechandAI Thanks!!

  • @93simongh · 27 days ago

    Thanks, but it doesn't help when just the 3 text encoder files are almost 15 GB in total... My ComfyUI crashes my PC (99% RAM use) while loading the 3 CLIPs before even attempting to load the GGUF.

    • @NextTechandAI · 27 days ago

      For very low VRAM I've suggested the FP8 T5 in the video, which is below 5 GB; clip_g and clip_l together are about 1.5 GB. You can even use the GGUF T5 encoders linked at the bottom of City96's GitHub, going down to 2 GB, but they have a bigger impact on quality than the quantized UNet models. Hence I'd try the FP8 T5 first. Use the runtime parameters --use-split-cross-attention and --lowvram or even --novram.

    • @93simongh · 27 days ago

      @NextTechandAI Thanks, I will have to try. Where do you use the parameters you mentioned? Do I use them when launching ComfyUI?

    • @NextTechandAI · 27 days ago

      @@93simongh Yes, in the batch file directly after 'main.py'.
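
      So the edited launch line might look like the sketch below (a plain Python install is assumed; pick --lowvram or, if that still overflows, --novram):

          python main.py --use-split-cross-attention --lowvram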

  • @bwheldale · 27 days ago

    I've been a no plotter for way too long. This is a much appreciated tutorial, thanks.

    • @NextTechandAI · 27 days ago

      Thanks for your feedback. I'm happy that the plot community has another member.

  • @elias-mp9yk · 27 days ago

    Are you German?

    • @NextTechandAI · 27 days ago

      I'm sure it's not too difficult to recognize my accent😉

  • @louisbeauger · 28 days ago

    Does ComfyUI work well with a 7900 XTX?

    • @NextTechandAI · 28 days ago

      If you can manage the installation with Zluda, then yes, extremely well.

    • @louisbeauger · 27 days ago

      @@NextTechandAI Everything works fine?

    • @NextTechandAI · 27 days ago

      @@louisbeauger There are a few restrictions, e.g. components using the bits-and-bytes extension do not work, like NF4, but I like GGUF much more anyhow.

    • @louisbeauger · 27 days ago

      @@NextTechandAI OK, thanks!

  • @darkman237 · 28 days ago

    What about Forge?

    • @NextTechandAI · 28 days ago

      From what I have seen, there should be a release soon. Forge with SD3.5 Medium seems to be broken; they probably want to fix this first.

  • @forg2x · 28 days ago

    I tried SD3.5, and Flux wins, at least in low VRAM on a 3060 12GB.

    • @NextTechandAI · 28 days ago

      Regarding quality or speed? Are you using both with GGUF?

  • @yousifradio · 28 days ago

    I Like It

  • @wolfgangterner7277 · 28 days ago

    The GGUF loader does not work; I always get this error message. What do I have to do to load the GGUF files?? (`newbyteorder` was removed from the ndarray class in NumPy 2.0. Use `arr.view(arr.dtype.newbyteorder(order))` instead.) Thanks. ## Stack Trace

    • @NextTechandAI · 28 days ago

      Have you updated both the GGUF extension and your ComfyUI? Which GPU are you using?

    • @wolfgangterner7277 · 28 days ago

      @NextTechandAI I have updated everything and have a 12 GB RTX 3060.

    • @NextTechandAI · 28 days ago

      @wolfgangterner7277 There is an issue in the GGUF GitHub which suggests several solutions: github.com/city96/ComfyUI-GGUF/issues/7. I think downgrading NumPy, as suggested at the bottom of that issue, is the easiest solution.
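
      For reference, that downgrade is a one-liner in ComfyUI's Python environment (any pin below 2.0 should restore ndarray's newbyteorder):

          pip install "numpy<2"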

  • @AberrantArt · 28 days ago

    I still can't get flux to run on my Radeon 5500 XT. I love your channel BTW. Thank you for what you do.

  • @NextTechandAI · 28 days ago

    Will you try the GGUF SD3.5 models or are you sticking with FLUX? UPDATE: City96 released his own versions of SD3.5 Medium GGUF models: huggingface.co/city96/stable-diffusion-3.5-medium-gguf

  • @MrDebranjandutta · 29 days ago

    Neat stuff, but what's the most optimized model for an NVIDIA 3060 w/ 12 GB?

    • @NextTechandAI · 29 days ago

      Thank you. If you are using the models shown in my Flux GGUF video, I'd suggest Q8_0, Q5_K_S or Q4_K_S - the biggest one that fits.

  • @RussAlexei · 1 month ago

    Thank you for the video, very useful.

  • @ChanhDucTuong · 1 month ago

    Thank you very much. I was thinking about comparing different inpainting techniques, and your video is just what I need. What do you think about cropping the inpainted part, upscaling it separately, inpainting, and then stitching it back? There is a Crop&Stitch node for that, or we can do it manually, but I'm not sure if those would work with your ControlNet workflow.

    • @NextTechandAI · 1 month ago

      Thanks a lot for your feedback. Interesting, I didn't know these two nodes. Looks like by using them we can get something similar to 'masked only' in A1111. I don't think you need ControlNet for this. Not sure regarding upscaling, but usually it's a good idea to do at least a 2x upscale after inpainting to blur the contours.

  • @sereinnat9832 · 1 month ago

    Great video! Could you share the Flux workflow? I think only the SDXL one is in the description.

    • @NextTechandAI · 1 month ago

      Thanks a lot! As mentioned in the video you can find the workflows on my patreon (for free). The link is in the description 😉

  • @FraterOvis · 1 month ago

    Hey mate, any chance we get German versions of the videos? :)

    • @NextTechandAI · 1 month ago

      I published the first video of this channel on a German channel at the same time, it has around 600 views. The English version has almost 30k views. I'm afraid there is only a very small target group for such videos in German.

  • @vadar007 · 1 month ago

    Super informative! However, after getting everything set up and hitting the Train button I get the following error: AttributeError: module 'transformers.integrations' has no attribute 'deepspeed'. There seems to be little to no info on this error. Can't believe I am the first person to run into this. Any guidance?

    • @NextTechandAI · 1 month ago

      Thanks! Are you using a different version? I cannot remember a deepspeed option/attribute, can you possibly deactivate it?

    • @vadar007 · 1 month ago

      @@NextTechandAI I commented out the two lines referenced in the AttributeError and it seems to be working now:
          # if transformers.integrations.deepspeed.is_deepspeed_zero3_enabled():
          #     import deepspeed
      Location: stable-diffusion-webui\venv\Lib\site-packages\diffusers
      From what I can tell DeepSpeed is supposed to help accelerate the training. I may try installing it later, but I'll work on getting my training tuned in first; my speed is adequate for now. Dreambooth extension version 1b3257b4 (2024-08-04), Automatic1111 v1.10.1, Python 3.10.11, Torch 2.1.2+cu121, xformers 0.0.23.post1, gradio 3.41.2.

    • @NextTechandAI · 1 month ago

      @vadar007 Thank you for the feedback, I'm glad it's working for you now.

    • @AboodHani-t9r · 1 month ago

      How did you solve this?

    • @vadar007 · 29 days ago

      @@AboodHani-t9r Sort the comments by Newest First and you'll see my reply in this thread that tells you how to fix it.

  • @bordignonjunior · 1 month ago

    your accent is amazing 🤩

    • @NextTechandAI · 1 month ago

      I'm happy you enjoyed the video😀

  • @n0_l0gic · 1 month ago

    King shit 🔥 Question: is using CLIPTextEncodeFlux the same as feeding a normal CLIPTextEncode into a FluxGuidance node? (I don't really understand why there are two inputs in the CLIPTextEncodeFlux version when you only enter text in the second field.) Also, do you have to insert a ConditioningZeroOut between the empty text prompt and the negative input, or can you just use one of them? Either one?

    • @NextTechandAI · 1 month ago

      Thank you. In my tests there was always a slight difference; the CLIPTextEncodeFlux seems to be better suited for T5. See my video about Flux prompting (ua-cam.com/video/OSGavfgb5IA/v-deo.html) regarding the two input fields. Frankly speaking, I haven't seen ConditioningZeroOut very often and it shouldn't have much influence, but from my point of view it looks more correct, as Flux does not use negative prompts.