Next Tech and AI
Germany
Joined 20 Jul 2023
Welcome! I'm Patrick. I hold a master's degree in mathematics and have a passion for technology, and I'm here to help you navigate the world of AI. My background in software development and project management brings a practical perspective to cutting-edge AI tools and technology.
📌 What you'll find here:
• Tutorials on AI technologies like ComfyUI, Flux, Stable Diffusion, Automatic1111, and various LLMs.
• Guides that break down complex topics into simple steps, making advanced tools accessible to a wider audience.
• Videos with technical insights and unique examples that go beyond the basics to deepen your understanding.
Whether you're just starting out or already have some experience, there's something here for everyone. Let's explore the potential of AI together!
*Note: The email below is for business inquiries only; please do not use it for support or viewer questions.
FAST SD3.5 GGUF for low VRAM GPUs with Highest Quality. Stable Diffusion 3.5 Large, Turbo & Medium.
For Stable Diffusion 3.5 Large, Turbo & Medium, we install GGUF for low-VRAM GPUs on ComfyUI locally. Workflows are provided and explained, followed by image comparisons, including against FLUX.
Additionally, you will get the best parameter settings and details on performance.
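If you prefer the command line over the ComfyUI-Manager, the manual install of the city96 GGUF node (linked below under "The GGUF Models") is short. A minimal sketch, assuming a standard ComfyUI folder layout:
  cd ComfyUI/custom_nodes
  git clone https://github.com/city96/ComfyUI-GGUF
  pip install --upgrade gguf
The gguf package is the node's only inference dependency; the quantized .gguf UNET files then go into ComfyUI/models/unet.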
Videos:
XY-Plot with ComfyUI: ua-cam.com/video/GCKkn0YN6Us/v-deo.html
Flux GGUF for low VRAM GPUs with ComfyUI: ua-cam.com/video/B-Sx_XCAqzk/v-deo.html
Flux Installation on ComfyUI: ua-cam.com/video/52YAQZ-1nOA/v-deo.html
Workflows (for free):
www.patreon.com/posts/fast-sd3-5-gguf-115039268
The GGUF Models:
github.com/city96/ComfyUI-GGUF
huggingface.co/stabilityai/stable-diffusion-3.5-large
huggingface.co/stabilityai/stable-diffusion-3.5-large-turbo
huggingface.co/stabilityai/stable-diffusion-3.5-medium
comfyanonymous.github.io/ComfyUI_examples/sd3/
UPDATE: huggingface.co/city96/stable-diffusion-3.5-medium-gguf
huggingface.co/ND911/stable-diffusion-3.5-medium-GGUF/tree/main
PLEASE CHECK THE PINNED COMMENT FOR UPDATES!
Chapters:
0:00 About SD3.5 and GGUF
1:20 GGUF Installation
1:57 GGUF Model Files
5:15 SD3.5 Large Workflow
8:03 Result Comparison
9:35 SD3.5 Turbo Workflow
10:34 SD3.5 Medium Workflow
11:55 More GGUF Models and the Future
#comfyui #gguf #stablediffusion
Views: 2,316
Videos
How to ControlNet FLUX & SDXL with ComfyUI (Including XY-Plot Tutorial & free Workflows)
Views: 1.3K · 1 month ago
You will learn how to use ComfyUI workflows for ControlNet with FLUX DEV, FLUX SCHNELL and SDXL. Additionally, you will learn how to XY-plot different parameters, including ControlNet parameters. We will use the ControlNet models Union-Pro for Flux.1 and UnionProMax for SDXL. Videos: Flux Installation on ComfyUI: ua-cam.com/video/52YAQZ-1nOA/v-deo.html Inpainting and ComfyUI-Manager: ua-cam.com/v...
How to Inpaint FLUX with ComfyUI. BEST Workflows including Flux-Fill, ControlNet and LoRA.
Views: 5K · 1 month ago
You will learn how to inpaint with ComfyUI and Flux.1 in 4 different ways, including ControlNet, and you can find an update in the description for the new Flux.1-Fill-Dev. Additionally, you will learn quick modifications of the workflows in order to do outpainting and to use a LoRA with the inpainting ControlNet. We compare the results of the workflows and you will get suggestions when to use whic...
How to Prompt FLUX. The BEST ways for prompting FLUX.1 SCHNELL and DEV including T5 and CLIP.
Views: 9K · 2 months ago
We compare prompting the FLUX T5 text encoder with Stable Diffusion's CLIP 1.5 by using ComfyUI and the FLUX GGUF models. Comparing styles with FLUX SCHNELL and FLUX DEV results in a surprise. Natural language can improve the prompting results with Flux.1. Learn how to create your own prompts for FLUX and how to enhance and improve them with LLMs like ChatGPT. Videos: Flux GGUF for low VRAM GPUs...
FAST Flux GGUF for low VRAM GPUs with Highest Quality. Installation, Tips & Performance Comparison.
Views: 20K · 3 months ago
We install the new GGUF node on ComfyUI locally for NVIDIA or AMD GPUs. The image generation examples show both the great quality and the detailed performance, followed by tips & tricks including Flux.1 DEV and Flux.1 SCHNELL. Videos: Flux Installation on ComfyUI: ua-cam.com/video/52YAQZ-1nOA/v-deo.html ComfyUI with ZLUDA on Windows: ua-cam.com/video/X4V3ppyb3zs/v-deo.html ComfyUI with R...
How to ComfyUI with Flux.1. Detailed Installation. Workflows, Tips & Performance. AMD and NVIDIA.
Views: 19K · 3 months ago
We install ComfyUI locally on AMD or NVIDIA GPUs, optionally using DirectML or CPU. The image generation examples include Stable Diffusion and Flux.1, followed by tips & tricks, performance considerations and a comparison of results, especially regarding Flux.1 DEV and Flux.1 SCHNELL. Videos: ComfyUI with ZLUDA on Windows: ua-cam.com/video/X4V3ppyb3zs/v-deo.html ComfyUI with ROCm on Linux: ua-cam.com/vi...
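For orientation, a manual ComfyUI install on an NVIDIA GPU boils down to a few commands. A rough sketch, assuming a CUDA 12.1 PyTorch wheel (AMD users would instead use the ROCm or ZLUDA setups from the linked videos):
  git clone https://github.com/comfyanonymous/ComfyUI
  cd ComfyUI
  pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu121
  pip install -r requirements.txt
  python main.py
The portable Windows build skips these steps entirely: unpack the archive and run the supplied .bat file.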
OUTPAINTING that works. Impressive results with Automatic1111 Stable Diffusion WebUI.
Views: 2K · 3 months ago
We use Stable Diffusion Automatic1111 and the inpaint model to outpaint the surroundings of a generated image and extend it this way. Learn about the installation, the parameters and the best approach, as well as possible pitfalls. This solution works locally with AMD and NVIDIA GPUs. Mentioned Videos: ControlNet: ua-cam.com/video/NqTBV_vR-iM/v-deo.html Dreambooth: ua-cam.com/video/_tYcL9ePkU0/v-...
How to DREAMBOOTH your Face in Stable Diffusion. Detailed Tutorial. Best Results.
Views: 6K · 4 months ago
We use Stable Diffusion Automatic1111 and the DreamBooth extension to fine-tune a custom model using a set of photos. Learn about the installation, the parameters and the best approach as well as possible pitfalls. This solution works locally with AMD and NVIDIA GPUs. The DreamBooth extension for Automatic1111: github.com/d8ahazard/sd_dreambooth_extension 1500 class images for 'person': github....
Queue your tasks in Automatic1111 and let your PC do the work.
Views: 655 · 6 months ago
Tired of waiting for Automatic1111 to finish generation? You want to queue several prompts with different parameters, checkpoints, ControlNet processing? Reuse and edit queued items? Learn how to do this with the SD WebUI Agent Scheduler extension. Git Repository of SD WebUI Agent Scheduler: github.com/ArtVentureX/sd-webui-agent-scheduler Video about Upscaling: ua-cam.com/video/eV-ZQfIqFfQ/v-de...
How to use ControlNet with SDXL. Including perfect hands and compositions.
Views: 7K · 6 months ago
We use Stable Diffusion Automatic1111 to repair and generate perfect hands. Learn about ControlNet SDXL Openpose, Canny, Depth and their use cases. This includes keeping compositions and using good hands as templates. Advertisement / sponsor note: Try FaceMod AI Face Swap Online: bit.ly/4aha0fm ControlNet: github.com/Mikubill/sd-webui-controlnet SDXL Models: huggingfac...
ComfyUI with ZLUDA on Windows for AMD GPUs (Tutorial).
Views: 17K · 7 months ago
AMD ROCm under WINDOWS Status Update. ZLUDA with SD.next as the best alternative (Tutorial).
Views: 18K · 8 months ago
How to add Online Access to a GPT. For AMD and NVIDIA GPUs. With CrewAI & Ollama.
Views: 738 · 9 months ago
How to use ANIMATEDIFF in Stable Diffusion with CONTROLNET. BUG FIX! Control-Video and Custom Models
Views: 6K · 9 months ago
GPT4All 5x FASTER. Runs LLAMA 3 and supports AMD, NVIDIA, Intel ARC GPUs.
Views: 8K · 10 months ago
How to FACE-SWAP with Stable Diffusion and ControlNet. Simple and flexible.
Views: 44K · 11 months ago
How to UPSCALE with Stable Diffusion. The BEST approaches.
Views: 40K · 11 months ago
AMD ROCm on WINDOWS for STABLE DIFFUSION released SOON? 7x faster STABLE DIFFUSION on AMD/WINDOWS.
Views: 19K · 1 year ago
How to create a bootable USB flash drive with Ubuntu Linux for GPT/UEFI. The NEW, easier way.
Views: 8K · 1 year ago
How to use Stable Diffusion XL locally with AMD ROCm. With AUTOMATIC1111 WebUI and ComfyUI on Linux.
Views: 28K · 1 year ago
I keep getting errors for the Flux: mat1 and mat2 shapes cannot be multiplied
Download my workflows, update your ComfyUI.
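For anyone unsure what "update your ComfyUI" means in practice, a rough sketch for a git-based install (the portable Windows build instead ships an update\update_comfyui.bat):
  cd ComfyUI && git pull
  cd custom_nodes/ComfyUI-GGUF && git pull
The mat1/mat2 shape error usually indicates mismatched model components, so also re-check that the UNET, CLIP and VAE files match the ones the workflow expects.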
Hello, you have a nice channel. I had a quick question: how can I use TensorFlow on my AMD RX 6800 XT?
Great vid! Are you planning to make one with the new Flux tools as well?
Thanks a lot! I've already updated the description for this video and added a new workflow for Flux Fill. Regarding Depth and Canny I'm not sure as we already have several good solutions, including Union Pro for Flux, which I've covered in the Flux ControlNet video. I'm very keen on the new Redux model, but it doesn't seem to work the way I have hoped. Anyhow, that's currently the best candidate for a video about the Flux tools.
Text eludes me with inpainting in Flux.
What do you mean?
@@NextTechandAI I mean, I have spent over 2 days trying to get it to work and it will not. I have gone through various YT creator workflows and forget it. Ironically, I actually had XL almost do it, while the flux one next to it (from a creator) could not. Dev. I even tried your workflow to no avail.
@@generalawareness101 I still don't know what exactly didn't work for you, but in general you have to give the text enough space. Similar to finger inpainting, the new area to be inpainted needs to be large enough to actually accommodate 5 fingers.
@@NextTechandAI I gave it 1/4, 1/2, 3/4 of the images. I tried everything.
I have an AMD RX 6600, can I use it?
Sure, it just depends on the model and resolution you want to use. For low VRAM you can check my videos about GGUF for Flux and SD3.5. Both work best on AMD with Linux ROCm or Windows ZLUDA, see my related videos.
Best outpainting on YouTube. Thanks!
Thank you very much for the motivating feedback. I'm glad you found the video useful.
For me, the ControlNet extension is not enabled in the Extensions tab, and if I enable it and press Apply and restart UI, it auto-disables again. I get the errors "Error running postprocess_batch_list" and "Error running postprocess", and a warning "No motion module detected, falling back to the original forward". I installed both extensions from the URL and put both models in the appropriate directories. I guess because of that I only have ControlNet Integrated on my generation tab. I did the fix from the end section of the video and now it says on startup: TypeError: HEAD is a detached symbolic reference as it points to '10bd9b25f62deab9acb256301bbf3363c42645e7'
Does anyone think of updating these tutorials? Does this still work in 2024? Have the files changed? Have the names changed? I'm betting yes.
Hello Patrick! Have you tried the Amuse generator for AMD? If so, what can you say about it?
Hello Kostya! Although I haven't used it yet, I think Amuse is a reasonable option to try out image generation. However, I think AMD should have put the resources into ROCm for Windows; ComfyUI, Forge and A1111 are proven open source tools that offer significantly more options. In my opinion, we don't need another proprietary tool that is also based on ONNX.
Hello sir, I run the directml bat and it says: "ImportError: DLL load failed while importing torch_directml_native". I'm having a problem, can you help me solve it? I have an RX 6800 graphics card and 32 GB of system memory on Windows.
Please post the contents of your directml bat. I assume you are calling the wrong bat or it is missing something.
@@NextTechandAI I reached the interface without any problems by doing the setup again. I checked the unet, clip and ae files and put them in their places. When I added the prompt to the queue and ran it, now I get the error "[F1119 23:44:35.000000000 dml_util.cc:118] Invalid or unsupported data type Float8_e4m3fn."
@@ancientlord5697 As I said in the video, Flux is currently not supported by directML.
@@NextTechandAI Is there something I did wrong? Doesn't it work with DirectML in the video?
@@ancientlord5697 No, I said in the video it doesn't work with Flux. I used Zluda after generating the example with SD15. You can use SD15, SD35 and SDXL.
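For readers hitting the same DLL error: a DirectML launch for ComfyUI needs the torch-directml package and the matching flag. A minimal sketch, run inside ComfyUI's Python environment:
  pip install torch-directml
  python main.py --directml
As the replies above note, this path covers SD15, SD35 and SDXL, but not Flux.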
Not working, the video source stays black and empty (0:00) whichever extension I use for the video source.
Can we get an updated version of this tutorial please? I'm struggling to make this work. I follow it to a T and it still tells me I don't have an NVIDIA GPU installed.
Restarting the PC helped for me, because the ZLUDA DLL in the environment variable didn't get recognized instantly.
Thanks a lot.
I'm glad you liked the video, thanks for the feedback.
So, so USEFUL and essential! Thank you very much :)
Thank you for your feedback, I'm glad the video was useful for you 😀
I’m really eager to try out this interesting workflow! Where can I find it?
I'm glad the video is useful. Which workflow do you mean?
That's what I'm looking for! With the ability to export and import, I can get AI's help writing the files and make different queues. Thank you!
Thank you very much, I'm glad the tutorial is useful!
Great tutorial, thanks. How can we use inpainting with two LoRAs for different characters?
Thanks a lot. First inpaint the left character, in a second step inpaint the right one.
I'm a ROCm guy, so, no Windows at all....
DEV has its own vae
What do you mean? There is one VAE for Flux, but some checkpoints have it included directly.
@NextTechandAI dev has a special vae that can be downloaded on huggingface, maybe that is why the images turned out that poorly
@@as-ng5ln No, there is one VAE for Flux. This has absolutely nothing to do with the fact that Schnell follows prompts better than DEV. Try it yourself and generate the same image with both VAE files. By the way, you can try this with SD3.5 Large and Turbo, too.
@@NextTechandAI I'm telling you... I have the two files "ae.safetensors" and "flux1DevVAE_safetensors.safetensors". ae comes from schnell, while the other one is from the dev directory
@as-ng5ln Yes, and they have the same effect on Flux image generation. As I said, try yourself.
This doesn't work, certainly when using 23.10-ubuntu-studio-Mantic_Minotour. The problem I have experienced repeatedly is that EFI/GRUB does not install on the selected USB, regardless of the correct partition being chosen via the installation interface. It might work if you're in a position to remove any other hard drives connected to your system. However, I didn't want to dismantle my laptop. I really couldn't seem to find a way around it.
I followed my own video tutorial two days ago in order to install Ubuntu 24.04.01 on a USB drive. There haven't been any problems and it's working as expected. Are you choosing the installation target for the boot manager as described in the video?
Hello, I got a problem like this: "the size of tensor a (1536) must match the size of tensor b (2304) at non-singleton dimension 2". I tried everything but still get this.
Hi, which of my workflows and which model files do you use?
@NextTechandAI I use the SD 3.5 Medium FP16 GGUF model and I manually created the same workflow as in this video, but I'm still getting the error. I talked to ChatGPT and updated torch, transformers and diffusers, and also tried setting torch float16 in my Python; that didn't work as well.
@@PenkWRK Could you please use the suggested models and my workflows? You can download them from my Patreon for free. In case this doesn't help, try the original SD3.5 Medium model without GGUF.
I have followed the tutorial to the letter and retried multiple times; however, SD.Next is still using the CPU. I am not seeing any errors; however, I do not get the "Torch Allowed... etc" line when starting webui.bat. Also, when I run a generation, I see "Torch generator: device=cpu". Finally, I see "No ROCm runtime is found, using ROCM_HOME='C:\Program Files\AMD\ROCm\6.1'". What am I doing wrong? I have an AMD 6750 XT and I am bad at this.
Thanks
I'm glad you liked the vid.
Since when is 12 GB "low VRAM"? 😅 I always considered 4-6 GB low, 8-12 GB medium and 16+ GB high VRAM.
VRAM refers to the GPU's memory, not the file size 😉 Some have already gotten FLUX with GGUF to work with 4-6 GB VRAM; I expect the same for SD3.5. With 8 GB or less, GGUF definitely makes sense; I also use it with 16 GB.
It's a good amount, but not for AI generation; I keep crashing the card when running SDXL models on an RX 6600 lol. 12 GB is definitely on the edge of low VRAM for this type of stuff.
Hey, how do I get all the options you have in the Source checkpoint dropdown? I only got one.
These are checkpoints that I have downloaded over time and others that I have generated with Dreambooth.
I wonder how to do this on WSL2 with an RX 7700 XT? Or will the only path be Windows directly?
By now WSL2 should be possible with an RX 7x00, my RX 6800 is still not supported.
what amd gpu are you using?
An RX 6800.
I tried the method, all good. But at the start, right after the class images, I get this error: Exception training model: 'Using `low_cpu_mem_usage=True` or a `device_map` requires Accelerate: `pip install 'accelerate>=0.26.0'`'
I'm going to change my AMD card for an NVIDIA card soon. AMD is doing a terrible job at this; I don't want to use Linux to run Stable Diffusion, for example. Why can NVIDIA users use it in a simple way while AMD users have to rely on some hard ways to get it? Even though I did install Stable Diffusion on Windows with my AMD card, and it worked fine, I don't remember how I did it and now I'm tired of trying.
I can understand that very well, I am very disappointed with AMD's support of AI, too.
"If you haven't installed conda, check this video..." What video???
It's the one mentioned in the description with "AMD ROCm on Windows Status Details and GIT & MiniConda-Installation".
@@NextTechandAI Thanks!!
Thanks, but it doesn't help if just the 3 text encoder files are almost 15 GB in total... My ComfyUI crashes my PC (99% RAM use) while loading the 3 CLIPs before even attempting to load the GGUF.
For very low VRAM I've suggested in the video the FP8 T5, which is below 5 GB. G and L together are about 1.5 GB. You can even use the GGUF T5 encoders linked at the bottom of city96's GitHub with down to 2 GB, but they have a bigger impact on quality than the quantized UNET models. Hence I'd try the FP8 T5 first. Use the runtime parameters --use-split-cross-attention and --lowvram or even --novram.
@NextTechandAI Thanks, I will have to try. Where do you use the parameters you mentioned? Do I use them when launching ComfyUI?
@@93simongh Yes, in the batch file directly after 'main.py'.
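To make that concrete: in the portable Windows build, the launch line in the .bat file would look roughly like this (a sketch; the path assumes the default portable layout):
  .\python_embeded\python.exe -s ComfyUI\main.py --use-split-cross-attention --lowvram
On a plain git install it is simply: python main.py --use-split-cross-attention --lowvram (swap --lowvram for --novram if memory still runs out).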
I've been a no plotter for way too long. This is a much appreciated tutorial, thanks.
Thanks for your feedback. I'm happy that the plot community has another member.
Are you German?
I'm sure it's not too difficult to recognize my accent😉
Does ComfyUI work well with a 7900 XTX?
If you can manage the installation with ZLUDA, then yes, extremely well.
@@NextTechandAI Everything works fine?
@@louisbeauger There are a few restrictions, e.g. components using the bitsandbytes extension do not work, like NF4, but I like GGUF much more anyhow.
@@NextTechandAI OK, thanks!
What about forge?
From what I have seen, there should be a release soon. Forge with SD3.5 Medium seems to be broken; they probably want to fix this first.
I tried SD 3.5, and Flux wins, at least on a low-VRAM 3060 12 GB.
Regarding quality or speed? Are you using both with GGUF?
I Like It
Thank you!
The GGUF loader does not work. I always get this error message. What do I have to do to load the GGUF files?? (`newbyteorder` was removed from the ndarray class in NumPy 2.0. Use `arr.view(arr.dtype.newbyteorder(order))` instead. Thanks ## Stack Trace)
Have you updated both the GGUF extension as well as your Comfy? Which GPU are you using?
@NextTechandAI I have updated everything and have a 12 GB RTX3060
@wolfgangterner7277 There is an issue in the GGUF GitHub which suggests several solutions: github.com/city96/ComfyUI-GGUF/issues/7 I think downgrading numpy, as suggested at the bottom of this issue, is the easiest solution.
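For anyone with the same `newbyteorder` crash, the suggested downgrade is a one-liner, run in the Python environment ComfyUI uses:
  pip install "numpy<2"
or, for the portable build (assuming the default folder layout):
  .\python_embeded\python.exe -m pip install "numpy<2"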
I still can't get flux to run on my Radeon 5500 XT. I love your channel BTW. Thank you for what you do.
Thank you very much!
Will you try the GGUF SD3.5 models or are you sticking with FLUX? UPDATE: City96 released his own versions of SD3.5 Medium GGUF models: huggingface.co/city96/stable-diffusion-3.5-medium-gguf
Neat stuff, but what's the most optimized model for an NVIDIA 3060 w/ 12 GB?
Thank you. If you are using the models shown in my Flux GGUF video, I'd suggest Q8_0, Q5_K_S or Q4_K_S - the biggest one that fits.
Thank you for the video, very useful.
Thanks a lot for your feedback.
Thank you very much. I was thinking about comparing different inpainting techniques; your video is just what I need. What do you think about cropping the inpainting part, then upscaling it separately, then inpainting and stitching it back? There is a Crop&Stitch node for that, or we can do it manually, but I'm not sure if those could work with your ControlNet workflow.
Thanks a lot for your feedback. Interesting, I didn't know these two nodes. Looks like by using them we can get something similar to 'masked only' in A1111. I don't think you need ControlNet for this. Not sure regarding upscaling, but usually it's a good idea to do at least a 2x upscale after inpainting to blur the contours.
Great video! Could you share the Flux workflow? I think only the SDXL one is in the description.
Thanks a lot! As mentioned in the video you can find the workflows on my patreon (for free). The link is in the description 😉
Hey mate, any chance we get German versions of the videos? :)
I published the first video of this channel on a German channel at the same time, it has around 600 views. The English version has almost 30k views. I'm afraid there is only a very small target group for such videos in German.
Super informative! However, after getting everything set up and hitting the Train button, I get the following error: AttributeError: module 'transformers.integrations' has no attribute 'deepspeed'. There seems to be little to no info on this error. Can't believe I am the first person to run into this. Any guidance?
Thanks! Are you using a different version? I cannot remember a deepspeed option/attribute, can you possibly deactivate it?
@@NextTechandAI I commented out these two lines referenced in the AttributeError and it seems to be working now:
#if transformers.integrations.deepspeed.is_deepspeed_zero3_enabled():
#import deepspeed
Location: stable-diffusion-webui\venv\Lib\site-packages\diffusers
From what I can tell, DeepSpeed is supposed to help accelerate the training. I may try installing it later, but I'll work on getting my training tuned in first. My speed is adequate for now.
Dreambooth extension version 1b3257b4 (2024-08-04), Automatic1111 v1.10.1, Python 3.10.11, Torch 2.1.2+cu121, xformers 0.0.23.post1, gradio 3.41.2
@vadar007 Thank you for the feedback, I'm glad it's working for you now.
how did you solve this?
@@AboodHani-t9r Sort the comments by Newest First and you'll see my reply in this thread that tells you how to fix it.
your accent is amazing 🤩
I'm happy you enjoyed the video😀
King shit 🔥 Question: Is using CLIPTextEncodeFlux the same as inputting a normal CLIPTextEncode into a FluxGuidance node (I don't really understand why there are two inputs in the CLIPTextEncodeFlux version when you only enter in the second field)? Also, do you have to insert a ConditioningZeroOut between the empty text prompt and the negative input (or can you just use one of them? Either one?)?
Thank you. In my tests there was always a slight difference; the CLIPTextEncodeFlux seems to be better suited for T5. See my video about Flux prompting (ua-cam.com/video/OSGavfgb5IA/v-deo.html) regarding the two input fields. Frankly speaking, I haven't seen ConditioningZeroOut very often and it shouldn't have much influence, but from my point of view it looks more correct, as Flux does not use negative prompts.