This is the first time anyone has mentioned this. My guess would immediately be your python version perhaps. Or the install did not finish or had a problem.
I get this error: Torch not compiled with CUDA enabled. I think I'm just gonna buy an Nvidia card. I've been searching for solutions for 2 hours and couldn't find anything.. lmao
Everything works great except I can not get Reactor Face swap working. Every time I load a face_restore_model and run it I get this error message "RuntimeError: Input type (torch.FloatTensor) and weight type (PrivateUse1FloatType) should be the same or input should be a MKLDNN tensor and weight is a dense tensor"
Half the price for a better GPU in many ways (vram specifically). 4090s were melting power connectors when I bought it. And 3090s were 3 years old and the same price as a top of the line amd card. I wanted to run AI stuff so vram was pretty critical. And dropping off of the 3090 or 4090 meant big drops in vram on Nvidia cards unless you got an A series card.
@@FE-Engineer I see, thank you. 32GB of RAM, W10, 7900 XTX Nitro+, I've been experimenting with many CivitAI models. This stuff is painful. Edit: Oh yeah, I do generate 4 images at a time, 512x512 each.
I have this running on Nobara Linux and it works fine. I got this set up in Windows with this tutorial and everything installs fine, but when I try to generate an image I get a "Not enough VRAM/Memory" error. I am not sure what I am doing wrong. I am running a 6700 XT and a 5700 XT. I am not sure if it is referencing the wrong GPU or what. Anyway, this was a good tutorial and easy to follow.
Try using the low-VRAM flag (ComfyUI takes --lowvram, e.g. python main.py --directml --lowvram). See if that helps. Although if you are running dual video cards you might run into weird issues. I have never tried with dual video cards and unfortunately in that type of instance I don't know how much help I can honestly provide :-/
@@FE-Engineer Hi, I'm totally new to this thing and I ran into this low vram thing too. Everything installs fine, the UI is on, but it says I have only 1GB and I can't render even a tiny (128x128) image. I have an RX570 with 4GB of VRAM and 32GB of RAM on the MB, that should be enough. Where and how should I use that flag? Thanks.
Hi, I do not know how you can install ComfyUI in another path, but you can redirect the model folders used by ComfyUI by editing the "extra_model_paths.yaml.example" in the main directory.
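For anyone looking for that file: the usual approach is to rename extra_model_paths.yaml.example to extra_model_paths.yaml and point the entries at your own folders. A minimal sketch of what such a section can look like (the section and key names follow the example file that ships with ComfyUI, but treat them and the paths as assumptions and check your own copy of the file):

```yaml
comfyui:
    base_path: D:/AI/models/          # hypothetical shared model location
    checkpoints: checkpoints/
    vae: vae/
    loras: loras/
```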
I just cannot get this running on my 7800XT "Error occurred when executing KSampler: The GPU device instance has been suspended. Use GetDeviceRemovedReason to determine the appropriate action." And then a bunch of script errors. Any ideas?
Yikes. I apologize. I have not seen that error come up. Were you maxing out your VRAM? Try running it with something up to monitor your VRAM. My guess is you might have tried to generate an image that was too big for it to handle. This is a big guess though. Monitor system resources, redo it, and see how it looks. Might give you some clues. Also, go barebones stock: stock prompt, stock model, stock pipeline setup, stock image dimensions. Make sure you can run it with everything being as controlled as possible, to see if you are changing something somewhere that is causing these weird errors. That's my suggestion.
Thank you for the perfect video. If you are doing as much experimenting as I do, I suggest you store all models in a special folder and use hardlinks to put them into the models folder. It saved a lot of disk space for me.
I have been meaning to do this. I already set it up so all of my images go to a shared location, regardless of whether they are generated in Linux or Windows etc. I really should do this and connect all of my models to a central location to avoid duplicates and wasted space. Thanks for reminding me that I need to do this! :) Also thank you so much for watching and the kind words!
Awesome video, thanks. Instant subscribe :). I tried to drop in a SDXL model and got an error ('aten::frac.out' is not currently supported). Non XL checkpoints work fine. Since I'm new to this I wonder what ROCm is and if it is needed or will be helpful in the future.
You are now the only channel I watch when it comes to stable diffusion tutorials and similar. I can finally generate awesome pictures of frogs in suits on my amd gpu.
😂😂 yessss. I love it. I am hoping to get some stuff in my website before too long for people to either generate or post SD images. And I would absolutely love to see frogs in suits that you generated.
Thank you so much for watching!
I second this.. many suited frogs are being made. Been trying to get this to work for ages and this by far seems to be the easiest set up. Thanks!@@FE-Engineer
You are very welcome! I’m really glad you got it working and are having fun with it!
Still working on the website and another site. But will be getting the ability for folks to send up photos or images from AI sometime in the not too distant future!
Exactly what i was going to post!
Thank you, I have followed about 20 different tutorials and none of them worked. Your video was very easy to follow and worked perfectly on my 7900xtx. Again, thank you. You have put to bed hours of turmoil from me trying to get this to work. Excellent video.
You are very welcome! I’m glad it worked and fixed your problems! Thank you for watching!
Thank you. The fact that I don't have to convert the models is a bonus. I know it's slower in image generation but less headaches for sure. Great job.
You are welcome. Thank you so much for watching! I am waiting for ROCm on windows and then everyone can basically do everything with all of the different tools out there without really any compromises. Good speed and good support. One of these days….
Hey man, I want to thank you from the bottom of my heart, I was having trouble with this waiting time, easy and clear tutorial to follow, I complete the process now in 10 seconds, it was around 1m30, thank you for this Christmas gift ! Merry Christmas
Merry Christmas and happy holidays to you and your family as well. Thank you so much I am glad it helped! :)
bro, thank you so much. i saw many tutorials, but this is the best
Thank you! FE-Engineer, I can run it now with my RX580 on win10,Good for you and Happy new year!
I’m glad it helped! Thank you for watching and the kind words! Happy new years to you and your family as well!
insanely straightforward, straight to the point and no questions left remaining. Thank you very much for this tutorial
You are very welcome thank you for watching!
Haven't seen any updates on comfyui github page about this ability of running comfy on windows. 😮 But happy to know this finally works. Amd is also cheaper ,and amd holders no need to switch for nvi
It worked for me and was mostly straight forward :)
Wow that really helped! Not as fast as yours since I had an error that shifted some of the work back to the CPU. But 3 mins is much better than 50! The error was
The operator 'aten::count_nonzero.dim_IntList' is not currently supported on the DML backend and will fall back to run on the CPU. This may have performance implications.
And DirectML is saying I've only got 1GB VRAM, but I'm also using a 7900XTX
@@ajphilippineexpat Same error
@@ajphilippineexpat same for a friend of mine as well. Anyone knows how to fix this?
After hours of various tutorials this was the only one that worked for me. Thanks!
You are welcome! Thanks for watching!
Thank you for your video FE-Engineer! It works so perfectly!
You are very welcome! Thank you for watching!
Thanks a lot! I've been eagerly awaiting this video. I wish you happy holidays!
You are very welcome! Sorry it took a while I’ve been debating about what and how much to include and do for comfyui. And I wanted to make sure it could work on windows without ROCm as a requirement.
Lost me at 2:23 - ModuleNotFoundError: No module named 'safetensors'
same error
solved it with '' pip install -r requirements.txt' command
@@Remzicaliskan Thanks for this, was stuck here, too, and this fixed it.
How to fix comfyui detecting only 1gb vram
Same problem here.
Oh sweet baby Jesus, that is so much faster than running off my processor. Thank you so much.
Thanks for this video.. now I can run comfyui on my amd gpu, your video is the easiest tutorial to follow 👍👍👍
Does someone know where I can tell Comfy to use more than 1024? I have 12gb
Same here, but I have 16gb.
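For context on the 1024 MB reading: torch-directml has no API to query VRAM, so at the time ComfyUI fell back to a hard-coded 1 GB total for DirectML devices. A simplified pure-Python sketch of that fallback (the real logic lives in comfy/model_management.py and may differ; the function names here are illustrative):

```python
ONE_GB = 1024 * 1024 * 1024

def get_total_memory(device_type, query_vram=None):
    # torch-directml exposes no memory-stats API, so there is nothing to
    # query for DirectML devices; ComfyUI just assumes 1 GB. This is why
    # the UI reports 1024 MB regardless of the actual card.
    if device_type == "directml":
        return ONE_GB  # hard-coded placeholder, not the real VRAM
    return query_vram()  # e.g. torch.cuda.mem_get_info() on CUDA builds

print(get_total_memory("directml") // (1024 * 1024), "MB")  # → 1024 MB
```

So the 1 GB figure is a reporting limitation, not your card; the --lowvram / --novram flags change how much ComfyUI tries to keep loaded rather than fixing the reported number.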
git doesn't come standard with Anaconda, so you have to remember to install it: conda install -c anaconda git
How to install the ComfyUI Manager?
rx6650xt (in a SDXL workflow) ~22s/iteration with dreamshaperXL turbo v2, ~12s/iteration with SDXLRefiner v1.0_0.9vae and ~ 11s/iteration with SDXLRefiner v1.0
Lost me at "add directory to path"
same here, I still tried it and now im stuck at "The system cannot find the path specified." when i tried "cd comfyui"
@@spacetart Add .exe at the end of the name "miniconda" in the directory path during installation. Should solve it.
@@ed1k37 I managed to backtrack it, it was in a different folder, but now I'm stuck at something else: when I run "python main.py --directml" it shows me "[Errno 2] No such file or directory"
amazing video! thank you for this straightforward guide :)
You are welcome! Thank you for watching and for the kind words. :)
Does this run quicker than stable-diffusion-webui-directml on Windows using a 7900 XTX with zluda? I'm trying to avoid using shark because it takes up too much space and takes too long.
No it does not.
Auto1111 with zluda or Sd.next with zluda will be the fastest, most likely.
@@FE-Engineer the auto1111 works for me, but sd.next gives me some issues
THX - Cause of you (and some tweek from my side) I can Render with ComfyUI, using my RX580 Radeon Card 👍
(ADD - on a 1920x1080 Picture i get 34s/it and on a 512x512 around 2s/it)
Well deserved sub and like. I tried a while back for days on end with no luck, and this straight up worked.
also can run from bat file if conda is on path, i followed this tutorial:
ua-cam.com/video/zFKD2Q9m_nQ/v-deo.html
Yea. For some reason my conda was added to path but was being finicky. I ended up more recently largely ditching conda because for videos I end up installing stuff a lot and conda overall was becoming a bit more hurtful than helpful overall in my specific scenario. :-/
Thank you so much! :) I’m glad this helped you!
dude! thank you _so_ much for this. seriously.
Any knows fixes for
RuntimeError: Device type privateuseone is not supported for torch.Generator() api.
?
I have never seen that. What were you doing when you got that error?
Same issue, when it hits ksampler
Edit line 31:
C:\Users\[USER]\miniconda3\envs\comfyui\Lib\site-packages\torchsde\_brownian\brownian_interval.py
From:
generator = torch.Generator(device).manual_seed(int(seed))
To:
generator = torch.Generator().manual_seed(int(seed))
Restart comfyui from scratch
@@double.parker same here Error occurred when executing KSampler:
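The edit above works because torch.Generator() only accepts device types it knows about, and DirectML tensors report the type "privateuseone". A pure-Python stand-in for the workaround's logic (with real torch you would seed a CPU generator exactly as in the edited line; the set of supported device types below is an assumption for illustration):

```python
def generator_device(requested_device_type):
    # torch.Generator() understands cpu/cuda-style devices but raises
    # "Device type privateuseone is not supported" for DirectML tensors,
    # so the workaround pins the RNG to the CPU instead.
    supported = {"cpu", "cuda"}  # assumption: types Generator accepts
    return requested_device_type if requested_device_type in supported else "cpu"

print(generator_device("privateuseone"))  # → cpu
```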
Works very nice, thanks for your help
works perfectly, thanks!
Hey FE-E ! Once again nice tutorial ! Could you please consider doing a tutorial for Comfyui with Zluda ? Thank you and keep up with the good work !
error import safetensors.torch
ModuleNotFoundError: No module named 'safetensors'
thank you very much!!! up and working, great tut!
Glad it worked! Heh recently someone had a comment saying this tutorial does not work anymore? So interesting that it did work without issue for you. Glad you got it up and running! :) thank you for watching!
works flawlessly! thanks man!
Glad it helped! Thank you for watching!
They seem to have updated ComfyUI and this is no longer working :/
Works like a charm... I just did it myself and I'm rendering pictures while I write this.
is this possible without adding miniconda to path? i dont wanna screw my computer up lol
UPDATE! I did get it running w/o having to add to path...had to downgrade numpy tho...
Hello,
do you remember which version you downgraded to?
@@Sereath I don't remember exactly; I honestly got it working, played around with it a bit, and dropped it, but I wanna say 7.x or 2.x, whichever sounds more relevant...
@@willismiller7035 Thank you.
After pip install torch-directml i got ERROR:
(base) C:\Users\user\ComfyUI>pip install torch-directml
Defaulting to user installation because normal site-packages is not writeable
ERROR: Could not find a version that satisfies the requirement torch-directml (from versions: none)
ERROR: No matching distribution found for torch-directml
Anyone know how to fix this? Thanks!))
Sounds like a python version issue, or potentially a permissions issue.
I would make sure you have things setup appropriately and using the correct python version.
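A quick way to check whether "No matching distribution found" is a Python-version problem: torch-directml only publishes wheels for a limited range of interpreter versions. The 3.8 to 3.10 range below matches what was current around this video, but treat it as an assumption and verify against the torch-directml page on PyPI:

```python
import sys

# Assumed wheel-supported (major, minor) versions; check PyPI for the
# current list before relying on this.
SUPPORTED_MINORS = {(3, 8), (3, 9), (3, 10)}

def directml_wheel_available(version_info=sys.version_info):
    return (version_info[0], version_info[1]) in SUPPORTED_MINORS

if __name__ == "__main__":
    if directml_wheel_available():
        print("interpreter version looks OK for torch-directml")
    else:
        print("recreate the env, e.g.: conda create -n comfyui python=3.10")
```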
Really really helpful!!! 5600 works. Thank you.
Thank you dude, it works!
Hooray! Thanks for watching! :)
another great video, I was very excited for this video >❤❤❤❤
Thank you so much! I am glad this helped. ComfyUI is awesome but definitely has a learning curve. So I tried to keep it really straightforward, as there are a ton of creators going in depth on ComfyUI.
Excellent... thanks so much
You are very welcome! I’m glad you liked it and found this helpful! :)
thank you very much
Fun fact, this also works for the portable one, so it's easy to install custom nodes.
What a champ, you're the best! I love you bro
No module named: 'torch_directml'
You are probably not using the right python version.
Something in your setup is different…
I seem to be getting [error executing CheckpointLoaderSimple]: "Torch not compiled with CUDA". I followed all the steps and made sure I'm using (python main.py --directml). Is downloading the latest miniconda an issue, even though I typed python 3.10.12? And I made sure I clicked on PATH when installing miniconda.
*tried it with miniconda 10 and still the same error
Code changed. I’m looking into it. If you roll back the git code it will work.
@@FE-Engineer ah, that explains my problems.
I actually made a start.bat inside the comfyUI folder with the following code:
@echo off
call conda activate comfyUI
call python main.py --directml
The issue is that if you don't have a prompt open and try to run it without the double call, it does not work. The second call ensures that the python command is run inside the prompt that opens when you click the file. Hope this helps!
Is it normal for ComfyUI to allocate only 1GB of total VRAM? I tried using Flux dev from ComfyUI but always end up with a "gpu ran out of memory" error.
I'm getting this
Warning torch.load doesn't support weights_only on this pytorch version, loading unsafely. (among other things) and then a big error in comfyUI when it hits the ksampler. Maybe I need a newer version of python or something from the video? (I have no idea about python, I just followed all the instruction :D )
I have a similar AMD card and did not want to run Linux, thank you for your videos!
You are very welcome! I’m glad it helped!
This explanation is so easy - thank you amazingly!!!!! Will try to generate something interesting, I fully support and love you ❤❤🤖🤖👾👾📎📎
You are very welcome! Thank you so much for watching and your support!
Can you make a video to Stable Video diffusion with ComfyUI on AMD GPU?
Always get this:
Error occurred when executing KSampler:
input must be 4-dimensional
Thank you! Great Video
According to /pytorch/issues/78341, conda create --name comfyui python=3.11 plus pip install numpy==1.26.4 worked on my PC. Thanks for your work.
When I try to clone the repo, it says git isn't a recognized command, even though I installed it.
'' ModuleNotFoundError: No module named 'safetensors' '' error after entering 'python main.py --directml'
De-Install Torch and Re-Install it again that will fix it... I had same Problem ✌(Un-Install = pip uninstall torch) Then use the Line in the Video to Re-Install
Thanks so much. ComfyUI somehow performs a little bit better than SD for AMD, at least in my setup.
Wow, how? Mine on Comfy is much worse, only ~2it/s, while in A1111 I got up to ~14it/s on my 7800xt
You are welcome! That’s crazy, for me it is considerably slower. But I’m glad it is working well for you!
Sadly not working for me - always gives out an Error including
___________________
UserWarning: The operator 'aten::count_nonzero.dim_IntList' is not currently supported on the DML backend and will fall back to run on the CPU. This may have performance implications. (Triggered internally at C:\__w\1\s\pytorch-directml-plugin\torch_directml\csrc\dml\dml_cpu_fallback.cpp:17.)
___________________
Do you maybe have a hint for me?
Strange path... Uninstall Torch and properly install it into Python
Help please, when I run "python main.py --directml" its showing me "[Errno 2] No such file or directory"
hey thank for the video, but i get an error saying AssertionError: Torch not compiled with CUDA enabled. Know how to fix it?
Sounds like you aren’t using directml torch to me?
Hey try this: python main.py --directml
Otherwise I had the same error!
Comfy tried to call torch.cuda.current_device(), and it could not of course:
[...]
File "G:\AI\ComfyUI\comfy\model_management.py", line 83, in get_torch_device
return torch.device(torch.cuda.current_device())
File "G:\AI\VirtualEnvs\win_comfy\lib\site-packages\torch\cuda\__init__.py", line 674, in current_device
_lazy_init()
File "G:\AI\VirtualEnvs\win_comfy\lib\site-packages\torch\cuda\__init__.py", line 239, in _lazy_init
raise AssertionError("Torch not compiled with CUDA enabled")
AssertionError: Torch not compiled with CUDA enabled
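In other words, the traceback above comes from ComfyUI asking torch for a CUDA device whenever --directml is not passed. A simplified sketch of that selection logic (the real version lives in comfy/model_management.py; the device strings and signature here are illustrative):

```python
def get_torch_device(use_directml, cuda_available):
    # With --directml, ComfyUI creates a DirectML device via
    # torch_directml.device() (device type "privateuseone").
    if use_directml:
        return "privateuseone:0"
    # Without it, it calls torch.cuda.current_device(), which raises on a
    # CPU-only torch build -- exactly the AssertionError in the traceback.
    if not cuda_available:
        raise AssertionError("Torch not compiled with CUDA enabled")
    return "cuda:0"

print(get_torch_device(use_directml=True, cuda_available=False))  # → privateuseone:0
```

So "Torch not compiled with CUDA enabled" on an AMD card almost always means the --directml flag was dropped or torch-directml isn't installed in the active environment.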
A video on getting koyha ss to play nice with amd would be super useful. Can install it fine on Linux but it either wants to only use CPU or throws a ton of errors.
Great video! Happy Xmas!
Thank you so much. Merry Christmas and happy holidays to you and your family as well!
Spend time with your loved ones!
I don't know what I am doing wrong. I have tried 4 times; the last time I even fresh installed Windows first. These tutorials never work for me and I can't figure out what I am doing wrong.
hello, i have this problem : CondaError: Run 'conda init' before 'conda activate'
thank you from french!!!
You are very welcome! Merry Christmas and happy holidays to you and your family!
Is your newest video strictly better than this one or is this still fine? I'm new to running sd locally, trying to use a 6900xt and windows.
how do we get the manager working on amd
thanks for the video. hows comfyui compared to a1111 for amd gpus?
For me. On windows. Using directml and NOT using ROCm (like I use in Linux). It is about 40% of my normal it/s speed under similar circumstances. So it is a big decrease in performance.
But. It works with everything I think. It is not limited to ONNX, and it should support all the fun stuff like inpainting controlnet etc as best I can tell.
I have not played with those things on it in windows though so I will have to actually test to be sure.
If someone absolutely wants all functionality and refuses to go to Linux. This is probably the best bet right now.
@@FE-Engineer I don't care about speed on Windows; rather, I need stability. I always run into VRAM issues on Windows when using a fork of A1111. The only thing I want is a seamless experience, hence I dual boot Ubuntu.
Love the tutorial, but I got all the steps as you said but when executing "python main.py --directml", I have the error "module 'torch' has no attribute 'Tensor'"
Tried to search the issue but with no results ...
And what can be done to make comfyui work with stability matrix?
not sure what you did differently from me, I followed verbatim and tried to run Flux with this set up. I keep getting a gpu device has been suspended. I also have 7900 xtx, 32 gbs of ram, ryzen 5900x. I've looked at drivers. There must be a better way?
How do I turn your conda commands into a bat file so I don't have to type them every time I start ComfyUI??????
To make a bat file that automatically runs it, paste what I put below into a bat file. Works for me.
@echo off
call conda activate comfyui
python main.py --directml
pause
hi, I am encountering a problem.
the installation went well and I managed to launch comfyUI but once in front of the panels it is not possible to generate any image, because the panel which loads the checkpoint models does not work.
it indicates "ckpt_name null" and when I interact it does not open any pop up with the list like in the video but goes to "ckpt_name undefined" and it is no longer possible to interact with the model selection line, although I have two models in my models folder.
I don't understand what I did wrong. Thank you for answering me.
According to the instructions, Comfyui has been installed on drive C. You can check drive C and it will be there. The checkpoint must be saved on drive C.
(comfyui) C:\Users\mrodeon\ComfyUI>pip install torch-directml
ERROR: Could not find a version that satisfies the requirement torch-directml (from versions: none)
ERROR: No matching distribution found for torch-directml
Anyone know how to fix it?
Might be the version of Python you are using. But the code has been updated since that video. Check the video description.
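The Python-version guess above is easy to check yourself. A minimal sketch of the check follows; the supported range used here is an assumption based on torch-directml historically only publishing wheels for a narrow band of Python versions, so verify the current range on PyPI:

```python
import sys

# torch-directml wheels have historically targeted roughly Python 3.8-3.10
# (assumption -- check the torch-directml page on PyPI for the current range)
SUPPORTED_MIN = (3, 8)
SUPPORTED_MAX = (3, 10)

def directml_wheel_available(version=None):
    """Return True if this Python version falls in the assumed supported range."""
    major_minor = tuple((version or sys.version_info)[:2])
    return SUPPORTED_MIN <= major_minor <= SUPPORTED_MAX

if __name__ == "__main__":
    if directml_wheel_available():
        print("This Python version should have a torch-directml wheel")
    else:
        print("No wheel likely for this Python; try a conda env pinned to an older version")
```

If the check fails, creating a fresh conda env pinned to a supported interpreter (for example `conda create -n comfyui python=3.10`) and reinstalling usually resolves the "no matching distribution found" error.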
How does this work together with your AMD ZLUDA video from two months ago?
Can I just use ComfyUI with that setup as well, without redoing everything?
Do you know if ComfyUI has the same problem with inpainting? The Automatic1111's couldn't inpaint with directml and it was only solved by using the commands:
"--no-half --precision full --no-half-vae --opt-sub-quad-attention --opt-split-attention-v1".
ComfyUI doesn't have these exact commands, and, like the extension "sd-webui-inpaint-anything" without them, the Face Detailer and other inpaint segments from "ComfyUI-Impact-Pack" throw the error:
"The size of tensor a (0) must match the size of tensor b (256) at non-singleton dimension 1"
Honestly. I am not sure. I thought since it basically has a different overall architecture that it might not have the same fallbacks as using the directML fork of automatic1111.
Have you found a solution for ComfyUI inpainting? I just find the results really buggy.
@@HeinleinShinobu I found that Face Detailer from Impact Pack to be the best automated inpainting since it doesn't really search for CUDA (although if used with SAM, they must be loaded with CPU). While for manual masking, using ControlNet as auxiliary or using dedicated inpainting models was the only way to not get bad results
@@jameshenry347 I use SAM too, but for some reason, when I click the area and click detect, it doesn't do its thing. I look at the cmd prompt and it has lots of errors which I don't understand at all. Haven't tried Face Detailer yet.
@@HeinleinShinobu Well, when I used with Face Detailer, the SAM Loader node would always throw fatal errors if SAM was processed with the "auto" or "GPU" option and would only work with the "CPU" one, but never tried manually, so I don't know if the processing is the same way. I imagine it has to be CPU forced someway.
I got this: "Error occurred when executing KSampler"
Thank you for the tutorial. For some reason it won't let me cd like at 1:40; the error is "The system cannot find the path specified."
i cannot thank you enough
:):) happy to help! Thanks for watching!
Hi, ERROR: Could not install packages due to an OSError: Missing dependencies for SOCKS support. Is there a good solution to this problem?
This is the first time anyone has mentioned this. My guess would immediately be your python version perhaps. Or the install did not finish or had a problem.
@@FE-Engineer Mine is a laptop with integrated graphics; that's what's preventing it from running on the GPU.
I get this error: "Torch not compiled with CUDA enabled". I think I'm just gonna buy an Nvidia card. I've been searching for solutions for 2 hours and couldn't find anything... lmao
Yeah. It seems they updated the code. I really need to make new videos for these, as they keep changing the code around and rearranging things.
Are you having issues running img2vid? Any time I try, it fails during the KSampler node 🙃
thank you so much
Everything works great except I can not get Reactor Face swap working. Every time I load a face_restore_model and run it I get this error message "RuntimeError: Input type (torch.FloatTensor) and weight type (PrivateUse1FloatType) should be the same or input should be a MKLDNN tensor and weight is a dense tensor"
Might be a problem with torch and directml would be my guess.
Can we use zluda with comfy ui like in your automatic1111 zluda video?
Just curious, why did you choose an AMD card?
Half the price for a better GPU in many ways (VRAM specifically). 4090s were melting power connectors when I bought it. And 3090s were 3 years old and the same price as a top-of-the-line AMD card. I wanted to run AI stuff, so VRAM was pretty critical. And dropping off of the 3090 or 4090 meant big drops in VRAM on Nvidia cards unless you got an A-series card.
Oh you rock, anybody try Flux with AMD yet?
Sorry. I have been moving and sort of out of it. What is flux with AI?
Is it normal for it to take all your system RAM + the page file as well? On top of using all the VRAM from my 7900 XTX; 24GB of vram. 💀
From my testing no. That is not normal. But it will depend on how much you have and the gpu and the model and your specific settings…
@@FE-Engineer I see, thank you.
32GB of RAM, W10, 7900 XTX Nitro+. I've been experimenting with many CivitAI models. This stuff is painful.
Edit: Oh yeah, I do generate 4 images at a time, 512x512 each.
I have a 6650 XT, will this work?
I have this running on Nobara Linux and it works fine. I got this set up in Windows with this tutorial and everything installs fine, but when I try to generate an image I get a "Not enough VRAM/Memory" error. I am not sure what I am doing wrong. I am running a 6700 XT and a 5700 XT, and I am not sure if it is referencing the wrong GPU or what. Anyway, this was a good tutorial and easy to follow.
Try using the medvram flag. See if that helps. Although if you are running dual video cards you might run into weird issues. I have never tried with dual video cards and unfortunately in that type of instance I don’t know how much help I can honestly provide :-/
@@FE-Engineer Hi, I'm totally new to this thing and I ran into this low vram thing too. Everything installs fine, the UI is on but it says I have only 1GB and I can't render even a tiny (128x128) image. I have an RX570 with 4GB of Vram and 32GB of RAM on MB, that should be enough. Where and how should I use that "medvram" flag? Thanks.
Is there any way to install the Comfyui folder in an external hard drive doing this method?
Hi, I don't know how you can install ComfyUI to another path, but you can redirect the model folders used by ComfyUI by editing the "extra_model_paths.yaml.example" file in the main directory.
I'm very new to this, how do I set a path to my checkpoints? In "\MiniConda\envs\comfyui" I don't see a "models" folder :/
Hi, that is the environment path. You have to go into the ComfyUI folder itself (C:\Users\Name\ComfyUI).
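For the model-redirect approach mentioned above, the idea is to rename "extra_model_paths.yaml.example" to "extra_model_paths.yaml" and point it at your external drive. A minimal sketch, where the drive letter and folder names are placeholders for your own setup (the exact keys supported may differ between ComfyUI versions, so compare against the shipped example file):

```yaml
# extra_model_paths.yaml -- renamed from extra_model_paths.yaml.example
# in the ComfyUI folder. Paths below are placeholders for an external drive.
comfyui:
    base_path: E:/sd-models/
    checkpoints: models/checkpoints/
    vae: models/vae/
    loras: models/loras/
```

ComfyUI reads this file at startup, so restart it after editing for the new folders to show up in the checkpoint dropdown.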
I just cannot get this running on my 7800XT
"Error occurred when executing KSampler:
The GPU device instance has been suspended. Use GetDeviceRemovedReason to determine the appropriate action."
And then a bunch of script errors. Any ideas?
Yikes. I apologize. I have not seen that error come up. Were you maxing out your vram? Try running it with something up to monitor your vram.
My guess is you might have tried to generate an image that was too big for it to handle. This is a big guess though.
Monitor system resources and redo and see how it looks. Might give you some clues.
Also. Do something like barebones stock.
Stock prompt. Stock model. Stock pipeline setup. Stock image dimensions.
Make sure you can run it with everything as controlled as possible, to see if you are changing something somewhere that is causing these weird errors. That's my suggestion.
Saved
Thank you very much. It works on my 7800 XT. How do I download the ComfyUI Manager?
This video is great. After I've created my .bat file for quick startup, how would I go about installing the ComfyUI Manager?
I have never used comfyui manager. What is it exactly?
Thank you for the perfect video. If you experiment as much as I do, I suggest storing all models in a dedicated folder and using hardlinks to put them in the models folder. It saved a lot of disk space for me.
I have been meaning to do this. I already set it up so all of my images go to a shared location, regardless of whether they were generated in Linux or Windows. I really should do this and connect all of my models to a central location to avoid duplicates and wasted space. Thanks for reminding me that I need to do this! :) Also, thank you so much for watching and for the kind words!
Can this work with a 4GB AMD GPU?
I don’t know offhand. Probably?
Awesome video, thanks. Instant subscribe :).
I tried to drop in an SDXL model and got an error ('aten::frac.out' is not currently supported). Non-XL checkpoints work fine. Since I'm new to this, I wonder what ROCm is and whether it is needed or will be helpful in the future.
Yes. SDXL does not work, or at least I have never gotten it to work as of yet.
The only place I have gotten SDXL to work is on Linux with ROCm.
ROCm is basically AMD's equivalent of CUDA. It takes CUDA-specific code and lets it work on an AMD card.
Can't I target the latest Python version?
I doubt it. You can try. You will probably just have to undo it though.
Got an RX 6600 and can't get it running. It keeps telling me that I don't have enough VRAM. Any ideas that could help?
Is it when you try to load a model?