Sneaky Robot
Faster Speeds: The Correct Way to Install WaveSpeed and Teacache For Flux, LTX, Hunyuan + Triton fix
Fast AI Generations: Install WaveSpeed & TeaCache the Right Way
WorkFlow: openart.ai/workflows/sneakyrobot/1rLM6HmEDU98GYLaYoH2
ComfyUI: github.com/comfyanonymous/ComfyUI
WaveSpeed: github.com/chengzeyi/Comfy-WaveSpeed
Teacache: github.com/welltop-cn/ComfyUI-TeaCache
Triton For Windows Wheels: github.com/woct0rdho/triton-windows/releases
Triton For Windows GitHub: github.com/woct0rdho/triton-windows
0:00 Intro
0:40 WaveSpeed and Teacache?
2:45 WorkFlow
4:13 Testing Wavespeed
5:53 Testing Teacache
6:42 Lora Compatibility
7:14 Verdict
8:54 Install WaveSpeed and Teacache Nodes
9:44 Installing Triton for Windows
Views: 3,404

Videos

How to Effectively prompt For Hunyuan Video in ComfyUI & Can it run on low VRAM
Views: 3.6K • 1 day ago
Revolutionary or Overhyped? Workflow: openart.ai/workflows/zD2zV9yx45eothT3ulmT ComfyUI: github.com/comfyanonymous/ComfyUI Models - Hunyuan video FP8: huggingface.co/Kijai/HunyuanVideo_comfy/blob/main/hunyuan_video_720_cfgdistill_fp8_e4m3fn.safetensors Hunyuan Fast Video: huggingface.co/Kijai/HunyuanVideo_comfy/blob/main/hunyuan_video_FastVideo_720_fp8_e4m3fn.safetensors Hunyuan Video GGUF: hug...
Effortless Prompting in ComfyUI With Conditional Deltas
Views: 4.2K • 21 days ago
Are Conditional Deltas the ultimate tool for creative freedom in AI? Find out now! Workflow: openart.ai/workflows/TmYMoFTu5ixDgm01jxik comfyui: github.com/comfyanonymous/ComfyUI ComfyUI-ConDelta: github.com/envy-ai/ComfyUI-ConDelta
Finally, Easiest Way to Add Cinema Grade Sound to AI Video Directly in ComfyUI: No Prompts Required.
Views: 1.8K • 1 month ago
The ultimate AI workflow for syncing sound and video perfectly. Watch the magic unfold! Workflow: openart.ai/workflows/zHLVMpwqHzqR9yHLzW06 Comfyui: github.com/comfyanonymous/ComfyUI MMAudio GitHub: github.com/kijai/ComfyUI-MMAudio Models: Kijai Huggingface MMAudio Models: huggingface.co/Kijai/MMAudio_safetensors/tree/main Apple Clip: huggingface.co/apple/DFN5B-CLIP-ViT-H-14-378/tree/main Bigvg...
Mastering Video Production in ComfyUI: From Script to Storyboards to Final Cut!
Views: 4.4K • 1 month ago
Discover step-by-step how to use ComfyUI to create amazing short videos with this video creation workflow. facexlib: python.exe -m pip install --use-pep517 facexlib; insightface: python.exe -m pip install "path to insightface" onnxruntime. Workflow: openart.ai/workflows/4sikXINQpgJIpWZ051fS Comfyui: github.com/comfyanonymous/ComfyUI Models: LTXV NB=You can use either one of the models, but I would advise Unet for...
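For reference, a cleaned-up version of those install commands as they would typically be run from the portable build's embedded Python (a sketch; the wheel path and cp311 tag are placeholders for whatever matches your download and Python version):
    cd ComfyUI_windows_portable\python_embeded
    python.exe -m pip install --use-pep517 facexlib
    python.exe -m pip install "C:\path\to\insightface-0.7.3-cp311-cp311-win_amd64.whl" onnxruntime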
The Ultimate Hand & Face Fix For SD3.5 In Comfyui With only 6 to 8GB VRAM Needed
Views: 971 • 2 months ago
Simplify SD3.5! Fix image details effortlessly with ComfyUI's tools. Comfyui: github.com/comfyanonymous/ComfyUI Free Workflow: openart.ai/workflows/OVHlM6Y17VEUEhNuNXft Install comfyui: ua-cam.com/video/Ad97XIxaBak/v-deo.html MODEL LIST: Stable Diffusion 3.5 Large (GGUF) civitai.com/models/879251?modelVersionId=985076 Stable Diffusion 3.5 Large TURBO (gguf) civitai.com/models/880060?modelVersio...
Is SD 3.5 better than Flux, or is it still a Dud
Views: 1.1K • 2 months ago
Explore SD 3.5 - What's new, what's good, and what needs work. Announcement: blog.comfy.org/sd3-5-comfyui/ Comfyui basic workflow: huggingface.co/Comfy-Org/stable-diffusion-3.5-fp8/blob/main/sd3.5-t2i-fp8-scaled-workflow.json SD3.5 Basic Workflow: openart.ai/workflows/CX6pkiT9lzJPlTpF9Cgu Models List: SD3.5 Large & Turbo FP8: civitai.com/models/879701/stable-diffusion-35-fp8-models-sd35 SD3.5 L...
Master Flux Turbo vs Hyper and ControlNet inpainting in ComfyUI
Views: 1.1K • 3 months ago
Master Flux Turbo and ControlNet inpainting in ComfyUI with this ultimate guide! Flux turbo & Controlnet Inpainting workflow: openart.ai/workflows/McVvdme5RA6L8eo2Oe3F Flux Turbo Lora: huggingface.co/alimama-creative/FLUX.1-Turbo-Alpha/blob/main/diffusion_pytorch_model.safetensors Flux ControlNet Inpainting Beta: huggingface.co/alimama-creative/FLUX.1-dev-Controlnet-Inpainting-Beta/blob/main/di...
How To Edit Any Image With FLUX Dev and FLUX Schnell in ComfyUI - Inpaint/Outpaint & Background Remove
Views: 5K • 3 months ago
Discover how to master inpainting/outpainting for low VRAM devices with Flux in ComfyUI. Image manipulation Workflow: openart.ai/workflows/6rTs9au6d3EXBHijCPwW Low Vram GGUF_NF4_FP8-16 Workflow: openart.ai/workflows/VOrcINUbEg3Akv7ZQO5Y Flux Upscale: ua-cam.com/video/8M4OEGxACQk/v-deo.html Install ComfyUI: ua-cam.com/video/Ad97XIxaBak/v-deo.html Hyper flux and Workflow introduction: ua-ca...
ComfyUI Just Got Better: Finally, a Fix for Flux Upscaling and 2 more Mindblowing ControlNet Models
Views: 8K • 3 months ago
Finally, an upscaling solution for Flux users - watch the full guide! Workflow: openart.ai/workflows/sneakyrobot/sneaky_robot-gguf_fp8-workflow-plus-controlnet-upscaler-and-controlnet/wUHlwiibPMgVxf7RPZsU Install ComfyUI: ua-cam.com/video/Ad97XIxaBak/v-deo.html Hyper flux and Workflow introduction: ua-cam.com/video/G62irea95gU/v-deo.html Flux Controlnet: ua-cam.com/video/kcq81n9qsiQ/v-deo.html Th...
How to Run Flux ControlNet from Shakker labs and Mistoline on ComfyUI with Just 8GB VRAM.
Views: 2K • 3 months ago
Boost ControlNet performance on low-end PCs with this easy guide. Apologies for not including the Mistoline installation instructions. The first thing you need to do is navigate to the custom nodes folder (ComfyUI_windows_portable\ComfyUI\custom_nodes), right click on empty space inside the folder and select "Open in Terminal". Next, paste the following into the terminal that's now open (see the sketch below): git clone ...
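A minimal sketch of that clone step (the repository URL is elided in the description above, so a placeholder stands in for it):
    cd ComfyUI_windows_portable\ComfyUI\custom_nodes
    git clone <Mistoline custom node repository URL>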
How to Unlock Faster Image Generation with Hyper Flux Lora! Easy 8-Step Method Revealed!
Views: 2K • 4 months ago
Find out how to generate great images faster in ComfyUI running on low VRAM systems. And all this in just 8 steps, plus get better text output and faster upscaling that won't drain your GPU. Links Workflow: openart.ai/workflows/sneakyrobot/flux-dev-low-vram_v2/be3eVIlbfWwGB25saDE7 Clip: huggingface.co/zer0int/CLIP-GmP-ViT-L-14/blob/main/ViT-L-14-TEXT-detail-improved-hiT-GmP-TE-only-HF.sa...
Say Goodbye To VRAM Limitations With this FLUX Workflow, Auto Prompts & 1Step Upscale Magic!
Views: 11K • 4 months ago
This video walks you through using the Flux model with ComfyUI, a user-friendly tool designed for high-quality results without overwhelming your system. I’ll take you from installation to creating detailed prompts and enhancing your images with powerful upscale techniques. You’ll learn how to use different settings and models to get the best results, even on low-spec systems. Subscribe for more...

COMMENTS

  • @generalawareness101
    @generalawareness101 16 hours ago

    8s at 24p is all I can do on a 4090.

  • @MichaelFlores-t2z
    @MichaelFlores-t2z 17 hours ago

    anyone else get this error when trying to install the insightface asset? any ideas how to fix this? \insightface-0.7.3-cp310-cp310-win_amd64.whl", line 104 <title>Assets/Insightface/insightface-0.7.3-cp310-cp310-win_amd64.whl at main · Gourieff/Assets · GitHub</title> ^ SyntaxError: invalid character '·' (U+00B7)
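
    The <title>...GitHub</title> in that traceback suggests the downloaded "wheel" is actually GitHub's HTML page rather than the raw file, and that it was then executed as Python source. A plausible fix (assuming the portable build; the download path is a placeholder) is to grab the raw .whl and install it with pip instead of running it:

        cd ComfyUI_windows_portable\python_embeded
        python.exe -m pip install "C:\Downloads\insightface-0.7.3-cp310-cp310-win_amd64.whl"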

  • @SteveProcterPhotography
    @SteveProcterPhotography 2 days ago

    Hi mate, I cannot find a file named bigvgan_v2_44khz_128band_512x. Am I missing something obvious? Thank you :)

  • @geoffphillips5293
    @geoffphillips5293 2 days ago

    I hadn't heard of the video styler node before, so thanks for that. The previous workflow I had fed both + and - prompts into the same input, which seems weird and suggests it doesn't take any notice of negative prompts; certainly these don't always seem to have an effect.

  • @FanClubRUs
    @FanClubRUs 3 days ago

    Awesome, I finally got it working from your video! I noticed if my prompt is too long I get cache errors; keeping it shorter, like 150 words or less, no problems. I have an RTX 3080 Ti with 12GB VRAM.

  • @BigWhoopZH
    @BigWhoopZH 3 days ago

    The compilation node works badly on Windows because of issues overwriting files in the temp directory, so I moved my ComfyUI into the Windows Subsystem for Linux. I installed the latest pre-release version of PyTorch and compiled xformers with CUDA 12.6 support. These versions are as fast without any compilation as they were on Windows with compilation, and they do not gain any additional speed through compilation. Maybe there is some kind of JIT compilation going on in these new versions of PyTorch and xformers? Then I tested the two caching methods, TeaCache and FirstBlockCache, with the new setup. I used prompts that generate pictures with a high amount of text in them, because that's where you see reduced quality first: it starts to get letters or spelling wrong. My result for both caches: as soon as the cache reduces render times, quality is reduced too. So there is no free cake here. The only speed improvement you get for free is by moving to Linux and using the latest dependencies for PyTorch and xformers. When you run Comfy with or without xformers using the latest software, the speed gap between xformers and PyTorch attention shrinks to almost nothing. I still have to check if there is a difference in quality, though. Now that we know that virtualization doesn't hurt performance, the next logical step is running Comfy in a Docker environment.
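
    For anyone trying the same move, a minimal sketch of that WSL setup (the nightly index URL and the CUDA 12.6 tag are assumptions; adjust them to your driver):

        # inside WSL, with ComfyUI's venv activated
        pip install --pre torch torchvision torchaudio --index-url https://download.pytorch.org/whl/nightly/cu126
        # build xformers from source against the torch/CUDA just installed
        pip install -v -U git+https://github.com/facebookresearch/xformers.git@main#egg=xformers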

  • @marshallodom1388
    @marshallodom1388 3 days ago

    I appreciate the depth of your descriptions, leaving no questions. I always wondered if I was crazy or had different versions of python rewriting itself on various mega-multi-layer deep folders.

  • @digilifex
    @digilifex 4 days ago

    I'm experiencing an issue when generating videos in ComfyUI. I'm using the latest version of ComfyUI and have tried using the models shown in the tutorial video. While I can generate videos in my other workflows, this specific one is producing blocky and pixelated results (looks corrupt). To troubleshoot, I've tried disabling the Power Lora node and a few others, but the issue persists. Does anyone know how to resolve this issue?

  • @talhaanwar2911
    @talhaanwar2911 4 days ago

    My show prompt box is not displaying any text. Any ideas?

  • @VuTCNguyenArtist
    @VuTCNguyenArtist 4 days ago

    With this workflow, not sure what I'm missing, but the video (before upscale, etc.) generated like it wasn't completed... only some weird artifacts rendered, no matter what version of the model I'm switching to from the loader section... My simple Hunyuan workflow works, so I'm not sure what I misconfigured on this one. Any tips?

  • @ztp2130
    @ztp2130 5 days ago

    One question. Apologies if I missed the explanation in the video, but why must the submitted prompts be so short to get effective results?

  • @ztp2130
    @ztp2130 5 days ago

    Thank you so much for clarifying that Hunyuan text2video must use a simple, shorter prompt. I was trying different solutions that were more complex and verbose (e.g., "ChatBot, write me a prompt for...").

  • @ezbaisalgado4169
    @ezbaisalgado4169 5 days ago

    Hello, do you think you will ever make a video showing us how to install and make a LoRA for Hunyuan using kohya-ss musubi on Windows?

  • @nomad186
    @nomad186 5 days ago

    Yeah, never worked for me. Assuming it's the Triton thing. It works when I apply First Block Cache but fails with Compile Model+. Got a 3080 Ti.

  • @FusionDeveloper
    @FusionDeveloper 5 days ago

    WaveSpeed doesn't work with 1080 Ti cards, but TeaCache does.

  • @dowhigawoco
    @dowhigawoco 6 days ago

    IDK what's wrong here. I installed it and, first of all, it works, but only with Schnell models, not with Dev models. I mean, yeah, it's faster in generation, but the pictures with Dev are completely broken; the images look like I used only 3 or 4 steps on Dev. I tried it with 50+ steps, but every time the same result with Dev models.

  • @runebinder
    @runebinder 6 days ago

    I can definitely confirm the Apply First Block node works without Triton. I installed WaveSpeed last week and got an error with the included workflow with their nodes. I removed Compile Model+ and tested with the Apply First Block node enabled and bypassed, and found generation speeds were almost twice as fast with it enabled.

  • @Jewelsonn
    @Jewelsonn 6 days ago

    I just installed the portable version of ComfyUI with Python 3.12 and Triton... no issues. WaveSpeed is a really great boost.

  • @Mopantsu
    @Mopantsu 6 days ago

    I used the 3.11 wheel and lib/includes and I just get an OOM from the block cache. It also broke a number of custom nodes from starting so I had to uninstall Triton. I hear 3.10 is the way to go for reliability. I am looking for the portable version of it.

  • @Paulo-ut1li
    @Paulo-ut1li 7 days ago

    It worked after a whole month of trying to run Triton. Thanks!

  • @bstuartTI
    @bstuartTI 7 days ago

    This won't work if your portable Comfy is using Python 3.12. Triton bricks it since it uses a deprecated ImpImporter. I tried using the cp312 whl file and it still fails, preventing Comfy from opening. To fix it, you need to delete the triton folders under Lib/site-packages.

    • @jamesb6289
      @jamesb6289 7 days ago

      Thanks for the info. I was in the process of reinstalling ComfyUI due to this situation with 3.12. Deleted those folders and got things working as before, at least.
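
      For reference, one way to do that removal through the portable build's embedded Python (a sketch, assuming the standard portable layout; if pip itself is broken, delete the folders manually instead):

          cd ComfyUI_windows_portable\python_embeded
          python.exe -m pip uninstall -y triton triton-windows
          REM manual fallback: remove Lib\site-packages\triton and the matching triton-*.dist-info folder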

  • @kinkinkab8176
    @kinkinkab8176 7 days ago

    I'm stuck at KSampler: it shows 0/8 and then it crashes. I use the image motion guider. shape '[1, 10, 17, 30, 16, 1, 2, 2]' is invalid for input of size 337280. I'm new to this and I need help, thank you.

  • @StefanKirste
    @StefanKirste 7 days ago

    I'm a little surprised. I used flux-dev-f16.gguf because it had the best quality, and I notice maybe a 10-20% speed-up. But if I use the flux1-dev.sft model and the "Load Diffusion Model" node as usual, I have significantly more speed, and only 87% VRAM instead of 97%. My workflow: 217s, and the one with GGUF: 398s, all on an RTX 3090 Ti. But thanks, it may help with Hunyuan.

  • @AB-wf8ek
    @AB-wf8ek 7 days ago

    I collected a dozen links on installing Triton on Windows that I was about to slog through, including using Visual Studio to compile the whl file, installing WSL, and a bunch of horror stories of it ruining the entire environment. This is much more straightforward. Thank you for taking the time to explain it so clearly!

  • @philippeheritier9364
    @philippeheritier9364 7 days ago

    This is the tutorial I was waiting for. It took me almost a whole day to figure out how to install this Triton. So big hugs and big thanks to you guys, and thanks for the tutorial.

  • @Bert684B
    @Bert684B 7 days ago

    I am slightly confused: I would expect to run these commands inside the virtual environment of Comfy, i.e., using venv.

    • @TheSneakyRobot
      @TheSneakyRobot 7 days ago

      This is the portable version, so no need to. As long as you open CMD inside the python_embeded folder, you'll be fine.

    • @Bert684B
      @Bert684B 6 days ago

      @@TheSneakyRobot For just using the "Apply First Block Cache" node I did not need to install Triton. And wow, it is fast: almost no difference even if I increase steps from 28 to 80.

  • @lucifer9814
    @lucifer9814 7 days ago

    I am sick and tired of this Triton thing. I have a 4060 and run two ComfyUIs: a manual one I never fidget with because I don't want to break it, and a portable version which runs Python 3.12.7 and PyTorch 2.5.1+cu124. I have bloody well been trying to install the .whl file from the release page corresponding to my version and it just won't work. What am I doing wrong? All the videos show it like it were such a straightforward installation process. I just ran Flux with the 'compile model' node plugged in and I get this error: backend='inductor' raised: ImportError: DLL load failed while importing cuda_utils: The specified module could not be found. Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information. You can suppress this exception and fall back to eager by setting: import torch._dynamo; torch._dynamo.config.suppress_errors = True. I get a different error when using this compile node with Hunyuan or LTX, but all I know for a fact is that this goddamn Triton is an absolute pain to install, especially for Windows users. Just for the record, since the newer version didn't work, the last time I took your advice I installed this: github.com/woct0rdho/triton-windows/releases/download/v3.1.0-windows.post5/triton-3.0.0-cp312-cp312-win_amd64.whl
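
    For what it's worth, the fallback that error message itself suggests amounts to two lines of Python (where to put them, e.g. near the top of ComfyUI's main.py, is an assumption; it skips compilation rather than fixing the Triton install):

        # fall back to eager execution instead of crashing when torch.compile fails
        import torch._dynamo
        torch._dynamo.config.suppress_errors = True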

  • @HikingWithCooper
    @HikingWithCooper 8 days ago

    Great video! Do you have any suggestion for preventing Hunyuan from creating multiple shots in one video? Sometimes it’ll make 2 or 3 very short (and not very cohesive) shots in 1 gen.

    • @TheSneakyRobot
      @TheSneakyRobot 7 days ago

      I've noticed the same thing. Try simplifying the prompt; it easily gets confused. For example, if you prompt "a man running down the street, the man has a blue helmet on", Hunyuan seems to sometimes think you are asking for two shots: the first being the man running down the street, the second being another man with a helmet. So it's best to mention subjects just once. The new simplified prompt would be "a man with a blue helmet running down the street". Subject is clear and action is clear. Hope this helps.

  • @JackytheGentleman
    @JackytheGentleman 8 days ago

    6th like, first comment. Thanks for this extraordinary approach and your kindness.

    • @TheSneakyRobot
      @TheSneakyRobot 8 days ago

      Thanks for your like, hope you enjoyed the video

  • @polloloco6353
    @polloloco6353 8 days ago

    Thank you very much..!

  • @JarppaGuru
    @JarppaGuru 8 days ago

    no panix yet again same another comes. flux not get missed. it is just sdxl what was sd

  • @Fret-Reps
    @Fret-Reps 8 days ago

    I can't get it working. I hope you can help. When installing the 2.5.1 whl, I right-clicked in the python_embeded folder and opened a terminal, then typed python.exe -m pip install torch-2.5.1+cu121-cp311-cp311-win_amd64.whl. It installed, but when I opened ComfyUI, it said torch was still on 2.3.1+cu121. So I tried again but typed CMD in the nav bar; it then said "This app can't run on your PC". So I reopened by right-clicking the python_embeded folder and choosing terminal again. The terminal opened up fine and I reinstalled the 2.5.1 whl. It said it was already installed. So I started ComfyUI and got that error message, "This app can't run on your PC". ComfyUI won't open.
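
    A common cause of that symptom is the wheel landing in a different Python than the embedded one. One way to check and force it into the portable install (a sketch, assuming the standard portable layout):

        REM run from inside ComfyUI_windows_portable\python_embeded
        .\python.exe -m pip install --force-reinstall torch-2.5.1+cu121-cp311-cp311-win_amd64.whl
        .\python.exe -c "import torch; print(torch.__version__)"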

  • @kizerme
    @kizerme 8 days ago

    Do you have any suggestions for how to install on Runpod? I am not sure where to install facexlib and insightface.

  • @dilfill
    @dilfill 10 days ago

    Can this work on Mac at all, and also can you do image to video instead of text to video?

    • @TheSneakyRobot
      @TheSneakyRobot 10 days ago

      Hunyuan doesn't support image to video yet.

  • @philippeheritier9364
    @philippeheritier9364 10 days ago

    A super good tutorial and above all, thank you for the free and beautiful workflow

    • @TheSneakyRobot
      @TheSneakyRobot 10 days ago

      Glad you liked it, will be giving you more videos like this

  • @lideaecerta6762
    @lideaecerta6762 10 days ago

    So is this creating a prompt from the video or image, rather than using the actual input video as a blueprint for the generated one? Is there any way to do image to vid or vid to vid?

    • @TheSneakyRobot
      @TheSneakyRobot 10 days ago

      The native implementation does not support vid2vid, only Kijai's wrapper does, and both do not support image to video. The devs promised an image to video version before the end of this month.

  • @benalden2007
    @benalden2007 10 days ago

    You are amazing! I'm worried about saving up for an RTX 5090 while I've been using an RTX 4070 Ti. I keep telling myself I need more VRAM to actually produce anything, but you blow that idea out of the window! You're doing everything I dream of doing with an 8GB VRAM card. I should be ashamed of myself. Thanks for all of the hard work you're doing, bro!

  • @jomiller7332
    @jomiller7332 10 days ago

    Got this error with every mp4 clip: HunyuanVid mp4 could not be loaded with cv.

    • @5bpde
      @5bpde 9 days ago

      Just bypass the Video-Comparison node, and it works

  • @cabinator1
    @cabinator1 10 days ago

    The workflow looks great. Thank you. Keep on!

  • @HistoryViper
    @HistoryViper 10 days ago

    Sub'd

  • @Rynwlms
    @Rynwlms 10 days ago

    excellent content. thank you, sir

  • @TahaEttouhami-df3gs
    @TahaEttouhami-df3gs 11 days ago

    Can Hunyuan Video create long videos, like 30 sec or 1 minute?

    • @TheSneakyRobot
      @TheSneakyRobot 11 days ago

      The longest I did is 5 seconds.

    • @HikingWithCooper
      @HikingWithCooper 8 days ago

      It probably would if your GPU has a TB of RAM. With a 4090 at 720p I can only get 125 frames or about 5 seconds.

  • @JackytheGentleman
    @JackytheGentleman 11 days ago

    6th like, 1st comment. Thanks for your kindness.

  • @camelCased
    @camelCased 16 days ago

    Ouch, that workflow is huge and wants to install loads of stuff and some of it throws warnings and also errors about downgrading. It could break ComfyUI. It would be nicer to split it into minimalistic parts.

  • @metairieman55
    @metairieman55 20 days ago

    Nice explanation of a great design. I always prefer the on/off switches, but you added another gem with the model loaders section off to the left along with the switches, too. Plus the strategically placed ones around the modules, a concept others should use!

  • @ghettoandroid
    @ghettoandroid 23 days ago

    Great tutorial! my only critique is to have a longer pause between sentences XD

  • @LuanStudios370
    @LuanStudios370 24 days ago

    Hello Sneaky, I'm stuck at PulidModelLoader: Error(s) in loading state_dict for IDEncoder. I am not able to understand why I am getting this error.

  • @Vanced2Dua
    @Vanced2Dua 26 days ago

    Please A1111

  • @christianholl7924
    @christianholl7924 26 days ago

    Have you tried it in combination with Redux?

  • @jorge0018
    @jorge0018 26 days ago

    Thanks for taking the time to explain this !!