Diffutoon (speed) vs FastBlend (Balance) vs AnimateDiff (Detail) -- which style has the most potential to you?
Please share your workflow; your actions are so fast that I'm having a hard time following your instructions
Love the detail, not just in the video, but in the description telling where to save files and such. Really grateful for this :)
Ah thank you so much, I really appreciate that 🤘🤘
Great work
Really appreciate that, thank you 🙏✌️
Nice process. What's the max number of frames you can convert this way? Is it limited to 32 or 48 frames? Thanks.
It depends on your hardware. I think my box crashed on a 13-second clip, but I was able to do up to 7 seconds without any issues. Between 7 and 13 it depended on resolution and settings
Last question. When you were using the smooth node (second-to-last example), on my system the first video combine was sharp but the smoothed one was very blurry. Is there a setting to sharpen it up a bit, or is that just what happens to low-res videos?
No problem. This is the main balance you need to find with the smooth video node. Your input could be too low-res, or there could be too much of a difference between your KSampler output and your input image, which makes it appear blurry. It could also just be that you need to tweak the settings in the smooth video node; something with high motion, for example, will need different settings than the example ones. I think those are good places to start testing first tho
Please make more videos.
😁😁👍👍
What's your VRAM, and how long did it take?
I have an RTX 4090. Diffutoon takes a few minutes (approx. 1-2), FastBlend takes about 4-5, and AnimateDiff takes around 10-20 min
@@mhfx And this is for 1 second of video right?
Where do you get the specific IPAdapter files, and where do you place them? Thanks.
These are just the standard IPAdapter models, nothing special. You can install IPAdapter from the Manager. The GitHub page also has a whole tutorial from Latent Vision specifically on how to use it -- check it out here: github.com/cubiq/ComfyUI_IPAdapter_plus?tab=readme-ov-file
Thank you for the video and all the information, but I had an issue at the start: it showed me that a custom node for DiffSynth-Studio says (IMPORT FAILED). What do I have to do about it? Thank you in advance
Ah -- it should not be an issue, just continue as normal. I'm not 100% sure why it gives this notice, but it is updating one (possibly more) nodes from the other install. Perhaps there's no actual import, just a code update 🤷♂️ Either way it will not affect your workflow
@@mhfx Thank you. Can you tell me where I can find that ControlNet? Is the one named controlnet11Models_tileE.safetensors the right version?
@@mhfx Thank you. Could you help me find the right ControlNet model? I used this one from Hugging Face: ControlNet-v1-1, but I keep having the same issue. It tells me to load the model in ComfyUI_windows_portable\ComfyUI\models\controlnet\, even though the file is already there.
So there are a few ControlNets. Is this for the Diffutoon workflow? You'll need the tile and lineart ControlNets. Make sure you grab the .pth files. If you're downloading from Hugging Face it should be these: huggingface.co/lllyasviel/ControlNet-v1-1/tree/main
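If it helps, here's a minimal shell sketch for grabbing both files into the portable install's controlnet folder. The folder path comes from this thread and the two filenames are the tile and lineart models in the linked ControlNet-v1-1 repo; adjust CN_DIR if your install lives elsewhere.

```shell
# Assumes the default portable layout mentioned in this thread; adjust CN_DIR if needed.
CN_DIR="ComfyUI_windows_portable/ComfyUI/models/controlnet"
BASE="https://huggingface.co/lllyasviel/ControlNet-v1-1/resolve/main"
# Diffutoon needs the tile and lineart ControlNets as .pth files.
# Printed as a dry run; drop the 'echo' to actually download.
for f in control_v11f1e_sd15_tile.pth control_v11p_sd15_lineart.pth; do
  echo wget -c "$BASE/$f" -P "$CN_DIR"
done
```

After downloading, the files just sit in that folder as-is; no renaming is needed for ComfyUI to pick them up from the controlnet model dropdown.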
*And vice versa? Can you turn videos generated in an editor (for example UE5) into photorealistic ones here?*
If you use a realistic LoRA you can. This only works for the last two options though, since the Diffutoon workflow tends to be inconsistent with different LoRAs.
When installing DiffSynth as you described, I encounter an error ("IMPORT FAILED"), and the nodes remain red. Even if I press the "Try Fix" button, it doesn't import. I’d really like to give it a try. Do you have any idea how to resolve this issue? Thanks so much!
Ya no prob, just ignore this and proceed as usual, it will still work ✌️✌️
@@mhfx But if the nodes still remain red, I cannot input anything. How am I supposed to proceed?
Oh, red nodes are something else. Does the install missing nodes button work? If not you may have an outdated ComfyUI; try "update all" in the Manager. If both of those don't work you can try to install manually using git clone.
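For the manual route, here's a rough sketch of the git clone fallback on a portable build. The repo URL and folder name below are placeholders, not the real ones -- copy the actual GitHub URL from the node's page in the Manager.

```shell
# Manual custom-node install sketch for a portable ComfyUI build.
# REPO_URL is a placeholder -- use the DiffSynth node's actual GitHub URL.
REPO_URL="https://github.com/PLACEHOLDER/PLACEHOLDER_NODE"
cd ComfyUI_windows_portable/ComfyUI/custom_nodes
git clone "$REPO_URL"
# Portable builds ship their own python; use it for the node's requirements:
../../python_embeded/python.exe -m pip install -r PLACEHOLDER_NODE/requirements.txt
# Restart ComfyUI afterwards so the new nodes register.
```

The key detail is using the embedded python rather than a system one, since the portable build only loads packages installed into python_embeded.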