Start your Video 2 Video Transformations Now! ❤
Written Tutorial: www.nextdiffusion.ai/tutorials/transform-videos-into-any-style-with-animatediff-ip-adapters-a1111
Very helpful. Glad you're using A1111 :)
Awesome tutorial!!! You rock, man... thanks, and please make more videos about stylizing videos. ❤❤❤
The result is impressive
Friend, is there something to improve faces in the animations? I have tried it with ADetailer, but the face comes out deformed, as if it were superimposed on the original. I am looking for other options.
pretty sure it's pronounced "line art" but who knows, i'm brand new to this stuff. : ) thank you for the videos btw, they have been hugely helpful. do you teach webinars or workshops or anything in real time? or am i dating myself there?
Welcome, channel member! Thanks for your support! Currently, we don't offer workshops or webinars, but we're here to answer any questions you may have about the videos. Feel free to reach out anytime!
My LCM LoRA file is named pytorch_lora_weights, is that the same thing???
Yes that's the same. I just renamed my file.
Unfortunately it does not work for me: "RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cpu and cuda:0! (when checking argument for argument weight in method wrapper_CUDA___slow_conv2d_forward)"
Make sure you revert to a ControlNet version compatible with Animatediff since the latest version of ControlNet no longer supports this workflow.
- Navigate to "extensions/sd-webui-controlnet" folder and open the command prompt (terminal).
- Type in the terminal: git checkout -b new_branch 10bd9b25f62deab9acb256301bbf3363c42645e7
- Type in the terminal: git pull
- Restart Stable Diffusion
let me know if this solved the problem!
@NextDiffusion thank you, I did not know that
Trying to use Lineart without an image throws an error for me, any suggestions?
Incorrect link to Requirement 3: Lineart ControlNet Model
Thank you for bringing it to my attention, the link has been updated and is now accurate.
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cpu and cuda:0! (when checking argument for argument weight in method wrapper_CUDA___slow_conv2d_forward)
If you find a solution to this please let me know - I have the same issue
I got this error too. I restarted the WebUI and reduced the video dimensions to 315 x 560 (a 9:16 aspect ratio), and the animation rendered successfully. But when I tried again, the error came back.
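For anyone puzzling over this RuntimeError: it generally means a model's weights are on one device (e.g. CPU) while the input tensor is on another (e.g. cuda:0), which is why reverting ControlNet, as suggested above, can resolve it. A minimal, hypothetical PyTorch sketch of the pattern (not the WebUI's actual code) showing the fix of moving both model and input to the same device:

```python
import torch
import torch.nn as nn

# Pick one device and move EVERYTHING there. The error appears when, say,
# the conv weights stay on the CPU while the input frame is on cuda:0.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = nn.Conv2d(3, 8, kernel_size=3).to(device)  # weights on `device`
frame = torch.randn(1, 3, 64, 64).to(device)       # input on the SAME device

out = model(frame)  # no device-mismatch error now
print(tuple(out.shape))  # (1, 8, 62, 62)
```

If either `.to(device)` call is left out while the other runs on CUDA, PyTorch raises exactly the "Expected all tensors to be on the same device" error quoted above.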
Hey, can you create a video on how to change clothes realistically in a video using EbSynth or anything with Stable Diffusion?
What would the settings be for openpose controlnet and lineart? Or tile and openpose?
AnimateDiff or Deforum?
What site do you usually get your videos from?
Bro, I have a laptop with an i9-13980HX and an RTX 4070 8GB. I did exactly what is shown in the video, but I get an out-of-memory error. Why, and what is the solution?
Excellent video! Anyone else find the chick in the init video a bit annoying? lol
These tiktokers are always annoying lol