If the SVD tab doesn't show up in your new version, just install another copy of Forge in a different folder and roll back to the version from the video that has that tab. I explain it here: ua-cam.com/video/BFSDsMz_uE0/v-deo.html
I got the checkpoint model for SVD from here
civitai.com/models/207992/stable-video-diffusion-svd
Remember it can only generate at 1024x576px or 576x1024px; you can upload bigger images, but resize them to one of those sizes first.
It generates about 4 seconds of video; you can probably take the last frame and use it to generate a continuation for another 4 seconds, and so on.
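Both the resize and the last-frame trick can be done with ffmpeg from the command line. A minimal sketch, assuming ffmpeg is installed; the file names are made-up examples:
rem fit an image to the 1024x576 the model expects (scale to cover, then center-crop)
ffmpeg -i input.png -vf "scale=1024:576:force_original_aspect_ratio=increase,crop=1024:576" input_1024x576.png
rem grab the last frame of a generated clip to seed the next 4-second segment
ffmpeg -sseof -1 -i clip_part1.mp4 -update 1 -frames:v 1 last_frame.png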
You need like 6-8 GB of VRAM.
I used Stable Diffusion Forge UI
If you want to learn more about AI or have questions, join my Facebook group:
facebook.com/groups/pixaromacommunity
Still can't get it to work in Forge. I don't know what I'm doing wrong; does it have to use SD 1.5 checkpoints and VAE first?
Tried using Animagine XL 3.1 with the XL VAE for the initial image feed, and running SVD just sent me to a BSOD.
@@nietzchan It needs a lot of video RAM; that's why it's crashing. It just needs a photo at that size, 1024x576px, and it works with that SVD model to generate. If it crashes, your video card probably can't handle it.
@@pixaroma I think my Forge installation has memory management issues, or maybe something is wrong with the UNet setting. I managed to run it once when I was only running SVD, but the second time it just crashes.
I'm currently using a 12GB 3060 and 16GB of RAM, and I think the bottleneck is actually the RAM, since Forge automatically loads models to RAM on start.
I want to try the offload-from-VRAM options and see if it helps.
Confirmed, I need more VRAM. I tried the offload-models-from-VRAM args so SVD would have plenty of room on the GPU. I'm using an RTX 3060 12GB; even though SVD only uses around 8GB of VRAM, the Forge backend still keeps the image diffusion model in VRAM, resulting in OOM on my GPU. The offload args work, but they don't offload the SVD model once you generate a video, so I'm back to square one after each generation. Oh well.
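For anyone trying the same thing, the offload flag goes in the launch arguments. A minimal sketch, assuming a Forge install that reads COMMANDLINE_ARGS from webui-user.bat (the one-click package may keep its args in a different file, and flag names can change between versions):
rem webui-user.bat fragment - flag name taken from the Forge README, verify against your version
set COMMANDLINE_ARGS=--always-offload-from-vram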
@@nietzchan Sorry it didn't work; usually they make these less VRAM-hungry over time, so in a few months maybe we'll have better models and systems.
Nice tutorial; I enjoy the AnimateDiff extension, I get really good and consistent results using it!
Where do you use it? In Comfy?
I installed AD inside Forge UI, but there aren't any changes; AD didn't appear.
Thank you for your efforts!!!
Thanks! Another good tutorial video!
Thanks :) glad you like it
Thank you for the video using Forge. Please make more videos using Forge, since this is the only GUI where I can run SDXL models on my low-end RTX 2060 6GB VRAM PC.
Sure, I also have an older computer with the same video card, and with Forge I managed to get it to work; it even seems to work better on my new RTX 4090. So for a while I will do only Forge, unless Automatic1111 adds something that Forge can't do :)
Awesome tutorial. Keep up the great work
thank you :)
Thank you for this tutorial
Great video, subscribed. Keep making great content.
Very good content, I appreciate it. Keep it like this.
Nice one!
cool stuff. thank you
How or with what tool did you upscale the video afterwards?
I use Topaz Video AI for video upscaling.
Thanks! Yes, shame about the lack of settings for the SVD output path. It'd be great to have camera movement and prompt guidance like in Deforum but with the consistency of SVD. Soon, I'm sure ;)
Hmm, I installed Forge just 2 days ago (f2.0.1v1.10.1-previous-605-g05b01da0) and I don't like the idea of rolling back versions. Why did they remove SVD? Is there any other practical way to get it back? Thanks!
Not sure why they removed it, but anyway SVD is not so great and is hard to control; it needs a lot of tries for a few seconds of video. They keep changing the interface; at one point they stopped updates for months, then started again. It was hard to follow, which is why I switched to ComfyUI. Even if it's harder at the beginning, in the long run it's more stable; it just needs time to adjust.
How do websites like Leonardo make it look like the wings on a fly are flapping, for example? I'm having a hard time generating a high-quality video like that from SVD; it's super grainy.
Not sure what kind of models they are using. Probably if you generate a lot of them, some would have more interesting movements. Other AIs I saw have a brush control where you paint to tell it what to move in the image, so you have more control. Or like with Sora, when it's released, you'll be able to tell it what to do with a prompt.
Do you have a link to the version that you are using?
The commit hash that I used is 29be1da7cf2b5dccfc70fbdd33eb35c56a31ffb7. I created a bat file, rollback.bat, that lets me go back to that specific version.
Add this text inside:
@echo off
rem prepend the bundled git to PATH (%~dp0 is the folder this bat file lives in)
set DIR=%~dp0
set PATH=%DIR%git\bin;%PATH%
rem check out the known-good commit inside the webui folder
git -C "%~dp0webui" checkout 29be1da7cf2b5dccfc70fbdd33eb35c56a31ffb7
pause
You can download the bat from Google Drive and place it next to your run.bat and update.bat in the main folder. First run rollback.bat, wait for it to finish, and press Enter to close; then start Forge normally from run.bat and it should be the old version. I saved the bat files here: drive.google.com/drive/folders/1bS-6HdLl5AH3Rbd2wHUm_nILUOnu9hmJ?usp=sharing
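To confirm the rollback actually worked, a quick check you can run from the same main folder (assuming git is on your PATH):
git -C webui rev-parse HEAD
rem should print 29be1da7cf2b5dccfc70fbdd33eb35c56a31ffb7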
Would you know why I'm getting this error whenever I try to generate?
TypeError: KSamplerX0Inpaint.__init__() missing 1 required positional argument: 'sigmas'
They keep updating Forge; some things work and others don't, it was very unstable lately. You can check if anyone else has the same error, or report the issue on their page: github.com/lllyasviel/stable-diffusion-webui-forge/issues
I have the same error, can you fix it?
Great, thanks a lot.
Great video! Do you know if forge or any other UI like this has the capability of inpainting a certain section of your image, so that only that inpainted portion gets animated?
I don't know of any; I only saw some online platforms that have a motion brush, but I haven't seen that in Stable Diffusion yet.
Only with ComfyUI.
Very cool! Wish I had the hardware to run it!
my Stable Diffusion Forge does not have SVD
I tried downloading again and the SVD still didn't appear. :(
Did you try the last stable version? The new updates mess up a lot of things: ua-cam.com/video/RZJJ_ZrHOc0/v-deo.htmlsi=1d9jphW2PuLB0RK6
Check this video: How to Install Forge UI & FLUX Models: The Ultimate Guide
ua-cam.com/video/BFSDsMz_uE0/v-deo.html
My best tip is to use Stability Matrix. Very nice to keep it all together and simple. It updates Forge nicely too.
Is it available in Automatic1111?
Not sure, I don't use it; maybe as an extension.
Thank you for the video! I don't have an svd folder in webui > models. Why?
It should be there if you installed Forge UI. Maybe you have another UI, something like A1111? Not sure what to say.
What's the DAT upscaler? I never used it.
It's just an upscaler model; you can use any other model you like.
@@pixaroma OK ^^ I was just surprised that the drop-down menu showed many of them (DAT 1, DAT 2...). That's what I saw; maybe I misread.
It has been a while since I used Forge, but I think the x2, x3 was how many times it upscales or something; different versions work better depending on how many times you upscale.
I'm getting an error when I try to generate. Do you know how to fix it? Plzzzz
I don't; the video is a few months old and Forge has changed hundreds of times since then.
Thanks for the video. I use Comfy; although I generate with the same settings as you show, my results are very noisy, with grain all over. Is it related to Comfy itself? I don't know.
Not sure, I will play more with ComfyUI in the coming months.
@@pixaroma I was using tiled upscale for the image, and I realized it affects SVD output quality in a bad way. Just leaving a little tip for others.
Hi. In the downloaded version of Stable Diffusion there is no SVD option, what can I do?
Did you try a stable version? ua-cam.com/video/RZJJ_ZrHOc0/v-deo.htmlsi=kLp0fpY5boKvYImP
@@pixaroma Hello, I did downgrade to 29be1da7cf2b5dccfc70fbdd33eb35c56a31ffb7 but it does not have the SVD in the interface. Do you know another method to download a Stable Diffusion with SVD with comango.git?
I get the error message "AttributeError: NoneType object has no attribute set manual cast" when I upload an image from my computer. I don't know how to correct this error. Thank you.
Did you use the exact same settings? Also make sure the image size is the same as in the video; if not, resize it. There are some bugs when you use a width and height that are not divisible by 64, so maybe that can fix it.
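If you want to check or fix the size quickly, here is a tiny batch sketch (the 1000px width is just an example); integer division in set /a rounds down, which gives the nearest valid size below:
rem example: round a 1000px width down to the nearest multiple of 64
set /a W=1000
set /a W64=(W/64)*64
echo %W64%
rem prints 960; do the same for the height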
Great video, and a question for you: my GPU shows 24 GB but the software is only showing the dedicated 8 GB; I'm trying to figure out how to make sure it is utilizing all 24 GB. I tried more frames and Task Manager shows that it taps into the shared GPU memory, so maybe I am just overanalyzing it. (RTX 3060 Ti for reference)
Depending on what you give it to do, it will use more VRAM, don't worry. If I use big images or do video it needs more VRAM, so it uses more; it takes what it needs in the moment. I have 24 GB of VRAM and it takes like 4-5 seconds to generate a 1024px image. (Note that the RTX 3060 Ti itself has 8 GB of dedicated VRAM; the extra that Task Manager shows is shared system RAM, which is much slower.)
@@pixaroma thanks for the quick reply! So does that mean I could possibly increase the WxH of the output video?
@@woodtech1951 Unfortunately that model only works with that size, so if you increase it, it will not work. It's better to just use an upscaler afterwards, like Topaz Video AI or something. For images you can increase the size, but for that specific video model you cannot. It's an old model and I didn't find a better version yet :( You can take a look at Luma AI for image-to-video, it's better than this, you get like 5 videos free, or RunwayML version 3. I rarely use this model anymore because it doesn't have much motion. I am playing more with ComfyUI, where I have more options for animation; as I learn more about it I am doing more tutorials for it.
I don't have the SVD tab and my Forge is updated (I ran Update.bat and it said "Already updated"). I'm on the main branch, and other branches (like dev) don't show up when I do "git fetch" -> "git branch". What am I doing wrong? Help plz...
The latest version doesn't have it. Check this video where I talk about how you can go back to an older version: ua-cam.com/video/BFSDsMz_uE0/v-deo.htmlsi=lITYLYk1millsWY-
Mine creates a video in the folder \Stable Diffusion Forge\webui\output\svd, which is the same output folder where you have img2img and txt2img.
I didn't use it for a while, but I think I set all the paths in Settings to lead to the same folder.
I seem to get a bit closer to animation using this. I have tried AnimateDiff but all I get is a still image. When I click generate with the SVD module I can see a progress bar, but then I get an error. Is this because I did the one-click download of Forge (that may be the issue with AnimateDiff), or is it possible that I just don't have enough VRAM? I have an NVIDIA GeForce GTX 980 Ti.
I think you need more than 6GB of VRAM; usually RTX cards with 8GB or more work better. I saw another comment saying that 6GB gave an error.
@@pixaroma Thanks, I guess I need an upgrade. Your videos are very helpful.
I'm trying to utilize SVD on my GTX 1660 Ti, but it doesn't seem to be working. I'm encountering error messages.
Maybe your video card doesn't have enough VRAM, not sure; for me it worked with those settings.
Is this the best SVD, or are there other recommended models we can try?
For Stable Video Diffusion I didn't find a better one; Stability AI released just one model for video, compared to the image models where they released more than one.
How can I add a prompt to the video, if I want to make specific videos?
This one only works with images, so you can generate a text-to-image first, then use that image to make the video; it doesn't do text directly to video.
So you can't have a prompt for "how" you want it to animate or move?
No, it's all based on the image; maybe they'll fix that in the future.
Do you have any idea why the generated SVD is worse? My SVD-generated robot face looks worse; I use the legacy Forge UI.
The SVD model is not so good; let's hope they release a better model.
Does the checkpoint matter?
I think so; this one works with these settings, but others might have other recommended settings.
Thanks!
Which Stable Diffusion Forge UI version should I download? I don't see the SVD option. Thanks
I had the one with the commit hash that starts with 29.
@@pixaroma Do you have a link to download that bat file?
Brother, I got this error, any help please:
raise FFExecutableNotFoundError(
ffmpy.FFExecutableNotFoundError: Executable 'ffprobe' not found
It's possible you need to install ffmpeg; I haven't had that error yet.
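A quick way to check on Windows is to see whether ffprobe is on your PATH, and install ffmpeg if not. The winget package ID below is my assumption of the usual community build, so verify it first:
where ffprobe
rem if nothing is found, install ffmpeg (ffprobe ships with it), e.g.:
winget install Gyan.FFmpeg
rem then restart the console and Forge so the updated PATH is picked up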
I thought I would get past those expensive subscription models for image2video AI. Now I got caught by Topaz for the hi-res fix.
I use Topaz Video AI for video upscaling.
Has SVD been removed from SD Forge?
Yes, but you can still install an older version that has it; see how I explained it here: ua-cam.com/video/BFSDsMz_uE0/v-deo.htmlsi=ygQEkbZg41I8aiYS&t=986
Hello. I installed Stable Diffusion, downloaded stableVideoDiffusion_img2vidXt11.safetensors, and put it in the SVD folder, but I don't see the SVD option on the home page. Why?
Not sure what could be the cause; I didn't use it in the last 5 months. Are you using Forge or another version? Maybe you have Automatic1111 instead of Forge UI, or maybe you updated to another version that doesn't have that tab. Usually the SVD tab should already be there even if you didn't add the model.
Extension name?
It's not an extension; it was integrated in older versions of Forge. Now it's not there anymore, unless you downgrade.
Can we do batch rendering?
I didn't find an option for video, so I don't think it is possible or I didn't find it.
@@pixaroma Thanks~ your tutorial is great and clear~
I installed Forge via Pinokio and I don't have an SVD tab in the UI.
How do I fix it, anybody?
Check pinned comment
@@pixaroma thanks so much, sorry for not paying attention to pinned one.
Is anyone else having an issue with "RuntimeError: Conv3D is not supported on MPS"?
Some people on github said it might be something to do with the pytorch version, but I think mine is the right version.
I didn't get that error, but from what I found online it seems to be related to macOS and Apple processors. The problem behind the error you're encountering is that PyTorch's Metal Performance Shaders (MPS) backend doesn't support the Conv3D operation on Apple Silicon (M1, M2, etc.). I am on Windows so I'm not sure exactly what that means in practice, but maybe it makes more sense to you: PyTorch probably doesn't support the Apple processor the way it should. Updating PyTorch was suggested, but that will only work if they have added that support.
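One workaround that might be worth trying (my assumption, not something tested in the video): PyTorch has an environment variable that makes unsupported MPS operations fall back to the CPU, which is slow but can get past errors like this. On macOS, set it in the terminal before launching:
export PYTORCH_ENABLE_MPS_FALLBACK=1
# then start the webui from that same terminal so it inherits the variable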
How do I generate an 8-10 second video? SD or Comfy?
I didn't use SVD for a few months; no new version appeared. And for ComfyUI I will do a video when I get to that part; I still have more to show on the image side before I get to video.
@@pixaroma Is there any open-source video generation model that generates 8-10 second videos?
I don't know any that goes that long. Look for CogVideoX; that is the latest video model I know of.
How do I install it in Forge UI?
I just reverted to an older version that has SVD; you can install it in a separate folder and just keep that older version: ua-cam.com/video/BFSDsMz_uE0/v-deo.htmlsi=v9zBYtWpLJuAfidm&t=984
I created a bat file that goes back to that version, see the video:
@echo off
set DIR=%~dp0
set PATH=%DIR%git\bin;%PATH%
git -C "%~dp0webui" checkout 29be1da7cf2b5dccfc70fbdd33eb35c56a31ffb7
pause
For some reason I don't see the Train, SVD, or Z123 tabs in my Forge UI install. I'm sure I have the latest install. Anybody know what the problem is?
The latest install is probably not the stable version that I use. I have a video on the channel about downgrading or updating Forge; I use the version with the commit hash that starts with 29.
@@pixaroma I'll have to look for it. Then again, even though auto1111 is slower, it seems at least to be more stable. When ForgeUI gets their act together I'll check it out again.
@@olternaut Well, the problem is that it was not updated officially, and new versions might break your Forge. That's why I switched to ComfyUI. A1111 is a little slower, and I also have to wait for updates when something new appears, like SD3 and so on, while in ComfyUI I have it the next day. ua-cam.com/video/RZJJ_ZrHOc0/v-deo.html
@@pixaroma I hear what you're saying. But comfy seems to be needlessly complex. It's like Dad yelling at me to do my homework and I begrudgingly get to it after dragging my feet. lol
Check this video: How to Install Forge UI & FLUX Models: The Ultimate Guide
ua-cam.com/video/BFSDsMz_uE0/v-deo.html
The snow overlays were not created with SD, I guess?
No, it is just a snow overlay video
My Forge UI does not have the SVD tab!!!
The latest version doesn't have it anymore; you'd have to install an older version. The version I used back then is the one with the commit hash that starts with 29.
Check this video: How to Install Forge UI & FLUX Models: The Ultimate Guide
ua-cam.com/video/BFSDsMz_uE0/v-deo.html
great thanks 😍!!!
SVD doesn't show up for me.
Check the latest video, the one with Forge and Flux, from the playlist. I use an older version that had an SVD tab; the new version doesn't have it yet. So you can go back to that version to get the SVD tab, but it will not have Flux and the new stuff the new version has.
Thank you.
Doesn't work for me; animated images are just being elongated or squished with some corruption, instead of those cool animations you've shown. I use your dimensions and the model from the link. Why?
Not sure what to say; maybe they changed something since I made the tutorial. If that happens with every image you use, I can't find an explanation.
@@pixaroma Sorry to bother you, but it looks really interesting and I would like to get it running; maybe the cause of the error is simple for someone far more experienced than me.
I've discovered that it works normally when I'm using the baseline Realistic Vision model that comes with Forge UI, but not when I'm using something generated with old SD 1.5 models like AbyssOrangeMix.
The worst thing is that I can't really control it :(
It would be great if I could add prompts, masks, etc., like in other SD tools.
Yeah, I understand; hope they improve it in the future. Right now it is all random and needs a lot of tries to get something nice. But 2 years ago image generators were basic too, so video will probably get better; it just needs time.
@@pixaroma yep, I agree :)
Fam, I wanna know about that AI voice. DOPE!! vid
I am using VoiceAir AI.
It's good; until I read this, I had no idea it was an AI voice. Pretty cool.
✨👌😎🙂😎👍✨
Still ran out of memory with these exact settings on 8GB VRAM.
I don't have an 8GB card to test it, but people online said it could work; sorry to hear it doesn't :(
@@pixaroma Sorry, I tried again after restarting Forge and it did work. However, I can't get good results yet. It mostly wants to do panning, and the stuff outside the original pic becomes all distorted, so idk.
Try different seeds until you get one that works; unfortunately we don't have control. Hopefully future models fix that.
@@lowserver2 Also try using images where the subject doesn't touch the edge, i.e. is not cropped. So if you have a portrait, make sure it has some space around it; then it can rotate it without distorting. If it's at the edge, it tries to extend it and can fail.
$299 one-time payment for Topaz, but at least it's a one-time payment ❤
I think I got it on Black Friday, it was cheaper then :)
Such a piece of crap extension. I create one video, VRAM gets filled, the video finishes, VRAM stays full -> OOM when trying to create the next video. So, restarting SD after every video creation. How stupid.
It needs a lot of vram or you can't do much with it
@@pixaroma Huh? No, like I said: the 1st video works, the 2nd video OOMs because the VRAM is still full from the first video. It doesn't get flushed; PyTorch keeps the VRAM reserved.
Yeah, it's possible it doesn't handle VRAM how it should, but if you have more it never gets full, so it still works; it never crashed on 24GB of VRAM. It still seems to be an old version and I haven't seen a new one that works for Stable Diffusion, so I keep using this one. I am waiting for Sora or alternatives.
3:08 Yes, the seed is like a variable with a million values, some complete crap. It tells you what AI actually does: it is programmed to do something, it's not any intelligence.
It will not create a rabbit unless the training data has a rabbit, and it will be the same rabbit for those prompt words.
It works well with this robot because the training data has many images like this robot.
It did not work well for a picture of a man with my face swapped in: the background moves if you find the right seed, but "me" does not change at all, LOL.
Got so bored; the first attempt worked but the rest did not, lol, all that waiting just to get crap!
You can't even choose to render frame 7 without making the whole video,
or render multiple images using different seeds so you can choose.
A seed is like the motion from one trained clip; it will do exactly that if your image matches (trained to do it, not AI).
Seed 1 could be pan left, seed 2 could be pan right, etc.
What have we learned? AI results need to be checked. Don't build Skynet and plug it into the red button (it will push the red button if it is programmed to do it); if a human checks the result and the human pushes the red button, not the AI, then we don't have Skynet, just AI (a tool, automated instructions, like I said).
Well, in this case, since it's based on an image, the image is the variable; you can have infinite unique images as input. And yeah, it's not the AI we see in the movies; it's a trained model that does what it was trained to do and knows only that, for now.
SVD is so boring; it's basically just light background motion.
Yeah, definitely needs more work
SVD has the best orbit camera motion I have tried.
Other models can't even do tracking well... but SVD still needs more training for better results.
Hello. I installed Stable Diffusion, downloaded stableVideoDiffusion_img2vidXt11.safetensors, and put it in the SVD folder, but I don't see the SVD option on the home page. Why?
I have this version: commit hash 29be1da7cf2b5dccfc70fbdd33eb35c56a31ffb7. You can see here how to switch between different versions: ua-cam.com/video/RZJJ_ZrHOc0/v-deo.html
Thanks!