Stable Video Diffusion Tutorial: Mastering SVD in Forge UI

  • Published Oct 6, 2024

COMMENTS • 136

  • @pixaroma
    @pixaroma  7 months ago +11

    If the SVD tab doesn't show up in your new version, just install another copy of Forge in a different folder and roll back to the version from the video that has that tab; I explain it here ua-cam.com/video/BFSDsMz_uE0/v-deo.html
    I got the checkpoint model for SVD from here
    civitai.com/models/207992/stable-video-diffusion-svd
    Remember it can only generate at 1024x576 px or 576x1024 px; you can upload bigger images, but try to keep the same proportions.
    It generates a 4-second video; you can probably take the last frame and continue it for another 4 seconds, and so on (a sketch of both steps follows below).
    You need around 6-8 GB of VRAM.
    I used Stable Diffusion Forge UI.
    If you want to learn more about AI or have questions, join my Facebook group
    facebook.com/groups/pixaromacommunity
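
    The snippet below is my own illustrative sketch, not something from the video: it center-crops and resizes an arbitrary image to the 1024x576 resolution SVD expects, and pulls the last frame out of a generated clip so you can feed it back in for another 4-second continuation. It assumes Pillow and opencv-python are installed; the function names and file names are placeholders.

    # Illustrative sketch only (assumes Pillow and opencv-python are installed).
    from PIL import Image
    import cv2

    def prepare_input(path, out_path="svd_input.png", size=(1024, 576)):
        """Center-crop and resize an arbitrary image to the SVD resolution."""
        img = Image.open(path).convert("RGB")
        w, h = img.size
        target_ratio = size[0] / size[1]
        if w / h > target_ratio:                      # too wide: trim the sides
            new_w = int(h * target_ratio)
            img = img.crop(((w - new_w) // 2, 0, (w + new_w) // 2, h))
        else:                                         # too tall: trim top/bottom
            new_h = int(w / target_ratio)
            img = img.crop((0, (h - new_h) // 2, w, (h + new_h) // 2))
        img.resize(size, Image.LANCZOS).save(out_path)

    def last_frame(video_path, out_path="continue_from.png"):
        """Save the final frame of a generated clip as the next input image."""
        cap = cv2.VideoCapture(video_path)
        cap.set(cv2.CAP_PROP_POS_FRAMES, cap.get(cv2.CAP_PROP_FRAME_COUNT) - 1)
        ok, frame = cap.read()                        # BGR frame, fine for imwrite
        if ok:
            cv2.imwrite(out_path, frame)
        cap.release()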

    • @nietzchan
      @nietzchan 6 months ago

      Still can't get it to work in Forge. I don't know what I'm doing wrong, does it have to use SD 1.5 checkpoints and VAE first?
      Tried using Animagine XL 3.1 with XL VAE for the initial image feed and running SVD just sent me to BSOD

    • @pixaroma
      @pixaroma  6 months ago +1

      @@nietzchan It needs a lot of video RAM; that's why it's crashing. It just needs a photo at that size (1024x576 px) and works with that SVD model to generate. If it crashes, your video card probably can't handle it.

    • @nietzchan
      @nietzchan 6 months ago

      @@pixaroma I think my Forge installation has memory management issues, or maybe something is wrong with the UNet setting. I managed to run it once when I was only running SVD, but the second time it just crashed.
      I'm currently using a 12 GB 3060 and 16 GB of RAM, and I think the bottleneck is actually the RAM, since Forge automatically loads models to RAM on start.
      I want to try the offload-from-VRAM options and see if that helps.

    • @nietzchan
      @nietzchan 6 months ago

      Confirmed, I need more VRAM. I tried the offload-models-from-VRAM args so SVD would have plenty of room on the GPU. I'm using an RTX 3060 12 GB; even though SVD only uses around 8 GB of VRAM, the Forge backend still keeps the image diffusion model in VRAM, resulting in OOM on my GPU. The offload args work, but they don't offload the SVD model once you generate a video, so I'm back to square one after each generation. Oh well.

    • @pixaroma
      @pixaroma  6 months ago

      @@nietzchan Sorry it didn't work; these things usually get less VRAM-hungry over time, so in a few months maybe we'll have better models and systems.

  • @CornPMV
    @CornPMV 7 months ago +4

    Nice tutorial; I enjoy the AnimateDiff extension, I get really good and consistent results using it!

    • @iangillan1296
      @iangillan1296 3 months ago

      Where do you use it? In Comfy?
      I installed AnimateDiff inside Forge UI, and there are no changes; AnimateDiff didn't appear.

  • @PredictAnythingSoftware
    @PredictAnythingSoftware 7 months ago +1

    Thank you for the video using Forge. Please make more videos using Forge, since this is the only GUI where I can run SDXL models on my low-end RTX 2060 6 GB VRAM PC.

    • @pixaroma
      @pixaroma  7 months ago

      Sure, I also have an older computer with the same video card, and with Forge I managed to get it to work; it even seems to work better on my new RTX 4090, so for a while I will do only Forge, unless Automatic1111 adds something that Forge can't do :)

  • @baheth3elmy16
    @baheth3elmy16 7 months ago +1

    Thanks! Another good tutorial video!

    • @pixaroma
      @pixaroma  7 months ago

      Thanks :) glad you like it

  • @SumoBundle
    @SumoBundle 7 months ago +2

    Thank you for this tutorial

  • @SantanuProductions
    @SantanuProductions 20 days ago

    I thought I would get past those expensive subscription models for image2video AI. Now I got caught up in Topaz for the hi-res fix.

    • @pixaroma
      @pixaroma  20 days ago

      I use Topaz Video AI for video upscaling.

  • @cruz2480
    @cruz2480 4 months ago

    Great video, subscribed. Keep making great content.

  • @FranzGorask
    @FranzGorask 2 months ago

    Very good content, I appreciate it. Keep it up!

  • @fishpickles1377
    @fishpickles1377 4 months ago

    Very cool! Wish I had the hardware to run it!

  • @robroufla
    @robroufla 6 months ago

    Thanks! Yes, it's a shame about the lack of settings for the SVD output path. It'd be great to have camera movement and prompt guidance like in Deforum but with the consistency of SVD. Soon, I'm sure ;)

  • @Ollegruss_Music
    @Ollegruss_Music 3 months ago

    Thanks!

  • @richctv
    @richctv 7 months ago

    Awesome tutorial. Keep up the great work

    • @pixaroma
      @pixaroma  7 months ago

      thank you :)

  • @DrDaab
    @DrDaab 6 months ago

    Great, thanks a lot.

  • @Robertinosro
    @Robertinosro 6 months ago

    Cool stuff, thank you.

  • @UmarandSaqib
    @UmarandSaqib 7 months ago

    Nice one!

  • @zimxh
    @zimxh 1 month ago +1

    Would you know why I'm getting this error whenever I try to generate?
    TypeError: KSamplerX0Inpaint.__init__() missing 1 required positional argument: 'sigmas'

    • @pixaroma
      @pixaroma  1 month ago

      They keep updating Forge; some things work and others don't, it has been very unstable lately. You can check whether anyone else has the same error, or report the issue on their page github.com/lllyasviel/stable-diffusion-webui-forge/issues

  • @bekosh248
    @bekosh248 4 months ago

    Great video! Do you know if Forge or any other UI like this can inpaint a certain section of your image, so that only that inpainted portion gets animated?

    • @pixaroma
      @pixaroma  4 months ago

      I don't know of any; I only saw some online platforms that have a motion brush, but I haven't seen anything like that in Stable Diffusion yet.

    • @YoshikiBeats
      @YoshikiBeats 3 months ago

      Only with ComfyUI.

  • @Rithman
    @Rithman 15 days ago

    I don't have the SVD tab and my Forge is updated (I ran Update.bat and it said "Already updated"). I'm on the main branch, and other branches (like dev) are not there if I do "git fetch" -> "git branch". What am I doing wrong? Help plz...

    • @pixaroma
      @pixaroma  15 days ago +1

      The latest version doesn't have it; check this video where I talk about how to get back to an older version ua-cam.com/video/BFSDsMz_uE0/v-deo.htmlsi=lITYLYk1millsWY-

  • @FantasyArtworkAI
    @FantasyArtworkAI 4 months ago

    Mine creates the video in the folder \Stable Diffusion Forge\webui\output\svd, which is the same output folder where img2img and txt2img are.

    • @pixaroma
      @pixaroma  4 months ago

      I haven't used it for a while, but I think I set all the paths in the settings to lead to the same folder.

  • @lorenzodecarlo9125
    @lorenzodecarlo9125 4 months ago

    Thank you for the video! I don't have an svd folder in webui > models. Why?

    • @pixaroma
      @pixaroma  4 months ago

      It should be there since you installed Forge UI; maybe you have another UI, or something like A1111? Not sure what to say.

  • @woodtech1951
    @woodtech1951 3 months ago

    Great video. A question for you: my GPU has 24 GB but the software only shows the dedicated 8 GB, and I'm trying to make sure it is utilizing all 24 GB. I tried more frames and Task Manager shows that it taps into the shared GPU memory, so maybe I'm just over-analyzing it. (RTX 3060 Ti for reference)

    • @pixaroma
      @pixaroma  3 months ago +1

      Depending on what you give it to do, it will use more VRAM, don't worry; with big images or video it needs more VRAM, so it uses more. It takes what it needs at that moment. I have 24 GB of VRAM and it takes about 4-5 seconds to generate a 1024 px image.

    • @woodtech1951
      @woodtech1951 3 months ago

      @@pixaroma thanks for the quick reply! So does that mean I could possibly increase the WxH of the output video?

    • @pixaroma
      @pixaroma  3 months ago

      @@woodtech1951 Unfortunately that model only works with that size, so increasing it will not work. It's better to just use an upscaler afterwards, like Topaz Video AI or something. For images you can increase the size, but for that specific video model you cannot. It's an old model and I haven't found a better version yet :( You can take a look at Luma AI for image to video, which is better than this and gives you about 5 free videos, or RunwayML version 3. I rarely use this model anymore because it doesn't have too much motion. I am playing more with ComfyUI, where I have more options for animation; as I learn more about it I will do more tutorials on it.

  • @MelizzanoDaquila
    @MelizzanoDaquila 2 months ago +1

    My Stable Diffusion Forge does not have SVD.
    I tried downloading it again and the SVD tab still didn't appear. :(

    • @pixaroma
      @pixaroma  2 months ago

      Did you try the last stable version? The new updates mess up a lot of things ua-cam.com/video/RZJJ_ZrHOc0/v-deo.htmlsi=1d9jphW2PuLB0RK6

    • @pixaroma
      @pixaroma  1 month ago

      Check this video: How to Install Forge UI & FLUX Models: The Ultimate Guide
      ua-cam.com/video/BFSDsMz_uE0/v-deo.html

  • @mert5809
    @mert5809 3 months ago

    Thanks for the video. I use Comfy, and although I generate with the same settings you show, my results are very noisy, with grain all over. Whether it's related to Comfy itself, I don't know.

    • @pixaroma
      @pixaroma  3 months ago +1

      Not sure; I will play more with ComfyUI in the coming months.

    • @mert5809
      @mert5809 3 months ago

      @@pixaroma I was using tiled upscale for the image, and I realized it affects SVD output quality in a bad way. Just leaving a little tip for others.

  • @levagicien9904
    @levagicien9904 1 month ago

    Has SVD been removed from SD Forge?

    • @pixaroma
      @pixaroma  1 month ago

      Yes, but you can still install an older version that has it; see how I explained it here ua-cam.com/video/BFSDsMz_uE0/v-deo.htmlsi=ygQEkbZg41I8aiYS&t=986

  • @makadi86
    @makadi86 6 months ago

    Is this the best SVD, or are there other recommended models we can try?

    • @pixaroma
      @pixaroma  6 months ago +1

      For Stable Video Diffusion I didn't find a better one; Stability AI released just one model for video, compared to the image models, where they released more than one.

  • @onlineispections
    @onlineispections 2 months ago

    Which Stable Diffusion Forge UI version should I download? I don't see the SVD option. Thanks

    • @pixaroma
      @pixaroma  2 months ago

      I had the one with a commit hash that starts with 29.

    • @onlineispections
      @onlineispections 2 months ago

      Do you have a link to download that one as a .bat file? @@pixaroma

  • @anon3253
    @anon3253 4 months ago

    I'm trying to utilize SVD on my GTX 1660 Ti, but it doesn't seem to be working. I'm encountering error messages.

    • @pixaroma
      @pixaroma  4 months ago

      Maybe your video card doesn't have enough VRAM, not sure; for me it worked with those settings.

  • @manolomaru
    @manolomaru 5 months ago

    ✨👌😎🙂😎👍✨

  • @RenoRivsan
    @RenoRivsan 5 months ago

    Does the checkpoint matter??

    • @pixaroma
      @pixaroma  5 months ago

      I think so; this one works with these settings, but others might have other recommended settings.

  • @wayneout
    @wayneout 5 months ago

    I get the error message "AttributeError: 'NoneType' object has no attribute 'set_manual_cast'" when I upload an image from my computer. I don't know how to correct this error. Thank you

    • @pixaroma
      @pixaroma  5 months ago

      Did you use the exact same settings? Also make sure the image size is the same as in the video; if not, resize it. There are some bugs when you use a width and height that are not divisible by 64, so maybe that can fix it (see the snapping sketch below).

  • @MisterWealth
    @MisterWealth 4 months ago

    How do websites like Leonardo make it look like the wings on a fly are flapping, for example? I'm having a hard time generating a high-quality video like that from SVD; it's super grainy.

    • @pixaroma
      @pixaroma  4 months ago

      Not sure what kind of models they are using; probably if you generate a lot of them, some would have more interesting movements. Other AI tools I've seen have a brush control where you paint to tell it what to move in the image, so you have more control. Or like with Sora, when it's released, a prompt will be able to tell it what to do.

  • @sircasino614
    @sircasino614 5 months ago

    So you can't have a prompt for "how" you want it to animate or move?

    • @pixaroma
      @pixaroma  5 months ago

      No, it's all based on the image; maybe they'll fix that in the future.

  • @WizzardofOdds
    @WizzardofOdds 4 months ago

    I seem to get a bit closer to animation using this. I have tried AnimateDiff but all I get is a still image. When I click generate with the SVD module I can see a progress bar, but then I get an error. Is this because I did the one-click download of Forge (as that may be the issue with AnimateDiff), or is it possible that I just don't have the right amount of VRAM? I have an NVIDIA GeForce GTX 980 Ti.

    • @pixaroma
      @pixaroma  4 months ago

      I think you need more than 6 GB of VRAM; usually RTX cards with 8 GB or more work better. I saw another comment saying that 6 GB gave an error.

    • @WizzardofOdds
      @WizzardofOdds 4 months ago

      @@pixaroma Thanks, I guess I need an upgrade. Your videos are very helpful.

  • @k_y_l_3
    @k_y_l_3 6 months ago

    Is anyone else having an issue with "RuntimeError: Conv3D is not supported on MPS"?
    Some people on GitHub said it might be something to do with the PyTorch version, but I think mine is the right version.

    • @pixaroma
      @pixaroma  6 months ago

      I didn't get that error, but from what I found online it seems related to macOS and Apple processors. The problem appears to be that PyTorch's Metal Performance Shaders (MPS) backend does not support the Conv3D operation on Apple Silicon (M1, M2, etc.). I am on Windows so I'm not sure what that means in practice, but maybe it makes more sense to you; PyTorch probably doesn't support the Apple processor the way it should. Updating PyTorch is suggested, but that will only work once they've added that support (a possible workaround is sketched below).
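
      For anyone on Apple Silicon, here is a small sketch of a possible workaround (my own addition, not tested with Forge): PyTorch has a CPU fallback for MPS operations it doesn't implement, enabled by the PYTORCH_ENABLE_MPS_FALLBACK environment variable, which may let Conv3D run on the CPU instead of raising the RuntimeError, at the cost of speed.

      # Possible workaround sketch for Apple Silicon, not verified with Forge.
      # PYTORCH_ENABLE_MPS_FALLBACK=1 tells PyTorch to run MPS-unsupported ops
      # (such as Conv3D) on the CPU instead of raising a RuntimeError.
      # Set it before torch is imported (e.g. export it in the shell before
      # launching the webui, or set it here at the very top of the process).
      import os
      os.environ["PYTORCH_ENABLE_MPS_FALLBACK"] = "1"

      import torch
      print(torch.backends.mps.is_available())  # True on Apple Silicon builds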

  • @kalpeshthorat3066
    @kalpeshthorat3066 1 day ago

    Extension name?

    • @pixaroma
      @pixaroma  1 day ago

      It's not an extension; it was integrated in older versions of Forge. It's not there anymore, unless you downgrade.

  • @onlineispections
    @onlineispections 2 months ago

    Hello. I installed Stable Diffusion, downloaded stableVideoDiffusion_img2vidXt11.safetensors, and put it in the SVD folder, but I don't see the SVD option on the home page. Why?

    • @pixaroma
      @pixaroma  2 months ago

      Not sure what the cause could be; I haven't used it in the last 5 months. Are you using Forge or another version? Maybe you have Automatic1111 instead of Forge UI, or maybe you updated to another version that doesn't have that tab. Usually the SVD tab should already be there even if you didn't add the model.

  • @justlivedekhing
    @justlivedekhing 4 months ago

    Brother, I got this error, any help please:
    raise FFExecutableNotFoundError(
    ffmpy.FFExecutableNotFoundError: Executable 'ffprobe' not found

    • @pixaroma
      @pixaroma  4 months ago

      It's possible you need to install FFmpeg; I haven't had that error yet. (A quick check is sketched below.)
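
      A quick illustrative check (my own sketch, not from the thread): ffmpy raises FFExecutableNotFoundError when ffprobe is not on the PATH, so this just reports whether the FFmpeg binaries are visible to the Python process running Forge.

      # Check whether the FFmpeg binaries that ffmpy needs are on the PATH.
      import shutil

      for exe in ("ffmpeg", "ffprobe"):
          found = shutil.which(exe)
          print(f"{exe}: {found or 'NOT FOUND - install FFmpeg and add it to PATH'}")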

  • @onlineispections
    @onlineispections 2 months ago

    Hi. The downloaded build of Stable Diffusion has no SVD option; what can I do?

    • @pixaroma
      @pixaroma  2 months ago

      Did you try a stable version? ua-cam.com/video/RZJJ_ZrHOc0/v-deo.htmlsi=kLp0fpY5boKvYImP

    • @onlineispections
      @onlineispections 2 months ago

      @@pixaroma Hello, I did downgrade to 29be1da7cf2b5dccfc70fbdd33eb35c56a31ffb7 but it does not have SVD in the interface. Do you know another method to download a Stable Diffusion build with SVD using a git command?

  • @kridadkool1319
    @kridadkool1319 5 months ago

    Fam, I wanna know about that AI voice. DOPE!! vid

    • @pixaroma
      @pixaroma  5 months ago

      I am using VoiceAir AI.

    • @Kevlord22
      @Kevlord22 5 months ago

      It's good; until I read this, I had no idea it was an AI voice. Pretty cool.

  • @caucho6.6.86
    @caucho6.6.86 6 months ago

    How can I add a prompt to the video, if I want to make specific videos?

    • @pixaroma
      @pixaroma  6 months ago +1

      This one only works with images, so you can do text-to-image first and then use that image to make the video; it doesn't do text directly to video.

  • @ArtistrystoriesUnleashed45
    @ArtistrystoriesUnleashed45 1 month ago

    How do I install it in Forge UI?

    • @pixaroma
      @pixaroma  1 month ago

      I just reverted to an older version that has SVD; you can install it in a separate folder and just keep that older version ua-cam.com/video/BFSDsMz_uE0/v-deo.htmlsi=v9zBYtWpLJuAfidm&t=984
      I created a bat file that goes back to that version, see the video:
      @echo off
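      rem The %DIR% variable is not set by this script; it is expected to point at
      rem the portable git bundled with the one-click Forge package (adjust or drop
      rem the PATH line below if you use a system-wide git install).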
      set PATH=%DIR%\git\bin;%PATH%
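      rem Check out the last Forge commit that still had the SVD tab.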
      git -C "%~dp0webui" checkout 29be1da7cf2b5dccfc70fbdd33eb35c56a31ffb7
      pause

  • @denizkendirci
    @denizkendirci 24 days ago

    I installed Forge via Pinokio, and I don't have an SVD tab in the UI.
    How do I fix it, anybody?

    • @pixaroma
      @pixaroma  24 days ago +1

      Check pinned comment

    • @denizkendirci
      @denizkendirci 24 days ago

      @@pixaroma thanks so much, sorry for not paying attention to pinned one.

  • @olternaut
    @olternaut 2 months ago

    For some reason I don't see the Train, SVD, or Z123 tabs in my Forge UI install. I'm sure I have the latest install. Anybody know what the problem is?

    • @pixaroma
      @pixaroma  2 months ago +1

      The latest install is probably not the stable version that I use; I have a video on the channel about downgrading or updating Forge. I use the version with the commit that starts with 29.

    • @olternaut
      @olternaut 2 months ago

      @@pixaroma I'll have to look for it. Then again, even though auto1111 is slower, it seems at least to be more stable. When ForgeUI gets their act together I'll check it out again.

    • @pixaroma
      @pixaroma  2 months ago

      @@olternaut Well, the problem is that it was not updated officially, and any new version might break your Forge. That's why I switched to ComfyUI. A1111 is a little slower, but I also have to wait for updates when something new appears, like SD3 and so on, while in ComfyUI I have it the next day. ua-cam.com/video/RZJJ_ZrHOc0/v-deo.html

    • @olternaut
      @olternaut 2 months ago

      @@pixaroma I hear what you're saying. But comfy seems to be needlessly complex. It's like Dad yelling at me to do my homework and I begrudgingly get to it after dragging my feet. lol

    • @pixaroma
      @pixaroma  1 month ago

      Check this video: How to Install Forge UI & FLUX Models: The Ultimate Guide
      ua-cam.com/video/BFSDsMz_uE0/v-deo.html

  • @rakibislam6918
    @rakibislam6918 1 month ago

    How do I generate an 8-10 second video? SD or Comfy?

    • @pixaroma
      @pixaroma  1 month ago

      I haven't used SVD for a few months and no new version has appeared. As for ComfyUI, I will do a video when I get to that part; I still have more to show on the image side before I get to video.

    • @rakibislam6918
      @rakibislam6918 1 month ago

      @@pixaroma Is there any open-source video generation model that can generate an 8-10 second video?

    • @pixaroma
      @pixaroma  1 month ago

      I don't know of any that goes that long. Look for CogVideoX; that is the latest video model I know of.

  • @fixelheimer3726
    @fixelheimer3726 6 months ago

    The snow overlays were not created with SD, I guess?

    • @pixaroma
      @pixaroma  6 months ago

      No, it is just a snow overlay video

  • @idolgalaxy69
    @idolgalaxy69 7 months ago

    Can we do batch rendering?

    • @pixaroma
      @pixaroma  7 months ago +1

      I didn't find an option for video, so I don't think it is possible or I didn't find it.

    • @idolgalaxy69
      @idolgalaxy69 7 months ago

      @@pixaroma Thanks~ your tutorial is great and clear~

  • @snatvb
    @snatvb 6 months ago

    The worst thing is that I can't really control it :(
    It would be great if I could add prompts, masks, etc., like in other SD tools.

    • @pixaroma
      @pixaroma  6 months ago

      Yeah, I understand; hopefully they improve it in the future. Right now it is all random and needs a lot of tries to get something nice. But 2 years ago image generators were basic too, so video will probably get better; it just needs time.

    • @snatvb
      @snatvb 6 months ago +1

      @@pixaroma yep, I agree :)

  • @twd2
    @twd2 1 month ago

    My Forge UI does not have the SVD tab!!!

    • @pixaroma
      @pixaroma  1 month ago +1

      The latest version doesn't have it anymore; you only get it if you install an older version. The version I used back then was at a commit that starts with 29.

    • @pixaroma
      @pixaroma  1 month ago +1

      Check this video: How to Install Forge UI & FLUX Models: The Ultimate Guide
      ua-cam.com/video/BFSDsMz_uE0/v-deo.html

    • @twd2
      @twd2 1 month ago

      great thanks 😍!!!

  • @KevlarMike
    @KevlarMike 5 months ago

    299 one-time payment for Topaz, but at least it's a one-time payment ❤

    • @pixaroma
      @pixaroma  5 months ago +1

      I think I got it on Black Friday; it was cheaper then :)

  • @dziku2222
    @dziku2222 4 months ago

    Doesn't work for me; animated images just get elongated or squished with some corruption, instead of those cool animations you showed. I use your dimensions and the model from the link. Why?

    • @pixaroma
      @pixaroma  4 months ago +1

      Not sure what to say; maybe they changed something since I made the tutorial. If that happens with every image you use, I can't find an explanation.

    • @dziku2222
      @dziku2222 4 months ago

      @@pixaroma Sorry to bother you, but it looks really interesting and I would like to get it running - maybe the cause of the error is simple for someone far more experienced than me.
      I've discovered that it works normally when I'm using the baseline Realistic Vision model that comes with Forge UI - but not when I'm using something generated with old SD 1.5 models like AbyssOrangeMix.

  • @CsokaErno
    @CsokaErno 1 month ago

    SVD doesn't come up for me.

    • @pixaroma
      @pixaroma  1 month ago +1

      Check the latest video, the one with Forge and Flux, from the playlist; I use an older version that had an SVD tab, and the new version doesn't have it yet. So you can go back to that version to get the SVD tab, but it will not have Flux and the new stuff the new version has.

    • @CsokaErno
      @CsokaErno 1 month ago

      Thank you.

  • @lowserver2
    @lowserver2 6 months ago

    Still ran out of memory with these exact settings on 8 GB of VRAM.

    • @pixaroma
      @pixaroma  6 months ago

      I don't have an 8 GB card to test it, but online it was said it could work; sorry to hear it doesn't :(

    • @lowserver2
      @lowserver2 6 months ago

      Sorry, I tried again after restarting Forge and it did work. However, I can't get good results yet. It mostly wants to do panning, and the stuff outside the original pic becomes all distorted, so idk. @@pixaroma

    • @pixaroma
      @pixaroma  6 months ago

      Try different seeds until you get one that works; unfortunately we don't have control, hopefully future models fix that.

    • @pixaroma
      @pixaroma  6 months ago

      @@lowserver2 Also try using images where the subject doesn't touch the edge, i.e. is not cropped; so if you have a portrait, make sure it has some space around it, then it can rotate it without distorting. If it's on the edge, it tries to extend it and can fail.

  • @JarppaGuru
    @JarppaGuru 5 months ago

    3:08 Yes, the seed is like a million variables, some of them complete garbage. It tells you what AI actually does: it's programmed to do it, there's no intelligence.
    It will not create a rabbit unless the training data has a rabbit, and it will be the same rabbit for those prompt words.
    It works well with this robot because the training data has many images of this robot.
    It did not work well for a picture of a man with my face swapped in; the background moves if you find the right seed, but "me" doesn't change at all LOL.
    I got so bored; the first attempt worked but the rest did not, lol lol, all that waiting to get garbage!
    You can't even choose to render frame 7 without making a video,
    or render multiple images using different seeds so you can choose.
    The seed is like the motion from one trained clip; it will do exactly that if your image matches (trained to do it, no AI).
    Seed 1 could be pan left, seed 2 could be pan right, etc.
    What have we learned? AI results need to be checked. Don't build Skynet and plug it into the red button (it will push the red button if it is programmed to do it); but if a human checks the result and a human pushes the red button, not the AI, then we don't have Skynet, just AI (a tool, automated instructions, like I say).

    • @pixaroma
      @pixaroma  5 months ago

      Well, in this case, since it's based on an image, the image is the variable; you can have infinite unique images as input. And yeah, it's not the AI we see in the movies, it's a trained model that does what it was trained on and knows only that, for now.

  • @retikulum
    @retikulum 4 months ago

    Such a piece of crap extension. I create one video, the VRAM gets filled, the video finishes, the VRAM stays full -> OOM when trying to create the next video. So, restarting SD after every video creation. How stupid.

    • @pixaroma
      @pixaroma  4 months ago

      It needs a lot of VRAM, or you can't do much with it.

    • @retikulum
      @retikulum 4 months ago

      @@pixaroma Huh? No, like I said: the 1st video works, the 2nd video OOMs because the VRAM is still full from the first video. It doesn't get flushed; PyTorch keeps the VRAM reserved.

    • @pixaroma
      @pixaroma  4 months ago

      Yeah, it's possible it doesn't handle memory the way it should, but if you have more, it never gets full, so it still works; it never crashed on 24 GB of VRAM. It still seems to be an old version and I haven't seen a newer one that works with Stable Diffusion, so I keep using this one; I am waiting for Sora or alternatives.

  • @TheMaxvin
    @TheMaxvin 6 months ago +1

    SVD is so boring; it's basically just background and light motion.

    • @pixaroma
      @pixaroma  6 months ago

      Yeah, definitely needs more work

    • @ranjithgaddhe9818
      @ranjithgaddhe9818 6 months ago

      SVD has better orbit camera motion than others I have tried.
      Other models don't even handle tracking well... but SVD still needs more training for better results.

  • @sb6934
    @sb6934 7 months ago

    Thanks!

  • @onlineispections
    @onlineispections 2 months ago

    Hello. I installed Stable Diffusion, downloaded stableVideoDiffusion_img2vidXt11.safetensors, and put it in the SVD folder, but I don't see the SVD option on the home page. Why?

    • @pixaroma
      @pixaroma  2 months ago

      I have this version: commit hash 29be1da7cf2b5dccfc70fbdd33eb35c56a31ffb7. You can see here how to switch between different versions ua-cam.com/video/RZJJ_ZrHOc0/v-deo.html