LTX Video In ComfyUI - The Fastest AI Video Model Run Locally - Tutorial Guide

  • Published Jan 13, 2025

COMMENTS • 93

  • @TheFutureThinker  A month ago +5

    Workflows Run On The Cloud (For Low/No GPU Users): home.mimicpc.com/app-image-share?key=d1ea3605ebe54306bc6876b6af49a85a&fpr=benji
    Freebie Workflows: www.patreon.com/posts/116627608/?
    For My Patreon Supporters: www.patreon.com/posts/116627998/?

  • @damarcta  A month ago +3

    I have to say that I'm grateful for how quickly the videos are uploaded. I use ComfyUI extensively, PDXL, Flux, CogVideoX, etc., and I never keep track of when new updates are released. So, thank you for the quick updates!

    • @TheFutureThinker  A month ago +4

      This is my hobby :) finding new AI and crafting something.

  • @content1  6 days ago

    Thank you for your video. Originally I had the problem of missing nodes that weren't available in the Missing Nodes installer; then I upgraded ComfyUI and now it works.

  • @crazyleafdesignweb  A month ago +6

    Looks promising for local AI Video. It feels like last year when AnimateDiff began.

  • @cgonv  A month ago

    I managed to make it work very well on an 8GB Nvidia, as fast as on your 24GB one! Fantastic, thank you very much for your help! That was revolutionary!

  • @Blenderlands  A month ago

    Is it good for stylized animation videos?

  • @bause6182  A month ago +3

    I can't wait to see ControlNets and finetunes for consistent characters on this model.

    • @TheFutureThinker  A month ago +1

      There is a v2v module for this AI that uses a reference video to control the motion.

    • @VFXShawn  A month ago +3

      @@TheFutureThinker Where can we find that v2v module?

  • @joshuadelorimier1619  A month ago +1

    For how fast it is, it's incredible. It can do portrait and landscape, but I also got some decent three-character shots; no dramatic action yet. One-minute renders on a 4070, image to video.

    • @jac001  A month ago

      1m13s on a 3060 12GB.

  • @jorgemiranda2613  A month ago

    Cool content!! Thanks for keeping us updated! Subscribed!

  • @WildCrashBNG  13 days ago

    Sir, which Python version is required for ComfyUI? Also, when I install LTX Video from the Manager, it says (IMPORT FAILED). What could be the reason for this?

  • @personmuc943  14 days ago +1

    Why am I getting an error message after it finishes generating saying "LTXVModel.forward() missing 1 required positional argument: 'attention_mask'"? The video never shows up.

  • @jac001  A month ago +7

    1 min 13 sec on a 3060 12GB, default text-to-video scene, with Pinokio. Happy to have finally arrived in this reality, and it only gets better from here!

  • @mr.entezaee  15 days ago

    size mismatch for decoder.up_blocks.6.res_blocks.2.conv2.conv.bias: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([256]).

  • @ForeverNot-wv4sz  A month ago

    I'm wondering when/if we're ever going to get something akin to AnimateDiff 1.5 LCM in SDXL terms. I can see we have motion models now for SDXL, but we can't use any other SDXL model with them, unlike 1.5, where the motion model loads as a UNet for whatever 1.5 model is selected. We now have the tools for SDXL to make really fast images with LCM and the DMD LoRA, so we can use LCM on any non-Lightning SDXL model. Unless I'm just uninformed and we already have something like this, and I missed it with all the new tech coming out so quickly.

  • @jcinewilliams8819  3 days ago

    How do we save our generated videos?

  • @PyruxNetworks  A month ago

    Have you tried the SkipLayerGuidanceDit node in ComfyUI? Outputs seem better with it.

  • @Radarhacke  A month ago +1

    It didn't work; I can't get the nodes LTXVConditioning, LTXVScheduler, and LTXVImgToVideo. I updated all of ComfyUI and the Manager. I also tried a manual install, with the same result: when loading the graph, the following node types were not found: LTXVConditioning, LTXVScheduler, LTXVImgToVideo. I also can't select LTX in the CLIP Loader. You see, nothing worked for me.

    • @geoffphillips5293  A month ago

      I needed to do a full ComfyUI upgrade using the Manager. I also needed to do "Fix Node" by right-clicking the node that loads the model. And of course refresh the webpage and restart the server. Plus you need the git clone line and the model, as per the GitHub page (see the sketch after this thread).

    • @content1  7 days ago

      I have the same problem.
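
      For the manual install route mentioned in the fix above, a minimal sketch of the "git clone line" (the repo URL is from the ComfyUI-LTXVideo GitHub page; the install path is an assumption, so adjust it to your setup):

        # Clone the LTX custom nodes into ComfyUI's custom_nodes folder,
        # then restart the server and refresh the page, as described above.
        import subprocess
        from pathlib import Path

        custom_nodes = Path.home() / "ComfyUI" / "custom_nodes"  # assumed path
        subprocess.run(
            ["git", "clone",
             "https://github.com/Lightricks/ComfyUI-LTXVideo",
             str(custom_nodes / "ComfyUI-LTXVideo")],
            check=True,
        )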

  • @cr_cryptic  A month ago +1

    13:31, lucky fuxkr… Mine’s been taking like an hour & it hasn’t even made it to step 1/30 yet. 🤦‍♂️ Why? 😭

  • @BlackMatt2k  A month ago

    Can it do equirectangular projection?

  • @CoreyJohnson193  A month ago

    If only we could use ControlNet with this... Perfect combo. Still waiting for X Portrait 2; combined would be illegal!

  • @eveekiviblog7361  A month ago

    Is it possible to make a good video from an image? Or are decent results only achievable with t2v?

  • @CSBRHO  A month ago

    Thank you, my friend! Do you know how to upscale the video after the process?

    • @TheFutureThinker  A month ago

      In ComfyUI, you can use Upscale by Model.
      Another way is to use AI upscaling software like Topaz.

  • @The_Python_Turtle  A month ago

    Thank you. Do you know how to save the output .webp as a video file so you can import it into a video editor? I tried saving the .webp, but it is just a static image. I was thinking there might be a ComfyUI node that can do this.

    • @TheFutureThinker  A month ago +1

      Connect the output images to a Video Combine node and set its output format option to mp4; that way you don't have to convert one file format to another.
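
      If you are instead stuck with an already-saved animated .webp, a minimal conversion sketch (assuming Pillow and imageio with its ffmpeg backend are installed; the file names and frame rate are placeholders):

        # Convert an animated .webp (ComfyUI's default animated preview format)
        # to an .mp4 that video editors accept.
        # Assumes: pip install pillow imageio imageio-ffmpeg
        import numpy as np
        import imageio
        from PIL import Image, ImageSequence

        frames = [
            np.asarray(frame.convert("RGB"))
            for frame in ImageSequence.Iterator(Image.open("output.webp"))
        ]
        # Use the frame rate your workflow rendered at (24 is a guess here).
        imageio.mimsave("output.mp4", frames, fps=24)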

  • @howtowebit8033  A month ago

    Does it work in Colab?

  • @hicks100  A month ago

    Excuse me:
    1. Can LTX Video generate animated videos? Can it be used with LoRA and SDXL stylized prompt words?
    2. In image-to-video, the prompt words are automatically analyzed and generated. How can I manually modify them?
    3. I want the characters in the generated video to stay closer to the original pictures. How should I set the parameters?

    • @TheFutureThinker  A month ago

      For your questions:
      - Anime video? Yes.
      - You can use a text prompt in the TextEncoder node.
      - Try higher step counts; this model can run 100-200 steps without slowing down too much.

    • @hicks100  A month ago

      @@TheFutureThinker Can LoRA work on LTX Video?

  • @contrarian8870  A month ago +1

    I recently saw the CogX video model in ComfyUI with the "orbit_left" and "orbit_up" LoRAs from DimensionX, used to make simple 3D clips (no complex motion). Can LTX be used with these LoRAs instead of CogX to speed things up?

    • @TheFutureThinker  A month ago

      That's like asking whether a sea lion can merge with an African lion; they're both called lions 🦁 (LoRAs trained for one base model don't carry over to another.)

  • @aivideos322  A month ago

    Your hard drive must be huge; I have run out of space for all these new models.

    • @TheFutureThinker  25 days ago

      I deleted some old ones; any AI files I haven't used in over a month, I delete.

  • @francsharma7276  A month ago

    8GB of VRAM on a 3070 Ti is not enough for image-to-video.

  • @vasilybodnar168  A month ago

    I'm interested only in I2V, and in that case LTX does nothing. It generates almost static shots: no camera movement, no object movement, nothing. Weird.

  • @francsharma7276  A month ago +4

    On my 8GB 3070 Ti, text-to-video works absolutely great.

    • @navaneeth8260  15 days ago

      Is LTX paid?

    • @francsharma7276  15 days ago

      @navaneeth8260 No, it's open source, and now with STG you can generate Kling-level videos through image-to-video on 8GB of VRAM.

  • @Andro-Meta  A month ago

    I've pushed it up to twelve seconds with decent results on an RTX 3080.

    • @TheFutureThinker  A month ago

      Nice! 👍

    • @ammarzammam2255  A month ago

      And how much time did it take to generate those 12 seconds on your GPU?

    • @Andro-Meta  A month ago

      @@ammarzammam2255 A rough estimate (I was at a friend's, showing him how to build workflows and testing LTX Video with him on his computer) is around 3-4 minutes at 20 steps. It started doing a transition into a whole new scene, so we added "transition, new scene" to the negative prompt, and surprisingly that worked.
      I'll test it on my 4090 later this week, but it was surprisingly zippy, even making a twelve-second video on a 3080.

  • @FusionDeveloper  A month ago

    Update Manager: git pull.
    Update ComfyUI: git pull.
    Update All (for updating nodes): in the GUI.
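
    A minimal sketch of the first two steps as commands (the paths are assumptions; "Update All" for custom nodes is easiest from the Manager GUI itself):

      # Pull the latest ComfyUI core and ComfyUI-Manager via git.
      # Point the paths at your own install.
      import subprocess
      from pathlib import Path

      comfy = Path.home() / "ComfyUI"  # assumed install location
      for repo in (comfy, comfy / "custom_nodes" / "ComfyUI-Manager"):
          subprocess.run(["git", "-C", str(repo), "pull"], check=True)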

  • @YoungBloodyBlud  A month ago

    well now i know where this is gonna be used it starts with N and ends with W iykyk

  • @ian2593  A month ago

    Tried adding a cigarette to the girl with blood on her face/top, and it couldn't handle it. It would be interesting to nail down what its strengths and weaknesses are. A good step forward, though.

  • @FusionDeveloper  A month ago

    Works on a 1080 Ti with 11GB of VRAM.

  • @AB-wf8ek  A month ago +1

    It seems like a lot of people using these tools don't realize that having immediate access to these models is way beyond most people's experience in the past.
    In the 3D rendering community, if you watched a video on the latest simulation and rendering research, you'd be lucky to actually get to use it five years later, when a commercial application finally got around to integrating it, and it certainly wouldn't be free.

    • @TheFutureThinker  A month ago +1

      That's right. The same AI model put into a commercial product and released as open source is not the same thing. Like Mochi and Pyramid Flow.

  • @geoffphillips5293  A month ago

    It's almost insanely fast; in my ten-minute test this morning before heading out for work, I was hoping the quality would improve with more steps. (Edit later) Hmm, same problem as others: faces distorting, hands the same. Not as good for people walking as Cog, by a huge margin. It could do over 200 frames, using close to my 24GB max. If it could just be a bit more stable with people's features, it would be great. Text-to-video produces very fake-looking people. Still, it's always fun to try out something new!

  • @mareck6946  A month ago

    You don't get better output with a lower-end GPU; it's the same output, it just takes longer. What matters is VRAM, though, and that's where nGreedia wants to rip you off. More VRAM means less swapping (or none at all) and more temporal frame buffers for temporal animation, etc.

  • @tinfoilhatmaninspace4944  A month ago

    Until it can do better than just 6 seconds, I'm not bothering with the video side of AI. Images have been perfected with Flux, so until there is sound and longer videos, I'm not wasting my time with this.

  • @lenxie4501  A month ago

    Why do all my videos look like shit... must be a prompt problem.

    • @TheFutureThinker  A month ago

      Be as detailed as possible, and describe the style as much as you can think of. And put the prompt in a good structure (see the example below).
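
      A hypothetical example of that kind of structure (subject, action, setting, style, camera, lighting; the wording is illustrative, not from the video):

        # Illustrative only: one way to structure a detailed LTX Video prompt.
        prompt = (
            "A woman in a red raincoat walks slowly down a rain-soaked city "
            "street at night. Neon signs reflect off the wet asphalt. "
            "Cinematic, realistic, shallow depth of field, slow tracking "
            "shot from behind, soft blue and pink lighting."
        )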

  • @ja-no6fx  A month ago

    Every single time I try to follow one of this guy's guides, it doesn't work.

  • @SecretsandFactschannel  21 days ago

    bro does have a n.dify model, aw hell nah

  • @theradomguy5581  A month ago

    I ran this on a 1060 6GB and it was slow. I wouldn't recommend it unless you have at least a 3070 or better.

  • @aminesoulaymani1126  A month ago

    Maybe I'm stupid; I tested the image2video functionality all day long. I would be ashamed to release that and call it "video generation".

    • @kleber1983  A month ago

      What do you mean? I'm doing it right now and finding the results amazing...!

  • @insurancecasino5790  A month ago

    I think most are better off just waiting a year if they don't want to use a GPU rental service. These guys are making it way more complicated than it should be by now. We have image generators now that can generate consistent images very quickly with low VRAM. I know they are going to have to go back to basics to generate videos if they want to achieve high quality video for low VRAM machines. It's just a fact. There are limits to this. I'm sticking with paid generators for now. I will still enjoy watching your vids, but overall this stuff is a headache for most. IMO.

    • @TheFutureThinker  A month ago +3

      Yes, for production, me too.
      For local models, we just have to keep track of them, see what's going on, and see whether there's any potential to make things happen. But this model is really lightweight, and with good input images it can render better quality than CogVideoX.

    • @insurancecasino5790  A month ago

      @@TheFutureThinker Well, that's good. I'm just waiting for the one-click options like image generators have. I will try this when I have time. I just remember the early SD days; it was a nightmare for folks who were not developer-savvy. Then we got Pinokio, which made it so simple that I've been easily downloading and using models to this day. The early Faceswap broke my brain until I found Roop Unleashed; that's super simple, just a few clicks. So I'm hoping the video models will get like that too.

    • @AB-wf8ek  A month ago +3

      Everyone gets burned out by trying to keep up with AI development. Even a lot of the developers can't keep up.
      I was telling a friend a while back, it's like trying to build a house on shifting sand. It doesn't make sense to overinvest in the current process, because it's all changing so fast.
      Whenever you reach your limit, it's ok to walk away and take a break.
      I actually completely skipped out on SDXL. I've been happy playing with AnimateDiff and SD1.5, and I'm just now getting into Flux.

    • @FusionDeveloper  A month ago

      @@AB-wf8ek Try SDXL Turbo Ultraspice with 7 to 11 steps, Euler, SGM Uniform, CFG 1 to 3. It's one of my favorite models to use; the quality is insane and it's fast.
      Flux is great, but slow.

    • @insurancecasino5790  A month ago +3

      @@AB-wf8ek My hard drive can't keep up either, so I have two externals now in case I want to try all the models.

  • @LagostinaCookie  9 days ago +1

    another Patreon beggar

  • @SavageKillaBees  A month ago

    This isn't groundbreaking at all; it produces content at the level of "here's my first AI video". The content it produces looks terrible. Human animation still looks bad.

    • @TheFutureThinker  A month ago +7

      @@SavageKillaBees Yes, for artists without knowledge and understanding of how AI models work and how they differ, who care only about the generated video, it is not a good one.
      People who follow the movement toward decentralized AI will see it from another point of view.

    • @kalakala4803  A month ago

      @@TheFutureThinker That's true.

    • @SavageKillaBees  A month ago

      @TheFutureThinker Do you think local models will ever come close to commercial models like KlingAI? None of the local models come close. If I want to create commercially viable image-to-video, I have to use websites. What do you think?

    • @TheFutureThinker  A month ago +2

      @@SavageKillaBees Yes, that's right; Kling and Runway are my favorites. Their AI models are very large and privately trained on datasets that might not come from the open-source datasets online, so their video performance is way better.
      I have also suggested that people using AI video for ads, movies, etc. nowadays just use those commercial ones. Don't waste time generating locally yet; locally run models are not at that level.
      Like this one: at only 2B parameters, how can it compete with a 40B+ AI video model that has far more motion to reference?
      But local AI models keep improving. It will be like how the PC evolved, from room size to box size to pocket size.

    • @SavageKillaBees  A month ago

      @@TheFutureThinker I am just extremely impressed with what KlingAI and Runway can do. It just feels and looks superior to local models. I wonder if by next year we'll get some local models we can run on top-end local hardware. I have a 4080 right now but plan on getting a 5090 next year. You are right, 5-billion-parameter models won't really cut it; we need much larger, more robust models for higher-quality generation. I want to create commercial-grade content, but I can't do it locally. Only images.
      I've seen what LTX Video can do, but it will be sidelined for now: it's great for demonstrating the concepts, but if you're results-oriented, it's just not there yet.

  • @upscalednostalgiaofficial  A month ago

    Does not work on my 8GB VRAM: "torch.OutOfMemoryError: Allocation on device" :(