A little explanation on base shift and max shift for anyone looking to play with that:
Base shift is a small, consistent adjustment that stabilizes the image generation process, while max shift is the maximum allowable change to the latent vectors, preventing extreme deviations in the output. Together, these parameters balance stability and flexibility in image generation (a rough sketch of how the two combine follows the bird example below).
Using a bird as an example:
Increasing Base Shift: Raising the base shift results in a more consistent and stable depiction of the bird. For instance, the image might consistently show a bird with clear, well-defined features such as a distinct beak, feathers, and wings. However, this increased stability could lead to less variation, making the bird’s appearance feel repetitive or overly uniform.
Decreasing Base Shift: Lowering the base shift allows for more subtle variations and intricate details, like nuanced patterns in the bird’s feathers or unique postures. However, this added variability might make the bird’s image less stable, with occasional irregularities or minor distortions.
Increasing Max Shift: A higher max shift enables the model to explore the latent space more freely, leading to creative or exaggerated interpretations of the bird. For example, the bird might develop surreal colors, elongated wings, or fantastical plumage, but it risks straying far from a realistic bird representation.
Decreasing Max Shift: Reducing the max shift tightly constrains the model, resulting in a more controlled and realistic depiction of the bird. The image is likely to stay close to a conventional bird appearance, but it might lack creative or distinctive elements that make the bird unique or captivating.
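To make the trade-off above a bit more concrete, here is a rough Python sketch of the usual idea behind these two knobs: the effective shift is interpolated between base_shift and max_shift depending on how large the latent is, and that shift then bends the denoising schedule. The function names, the 256/4096 token breakpoints, and the exact formula here are illustrative assumptions, not the actual node source.

```python
import math

def resolution_shift(width, height, base_shift=0.5, max_shift=1.15,
                     min_tokens=256, max_tokens=4096):
    """Illustrative only: interpolate the shift between base_shift (small
    latents) and max_shift (large latents) based on latent token count.
    Breakpoints and internals may differ from the real node."""
    tokens = (width // 16) * (height // 16)  # rough latent-patch count (assumption)
    t = (tokens - min_tokens) / (max_tokens - min_tokens)
    t = min(max(t, 0.0), 1.0)
    return base_shift + (max_shift - base_shift) * t

def shift_timestep(t, mu):
    """Common 'time shift' form used by flow-matching samplers:
    a larger mu pushes more of the schedule toward the high-noise end."""
    return math.exp(mu) / (math.exp(mu) + (1.0 / t - 1.0))

# Example: a 1024x1024 image gets a larger shift than a 512x512 one,
# so its schedule spends more time at high noise before refining detail.
mu = resolution_shift(1024, 1024)
print([round(shift_timestep(t, mu), 3) for t in (0.25, 0.5, 0.75)])
```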
Thank you for presenting this stunning LTX model. Trying it out right now.
This is HUGE! Thanks for being a hero in the community and showing us how powerful local video gen could be!
Hi, thanks for all your work. I will test it today and leave some review videos on Civitai when I get it to work.
I think the model is trained at 25. Also, I've been getting 5 seconds no problem, but I have to do 3 seconds whenever I change the prompt.
I was never interested in local models until this came out. I'm going to find the best settings and squeeze every last thing out of this goldmine.
It would be cool if you opened a Discord server where people could discuss their projects and problems.
Could you create a video that explains all the basic parameters and what they affect? Examples would be great, because written articles often don't have any, and trying them out on a slow GPU is just a very slow way to learn. I have learned by trial and error what, for example, CFG does, but I still don't fully understand its interactions with other node parameters.
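Since CFG came up: the core of classifier-free guidance is a single line, which can help when reasoning about how it interacts with other settings. This is a generic sketch of the standard formula, not the code of any specific sampler node.

```python
import torch

def apply_cfg(noise_pred_uncond: torch.Tensor,
              noise_pred_cond: torch.Tensor,
              cfg_scale: float) -> torch.Tensor:
    """Classifier-free guidance: push the prediction away from the
    unconditional result and toward the prompt-conditioned one.
    cfg_scale = 1.0 just follows the conditioned prediction; higher
    values exaggerate the prompt direction (and can oversaturate or
    distort), lower values drift toward the unconditional output."""
    return noise_pred_uncond + cfg_scale * (noise_pred_cond - noise_pred_uncond)
```

Roughly speaking, because this guided prediction is re-used at every sampling step, pushing CFG higher tends to demand more steps or a gentler schedule to stay clean, which is one reason the settings feel coupled.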
This is fast, but we need to be able to control the strength of the latents and images.
❤ Thank you, sir.
What software do you use to create the white moving avatar, bro? Thanks.
So what is the longest you got it to do? Can you do 30 seconds?
I had to update from the batch file in the updates folder (ComfyUI), and then the custom nodes finally installed correctly. Simply using the built-in "update everything" button in Manager did not work.
I had the same problem. I tried the same thing you did and it worked. Thanks.
How much VRAM is required to run this? Or is it one of those that needs an RTX 4090 to run locally?
Works fine on an RTX 3060 12 GB.
It's great and all, but is it crash-proof on 8 GB of VRAM?
These companies need to roll out this stuff more gradually; these constant dopamine spikes are wrecking my sleep! Oh, and don't think we forgot: you still owe us that deep dive into Flux tools. 😉
Another 8-12 months and these obscure interfaces will start to go away in favor of far more intuitive controls and production-friendly ways to create video.
THIS IS NOT HUGE... I tested half the day and all night, and there's no way this even comes close to what commercial platforms are able to offer. The only thing it does better is faster renders, but the quality and prompt adherence are pure shite! Not to mention it is far too early to be claiming that something with unfriendly UX is going to be huge or is the next best thing...
Disingenuous, at best; dishonest, at worst.
Yes, all local video creation models are just for fun testing...
He made it pretty clear that for local AI video creation, this is huge. And it is.
Agree... it is fast as hell, as the OP mentioned, but quality isn't that good... I feel like CogVideoX is better, albeit slower.
The only huge thing is the fast rendering time; the outcome is shit!
@@fotszyrzk79 I don't know what you're expecting out of a fast model that doesn't require a bank-breaking GPU, isn't fully trained yet, and will be fully trained and released open source, but it really seems like you're missing the point.