Using a combination of Luma and Kling seems to yield the best results. Kling has the best end-frame tool, but Luma is better at camera movement. So it's best to get the end frame from Luma, and use Kling to interpolate between the start and end frame since the quality and animation are better. Hopefully Kling's next update will provide an easier workflow.
Luma 👏👏👏👏👏👏❤️❤️❤️
"Keep distance static" or 'static camera distance' works well with Kling. And it sometimes understands clockwise, but not counterclockwise 😅
Zooming in or out at the right place on the other hand is trickier
Luma is great for transforming shots; Kling isn't very good at it, but it's much better at everything else. Great job again! For video generators, I came up with COALS:
Camera: camera movement (FPV, over the shoulder, etc.).
Object: the subject of the video.
Action: what the subject/object is doing.
Location: the time (1500 AD/CE), the place (city, bar, bedroom, etc.), and the background (including lighting).
Style: 50mm lens, neon, silhouette, etc. Lighting can be added here too (gel lighting), along with any other styles (sepia, Pixar, etc.).
It doesn't work all the time and definitely doesn't work with Runway, but for Kling, Minimax, and Luma it works great.
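As a hypothetical illustration (not a prompt from the commenter), a COALS-structured prompt might read: “Slow dolly-in (Camera) on a weathered fisherman (Object) mending a net (Action) on a foggy harbor dock at dawn in 1920s New England (Location), 50mm lens, muted sepia tones, soft diffused lighting (Style).”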
I get the worst camera movement in Kling.
I've just moved over to Kling from Luma.
Luma every now and then would give us a shot Kubrick would be proud of, real fancy and energetic. I get very little in Kling :(
Hmm. I use the RunwayML help section as a guide for the camera motions. Sometimes I have a hard time describing the shot I want. A lot of the time I will also use either Leonardo or MJ for the beginning frames... With Kling you have to prompt the style; I've gotten several shots of anime characters morphing into a realistic style when I don't add one.
Hey, there are actually more movements in the dropdown menu. I guess you should use the middle mouse button to scroll down to see them.
Ha. Took me a while to notice that. 😅
I have a test for you and any other AI filmmakers to try: “an extreme closeup of a stylish men’s dress shoe pressing the gas pedal of a sports car,” or some iteration of that. I could not get this simple insert shot to work on any platform. I eventually had to find stock footage and run it through Gen-3’s video-to-video to get it to work. You can see this on my channel in the video titled “The Gauntlet Episode 1.” That shot was my greatest AI video challenge to date. A failed one.
I'd generate it using either Ideogram, Flux, or Mystic.
They are really good at prompt understanding.