One thing I love about these videos is the matter-of-fact descriptions of bizarre events like, "And then the tree steps off."
@@LiFancier 😄😄😄😄
The one thing I've learned about purchasing AI is to always go monthly, because something better will come out. I'm ending my Luma subscription this month since I already have RunwayML, Kling, Vidu, Leonardo, and Noisee.
Luma is by far the best for making clips for VJing, though I do like the quality of Runway's clip output better. So I made a little Python script that takes the output from Runway and extracts the first and last frame; I upload those to Luma, then drop the result back into the script, which stitches the two files together. You can kind of tell when it switches over because of the quality difference, but after upscaling and interpolating in Topaz you can't tell the difference. (Except if there is a speed change between the two, but I actually think that makes it even better for use as a VJ loop when BPM-mapped.) Edit: Also, you can just click the plus sign to add the frame, and in the popup you can select both frames at the same time. I don't know why you can't drag both in, though.
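For anyone who wants to try the same trick, here is a minimal sketch of the kind of script described, not the commenter's actual code; it assumes OpenCV and ffmpeg are installed, the clip file names are placeholders, and the Luma generation step itself still happens in the browser.

    # Sketch of the assumed workflow: grab first/last frames with OpenCV,
    # then join the Runway and Luma clips with ffmpeg's concat filter.
    import subprocess
    from pathlib import Path
    import cv2

    def extract_first_last(video_path, out_dir="frames"):
        """Save the first and last frame of a clip as PNGs and return their paths."""
        Path(out_dir).mkdir(exist_ok=True)
        cap = cv2.VideoCapture(str(video_path))
        total = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
        ok_first, first = cap.read()                       # frame 0
        cap.set(cv2.CAP_PROP_POS_FRAMES, max(total - 1, 0))
        ok_last, last = cap.read()                         # final frame
        cap.release()
        if not (ok_first and ok_last):
            raise RuntimeError(f"Could not read frames from {video_path}")
        first_path = Path(out_dir) / "first.png"
        last_path = Path(out_dir) / "last.png"
        cv2.imwrite(str(first_path), first)
        cv2.imwrite(str(last_path), last)
        return first_path, last_path

    def concat_clips(clip_a, clip_b, output="joined.mp4"):
        """Re-encode the two clips back to back (video only) with ffmpeg's concat filter."""
        subprocess.run(
            ["ffmpeg", "-y", "-i", str(clip_a), "-i", str(clip_b),
             "-filter_complex", "[0:v][1:v]concat=n=2:v=1:a=0[v]",
             "-map", "[v]", output],
            check=True,
        )
        return output

    if __name__ == "__main__":
        # runway_clip.mp4 / luma_clip.mp4 are placeholder names.
        extract_first_last("runway_clip.mp4")
        # ...generate the Luma clip from the saved frames, download it...
        concat_clips("runway_clip.mp4", "luma_clip.mp4")

Note that the concat filter re-encodes both inputs, so clips whose resolution or frame rate differ between the two services still need to be normalized (e.g. in Topaz, as mentioned above) before the join looks clean.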
live tutorial 🙏
I find Luma very unpredictable.
These are some prompts I made to get a poor girl from the 19th century to walk up to the camera.
I asked Bing for a picture of a 19th-century girl on a street with some horses in the background.
Then I removed the girl and the horses from the picture in two steps, so I ended up with a series of three stages describing my movie.
Prompt 1: Pic1 -> Pic2
The scene is set in a 19th century cobblestone street lined with Victorian-era buildings. Gas streetlamps and horse-drawn carriages are visible, reflecting the time period.
A girl gracefully walks up and into the picture from the left and stops naturally in front of the camera. The entire body and surrounding area animate with natural movement, maintaining high image quality. Static tripod shot with no camera movement (no pan, tilt, zoom, scale). The environment should animate naturally to match the girl's movement. Ensure steady, natural animation of all elements.
Negative Prompt: -blur, -distortion, -disfigurement, -low quality, -grainy, -warped, -JPEG artifacts, -pan, -tilt, -rotate, -zoom, -scale, -major background change.
Prompt 2: Pic2 -> Pic3
The scene is set in a 19th century cobblestone street lined with Victorian-era buildings. Gas streetlamps and horse-drawn carriages are visible, reflecting the time period.
The girl smiles warmly and cheerfully waves her hand at the photographer, while a man and two horses walk steadily up the street behind her. The entire scene, including all characters, animates naturally. Static tripod shot with no camera movement (no pan, tilt, zoom, scale). Ensure steady, natural animation of all elements including background elements like horses.
Negative Prompt: -blur, -distortion, -disfigurement, -low quality, -grainy, -warped, -JPEG artifacts, -pan, -tilt, -rotate, -zoom, -scale, -major background change.
Prompt 3: Pic3
The scene is set in a 19th century cobblestone street lined with Victorian-era buildings. Gas streetlamps and horse-drawn carriages are visible, reflecting the time period.
The girl is amused by the camera and continues to wave at the photographer, while the horses behind the girl show increasing signs of restlessness: they paw at the ground with their hooves, toss their heads, flick their tails, and shift their weight from side to side. One horse might even let out a soft whinny or snort.
The entire scene, including all characters, animates naturally. Static tripod shot with no camera movement (no pan, tilt, zoom, scale). Ensure steady, natural animation of all elements including background elements like horses.
Negative Prompt: -blur, -distortion, -disfigurement, -low quality, -grainy, -warped, -JPEG artifacts, -pan, -tilt, -rotate, -zoom, -scale, -major background change.
Only you could make the flying saucer glitch into a win 😂
@@techiechar 😄😄😄
Please just let me know when the video quality isn't garbage, thanks
Very informative video
First.. might actually try this out today. Thank you
@@heyGetoutofthere They're gonna put you on a wait list. Only paid users are enjoying this.
While this is interesting, I would prefer you push the tool to do what you want when it goes off the rails. You allowed the tool to dictate where you modified your story. That isn't my focus. Also, why don't you worry about continuity? The kid's shirt changed from the start to the end key frame. Again, I'm more keen to see how we can force the tool into our narratives. Thoughts?
What editing software do you use to make and edit your videos? I love the quality.
So this video demonstrates to me how smart the presenter is but also how unpredictable and mindless the AI program can be.
Do these sorts of platforms have the same connotations that the music generators have in regard to “stealing” content from other artists or copyright owners? Are they being sued like Udio and Suno? I want to jump on board with this stuff, but I feel like there’s this big cloud of negativity that needs to dissipate before I can use AI as a significant part of my workflow. But of course waiting and not doing anything never helped anyone. I’m trying to practice just not caring what others think, but I still feel like there’s potential to shoot myself in the foot if nobody is going to be taking AI-enhanced art forms seriously.
Why doesn't the website load?
So the AI is high as a kite when it comes to video, I see
Pixar type shorts are well in reach for the informed hobbyist.
Great job you did, I learned a lot from you 🎉🎉
I'm trying to use 100% of AI power but still can hardly get past the 5 seconds 😅
Super idea, Bob, gonna try it today. Love the dog and cat watching the spaceship lol, and we can get something even better that the AI thought of for us, out of the box lol. Being open-minded is key.
Wow, just a month ago I created a children's song about a bunny and other animals, and I was creating the video with Noisee. The one shown is just right.
Well it's easy to extend the videos by clipping the images in the video... the hard part is keeping it from getting weird lol.
Still on the free tier and Luma Labs has been processing a prompt from me for 2 days and the video still isn't ready!
@@roll4sanity yes… That is a very unfortunate aspect of the free plan. The wait times are pretty insane for now.
The honeymoon is over, it is time to put those dollars in!
@@Greenthum6 Just like real marriage. 😬
@@BobDoyleMedia 😂😂
That's creepy, bra! xd
3 days on the waiting list and still not able to use Dream Machine.
Luma treats its free users like 💩
Three-day waits to get a clip back are diabolical 😅
@aaronwatts5724 Yes, they purposely do that, and now there's a wait list for free users.
hello bob... hugs from my lair.
The conclusion of the video states that you can make a long-form video with Luma Labs Dream Machine. But the reality is that in this case, Katalist, Dream Machine AND Filmora were all needed to create the long-form video. You definitely CANNOT create a long-form video using just Luma Labs Dream Machine. Yet. :)
@@Baxterbrookies You use it to create the source material. I never meant to imply that you could do it all in one tool. One of the main points of my channel is that asking one tool to do everything is expecting too much. Many of these tools are designed to create bits and pieces of media that we are to creatively put together somewhere else. This video does explain how to make long-form videos using Luma to create the source video, and it is an interim technique that will work until Luma and the others allow for longer videos. Thanks so much for watching!
The tutorial should be called "Why AI is not ready to be used at all"…
@@MabelYolanda-c9i well I think that’s something of an unfair statement because clearly people are using it. It may not live up to your personal standards, but there are people who are utilizing it and understand that it’s only going to get better from here, but to completely ignore it or trash it this early in the game seems just a bit shortsighted to me. What’s happening right now is absolutely incredible and a creative person can absolutely use the output of these video services to create an end result.
Maybe you personally wouldn’t, but many would.
So I do believe that AI is ready to be used… Just perhaps not for your particular use case.
@@BobDoyleMedia… said Dr. Frankenstein when he asked his daughter to marry the Monster :)
@@BobDoyleMedia Great answer and assessment! Things are changing rapidly, in unexpected and wondrous ways.
No matter how long or short... without anything to tell, without content, without a story, they are just empty, soulless, morphing weirdness videos... the tools are like 30%... the other 70% is the story, the SCRIPT, the cinematography, and that can't be learned by just prompting and pressing enter or clicking lol!
@@AEFox I totally agree, and in fact, we focus on story development tools on this channel and I’ve got a new video coming up in the next couple of weeks on just that. The public will eventually tire of “tech demos” and need something to engage their minds. So I’m completely with you on this.
The tech here is amazing, but we have to be honest and say that as an animation and piece of storytelling, it's pretty awful - and by that I mean if you had actually animated this WITHOUT AI, it might show some technical proficiency (albeit a weird use of it), but as an exercise in narrative and storytelling, it's just not very good. Of course, that's not YOUR fault, rather the shortcomings of what AI can currently achieve. But that's not really the point. The fact that it can do this AT ALL is impressive. It may highlight how janky this amazing tech is at this moment in time, but I don't doubt a year or two from now, videos like this are going to look like the OG Will Smith eating spaghetti one does to us today.
I agree with you 80%, but I definitely could've gotten a better result if I had put more time into it.
@@BobDoyleMedia Yeah I don't doubt that. It's a long way from replacing Pixar just yet though. The level of control required to really fine tune and get EXACTLY what you want just isn't there at this stage.
That background music is unnecessary and annoying.
This video is an absolute mess...
Kling is much better than Luma.
Depends on the type of video. I wanted Kling to do some celebration videos; it sucked pretty bad, Runway did worse, but I was shocked at Luma's performance.
@@HappyBirthdayGreetings Image-to-video or text-to-video? I'm talking image-to-video only. I never use text-to-video anymore. With image-to-video, Kling has a higher success rate than Luma.
@bpvideosyd Yep, image-to-video. The best for the kind of video I do was Vidu, but it's low res. Kling was okay but ruined some parts of my videos. Note that I don't do characters in my videos. Then I switched to Luma and it worked fine for me, but I realise it just sucks when I experiment with some other use cases.
👍
@@HappyBirthdayGreetings Luma is still bad. It should be in beta, not even released to the public, let alone paid.