I can't invest my time using generative AI to realize scenes from my script until I see a productive level of creative control. I call it the "shot list" test. Generate a shot list for a one-minute scene with dialog. Sort the shots into master and coverage. Generate AI prompts from the shot list. Upload character headshots for at least two characters in a composite scene at various angles. Render a one-minute master shot with reasonable blocking. This doesn't have to be a fight or other intense action shot but should be more than a static two-shot with no camera eye contact. Generate coverage shots from master shot frames of a specific duration. Assemble into a complete scene. Demonstrate consistent characters, costumes, lighting, set design, and color grading across all shots. Add lip-synch to the script dialog.
When I can see someone do this in a reasonable amount of time without excessive regeneration, I'll be interested in working with whatever tool(s) works.
Wait a few months
this!
Yep
The Xmas Coca-Cola commercial backlash is a great example of people's rejection of carefully cherry-picked AI shots looking just strange and fake enough to cause a mass negative reaction from viewers. Humans notice subtle weirdness because our brains are trained from infancy to perceive the specific physical properties of different phenomena.
" carefully cherry-picked AI shots looking just strange" 99% of Ai crap on the net
In a few years you won't notice. Most people don't notice good CGI either.
Veo is the current clear winner. Can't wait to use it.
I had a go at the image generator; it's so, so censored. I think you'd struggle to make a horror or action film.
I really hope they update Veo to allow you to import your own images soon. It would just be more useful, for example, putting myself into AI films. But I also hope they learn from the safeguards of ChatGPT and make it less strict while still protecting copyright and preventing "bad intentions."
R.I.P. SORA
Please have a couple of _tried & tested_ "Image To Video" prompts ready when testing these platforms.
It is for the sake of consistency. Otherwise, it's just confusing. We want to see what each platform produces with standardised tests. 🙏
People in the film industry are obviously going to explore AI just like they did digital FX; they have always been at the cutting edge of technology. Actors have always been replaceable, as many have found when they start demanding too much; it just means the unknown ones willing to do it for less can be edited to look more the part. I can see closer historical resemblances, better aliens, etc., with less time spent attaching uncomfortable prosthetics, makeup, and so on. Screenwriters who refuse to use AI won't be replaced by AI but by screenwriters who use AI.
B-roll ads seem likely to be the first to use AI video. Huge difference in production cost.
Google has developed a new competitor to Sora, enhancing video editing capabilities by allowing expansion into any aspect ratio.
That coat tho. :D U can tell it's the holidays now :))) JK, you're doing great! Keep it up.
Love Caleb as Lil Yachty!!
0:04 How can we expand it? Is it open to the public?
The unwanted "zoom in" is actually a tracking shot, or a handheld push-in. The "multicamera" is just another shot that could have been done with the same camera. Is "35mm" a 35mm lens, or a film shot on a 35mm film camera, or stills, or digital? The wording around camera technique and equipment gives me a hard time, not just when hearing it from others but also when using it myself, because terms with a very clear meaning on an actual set, or in the camera department, are not understood well by the AI tools out there. So it's very hard to tell the tool what to do, when in the real world it would be an easy thing to communicate. We would need more people from the film world to teach the tools. On the other hand, do we want to hand out that knowledge this easily?
Nice informative video about AI advances in cinema, thanks
Veo 2 is clearly better than Sora.
Hey. Do you have any videos that can help me with consistent characters from one image?
4:51 There are two plans; you can use Sora on the $20/month plan.
Yeah, but not enough to do anything substantial with it, given the number of attempts required per shot.
@user-on6uf6om7s OAI should consider a Sora-only plan or two, and a $50-70 tier with some relaxed-mode hours.
This is really going to change the game for creators trying to make poor-quality five-second clips that can't be used for anything.
Comment of the YEAR!! 💜
HI!!! Thanks again for the great video. I wanted to try Sora and YES I have a PAID sub to GPT, but when I went to sign up, I did not see where I can log in if I already have an account with them. Can you help with that, please?
The second syllable of raccoon is a definite no-no. I had a similar problem 18 months ago when asking Midjourney for an image of Milford Sound🤣🤣🤣
34:46 There's something strange with the eyes too.
I like the video that other AI YouTuber did where he tested Veo versus Sora, and they were both kinda disappointing a lot of the time. That said, I prefer reality to hype.
It wasn't happy with the coffee you brought
🎉🎉🎉🎉
There are problems in every shot example. Small snippets may pass on a phone, but every shot shows glaring oddness, or at least uncanny-valley-like off-putting elements, on a large screen.
$200 a month for surreal Sora is not going to fly. Never mind that it's pretty much useless; if you are going to charge real money, it had better be their best model, not the model they hope can make them a lot of money or that is convenient for them. Just charge per usage and offer their best. Then people with real use cases might use it. $200 a month pays for a proper computer in only a few months, and that's a lot of compute.
No more than 1.5 seconds of usable footage in all your test examples. AI video is too unpredictable and hallucinatory, and it mixes different physics speeds in the same footage (example: the dog in the pool's slo-mo is mixed with the near-real-time speed of the water ripples).
:(
Lol, ethically sourced just means it won't be as good as the Chinese AIs, because they use YouTube/Netflix/Hollywood as their biggest training sources.