She's driving to the job that used to belong to a human.
Ooooof, touché.
Exactly. I don't know why anyone buys into this crap. They've taken ALL the skill away except for how you "prompt". "Oh, you can text? Wow, you must be a great artist."
The examples are epic. Hope it delivers.
I hope they also add ‘Image to Video’ at launch. That’s what I need, because it gives me more control and it’s better for consistent characters.
I'm hearing that'll come later, but probably not TOO much later. And...I just found out: We get Video to Video as well! WILD!
That would certainly be helpful, but then again, so would having iterative control over precise elements, parametric control, and some kind of preset or templating system, etc, etc, etc. Prompts have to be the most idiotic interface to a complex generative piece of software that could have possibly been implemented.
@@dmwalker24 That's not how it works
I'm assuming in about 3 or 4 days there will be an "announcement" from Pika or Moonvalley or an unknown. Because nobody is allowed to be the "new best thing" for longer than a week.
Sora 2.0 coming out would break the internet
Was there any upgrade to Moonvalley in the past 6 months?
I'm rooting for Pika to get back in this race and come out with something dope.
A week? Pffft. Maybe a day or two. God forbid we get to breathe for a few minutes to figure out just one tool.
I love your sense of humor Tim. Thanks for keeping us up to date.
ha! Thank you! This stuff is so insane you've got to make jokes and laugh a little. It's BONKERS!
@@TheoreticallyMedia I've been watching your channel for a while. I'm about to finish a 35-minute video composed fully of AI-generated videos, AI narration, and AI sound effects, featuring consistent characters. I remember you saying it would be a while before that happens. You were right; it took three months to complete.
WOW! These are truly amazing times for A.I. art & now video…
Thanks, Tim, for all your work keeping us up to date 👍🏻
This does put a smile on my face, seeing other models becoming better.
I really hope we see this for video games ASAP.
None of these have been at a level worth paying for thus far. I'm paying for this one.
People are totally sleeping on this announcement; this should be SORA-level excitement, for reasons I'll explain further. This looks even better than Sora; the legs while walking seem much better than the Sora Tokyo example. I'm using Luma right now and it's the best we've got so far, but its price is very high, and the worst part is their TOS, with limited commercial rights and terms so vague that it may as well be NO commercial rights.
Runway, meanwhile, has literally the best commercial-rights TOS on the market, hands down. Complete commercial rights: you retain all copyright and ownership of the videos you create or edit with Runway's tools. This applies to both free and paid plans (Free, Standard, Pro, Unlimited). It literally cannot get any better than that.
I'm completely sold on Gen-3; can't wait to use it.
Nah, get your eyes tested.
Luma is the best right now?
@@iangillan1296 Show me an AI video generator that is currently available to everyone AND the highest quality. Nope, you can't, because Kling is not available to everyone, and neither is Sora or Google's Vidu. Luma is it, but its commercial license is greedy: subscribe for life or lose rights to your videos.
@@iangillan1296 How is it not, if it's the only thing we've actually used?
@@dniilii You're mixing it up with Runway, maybe?
Pricing alone makes Runway the best option right off the bat!
That is 100% their secret weapon. They've got a warchest and can likely win on that alone.
Glad I kept my Runway sub; can't wait to try it out. Also love my Milk Street cookbooks.
This channel has helped me discover things that I've somehow not heard about from other feeds. Much appreciated.
Sora better release now or no one's gonna care because of the rising competition!
I can't finish projects fast enough to not look outdated with these drops.
"We Are Living In A Simulation" - is becoming more believable each month that passes by.
1000%
How ?
@@chicobraz4335 What do you think AI science will come up with in the future? It can already see and hear. Smell, taste, and touch are already being worked on. AI doesn't need to be embodied to create a believable reality.
Heh, would be nice if the writers of that simulation made things more interesting.
On this planet there are like a billion-plus gamers all running simulations on phones, consoles, PCs, etc. Imagine if we're just characters in one person's sim game, and there are billions of gamers all running sims up on the next level. This one's not even the latest or best game out, and the dude playing it might uninstall it for a better game.
Hot damn! We've officially entered the next phase of video models. I've been loving Dream Machine; combined with FaceFusion and upscaling, it has given me some incredible results. Looking forward to trying Gen-3 out.
oh, 100% this is Phase 2 of AI Video! No question there!
Luma really shouldn't be working on "the next phase" while this phase doesn't even work most of the time.
I think the next phase is when it stops failing or morphing 😆🤣😂😅
I would be interested in the prompts you are using.
I tried Luma yesterday; the average waiting time for a free generation was like 25-30 minutes for a 5-second video. So basically it's already behind a paywall, and the cheapest option is 30 USD per month, which is very pricey.
Think about the market right now. It moves so quickly they have no choice but to work on the "next" phase if they wanna stay relevant.
@@slightlyambitious Right now they are known as the one that doesn't work. They could make 400 phases; it won't matter if they don't work.
Awesome as always! LOVE your sense of humor. It's like a ninja. Boom. Slay, onward. Thank you Tim!
Haha, it’s dry and dad for sure! But, all this is so bonkers you just gotta laugh!
I totally ate up the jazz legend "Benny Kingston" bit 😄 Well done 4:26
It's the best Fake Jazz album of all time! Haha
he's had a hard fake life
@@TheoreticallyMedia I did note the weird piano key arrangement. I make music videos and am finding piano keys seem as difficult as fingers.🖐😵💫
Wow, it looks amazing! It's crazy how fast near-Sora-level AI video options are getting closer and closer. I can't wait!!!!!!!!!! Gives me something to HOPE for! Wow, in the coming days, eh? I might just sign up for an unlimited plan near my birthday then! Can't wait to try it out!!!!!!
The game only truly changes when Midjourney drops their AI video.
I can't wait for that either. But-- it does sound like that might be a bit longer. Last I heard, they were going to push 3D before video. That said, 3D Midjourney also sounds bonkers!
@@TheoreticallyMedia Midjourney is just a few people behind it; it's not like a big company, idk.
I also thought that, but this is pretty impressive...
@@zakaris7259 this is just wrong
I've stopped using Midjourney. They make beautiful images, but I can't get the control that I can get with Ideogram and Stable Diffusion, and MJ's overpriced. I certainly do look forward to their 3D and video tools; those things may get me to resubscribe.
I would like to see what this new Runway can do with animation: cel, anime, Pixar, etc.
Thank you for keeping your videos non-gimmicky, easy to comprehend, and to the point. You are a skilled presenter.
WANT! Can't wait to play with this, I was just wondering when Runway was gonna level up. Thanks Tim for the news!
I’m thinking this week! I have no inside info on that, but I’ve got a gut feeling we’ll see it before or on Friday. Fingers crossed, as I’d love to have a weekend to play with it for a Monday video!
This combined with lip-syncing, image to video, consistent characters, and ability to have multiple consistent characters interacting with each other in a scene would open the way to creating full movies with AI.
Will not happen. AI is OK for these slow, shit "portrait" videos. Base video and massive editing will always be needed.
@@plinker439 That's what was said about music generators like Suno... but I just generated a complete song in one take that passes, and no one I've played it for has called it out as AI.
Admittedly, a really close listen does reveal its true origin to someone who knows exactly what to listen for, but overall it does most things well enough that most people don't appear to think twice about it.
And while I think you're correct in some sense regarding where we currently are with AI video generators, I feel like you missed the common theme around AI: that this is the worst it will be, and it will likely only get better.
Once many of the "creation" tools are refined, editing tools using the same AI will democratize high-quality results that will compete with actual film studios... and I suspect that by the end of this year this will be a foregone conclusion, barring any issues that develop from the contentions about how AI training was and is done, which could potentially stall such progress.
Last year I said something similar regarding AI music generators: that soon people would be able to create entire albums of the music they want. This is basically true now.
If this is allowed to progress, it makes perfect sense that within a couple of years, max, people will have easily accessible AI tools to develop shorts, or possibly full-length films, that will be difficult to distinguish from the real thing.
Depending on legal issues, that could include modifying real films with little effort, such as replacing an actor with the person you want to play that part, or changing certain aspects of the film to make more sense to you.
@@Fastlan3 Yeah, the problem is that none of these AI companies are filmmakers; they don't understand how filmmaking actually works, and thus all the tools they have released so far have been focused on generative AI without fine controls, which is useless for actual filmmaking, besides maybe occasional B-roll. Sure, you get occasional pan and dolly options, and you have to walk before you run, but the way any actual filmmaker would ultimately want to use this stuff isn't the way the technology has been developing; basically, we've just been getting fancier and fancier tech demos.
They could have gone the other way and pushed this as a replacement for VFX work, which is probably its ideal use case, but no: hype, hype, hype, and "we're gonna replace filmmaking in a couple of years." Way to get everyone not on board and pissed off at your technology, and most of it has just been done to hype the tech up and get more and more investor money.
When we get those fine controls, when I can easily train my own characters into it, when I can move the camera and light the scene how I want, then you've actually got something... but that doesn't seem to be the end goal of any of these companies or what they are working toward.
@@plinker439 A pile of BS! Only 3-4 years ago, did you even think we'd have all these AI tools? No. In not even a year, AI video has made huge jumps forward; you're not very familiar with Moore's law, I see. Only a few months ago people thought it would take years to get to the level we're at today, and it's already here. You're making a critical mistake by thinking naively. Once we get closer to real-world simulations like Nvidia Omniverse, AI video will be so real that there won't be a need for huge locations, actors, and bla bla. Tyler Perry canceled an $800 million studio expansion after he saw Sora. Do you think he's a moron? 🤣👍
Another fantastic blend of informative and humorous video all in one wrap! Grazie, amigo :))))
Last comment: secret plot twist is that this is SORA but limited and licensed out 😂
OMG GEN-3!! This is breaking news for me! Thanks for covering it.
oh man, I actually got the scoop!! That's EXCELLENT! I mean, you're PLUGGED IN!
It was a pretty mad dash this morning for sure!
Can't wait for your video covering it ;)
Can't believe how exciting it is to be a filmmaker right now. Cheers Tim. Always love your summaries.
Not a filmmaker, more like a prompt-based AI video creator. Filmmaker means actually filming something with a camera, working with equipment, and working with people to get the right shot. This type of field definitely needs an official name.
Great Video Tim!
Thank you!!
Amazing! We cannot wait to try it! Another great video, thank you!
Great video, and a big step for AI with GEN-3. May the competition bring us more astonishing results.
What these apps need to do is allow us to run locally so we can use them for uncensored work that allows mature stuff like horror, crime, or adult movies. Eventually it's going to have to be allowed, and the company that does it is going to make a good chunk of money. With Stable Diffusion 3 lagging behind right now, we kind of need that alternative.
Incredible! This will open up creative video production for many!
3:14
This is quite a remarkable generation 😲 Wow!
It's so funny that we talked about this. You called it like a champ Tim. I gave you a shout-out in our coverage. Did you know something in advance? If you are free to say. I'm just curious.
Oh, that’s awesome! Honestly, this one caught me by surprise as well! There were faint whispers they were up to something (which I think we all knew) but no one knew what or when.
When Dream Machine dropped though, I knew Runway wouldn’t be far behind…next stop: Pika!
Can't wait to start creating with it. I am with you on Pika. If they dropped a new model tomorrow, no surprise. 😂
Like I've said before, this kind of technology is going to revolutionize VFX. I'd love to see tools like this integrated into After Effects and editing software like DaVinci Resolve.
The moving scenes and the music should all come together as multimodal AI video world generation, a complete cinematic experience.
Great video like always, Tim! Always like trying the newest AIs you suggest! Ty
Wow! All this is almost overwhelming! I can't wait to try this out!
That's not a new chapter of creativity, it's a new book!
10 seconds is already cool! If it's really what they're showing, I might subscribe.
We need to start training AI to remake Game of Thrones season 7 and 8. It’s literally the most important reason for AI to exist.
With the help of stuff like Runway 3, the last season can be fixed!
It was fine, get over it.
3:35 Those are definitely not ants. 😂
I can't wait to see what people make with this beyond the demos.
Hey man, always appreciate your videos. Keep it up.
Great stuff, again! Don't you get tired trying to keep up with all the new platforms and new versions, lol? What I miss in all these updates is more creative control over the characters; the background swapping looked cool, but a lip-sync (to audio) feature for consistent characters would be so powerful! It seems there are two paths now that need to merge: on one hand, text-to-video or image-to-video creation, and on the other, the platforms that can (if available to the public) animate a person in a picture based on an audio fragment (speech or singing). I can't wait until all these things are brought together in one system with multiple editing possibilities (basically feeding in a script page as a prompt, with an added dialogue prompt or an audio upload). I imagine an interface with separate input boxes for parameters like 1: background + situation + action, 2: characters/action, and 3: the dialogue between characters or voice-over. Imagine being able to upload files for all three prompts: a style/background image, a character sheet, and an audio file. The potential of AI comes alive with extensive creative control at every level.
Yup! Agreed! We’re still sort of in the kitbashing era of all this stuff. And, at some point there will be a platform that really nails it.
On the one hand, I’m really excited by that idea. On the other, I always worry that’ll make everything look homogeneous. But, at the same time, I always count on creative people to push the envelope, so I’m sure that’ll work out fine.
We’ll be there soon enough. I mean, heck, just look at the end of this video with the Gen-1 output from March of last year. That was “cutting edge” at the time!
Benny Kingston is the GOAT. What a life story. That Nobel prize for acoustics was something else.
That sizzle reel, I want to see that movie!
Can't wait to get my hands on this. Warping and morphing kills so many scenes in Gen2. This looks like a big step forward.
The moment warping and morphing are finally removed, I think that's when Hollywood will become really afraid and start quaking in its boots. It's only a matter of time till everyone in the world will be able to make their own blockbuster-type movies without needing millions to make one. Things are gonna get very interesting indeed.
Things are going to get crazy!
2:57 That is just like Florence, Italy. The urban train that crosses the city
I think you’ve just given me an excuse to go to Italy. For…research. Not wine and food. Yes, AI video research!
@@TheoreticallyMedia You can offset it from taxes then :) 😎
Wow, mate, that was quick! Great breakdown, mate.
Outstanding Reel!
Nicholas KILLED it there!
I love your reviews. Very spot on. I'd like to see you do a full review of the emerging tools usable for the AI film industry that's growing up around these releases. Cheers, S.
Yes. She left the iron on.
GREAT VIDEO MAN THANKS FOR THE INFO!
I'm happy to see Runway making progress. I briefly subscribed to Runway and Pika earlier this year but the results I was getting from their image to video models were pretty bad. Fingers crossed! Companies like these seem to be more accessible than OpenAI and their Sora. Also from what I've heard SD3 has some interesting, if not peculiar, licensing terms (based on the @OlivioSarikas video). Runway's Gen3 might just be the boot in the butt the others need.
WOW, WOW, WOW, now this looks amazing. Competition is heating up and that's great for us all who use the tools. The difference in quality from only a year ago is huge. I'm loving all this :D:D:D
Wow... how do you keep up with all these AI changes and updates? I can barely handle two or three at a time. UGH! I realize I'm posting on your Gen-3 post, but I have to confess that between text-to-image, text-to-video, and ChatGPT, I find the latest music craze the hardest to grasp. I partially blame the creators: I have never seen such lousy documentation, and yet they are raking in the dough. You are by far the best AI spokesperson, so I come back with more suggestions: WORKFLOWS. I would love to see workflow videos on Udio. Workflows for using extensions, inpainting, etc., but a biggie is how to fix or edit issues. For example, I worked on lyrics, and after creating in Udio, I had whole verses that didn't seem to fit and that I wanted to remove or replace. But how? What's the workflow? I can't imagine how you feel. This one program is burning me out and the fun is starting to fade. Suno isn't much better, but at least they have a link taking you back to the home page. Udio needs structure and a better site to accommodate the flow. Keep up the good work, and yes, I'm going to check out Gen-3. Damn you :)
Yes! A full Udio workflow tutorial would be very much appreciated, Tim! Poor you, Tim, you must be exhausted. But, we need your video tutorials!
This will plateau before it ever achieves what could be considered fully passable realism on par with actual cinematography
Remember when people started making "AI movies" or an "'80s Dark Souls movie adaptation"? Yeah, all we need is someone to start making those now.
oh, I was right in there! I did a pretty hilarious "Air Bud as a Dark 80s Fantasy Movie" -- Haha, I did end up canning it. I might highlight it here at some point as a "remember when" moment!
Very cool. Great vid! Thank you.
Thanks for reporting on this fast developing technology.
Competition is so good :D Every one of these AI video generation companies is going full steam ahead, and we benefit from it all!
Awesome as always
Love your videos and how much you invest in your work! Thanks.
Now we're talking! 🙂
Thanks Tim! How do you ever keep up with this whirlwind of emerging tech?
Haha...gave up sleeping in March of 2023!
I was just told that Gen-3 will be available "in the next days" or so ;) Guess we have a hard weekend ahead then :D
I’m legit hoping for a Friday drop, mostly so we all have time to play with it!
my God, this all looks so good!
Wonder who is going to upgrade next? Pika, Haiper, or will Midjourney upend the game by adding video capabilities? At this point anything is possible; the next episode of Star Wars: The Acolyte might actually be adored by fans.
steady on now with that last quote!
@@piggywahwah 🤣
Wow, very excited for this.
I can see video ControlNets and video inpainting tools in the future that will help clean up AI mistakes and improve AI video generation.
Yup! Even right now, running a V2V tends to correct a lot of stuff, although that can also add in extra wonk.
The custom training Gen-3 mentioned is pretty interesting as well, although I presume it'll be costly.
ok ... I am cautiously impressed.
Right there with you!
Great that Luma updated their features; too bad that everyone who already used up their generation credits has to wait a full two weeks to try it, or buy into a higher tier.
I feel 'ya...
Luma has a terrible commercial license in its TOS, while Runway literally has zero restrictions; even free users own their creations/video generations outright.
I hope to play with these soon. Morphing is not an issue for the types of videos I make. Image to video is what I am after, as I don't want super-realistic output.
Thanks for another great video dad.
Thanks, son! Let's go play catch after you finish your chores! Actually, let's just skip the chores and catch and go get some pizza.
Great info Tim! Can't wait to get my hands on their new model. Were you still going to link to Runway's World Model explainer?
God it should be criminal to advertise these models and then hit us with a ‘coming soon’
I am still waiting to tell my chatgpt to read a story like a robot.
A couple of days' wait isn't exactly a crime. Find something else to do while you wait.
They had to, since Luma dropped.
@@robertdouble559 Not even this specifically; I'm talking about Sora, ElevenLabs music, etc.
I find the comments decrying a lack of tension or storytelling in these clips to be amusing. You combine shots to tell a story. They gain context and tension when paced together. It's a little thing called editing!
This is it. It’s funny, one of the weirdest workflows is sitting down in Premiere (or any editor) dropping in a clip and thinking, what comes next? Generate that, rinse and repeat.
It occurred to me that I was literally "writing" in an NLE.
That’s wild.
I remember seeing the future of editing change when a bunch of people were excitedly watching an editor use an NLE for the first time. This is an even more seismic shift!
Hi thanks for updating us with all these crazy developments. Do we have anything which has resolved the consistent characters issue?
Currently, that’ll be image to video. But that’s not immediately launching with Gen3. Still, shouldn’t be far off.
I'm sure the best use case for this would be extending non-AI footage with clip extensions or using AI to inpaint videos.
Excellent, you make my dreams come true.
The link to the world models video is the friends we made along the way
New Video, fine, saved as "Goodie" for later 😊
I want a Swinging London world simulator
You know, I was just talking bad about them, saying that Runway was abandoned. I take that back and praise them for this gift to humanity.
That reel, whilst very impressive, looks like it's been "Topazed"
It might have been, but I don't think so. What's funny, is that I actually did run a Topaz on it, but forgot to put it in the video, haha. I did post up those results on my Twitter account if you want to see it. It does look better! Maybe Topaz x2?
That's scary 😟 but at the same time awesome.
It's pretty nuts. The door to creative filmmaking is pretty wide open right now!
@@TheoreticallyMedia Yeah why is everyone 'scared'? This opens the door to everyone to produce creative video works!
I keep seeing this comment on different AI-related things. At some point it's going to tip entirely one way or the other: scary or awesome. It won't hang in the space between for long.
@@Iancreed8592 Because there are a lot of clickbait YouTube videos, as well as 60 Minutes segments, that people are watching which only cover the negative side of AI (which may not even happen, especially if we are cognizant of these possibilities) instead of the possible revolutionary breakthroughs in all fields that can come with AI.
@@shredd5705 It'll be awesome.
I almost thought the piano at 4:45 had the black keys in the correct order.
Hi Tim, love your videos and thanks for keeping us always up-to-date in this fast paced world of Ai!!!
One quick question I have: do you know if there are any good tools on the market yet for inputting green-screened footage of people and generating a world around them that takes into account the camera move from your green-screen shoot? I'm looking to turn a person into an anime-styled character and put them into a world via a prompt. Any suggestions? Thanks,
Rob
So, the way I’m currently playing around with that idea is to use an app called Skyglass (mobile). You shoot live action video and then it replaces your background with a 3d environment. And then, you can do a V2V stylize.
I know I did a video on it at some point in the archives, but I can’t find it to save my life now! Sigh! I really need to get organized on that front!
It kinda works. There are some more tools coming up I’ve seen on the horizon that make that workflow a lot easier though
@@TheoreticallyMedia Thanks Tim, I will give the Skyglass app a try and see how well it works. Cheers
I can't wait till an AI video generator is able to make an AI model based on photos and selfies, and then use that as a consistent character model across a series of video generations. 😊
Damn... the most impressive one, to be sure.
Wow, I have been all in on Luma, and here Gen-3 is coming at me next. When the future happened to me, it happened fast.
Saw Benny Kingston live in Paris in '82! Was amazing. 🤣
Maybe all reels should be "sound off" to focus on the images? 😅
The music is amazing !
Looking forward to this
Waiting for thoughts to video next
Maaaaaan Gen 3 looks great