Can we do a crowdfunded account or user so they can test it out? We're wasting too many credits just to try it out. Crazy how Runway doesn't even have a tutorial for their own product. Like, how does that make any sense?
Haha. That seems to be my job! One of the reasons I posted up the PDF is to give you all some ideas on prompting without wasting credits. Use the PDF examples as base prompts and modify them. At least you'll get something close. Gonna post more in the community feed as well. Let's work smart/together!
I'll give it a shot. Someone mentioned 50mm, tripod shot. I think that idea of it being locked down tricks Gen-3 into no zooms and dissolves. Early days, we'll figure it out!
Well, right now it’s hard to tell. Over the weekend when I was making the Radiohead video, I’d say it was about 1 min per. As I was making this video, they went public and things slowed to about 3 mins per gen. But, that’s also the onrush of everyone hitting it at once. I’d say once the dust settles, it’ll probably be around 1 min. Faster than Luma currently, at least.
Informative video, as always, but you didn't mention the cost. Runway does this annoying thing of "1,000 credits for ten bucks," which sounds pretty cool... but on Gen-3 that equates to about 5 generations. It would be great if they came up with a more realistic, understandable pricing model so you could figure out how much you actually get for your dollar. As with all AI, you can burn through a ton of attempts before you get one you like. In my Gen-3 attempts yesterday, the first iteration was cool, but every subsequent one got worse and worse, and suddenly $50 is gone and you have one usable clip.
While I’d love to, I just don’t have the time. Between the YT channel and my own projects, I never sleep as is! Working on something that might be helpful to you though.
That’s the biggest thing here, I’m trying to save you guys some experimentation credits. Use the prompts in the pdf and build off them! I’ll post more in the community feed for sure!
Don't hold your breath. I used all my monthly credits today (I'm on a plan as well) with a prompt following their recommendations, got beautiful rubbish, and it cost me a fortune. I put the exact same prompt into Haiper and got success on the first try! Don't waste your money basically beta testing for them and paying for the privilege. See what rubbish Gen3 gave me on my video and how Haiper gave me the right video first time. ua-cam.com/video/bBbSWO5okcc/v-deo.html
The notion of producing things so randomly makes no sense to me. That is, this kind of "slot machine" approach: put coin in (prompt), pull lever (render), and see what you get. It seems detailed character modeling with actor face prompts, specific outfits or "skins" for each scene that are both VERY consistent, plus storyboarding with character colors for a series of shots to edit into a story makes A LOT of sense to me. Have you done that? That's the only way this tech could be in any way "usable" IMO.
Love your map videos. Went over to X to see your Radiohead music video and was moved! Now I'm off to sell some blood to get cash for some prompting credits...
Gen-3 is pretty fast. Although today might not be the best representation, as it just opened up the doors. Might want to wait a few days for the rush to die down!
Sheesh! I thought Luma was overpriced. Runway Gen-3 is $15/month and capped at 62 seconds of generated video unless you pay more. That's $0.25 per second. HAHAHA! Better get that prompt right on the FIRST TRY. Frustrating... One of Runway's upgrade links says 62 seconds/month while another says 125 seconds/month. Either way, that's almost no time.
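[Editor's note: the per-second figure in this comment can be sanity-checked with a quick sketch. The $15/month fee and the ~62-second cap are the commenter's numbers, not official Runway pricing.]

```python
# Sanity-check of the pricing quoted in the comment above.
# monthly_fee and seconds_included are the commenter's figures,
# not official Runway numbers.
monthly_fee = 15.00      # USD per month, as quoted
seconds_included = 62    # seconds of generated video per month, as quoted

cost_per_second = monthly_fee / seconds_included
cost_per_10s_clip = cost_per_second * 10

print(f"~${cost_per_second:.2f}/second")        # ~$0.24/second
print(f"~${cost_per_10s_clip:.2f} per 10-second clip")
```

The division works out to roughly $0.24 per second, which matches the commenter's "$0.25 per second" claim within rounding.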
I think they all have their strengths and weaknesses. I'm just happy we can (well, Kling being the exception for many) have access to them to mix and match.
It's fun, but really, it's still a gimmick: unusable slush, and the price is absolutely redonk. $15 for basically one minute of video, 99% of which is complete trash. In 2025 I bet we'll see actually useful AI video capability.
For sure. I think of AI video as being in this weird spot that's less cinema and more TV. If you think about TV/Netflix shows, it's mostly an establishing shot followed by two people talking in a room (big-budget exceptions, of course). I think everything is mostly there for a typical CSI-type show, minus the acting. That's what I'm waiting for: that next iteration of Emotalker/Hedra. Lip sync and "acting."
People saying Luma is better, lol. How, if it takes 15-20 mins to create a video clip (that is trash for video editors)? Runway will get better over time and will defo do text to video soon.
Made this NYC subway video with Luma. Their text to video is more imaginative IMO and almost never does slomo, which is great. I’ve been burned by Runway too many times to ever give them my money again. Gen-3 looks great…till they flub it with a poorly QA’d update like they always do…ua-cam.com/video/yTiK5pwgWA4/v-deo.htmlsi=Py6QBBKnJR6EpqR1
I've just made a music video on my channel using Gen3, time lapse flowers, which worked really well. Please check it out. A lot of my ideas using people really weren't working, but this simple idea worked well.
Sadly completely useless when you only get six 10-second clips a month... How can I even experiment with trying to create something unique when I have so little to work with?
@@Kreative_Katz That's my point. If anyone is not happy with what they get for free, there is a paid alternative. You either keep your money and accept the constraints or give your money to remove them.
@@bloxyman22 Sorry, my mistake, you are right. Still no problem to pay more (but don't fall for the higher plan, buying additional credits with the lower plan is cheaper)
Time and time again Runway lets me down. Overhyping and cherry picking videos that are not at all representative of an average output. Having used Gen 3, the typical output is mediocre, riddled with warping, bad movement and bland compositions. It’s clear to me they must’ve generated 1000s of videos for their cherry picked showcase. Also the price per generation is absolutely outrageous. Classic bait and switch. I’ll be sticking with Luma
Download the FREE Prompting PDF here: theoreticallymedia.gumroad.com/l/gen3
👋
hey!! been following you for a while, love your content! was looking forward to download that pdf but it says not available in my location (Lebanon). can i find it somewhere else?
Thanks for this super helpful guide. I've been using Gen 3 Alpha for a couple days now and I'm blown away. Can't wait to see how it improves on the next iteration.
wow that comparison blew me away.
Roughly 14 months in AI time is basically 14 years. It’s insane!
Luma still leading at the moment, but this is the most helpful video for Gen 3 I've seen so far, I must say.
Thank you so much! I love Luma as well! Really so appreciate the helpful comment, it’s something I always strive for on the channel. To go a little deeper into the “how” than most others.
(Not bagging on any other channels FYI. It’s just how I like to push myself)
yaaaaaaaay, begin the rise of Gen3 videos :-D so glad it's out!
So glad we’ve entered the 2.0 era! And..well; no one seems to be missing Sora anymore!
Thank you for this video. It helped me a lot. I was just using Turbo mode with AI images. Now I can do way more and I have a better understanding.
Thanks for your knowledge and generosity regarding the PDFs
Great video with some great tips! A funny (yet frustrating) thing I noticed in GEN-3 was that some prompt words secretly get modified, for example when I was trying to generate knights with their swords held high, they were holding instruments instead! Also for me, it most of the time seems to associate "running" with marathon runners and adds the tag on their shirt, which I coincidentally noticed it does on your runner to the coffeeshop as well at 7:41 😅 Either way great deep dive, it will surely only improve further from here on out!
100% have run into various props being turned into instruments. A lot of guitars, I've noticed. Hoping at some point we can weight our prompts! (Or, image to video. That'll pretty much solve everything)
Just made my first AI film using Runway Gen-3 after trying Luma. I did miss Luma's amazing image-to-video feature, but the truth is that Luma fails way more than it succeeds. Gen-3 was nice because it felt more like directing a movie. I had less control over what the scene looked like but way more control over the camera and action.
Now Runway has image to video on Turbo mode; it made my workflow sooo much faster. Try it out!
@@NGW_Studio Be careful with Turbo. The quality is nowhere near as good. Detail is much lower, lighting is horrible and it can't do more complex movements. Turbo's only advantage is speed.
@@Retro_Movie_Rewind Good to know! I joined Runway ML this past Sunday and have been struggling to get it to do what I need it to do. But I just saw your comment and realized they had me defaulted to Turbo. So I am going to bump it down to just Alpha to see what results I get. I guess it's kind of how Anthropic has multiple models, but Claude Sonnet 3.5 is really the only model that is of any use (and actually the only one they promote despite having two other models)
@@theaifilmmakingadvantage Gen-3 Alpha (non Turbo) is what you want. The cinematic quality is amazing. Turbo should not exist. It's a huge step backwards in quality and nothing but a marketing gimmick. I never use it.
Quick Gen -3 tip: Add "natural movement" or "dynamic movement" at the start of your prompts to get more motion from your subject.
@@Retro_Movie_Rewind thanks for that tip!
I’m a Gen3 guy! Thank you for this!! I will share my stuff soon!
Most commentators are just praising Gen 3 mindlessly without really trying it. You are to be commended for actually testing it thoroughly without over-praising. Personally, I find it crap and sympathize with anyone trying to make it do what you're asking it to do in 100 different ways.
Kinda agree with you; feels like it's still not there yet. Also, the fact that it consumes credits so quickly without giving the desired results is an issue.
I am days new to Runway Gen 3 and find that it struggles with even the most basic prompts. Some shots are stunning, but still not what I'm asking for.
UPDATE: I just saw someone else's comment about being careful with Turbo, which is the default setting I've been using since joining. So I think I will switch the model down to Alpha to see how that turns out
Thx Tim, that was exactly what i needed for the kickstart
Yay, couldn't wait for your video since we all have access now. Can't wait to see your prompt guide, woohoo!
Timing came out great on this! I was really hoping they were going to release on monday!! And I basically swung in just an hour or two after!
Thank you so much, I learned words to add that I never even thought about. I'm new to this and learning a lot by hit and miss. Does Runway work on iPad? Thank you!
Fantastic, thanks so much for sharing !
I think with "walking happily," happily is an adverb, Tim, not an adjective. It describes the verb, i.e. walking.
Top Bloke. Great content as always. Nice One
very helpful! THANK YOU!
It blows my mind!
Many Thanks, my Good Sir, you are Awesome!!
so image to video not available yet? any word as to when it will be?
If Gen-2 is any indication, probably a month? Unofficially I’ve heard rumors they already have it, it just wasn’t working well yet. So, I don’t think it’ll be very long.
@@TheoreticallyMedia Weird because Luma Dream Machine does SO much better using image to video compared to text to vid.
thank you as always!
1000% And thank YOU for being here!
Really helpful guidance, thank you. Gen 3 has very little prompt adherence and requires multiple generations, which is expensive. I thought I might use it commercially, but it's just not there unless I regenerate all day long and maybe get lucky. I feel like Runway is where Midjourney was two years ago. Now that they have partnered with corporate filmmakers, maybe we will see some spillover benefits. Or maybe not, who knows.
Yay old studio is back :)
We must never speak of Studio B again! Haha. It’s sooooo gooood to be back!
@@TheoreticallyMedia hahaha
As always, great stuff Tim! Bonus points for Dark Tower shout out, wouldn't it be amazing to actually see that whole series properly done? We can always hope.
Great! Thanks for the video. I have one question: how do I change aspect ratio to 9:16? Does that already work? Thank you very much 🙏
I'll have to look that up-- I'm not sure it currently does 9:16?
I remember when I first graduated media college, I could see how everything was edited with a third-eye-like detection. Whoa, I've got the AI eye now! 😉
10:02 haha the return of the elusive "dream factory", when we mere mortals play with Luma's dream machine :) Thanks for this run through !
Hi. Not really into the creative space, or AI video creation. I stumbled across your channel by pure chance and just found the topic fascinating and very well presented. So I've been tagging along for a few months now. I just wanted to write this comment to say thank you. The Dark Tower reference is greatly appreciated ;)
Man, it's so frustrating at this point; my own video creation attempts never create anything usable. Mine are never near the quality of what many platforms showcase. Gen 3 included: 5 videos yesterday and today with a lot of morphing, and twice my astronaut's legs were backwards. I'll try a few of your prompts, Curtis bro.
Awesome guide!
Thank you so much!! I’m really jazzed for Runway! This 2.0 era of AI video is super exciting!
Nearly jumped the gun and got Gen-3 today, not realising image to video isn't there yet. So I'm going to get Luma today, and if Runway's img2vid model ends up better, I'll swap and give that a try when it's released.
Haha, the hopscotch game is the way to go! It’s like streaming services. I’ve got HBO until the dumb Dragon show ends, and then I’ll cancel and hop on something else!
Can you post an external link to the Exit Music For A Film music video you created?
Great breakdown of Gen 3 prompt tips, by the way. Especially the "imax" style tip!
Yup! Thanks. I forgot to put it in the comments: x.com/theomediaai/status/1807168440379023522?s=46&t=Gezvp5mbBNzrK3tbLcCFzA
And yeah, I’ve got some thoughts on why IMAX works. Talked with Dustin Hollywood today, and the thought is somewhere in the model, the higher res/cinematic stuff might be tagged w/ IMAX. So if you keyword it, you’ll get a more epic result.
@@TheoreticallyMedia that is an incredible video. I've always loved that track.
Thanks for the extra details regarding the "imax" tag. Love your videos. Have subscribed and will share.
Do you know if the same character can be used for different scenes? I think Sora will integrate that option: design the character and put it into whatever scenes you describe, plus import your own videos and have the AI replace the person who appears with the created character. Does Gen 3 have a similar possibility? Because if you want to make a short film, for example, or a music video, it needs continuity; people shouldn't look different in each shot.
We need image to video or this is nearly useless.
One step at a time. It’ll get there. And remember, this is still Alpha, so it isn’t even finished yet.
Agreed
@@TheoreticallyMedia The problem I have is they advertise image to video, but it's not available yet.
My thoughts exactly. Luma did the next move, coming out with it out of the gate.
@@TheoreticallyMedia alpha is just an excuse to get money
Hey Tim, another great video. Question: In a bunch of your replies to comments, you mentioned a “community feed.” What are you referring to? YouTube comments? Something on the Runway website? Your website?
Oh, no just the YouTube community feed. I think you can find it on the channel page. I do need to get a real community spot soon. The discord ended up being a bit of a disaster! Gonna nuke it soon and try to build a new one correctly!
@@TheoreticallyMedia Thank you for the quick response! Really appreciate it and looking forward to checking out the community feed. Been using YouTube for years and didn’t even know that was a thing!
Hi Theoretically Media, please can I ask you what your recording setup is? Especially the background removal. Hope you can reply. Thanks a million.
After spending a day experimenting, I have to say Gen 3 is much better than Luma or Haiper. Both of those aren't even close in comparison. However, Gen 3 does take some time to master, which means you'll likely waste many credits on subpar results initially. Once you understand how it works and its limitations, the output is far superior to the others.
At first, I was disappointed with Gen 3 and considered canceling my Pro plan (the $35 one). But as I approached my last 700 credits, I started getting amazing results. This convinced me to switch to the unlimited Pro plan for $95.
I wouldn't recommend Gen 3 for beginners, but if you're looking for cutting-edge experiments and can handle some limitations, it's the best option available at the moment.
I had the same experience, starting with the $35 plan (otherwise the cost per video is just insane!). I had the impression the "engine" is superior, but unfortunately I need to set a style and some precise characters, so I was just waiting for them to implement "load image" (Luma got it just a week after its debut)... but time is passing and nothing happens...
Thanks for doing this! How long do you predict until image to video releases??
No inside info, but if we use Gen-2’s dev as a metric, about a month? Rumor has it I2V already exists, it just isn’t working as well as they wanted, so they’re holding off until it’s right. Can’t fault that.
What is your experience in getting consistent characters across multiple shots?
Not…great. What I usually do there is prompt for very specific archetypes: like, Woman with Red Hair in a Black Dress. Or “Bald Man wearing a Black suit and Sunglasses” and you’ll tend to get results that can pass for the same character. But, it’s always a dice roll still.
@@TheoreticallyMedia Yeah, I've read some interesting things on getting images to be consistent to make graphic novels. Its still very hard.
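[Editor's note: the "specific archetype" trick described in this thread can be systematized by locking the character description and varying only the shot text. A minimal sketch; the function and the storyboard fields are illustrative, not part of any Runway tooling.]

```python
# Sketch of the archetype trick for pseudo-consistent characters:
# keep the character description fixed and vary only shot/action.
# Everything here is illustrative; Runway has no such API.

CHARACTER = "bald man wearing a black suit and sunglasses"

def build_prompt(shot: str, action: str, style: str = "IMAX, cinematic") -> str:
    """Combine the fixed character archetype with a per-shot description."""
    return f"{shot}: {CHARACTER}, {action}. {style}."

storyboard = [
    ("Wide shot", "walking through a rain-soaked city street"),
    ("Close-up", "looking over his sunglasses at the camera"),
]

for shot, action in storyboard:
    print(build_prompt(shot, action))
```

Because the archetype string is identical in every prompt, the model tends to land on lookalike characters across shots, though as the reply notes, it remains a dice roll.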
Fantastic, I really love your videos. Is it possible to do the same for Kling AI?
Yeah, now that Kling is officially out, I'll do a big tutorial. I did one a while back, but it was for the Chinese version, so I should probably update/revise for the international one!
Now this video helps a lot
That is fantastic to hear! Can't wait to see what you cook up!
Man, this looks sick... it's a bit out of my price range for now, but great video on it. Love the progress videos have made over the last 3 months.
This might actually be great for "Sweded" films
Thanks for your vid🙏
I tried to make a cat short film:
"Wide shot: A cute cat with big, bright eyes sits in an outdoor park, using chopsticks to eat from a cup of instant noodles. The cup is colorful, with the words "Dog Food" prominently displayed in bold letters. The scene is whimsical and humorous, with the park background featuring lush trees, green grass, and a clear blue sky. Focus on the cat's adorable use of chopsticks and the quirky label on the cup."
Thank you.
This is awesome Tim, thanks! How do you work around the 500 character prompt limit?
So, if I wanted to create a 2-minute video, should it be divided into numerous tiny ones and then combined together? Will it give consistent output; that is, will the people be the same from start to end?
Thank you, I'm a newbie.
A clip can only be up to 10 seconds, and usually there is little to no coherency with people. Even if you somehow were able to get good results in all your clips, you'll have already spent your monthly credits after a minute of video.
@bloxyman22
What is the solution here? If I asked an expert like yourself, could we achieve it? Can credits be bought again?
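[Editor's note: the credit math behind this thread is easy to sketch. The credits-per-second rate, monthly allowance, and retry estimate below are assumptions for illustration; check Runway's current pricing page for real figures.]

```python
# Rough credit-budget sketch for planning a multi-clip video.
# All figures below are assumptions, not official Runway pricing.
credits_per_second = 10    # assumed Gen-3 generation rate
monthly_credits = 625      # assumed standard-plan allowance
clip_length = 10           # seconds per clip (the max mentioned above)
retries_per_keeper = 4     # generations burned per usable clip (a guess)

credits_per_keeper = clip_length * credits_per_second * retries_per_keeper
usable_clips = monthly_credits // credits_per_keeper
usable_seconds = usable_clips * clip_length

print(f"{credits_per_keeper} credits per usable clip")
print(f"{usable_clips} usable clip(s)/month (~{usable_seconds}s of footage)")
```

Under these assumptions a single usable 10-second clip eats 400 credits, so a 625-credit allowance yields roughly one keeper per month, which is consistent with the frustration voiced in this thread.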
Thank you so much
No Image to video for gen 3 yet which is disappointing. Does anyone know when that is out?
good comment, then I don't need to try it out at this moment XD
I love how people instantly expect even more of just-released cutting-edge tech 😂
We watched House of Dragons too, yes, Germany loves it too 🤩
cool guide gen 3 looks great
2.0 AI filmmaking is here!!
Looks good but I was hoping to use it for image to video but no option on the website. As soon as that lands, I'm on it
Which one do you think is the best model so far? gen3, luma dream machine or kling?
Would we be getting any form of video to video with this also ?
do you think overall luma is a better tool or gen 3 in terms of quality? (if considering only text to video)
I could pay $1 per trial for image-to-video generation, but at the moment I will just watch your videos and observe the development of the tool.
Haha, frugal move! I think i2v is going to be clutch once it hits Gen-3, but for sure, let me do that R&D for you! That’s literally why I’m here! (Well, that and to figure out how stuff works!)
Does anyone know of a video generator that can generate videos with no camera movement? I've tried Runway and set the camera movement to the lowest setting along with prompting "tripod shot", "no camera movement", etc., but nothing works, and I need still-camera video generations to edit in pre-existing characters filmed with a tripod.
As a baseline before the masses swamp the servers, what's the generation time on these?
Luma Dream Machine is back to taking hours for me. Like 12 hours sometimes.
Geeeeze-- Yeah, over the weekend I was getting generations in about 1 to 1 1/2 minutes. I was working on this video when they went live, and everything slowed to a crawl. That said, Runway has some horsepower, so I'm sure it'll normalize in a few days.
Even with a paid subscription on Luma?
@@lukewilliams7020 I'm on the free plan. But I had fast generations yesterday and then it ground to a halt.
Can it do seeds? For example, have it create a panning shot of a street with a parked car then do the same shot but without the car? Img to vid should do this but apparently it’s not ready yet. UPDATE: Slow down me. Seeds, yes. Perfect consistency, no. Thanks for mentioning that part!
Haha...it's like we went on a whole journey with that comment!
Video to video is what will really change filmmaking forever. This is just interesting, but not more.
I love V2V, and I do have some stuff coming up on it soon. Totally agree as well. I’ve been exploring a combination of v2v and i2v, and while it’s not 100% there yet, it’s super close to having your own version of Disney’s Volume.
excellent
Thank you!!
Can we upload real images or videos to it? Like resize and re-edit?
Can we do a crowdfunded account or user so they can test it out? We're wasting too many credits just to try it out. Crazy how Runway doesn't even have a tutorial for their own product. Like, how's that making any sense?
Haha. That seems to be my job! One of the reasons I posted up the PDF is to give you all some ideas on prompting without wasting credits. Use the PDF examples as base prompts and modify them. At least you’ll get something close.
Gonna post more in the community feed as well. Let’s work smart/together!
@@TheoreticallyMedia nice!
awesome video
When can we use this ?
Today! Well, yesterday. But tomorrow too! Haha, it’s out is basically what I’m saying!
@@TheoreticallyMedia OK, I thought you said it's in alpha stage now
I'll try it today, thanks man
Looks good but I just got access to LTX Studio so busy with that.
What's gonna be really cool are all the tools to further finetune your output. Money better spent, more or less.
Maybe try "single shot, no transitions"?
I'll give it a shot. Someone mentioned 50mm, tripod shot. I think that idea of it being locked down tricks Gen-3 into no zooms and dissolves. Early days, we'll figure it out!
Very Timely, You might even say "Prompt".
Hayyyyyyy-oh!!!
The price is outrageous.. $1 per generation
Can it overlay music?
How long does it take to produce a video?
Well, right now it’s hard to tell. Over the weekend when I was making the Radiohead video, I’d say it was about 1 min per. As I was making this video, they went public and things slowed to about 3 mins per gen. But, that’s also the onrush of everyone hitting it at once.
I’d say once the dust settles, it’ll probably be around 1 min. Faster than Luma currently, at least.
Darn, I just subscribed thinking it had image-2-video. Too late now. Anyone know how long before it's here?
are your prompts made public with Runway or does the author retain all rights?
Informative video, as always, but you didn't mention the cost. Runway does this annoying thing of "1,000 credits for ten bucks," which sounds pretty cool... but then on Gen-3 that equates to about 5 generations. So it would be great if they could come up with a more realistic or understandable pricing methodology so you can figure out how much you get for your dollar. Because as with all AI, you can burn through a ton of attempts before you get the one you like. In my Gen-3 attempts yesterday, the first iteration was cool, but then every subsequent one got worse and worse, and suddenly $50 is gone and you have one usable clip.
Can I send you a story, in text, which you could use to generate a video?
While I’d love to, I just don’t have the time. Between the YT channel and my own projects, I never sleep as is! Working on something that might be helpful to you though.
@@TheoreticallyMedia
Waiting then. Thanks!
Great timing before I waste too many credits doing trial-and-error at 50-100 credits a pop! 💸
That’s the biggest thing here, I’m trying to save you guys some experimentation credits. Use the prompts in the pdf and build off them! I’ll post more in the community feed for sure!
Don't hold your breath. I used all my monthly credits today (I'm on a plan as well) with a prompt following their recommendations and got beautiful rubbish, and it cost me a fortune. I put the exact same prompt in Haiper and got an exact success the first time! Don't waste your money basically beta testing for them and paying for the privilege. See what rubbish Gen-3 gave me on my video and how Haiper gave me the right video first time. ua-cam.com/video/bBbSWO5okcc/v-deo.html
The notion of producing things so randomly makes no sense to me. That is, this kind of "slot machine" approach: put coin in (prompt), pull lever (render), and see what you get. It seems detailed character modeling with actor face prompts, specific outfits or "skins" for each scene that are both VERY consistent, plus storyboarding with character colors for a series of shots to edit into a story, makes A LOT of sense to me. Have you done that? That's the only way this tech could be in any way "usable" IMO.
Love your map videos. Went over to X to see your Radiohead music video and was moved! Now I'm off to sell some blood to get cash for some prompting credits...
I think Runway Gen-3 is good for now; we just need to focus on delivering the message in filmmaking and storytelling
Was waiting for Gen-3. I was decently impressed with Luma, but it's not my cup of tea because of generation times.
Gen-3 is pretty fast. Although today might not be the best representation, as it just opened up the doors. Might want to wait a few days for the rush to die down!
@@TheoreticallyMedia hey Tim, what are your thoughts on Luma versus Gen-3? I still think Luma is really capable and astounding
"The man in black" to me will always be Johnny Cash.
But the prompting guide published by Runway has a different structure… camera movement: scene, details
Sheesh! I thought Luma was overpriced.
Runway Gen-3 is $15/month and capped at 62 seconds of generated videos unless you pay more. $0.25 per second. HAHAHA!
Better get that prompt right on the FIRST TRY. Frustrating...
One of Runway's upgrade links says 62 seconds/month while another says 125 seconds/month. Either way, that's almost no time.
Tried it myself with the same prompts. Dream Machine and Kling are much, much better and cheaper.
I think they all have their strengths and weaknesses. I'm just happy we can (well, Kling being the exception for many) have access to them to mix and match.
@@TheoreticallyMedia Yes, competition is good for us.
It's fun, but really, it's still a gimmick: unusable slush, and the price is absolutely redonk. $15 for basically a 1-minute vid, of which 99% is complete trash. In 2025, I bet we'll see actually useful AI vid capability.
Need consistent characters, otherwise it is still limited. I need the ability to show the same character(s) with the same faces in different scenes.
AI video: people AImlessly walking and looking around.
For sure. I think of AI video in this weird spot that is less cinema and more TV. If you think about TV/Netflix shows, it’s mostly an establishing shot, followed by 2 people talking in a room.
(Big budget exceptions of course)- I think everything is mostly there for a typical CSI type show, minus the acting.
That’s what I’m waiting for, that next iteration of Emotalker/Hedra. Lip sync and “acting”
People saying Luma is better, lol. How, if it takes 15-20 mins to create a video clip (that is trash for video editors)? Runway will get better over time and will defo do image-to-video soon
Made this NYC subway video with Luma. Their text to video is more imaginative IMO and almost never does slomo, which is great. I’ve been burned by Runway too many times to ever give them my money again. Gen-3 looks great…till they flub it with a poorly QA’d update like they always do…ua-cam.com/video/yTiK5pwgWA4/v-deo.htmlsi=Py6QBBKnJR6EpqR1
I've just made a music video on my channel using Gen3, time lapse flowers, which worked really well. Please check it out. A lot of my ideas using people really weren't working, but this simple idea worked well.
I'll swing over and check it out later tonight!
It just isn't there yet. Too low-res, and awful results even after buying loads of credits.
Sadly completely useless when you only get six 10-second clips a month... How can I even experiment with trying to create something unique when I have so little to work with?
Why not pay for the subscription and credits?
It’s not free.
@@Kreative_Katz That's my point. If anyone is not happy with what they get for free, there is a paid alternative. You either keep your money and accept the constraints or give your money to remove them.
That is the amount of generations you get if you PAY for the standard plan.
@@bloxyman22 Sorry, my mistake, you are right. Still, it's no problem to pay more (but don't fall for the higher plan; buying additional credits with the lower plan is cheaper)
Runway Gen-3 provides smoother and more realistic motion quality
So far I've noticed that while it can do anime, it's not one of its strong points.
Expensive. I love Runway, but seriously, 10 credits used and for what? Maybe I should be out in the world, speaking truths.
Time and time again Runway lets me down. Overhyping and cherry-picking videos that are not at all representative of an average output. Having used Gen-3, the typical output is mediocre, riddled with warping, bad movement, and bland compositions. It’s clear to me they must’ve generated thousands of videos for their cherry-picked showcase. Also, the price per generation is absolutely outrageous. Classic bait and switch. I’ll be sticking with Luma.
Yeah no image to video... which sucks.... big time....
Patience, young grasshopper. It will be here soon.
I think it's kind of sad that they want money for a product that is in alpha.