1:05 So, instead of SORA, we got SORTA 😂
hahaha...YESSSSSS! Dad joke of the day Award here!
@@TheoreticallyMedia And great callout! I was not even aware that we did not get the main version of it. I know you said it should not be looked at as a bait-and-switch, but I don't know what else to call it, lol!
OUTSTANDING video! The bootleg footage really shows what Sora can actually do with the restraints off. So basically people can create their own TV and movies right now, with a little elbow grease. With a solid story and likeable characters, monetization options are many. My main concern as a content producer is copyright issues. Ideally you create all of the base content yourself and use the various generators as video upscalers. LOVE your content! Cheers!
Appreciate that! I’ll say one thing on that bootleg footage: oddly enough, I think the “bootleg” aspect is actually buying a little “quality” jump.
I think the lower resolution is actually rounding out some of the edges and imperfections in the footage. Not saying that as a good or bad thing; in fact, watching it made me think a savvy filmmaker might be able to use that idea as a narrative device.
Remember “found footage” movies? Haha, we might have “bootleg footage” movies in the future!
So LTX has open-sourced their model, and with the best motivation possible. The community is very thankful, I must say. Kudos to the CEO!
The improvements likely to happen to LTX while it's open source will end up making people regret their $200 Sora purchase in a year's time LOL
They do it to cover their own asses. They will later use the open source model to subvert copyright. It's not something to glorify.
@@7satsu Is LTX, the video model, available for free? 😮
@@7satsu OAI is just going the Apple route: make products overpriced because the fans will buy them anyway, as long as they release minor upgrades once a year.
@@kuromiLayfe tbf OpenAI still has more releases coming; there could still be something big that makes it more worth it.
I greatly appreciate every video that you produce. Thank you!
Wonder how much it costs them to run the non-turbo model they actually advertised with. That's definitely a bait-and-switch business move, but they should make that one available to premium-paying customers.
I love the "Return To Forever - Where Have I Known You Before" Album cover in the background. Fantastic record!🤘
Oh, man! This is what I love about putting those records up there! With that one in particular, I was like: No one is going to get this! haha, BUT HERE YOU ARE!!
SUCH a good album!! I had actually listened to it that morning, which is why it got the display slot!
Man, you've not only got great ears, but a great eye as well!
@@TheoreticallyMedia well thank you! I'm a subscriber to your channel and I LOVE your content! I've heard you drop hints in your videos from time to time about prog rock/prog metal which I'm a huge fan of 🤘 I love the video you uploaded of Jordan Rudess going to your house... I'm so jealous! As a keyboardist, he's my favorite. Happy Holidays Tim!!! Robert...
So it’s worth $200, but you have to use every other video generator in tandem 😂😂😂😂
Haha, yeah, pretty much!
6:16 This is uncannily you, Tim. I mean, that's a high-value production audition tape right there. Shows that the key right now is probably defining the correct workflow.
Great video!! It's been a wild ride to see how the evolution of these tools has been enabling your detective noir film to come to life. Really looking forward to seeing the final version in the future!
As someone living in the UK (and therefore incapable of accessing Sora), it's actually quite a relief that vid-to-vid is the only thing that seems to compete. It's quite incomprehensible that they both neutered Sora pre-release down to only the Turbo model for a whopping $200/m and yet it is still oversubscribed. If more people knew about other gen AI video platforms, I'm sure they wouldn't have jumped in at the 'deep end' like that.
LTX is amazing, man. I'm generating clips for a new short as I type this. Love these guys!
Love them! I’ve spent a pretty good amount of time with the team, and I gotta say: hardworking and passionate about AI. Anytime I talk to them about the latest developments, they’re already 10x ahead of me about it, or working on getting it implemented.
They’ve really upped my knowledge on what it takes to add new tech into an existing platform as well - needless to say, it’s a lot harder than I imagined. It’s really given me a newfound appreciation for not only them, but all the other generators as well.
Behind every LTX, Runway, Luma, etc., there is a hardworking team of people grinding hard every day!
Aha, the two-guys-walking clip looks like something out of a 90s CD-ROM game. The establishing shot and the following shot are pretty impressive tho.
Excellent video, thank you! Does anyone know which tool was used at 6:00 for the face swapping in videos? I can't find the tool "Kutidian" 😅
Love how your Noir film keeps getting better and better
Tim has some good vids
Thanks for showing mixing multiple tools into the workflow. Invaluable!👏
Spending $200 a month only makes sense if your channel is already generating good revenue and you need a lot of daily generations. Otherwise, there’s no point in spending $200 a month! 😊
$200/month also gives you unlimited ChatGPT and extensive o1 Pro. Plus, there's life outside YouTube channels where Sora is useful :)
I don’t have a YouTube channel and yet found plenty of reasons to invest in my ongoing AI growth. But you may certainly speak for yourself.
I think it's worth it if you don't use it monthly and only pay for a month when you have a specific project you're working on with a few filler or FX scenes to do in Sora, then cancel when you don't need it. It kind of becomes like contracting out a film crew fitted to the task, and that would save you money versus the alternative.
Most of us don't have channels promoting AI services, either. It's much harder to make money off your channel if you are doing original artistic content.
It also makes sense if you know how to create a new channel that will explode - it doesn't just apply to existing channels.
Amazing stuff!! Very exciting 😁
I really think game footage as a source video is going to end up being a winner to remix on top of, especially for fine direction control and consistency.
Using something like Unreal Engine or Blender to kitbash a previz scene should give Sora so much context in terms of camera, character clothes, facial features, and lighting that longer-form film stitching, which keeps characters and scenes constant, will end up being possible sooner rather than later. On top of that, it's like having libraries of free props to utilise. The more I think about it, the more keen I am for the next 2 years of indie films.
I was at the LTX event (I wanted to say 'hi' Tim, but you were very popular) and I thought everyone there did a fantastic job of presenting. LTX actually listens to the Discord channel as far as constructive (and perhaps not so constructive) critiques. I like a company that has an open dialogue with its users.
Wait until these A.I. models are able to do all of this in real-time. Wholesale scene, character, sound and location generation should be possible from any or nearly no input by the user. That level of generative power should start appearing in a year or two, by my own estimation.
Doubt it.
At this rate, it is very likely.
And video games generated in real time
When Sora can upscale and maintain face-preserving image processing, I'll consider upgrading.
I’m hoping to see the same. If you could get trained characters or LoRAs at this level? I mean, that would be massive.
As is, there is always face swapping. We’re still in the kitbashing era
@@TheoreticallyMedia Who is your favorite for face-swapping?
@@TheoreticallyMedia Face swapping is the REAL friend right now. Turns videos gone wrong into videos gone right.
@@benhansford4290 Swapface will run on a half-decent laptop. Not free, but pretty cheap. Not censored (it's on your computer). SeaArt AI face swap is good. Think it's free. Can get spicy-ish stuff through. Right now only one person in the shot - last week they had multiple, but they must have had a bug. Swapface does multiple faces in a shot.
@@TheoreticallyMedia I'd also like to know how you did the face swapping in this vid!
It would definitely be a game changer if you could take unrendered CGI or ungraded smartphone footage and make it look like a polished Hollywood film but keep the details consistent.
Consistency is really the trick here. Everyone is working on it. I'd say: Mid next year? So many of these little problems are on the verge of being solved.
Banging my head trying to get these SORA prompts to get me something. This is a great use case. King Tim delivers again
CitizenPlain from X here: Nice overview and thanks for sharing my tests! I think you'll enjoy blends and storyboard once you have time to test them out. They're the features that AFAIK aren't possible with any other tools.
Hey man!! Stellar work on those tests!! Agreed! I just posted a "consistent character" test on X you might find interesting.
Still, stupid curves. I HATE curves!
Quick, without going back, what color was her headband @4:02?
Cool video. Have you come up with a workflow to put yourself in with the same realism that Sora gives? I need to resolve that… I'm thinking of working with Runway and Sora, but I'm not sure if I'll need another tool to make sure I get our faces and hair right, like Kling or Akool.
Please, I would really love an answer on what you think could help with face-swapping a video that went from Midjourney to Kling or Runway and lastly Sora, and that needs to be refaced with the original character.
How did you get around: "Error running this prompt, media contains people"?
The workflow to get an amazing and consistent video would be fairly complex, but you have definitely demonstrated a use for it. I mean bringing a face swap element into the mix could make it very useful indeed.
That’s not fair. I'm sticking with my free account rather than buying any $200 subscription, but I must say, that's the COOLEST AI video generator I've ever seen!
It's not a generator in its current state, it's a very expensive restyler/retexturer)
@ HoW Do YoU NoT KnOw aBoUt GeNeRaToR?!?!
@@RMCanimationOFFICAL it's subpar to almost every current commercial competitor, and with this price tag it makes no sense to use it as a generator.
LTX ROCKS!!!! 💯
Your content is great, helpful and entertaining! Thanks!
200 bucks a month for a model that STILL cannot handle WALKS and FINGERS yet...
It might do a great job upscaling videos from other models, but those aren't free either, so that adds up on top of the monthly fee.
On the surface things look cool, but the background is full of errors. I can't name every timestamp, but just within the 1st minute, 0:14 - 0:16 (just 2 SECONDS):
- A guy in the BG standing frozen mid-stride
- A passerby whose legs TWIST as he scurries past
- 2 people who melt away while walking
- And it keeps going a few seconds later
Fast forward to the girl with the red hair: the fingers are so UNFORGIVABLY twisted.
We do know that most of these realistic models are trained on Clearview AI's surveillance cameras in cities (there is a documentary about that), so we begin to understand the finger challenge... 'cos people mostly move their fingers or hide them when they walk, but nobody splays them out front for triangulation.
200 bucks a month is NOT worth it currently.
12:55 What an unexpected turn of events! This is some quality writing right here. I want to know how this story unfolds!
Thanks for this. I had NO idea that I could upload video into Sora!! I found the option and I'm going through and remixing my Hailuo and Kling stuff
Pretty crazy, right? I’ve been going overboard taking all these shots I’ve generated over the last few months and- to say the least, I’m really impressed with these new versions!
Can’t wait for what’s to come in 2025!
Very creative uses, Tim. Kudos! Anyone with access to all these tools could really have some fun and start putting some real coherent stories together. I'm sure we will start to see more of that coming out from filmmakers over the next few months. I wish I was on that train.
Thanks!
"Pirate Cheech Marin"... LOL!
Haha, in the edit I was like: “I should have gone with Tommy Chong!” Haha, but I glad that joke still landed!
Yup. My old brain said “Tommy Chong”… and you said “Cheech” and I was like… cool, man… either way, still smokin’.
OpenAI has historically made much of the cost of compute, but once the technology gets out to other companies it doesn't seem to be anywhere near as big a deal. Two years ago they were claiming that just generating simple low-res images was fabulously expensive; then Stable Diffusion matched them on consumer-grade GPUs.
In video-to-video, how much luck did you have keeping the level low to just subtly improve the look without changing the details of the scene?
I actually need to circle back and try out low a little more. Initial tests at low were trying to prompt out of more animated styles into photoreal, and it didn’t do very much. I probably should have tried it out just to see how it acted on “normal” footage when only changing details.
I’ll say, though: if they eventually add a masking or brush tool to this? I mean, that would be insane
Hey, thanks for all the info! 13:00 👀 is quite impressive!
I don’t have the pro plan, but when I tried remix, it says that I cannot do that because there are people in the video. So upgrading to pro plan will allow me to render videos with people?
Yup. Kind of a dirty dog trick on OAI's part. I covered it in my earlier video. I'm not stoked about that either. I think that gate will eventually be lifted, but man...not cool.
What face swapper did you use @6:15 ?
Could this be suitable for some kind of compositing work? Does the camera movement and movement of the actors match the original shots?
I always enjoy seeing your videos thank you for all the time you put into every one of these!
Very helpful - really enjoyed the film noir short at the end. It's coming along, and the best thing about the genre is that it's hard to overdo it (or is it!), so it's probably the best genre to work in. In terms of the costing, we've all got a book inside us - well, actually now it's a video - and for an annual hit of $2400 you can get that film made, and really well too, with total control. Especially if you are retired (and a lot of boomers are now), so at that level a lot of creativity could be on the way for those prepared to find the time, provided of course they have the story inside that's ready to go. Can you tell a story, suckers? I say the future is ours - if only you can tell.
'Everyone was very impressed with her...watch.'
That anime girl hair scene blew me away because of how different it looks - it kinda has its own cool style. Very unique, well done.
What if you re-upload a video from Sora back into Sora? Does it improve even more? I was thinking about the video inside the castle, for example.
Thank you Tim, That was awesome! I'm really waiting for the next Gen with multiple input options, video to video + reference photo for character + reference image for style. I mean... the technology is there... just mash them up together.
For sure we’ll see what you’re talking about by mid-2025. I mean, just think about where we were this time last year. It’s insane how far this has come.
Hi man, nice to see you! What's the tool at 13:08 - is it Sora or LTX or what? Pls answer!
You should really pitch the idea to OpenAI of taking the text-to-video and image-to-video prompts and then having Sora find stock video as a reference map to keep it clean.
I totally would, but no one from OpenAI ever talks to me. I’m not cool enough.
Hey there, this is so insightful. Thank you. Question: I have some AMAZING high-resolution character art from Magnific AI and I want to bring the characters to life in video, along with my unique plot and story arc. Which GenAI video tool or tools do you recommend, please?
If you've got an animated style, you'll for sure want to check out Hailuo/MiniMax. They just released a new version of their model optimized for artistic outputs.
Great video Tim. How are you able to generate the talking lady with consistent mouth movements, facial emotions and voice?
that was in Act-1 from Runway. I probably didn't cover that as well in the video as I wanted, and I'll likely revisit it. But basically, Act-1 will let you use your webcam (or other video source of a talking person) and "transpose" it onto another video. Kind of like faceswapping on steroids.
What I've found is that you really have to OVER ACT to get those mouth movements. Like, REALLY ham it up. That seems to do the trick.
Is the video to video feature only available in the $200 version?
Why, when I remix, do I get back 9:16 instead of 16:9?
Thank you for doing the videos. $200 is a lot to show us this.
I'm going to ABUSE that Unlimited part for the next 25 days for sure!
For most of what you've shown, it seems like Sora is a really good video generator if you like 90s low-budget movies. Otherwise, it seems like it's trying to give Unreal Engine's Cinematic style a run for its money. Seems like it needs more time, or perhaps that big-brother version is what we are really waiting for.
The $200 depends on the use case. As an AI enthusiast, it would still be an expensive hobby for me, especially in its current state. Thanks, as always, Tim. A follow-up on LTX would be fantastic. Their YT channel is a little 'light', but I feel they have something powerful going on. Many, including myself, probably don't know the depths. 👍
LOL!!!!!!!!!!!!!!!!!!!!!!!!!!! 4:08 🤣🤣🤣🤣🤣 You said what I was thinking! Love you Tim!!!
Eyes up front, Mister!!
@@TheoreticallyMedia Yes sir!
Can you make a rough estimate of how much "usable" material you got for the $200? Working with Runway’s video to video you can burn through months’ worth of credits in an hour, and I see that being the case with most of the video generators.
I mean, sky's the limit, since once you burn through your monthly credits on Sora, you kick over to the "Unlimited" slow boat. And to be honest, thus far, the slow queue isn't that slow. I don't know if that'll change as time goes on--
I am wondering how long it takes to generate 10s videos on turbo mode and on relaxed mode.
Thanks for the video Tim, love that you can show us how Sora works.
Appreciate that so much!!
Very informative, thanks! 👍
100%!!
That was a wonderful video, thanks for shedding light on the secret! Question: do you have to use a prompt with remix, or does leaving it as-is work?
You can do either!
Thanks for all the info! May I ask how long you have to wait for generations in relaxed mode once you've spent all your credits?
Nice video. How did you generate your acting voices for your detective story? I am trying to give ElevenLabs some emotional cues, but it never gets as close to suitable as yours does. So, do you already have a video about it?
Yeah, I used ElevenLabs. There are some “temperature” controls you can play with, but I think the biggest thing is to really go overboard with your input audio. Almost like you’re stage acting. You gotta kinda ham it up a bit.
I've used Runway in the past, and it was fine at that time. What I need is an AI tool that can get me 2.5-to-5-minute videos to match my AI audio. What tools do you think are closest to this now? Where should I invest my time for this eventual goal? Thank you for your channel.
Oh wow, love your videos! Thx!
Thank you!
Can you tell us how slow explore mode is with Sora after you've used the credits up? I'm concerned it will be a snail's pace if their servers are too slow. Can I ask how many videos you can queue and how long a 1080p clip takes in explore mode? Thanks for the amazing videos, love your channel ❤
It’s not too bad. Probably somewhere around 2 minutes? Generally, I’ll kick off a generation, go work on generating an image, come back and it’s done. I haven’t noticed anything SUPER long.
That said, I think you do get throttled a bit during heavy traffic times. But, it’s so early in, I can’t really tell. Overall, “slow mode” isn’t that slow. For now, at least.
8:30 not sure how far we are from Hollywood actually using these tools to directly make movies, but I can definitely see studios using them to make drafts of scenes or of whole films.
So the key is using the remix function with other videos, and ai programs.
I hope that in a few months someone makes a good one-stop shop for AI creation.
what's crazy is if you end up looking like that older version in 5-10 years lol
Can someone tell me when art & money became intertwined please?
Hah. There will always be someone at the center of that intersection pocketing the money.
That person is named "management."
Sigh
Love the content and sense of humor) Terry Gilliam in drag, lol)) More like Jim Varney (remember Ernest?) in drag)))
Thanks for checking it out as promised!
8:30
Yep, Hollywood's going to love this
Tim, you’re still my favorite AI YouTuber. You always manage to make me laugh, even when I’m feeling sick. Thank you for making my days better with your videos.
Appreciate that! And hope you feel better soon!
Yarrr ese. Subscription unlocked. 👍🏽
I'm wondering what would be the best AI to generate animation from reference footage? Anyone?
06:44 I really like how the lighting goes in that pirate scene; it was my favorite. Kinda has its own style to it. Still, my AI budget is €50 a month at best, so not gonna pay €190 just for the (still cool) video-to-video mode.
btw, how are videos extended using Sora?
Use the cut editor. Take your last few frames and slide them to the beginning and then rerun it.
Then, you’ll have to manually merge them together. To note: you do get a bit of a resolution and clarity hit. It’s not as soft as the usual method, but it has a weird look - almost like more grain and less color.
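If it helps with that manual merge step, here's a minimal Python/moviepy sketch of how you might stitch the two clips - purely a hypothetical example on my part (made-up filenames, assumes moviepy 1.x), and you'd adjust the trim to whatever overlap you slid over:

```python
# Hypothetical sketch: stitch two Sora clips after re-running the last frames.
# Assumes moviepy 1.x (pip install moviepy); filenames are made up.
from moviepy.editor import VideoFileClip, concatenate_videoclips

first = VideoFileClip("clip_part1.mp4")
second = VideoFileClip("clip_part2.mp4")

# The second clip opens with the frames slid over from the first one,
# so skip that overlap (roughly half a second here - adjust to taste).
second_trimmed = second.subclip(0.5)

# Join the clips end to end and write out the merged file.
merged = concatenate_videoclips([first, second_trimmed])
merged.write_videofile("merged.mp4")
```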
@ thanks!
What do you recommend for face swap?
I've been using InsightFaceswapper on Pinokio. Local and Free! Did a whole video on it here: ua-cam.com/video/3pp7qw19nuA/v-deo.html
Quick note: If you go this route, make sure to turn the dropdown to 512x512. It defaults at 128x128, but you'll get better results if you kick it up!
@ Thank you!!
Worth it for sure. The relaxed queue doesn't take much longer once you've run out of credits.
yah, the hair and watch really sold it for me lol
For me it was the watch. It’s so stable! Haha. Nothing else in that video to take notice of!
Solid video. Subscribed
An entire month of playing with alien storytelling technology for only $200? Makes me wish I had $200!
see, YOU get it!
Great summary, thanks a lot!
Great video Tim. Thank you.
1000%! Thank you!
Now LTX has an animation feature with which you can do live animation of characters. Something for Sora???
Thanks for the insights! Great video. My expectations were high for openai, unfortunately sora seems like beta software and $200 a month to beta test it seems rather excessive.
Yarr Ese! LMAO Tim Lives!
Wow, Runway Act One and Sora...
Def need OpenAI to replicate that, because that is impressive.
8:40 king Terry Davis🧎
That's a name I haven't heard in a while! Y'know, honestly, the story of Terry would make for an EXCELLENT AI short film/documentary.
You are doing the machine overlord's work. Thank you and praise server!
It's pretty cool that you can feed this thing potato footage with random props in everyday locations and get such good results
5:33
😆 🤣 😂 😹 😆 🤣 your commentary is hilarious!
Yeah, this makes total sense... nice work figuring this out! It's essentially like a Magnific upscaler at this point. 😂 Sort of also aligns with the amount of money it costs, too. Magnific, in my opinion, is the best out there, but it also costs $39/month, which is tough.
How do you make a video like this and NOT say Magnific! Tune in to find out! haha-- yup, EXACTLY that!
Tim, I need you to zoom in on her watch and give us an extended side by side comparison please.
I have yet to use Sora myself, but is it not able to generate something usable every 20 generations or so? Because that's my standard for video generators. Some people, I think, expected Sora to always give them gold, which was never realistic, but with the 50 generations that will only tend to get you a couple of decent generations, and that's also 480p vs the 720p you get from some other generators. But if you're not getting anything good from the 500 1080p pro generations, then that's bad.
7:25 That's clearly pirate Tommy Chong. That's it, hand in your boomer card.
Been waiting on vid-to-vid - I can input my rendered architectural animations and turn them into photoreal animations.
100%! That’ll work like a charm!