Hey friends, quick note! It turns out 'Where the Robots Grow' isn't technically the first Animated AI Feature Film. While it is likely the first AI Feature Film to use AI Mocap and traditional 3D tools together in a feature film, other AI films like "Window Seat" and "DreadClub: Vampire's Verdict" came before it. All of the films though mark huge achievements in storytelling and we are super excited for what this means for filmmakers around the world. Thanks for the heads up Tasha!
Appreciate you showing love to my video Mr. Stitch! Even if I don’t win appreciate the love!!! 25:25 is my video 🙌🏾🙌🏾💪🏾💪🏾💪🏾
Thank you Caleb, my AI channel and short movie production are far better because of your consistent weekly AI videos!!
I appreciate that!
Thanks for the update! Always waiting to hear the news from you first, then it’s official, haha
Thank you for helping me to stay up to date.
Great video, thank you for sharing, appreciate you
Great video. As a Mac user in the film industry, I have found it hard to find any AI tools that can run locally using just CPU processing power. Would love to see a video where you show some solutions that don't require a ton of hit-and-miss time spent in the terminal. AI voice cloning, text-to-speech with a cloned voice model, and AI text-to-video with reference images are what spark my interest.
Thanks! ComfyUI is being developed to work much more smoothly on a Mac. More on that soon!
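For anyone who wants to experiment before that video drops, here is a minimal sketch of CPU-only voice cloning plus text-to-speech, assuming the open-source Coqui TTS package and its XTTS v2 model; the file paths are placeholders and generation will be slow without a GPU.

```python
# Minimal sketch: CPU-only voice cloning + text-to-speech on a Mac,
# assuming the open-source Coqui TTS package (pip install TTS).
# Paths are illustrative; expect slow generation without a GPU.
from TTS.api import TTS

# XTTS v2 supports zero-shot voice cloning from a short reference clip.
tts = TTS("tts_models/multilingual/multi-dataset/xtts_v2")

tts.tts_to_file(
    text="This line will be spoken in the cloned voice.",
    speaker_wav="my_voice_sample.wav",  # a few seconds of clean reference audio
    language="en",
    file_path="cloned_line.wav",
)
```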
0:08 Great episode! But in all fairness, Hooroo Jackson created the first ever AI feature film ("Window Seat" [on UA-cam], summer '23) and the first ever animated AI feature film ("DreadClub: Vampire's Verdict" [on Amazon Prime Video], summer '24) 👌🏽😇
Fair point!
We'll update everyone next week. :) It'd probably be helpful to create a timeline page so folks can see all of these amazing achievements in one place. So many firsts! Keep up the good work.
I like Krea's approach of combining all the video models in one platform, but the downside is that they don't have any of the lipsync features of Kling or Runway integrated yet.
It's certainly a cool way to put it all together!
This is incredible!
I loved it!! Thank you so much for your content!!
Thanks for watching!
Multiple camera shots are the way forward. And camera direction, of course.
1:41 Whoa, that fit too perfectly. You were nearly in sync with the character... That could be you in 20 years, bro. If your mother-in-law knits you a green sweater... RUN!! 😮
hahah thanks for watching and for the laugh!
So much is happening so fast 🤯
What a historic year this has been
So true!!
See you in Chicago!
I discovered your channel a few weeks ago and I'm so glad I did. You have such incredible content and information, thank you! Do you know how they were able to add movement to the background in "Diner" after using Act-One?
Thanks so much! Very likely did some compositing work.
@@curiousrefuge That was my suspicion, thanks!
It really is a Warring States era for AI. I'm curious what things will look like a year from now.
Thanks for watching!
Love you keep the good work ❤️
Thank you! Will do!
Another great overview! A Krea API would be great. Like a single endpoint that handles image-to-video, where the user can pick which video AI model Krea integrates with and drop that into an automation workflow. Is this possible, or just wishful thinking for now? :)
Thanks for watching!
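There is no public Krea API like this today, so purely in the spirit of that wishful thinking, here is a hypothetical sketch of what a single image-to-video endpoint with a selectable model could look like from an automation script; the URL, key, and field names are all made up.

```python
# Hypothetical sketch only -- Krea does not expose a public API like this today.
# Shows what a single image-to-video endpoint with a selectable model might look like.
import requests

API_URL = "https://api.example-krea.invalid/v1/image-to-video"  # made-up endpoint
API_KEY = "YOUR_KEY_HERE"

payload = {
    "model": "kling-1.0",  # pick whichever model the platform integrates
    "image_url": "https://example.com/still-frame.png",
    "prompt": "slow dolly-in, cinematic lighting",
    "duration_seconds": 5,
}

response = requests.post(
    API_URL,
    json=payload,
    headers={"Authorization": f"Bearer {API_KEY}"},
    timeout=120,
)
response.raise_for_status()
print(response.json().get("video_url"))  # hand this off to the next automation step
```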
I was asked a few months ago if AI was going to affect the creative industry. I responded that, in the past, it typically took years for a new technology to infiltrate the system and become a usable tool in the industry. AI is still a ways out as a viable tool for creative filmmaking, but it is progressing at hyper speed. I personally feel there are a few projects I currently have where AI could replace my standard workflow and tools.
Should we be worried?
No, take the bull by the horns and use it, embrace it as part of a pipeline for your creative imagination.
Well said!
The cooler thing with the midjourney editor is that you can put a beach around an actual photo of you!
Haha, definitely a possibility!
thanks for the amazing video
Glad you liked it!
Incredible 💯
Thanks for watching!
I dunno, I find all of this sad and tragic, and the number of people willing to sign off their souls for this feels apocalyptic. Like when Oppenheimer was convincing scientists to build the bomb without seeing the devastation it was about to do to people (artists, creatives). If you're not actually following the trajectory of AI and its negative impact, then it's easy to be excited by this.
I agree, but if you have dedicated your life to art you aren't being given much of a choice. I am directing a feature film in Q1 of next year and I am staying up nights trying to find ways of incorporating AI into the production. Pre-viz is easy, but come March when I am in post, there will most certainly be tools I need to be using. I actually hate this. It terrifies me more than it excites me, but I have no choice.
@@reddjinn911 AI and art are antithetical.
@SCharlesDennicon true and we have the luxury of seeing that distinction today. My children won't. Movies, photography, novels, and music will all just be things they don't have to understand the creation of unless they choose to learn. Like us now with the internet or cell phones.
Agree to disagree! Like Photoshop, we see many existing artists using AI tools to help create new art in new ways.
@@curiousrefuge agreed 😁 but soon there won't be any need to even open that software
Actually, Runway was pretty fast rolling out the new feature. I, for example, already have access to "Act-One", as they call it.
Amazing. How did you get access?
@@berhunt A banner with "Introducing Act-One" appeared on my dashboard with a button to try it out. I guess they roll it out randomly, maybe taking into account which plan you have and how much you use the service?
Lucky you!
The resolution of the end result is very bad.... any clue???
Jump into our Discord and we'll try to help!
@09:00 exactly this!
Thanks for watching!
I love this. I'd love the class, but ouchie. $699 for the one course is pricey for me. I'd pay it if I could, though. Seems worth it. I hope you all get great students and succeed.
Hey you should message us. We love to help out folks who have unique financial needs.
No harm, No fowl! LOL I see what you did there
Where do you draw the line between AI production support (3D models from prompts, background images from gen AI, animation from video tracking) and pure AI videos created by models like Kling? The Storybook Studio sci-fi film was cute, but it was made classically with support from AI tools. Is that film an AI video to you?
It's typically a ratio, and sometimes people simply say "AI-assisted".
The Krea update is great; it's a user-friendly platform.
Glad you gave it a shot!
Does the Runway unlimited plan also cover Act One generations?
It should! But definitely double check before any purchases!
Ah, if only Runway had servers that were not "not available", it would rock.
That would be cool for sure!
No harm, no fowl...good one.
thanks for watching!
For some reason the Discord invite for the AI animation is expired or not working?
We'll fix it! Thanks!
The actors will be able to walk into a studio, if they even bother to do that, and there will be a real-time AI that transforms whatever they're doing into the most psychologically appealing interview or performance possible. They could be standing there in their pajamas cussing, but AI will transform what they say and what they look like into something completely different to draw in more ratings. The same thing will happen with politicians. You're not going to know what you're looking at pretty soon.
Very true, it will be difficult to know what is real and what has been modified.
It looks like we got a bad link for the Curious Refuge page for the links in this video.
Oh sorry we'll get that fixed!
Note that Krea is using Kling 1.0, not 1.5 like on the main website.
Good point!
PS. Sorry, one more thing: image-to-video is NOT always better. Why? Because there is a complete lack of motion and action, for the most part, in image-to-video. Whereas if you do text-to-video, you get very action-oriented videos. Image-based generations still lack movement big time compared to text-to-video.
Certainly pros and cons to each, but image-to-video is better because of all the initial control you have from the original image generation. The tools are now good enough to add action.
@@curiousrefuge That's only if you want the visual and aesthetic appeal more than motion. Having an image beforehand is better in the sense that you can get the feel, essence, and environment that you're looking for, but with less motion. If you want action and movement, you won't get that with images. So it just depends. If you get just the right text-to-video generation, it beats image-to-video. But you have to regenerate over and over until you get the perfect text-to-video generation. At that point, text-to-video is better than image-to-video. It just takes more work.
Do we know if Krea is using the latest video model APIs? Because what's the catch lmao, let's all move to Krea in that case?
Two things:
1. You do lose out on the additional features in most of the tools like camera control and negative prompting.
2. There are some limitations. For example in Kling I believe it only uses the 1.0 model and not the 1.5.
@@curiousrefuge Thanks for the answer; shame they didn't mention it.
Not a fan that you have to look directly at the camera. I'm waiting for the moment when one character can be looking slightly off camera towards another character. More importantly, we need the ability, once we have a character we like, to carry that same character over to a different scene.
Definitely has some room to grow! But so far it's looking good!
The links in your description are returning 404s. Do you have a link to the ComfyUI Mac sign-up?
So sorry about that! Just fixed the link to our site. The specific links are on that blog page.
You can also find them down below in the description! Sorry about the issues!
Thank you so much.
Well, it will only cost a million dollars to use Runway because they charge so many credits and output so many mistakes. You run out of credits before you get something useful, as of now.
We certainly recommend the unlimited plan for serious projects.
It's not just about which way the character is facing; it's the fact that you can't have motion in the shot while the actor performs. That makes this pretty useless. There's no motion in the picture with these, let alone being able to face the character in different directions, which makes it useless for video production in its own right lol
It seems like they are just showcasing the technology. Otherwise this is just another cheap gimmick from Runway.
Definitely has some room to grow!
Bikers by Spellman Muzik Is AI generated Music Video first ever I seen.
Thanks for watching!
❤
Thanks for watching!
You use the word fidelity quite often, a word I associate with sound production. What are you referring to in your description of an image or video?
Basically, details that go beyond simply more resolution. More pixels doesn't always equal better overall quality.
@@curiousrefuge Fidelity does not mean quality; it means adherence, faithfulness.
@@reezlaw "High fidelity" has been used in audio & video for decades?
But how are these videos made? We know someone can talk and it can be animated as an animated character, but are you selecting the character, prompting the character, or giving it a photo? Come on, give us some details. We don't care about the samples; I want to know how it's made.
Which segment? It's often a combination of inputs, whether video or image. Also, sometimes the breakdown involves cloud tools, local tools like Comfy, or more complex tools like Wonder Studio.
@@curiousrefuge Runway Act-One. How is the character selected? Are they giving you a choice of preset characters, or are you giving it a still image? Or?
You forgot to mention that Midjourney editing only comes with a one-year subscription or if you've generated 10,000 images. I have no idea what these guys are doing by limiting people from using their products. It's like saying you can only buy this car if you've driven 10,000 miles in one before. Just absurd.
New features like this are typically beta tested by a large volume of users at a time, which is why they have that limitation. Eventually, perhaps after a month, it opens up to everyone. We wish we didn't have to wait, but luckily it won't be too long before general release!
Infinite Jest
Thanks for watching!
Also, Krea lost tons and tons of subscribed users because it's all censored now LOL
Ah good point!
The perfect film prompts already exist. They're called BOOKS.
I have some favorite books I'd love to see turned into films. Perhaps starting with basically a picture book with voices, until eventually reaching the quality of a spiritual travel movie set in the 1930s. I'd love to understand better what I can do on various budgets. For the main character and voice-over I have audio recordings from the 1940s and 1950s. How good a movie could we make with $1M, or even $50K? Some drone shooting on location in India and Tibet, some filming of people to be used as characters, put it all in a database, and then start rendering scene by scene? A human screenwriter can do the creative job of slimming down the books if needed, or even adding to them. AI would be there to help as well.
Ideally I'd use actors for live-action filming and use AI for the landscapes and travel scenes where needed. Recreating 1930s locations might be costly. Landscapes would need to be translated from the 2020s to what they would have approximately looked like 90 years ago.
Give it a year and you can do all that with AI and a few dollars. No humans or drones needed.
We're excited to see your projects based on these books!
That Robots "film" is not a film at all. If you can actually get through it then you have an iron will. None of this stuff is impressive to me.
We appreciate the feedback!
500 bones for a course!...greedy much
We pack tons of value into our course and community. We hope you'll join us sometime. Happy to answer any questions you have!
Thanks... but the crushed duck clip was a bit vile; and no, it didn't look like it was made of rubber.
Thanks for watching! It definitely didn't look real! XD
Thank you for bringing the news to us! Little contribution- AI Revolution song - ua-cam.com/video/aaH72KChtBU/v-deo.htmlsi=vrROfXG0tEIbII2N
Thanks for sharing!
"creates ai video professionally"...
Yes!
Warnings before scary videos, please.
Need time to grab your mommy?
which movie?