I hope they will add a feature where you can give a picture (or more pics) as a reference for the outcome. That way a poorly rendered 3D animation could become a final render very fast.
@@kisatom . Yeah if/when you can add an image reference this could become incredibly useful.
They've added this now
@@culttelevision Wow, thank you for the heads-up!
@@culttelevision . Are you sure?.. I still don't see a way to add an image reference to a video to video generation. (Just image to video. Or video-to-video restyled based on a text prompt?).
@@AIAnimationStudio On the right-hand side there's a style reference with an image upload, and I've literally just done a video-to-video using just a text prompt. It's under STYLE REFERENCE (IMAGE / PRESET / TEXT PROMPT).
Awesome! Your content keeps getting better and better! Amazing work!
Holy crap, the quality is good.
This looks really amazing
Wow, the CGI one is stellar. This is an absolutely amazing video creation tool.
Runway translated "jumper" to "sweater", a very smart AI indeed!!! :)))
@@alpaykasal2902 …😂😂😂.. fair
I need a way to do this with longer clips. If you tried to do an entire video, or even just a minute, it would currently be impossible to maintain the same look throughout. Can't wait for that.
Have you tested it for consistent characters and style? I mean, using the same prompt with a series of clips using the same actor?
That would be great, and how long the videos can be?
@@GadrielDemartinos . Good question .. and no I’d not checked that yet. Hopefully with the same text and seed number we should get some consistency. Something to try tomorrow 👍
@@mik3lang3lo It looks like you get a maximum output of 10 seconds per video-to-video generation
@GadrielDemartinos Yes, I just tested it by using the same seed from the transformed video and reusing the same prompt for another clip of the same character (from a short AI film I'm creating). So, it works by uploading ten seconds at a time of the original video, then finally editing them together into a longer video.
@@AIAnimationStudio It works, I just tried it using the same seed value and same prompt on another clip I uploaded (of the same AI character) and it gave me an identical result in its style, as far as I can tell.
Amazing! Thanks for this video!
Awesome new features thanks
Great video! What is this geometric effect from 0:09 called? cheers
Thanks... that geometric one had the prompt: "Low poly 3D model, with paper textures. Origami. 3D polygon man talking to camera. Dynamic lighting. Cinematic."
This tool is really cool BUT I have a big problem: the minimum structure transformation amount of 0 is still too big to get controllable and therefore usable outcomes. I would really love to be able to take a 3D render which looks semi-real and then keep it mostly the same, just adding a bit of detail with AI. Sadly this is not possible yet because the outcome is still very varied.
That's too advanced; you're not gonna get that without being willing to give it some creative freedom.
Can you add effects as well or just change the style?
Is there any free vid2vid?
@@SSLCLIPS-TV . You could try ComfyUI locally with an elaborate workflow to achieve something similar to DomoAI.
I think Pika can do it with its older v1 but I don’t think that’s covered by their free plan. (Version 1.5 is).
MiniMax can do image to video but not video to video yet.
Does Kling do it yet?? … I need to revisit Kling soon.
I uploaded a video on Runway and I can't press the "Generate" button. What's the issue here? The video length is 9.6s.
Not sure... potentially a temporary glitch. If it's not the correct ratio, it now seems to ask how you'd like to crop the footage. Otherwise, if you've included your prompt and have enough credits, it should work.
Same for me... 4s-long video, I cropped it, I have 125 credits and still can't click the generate button.
@@inko187 . Had the same issue myself earlier after a few successful generations. Not sure what the issue is.
Excellent, cheers.
Wow Next Level!
Will your 3D/2D course require students to purchase any additional software or AI services/credits? Or will you be showcasing free options to work with?
It's going to use tools that mostly require credits. So whilst they typically offer some free credits, you'd ultimately need to buy credits for some of the tools. Plus you'd need an Adobe Creative Suite (or at least After Effects) subscription.
Great tutorial!
Thanks very much 🤟
I am still just waiting for them to add different aspect ratios XD
Amazing work!
Impressive
Great video. Is this feature only for paid plans? My free plan isn't showing me this option; it only shows video-to-video in Gen-1.
Hi, I am using image-to-video in Runway ML and have been getting this error for the last 2 days. I hope you can help me out. I am using the free trial version.
Error:
Experiencing high demand
Gen-3 Alpha generations are unavailable. Please wait for your generations to complete or try again later.
Thank you!
@@EternalEleganceInvites .
Is that part of the unlimited generation option?… I’ve not subscribed to that tier, but perhaps there’s a limit to when those “unlimited generations” can be done.
@@AIAnimationStudio I have 325 credits and have still been unable to generate the video for the last 2 days
thanks a ton for replying sir
@@EternalEleganceInvites . Odd, can only assume the servers were too busy. Strange that it’s not adding to a queue though. Guess worth reaching out to support. Good luck
Yes buddy same problem..😢
@@EternalEleganceInvites Yep! Same issue here too.
If I cut a short film into bits to feed through the video to video, will I end up with a new variation of the prompt for each segment processed, or can it stay consistent between multiple clips?
Others have commented confirming if you keep the seed and prompt the same, the same style can be applied to more clips. 👍
@@AIAnimationStudio
Thanks. So awesome.
Wow, so cool man
What’s the longest you can go? I mean, can you render a shot of 47 seconds or longer?
10 seconds. But others have commented confirming if you keep the seed and prompt the same, the same style can be applied to more clips.
Here's the thing: if you use the lowest-priced subscription, then for video-to-video the maximum amount of credits is 125. And that's basically 25 videos (with a 5-sec limit) or 12 videos (with a 10-sec limit).
So can you only use the presets? How does the cost work out for the restyle?
@@Godsavethecrumpets . You can write your own prompts.
Cost much the same as normal credits cost for generation. But they also have their slower unlimited option if you sign up to the premium tier.
Sorry that’s a bit of a vague answer.
Brother, can I, for example, take your clip, put my picture in there, and have it move like you move??
@@DuCaffeine . There's no image reference option for this video-to-video approach yet. If/when there is, that could be really powerful.
I can't upload a video, what formats are supported?
@@kuplramontop6772 . MP4s work for me. I'm not sure if they support other formats.
Is there a try-out option with this? Anyone? 🙏🏻
If you use the same prompt, do you get 100% the same result on another video?
I don't think so
Can Gen-3 create a 3-minute video-to-video?
I think it's limited to 10-second clips. But others have noted if you maintain the seed and description the same restyling will be applied. So you could possibly achieve a 3 minute clip over multiple generations. I'm testing this out today. (Tested and don’t seem to get consistent restyling across clips despite same seed number and prompt)
Many Thanks!!! It looks like I'm keeping Runway's Unlimited Plan for a bit, hehe.
Doesn't it slow down a lot after 2250 credits are spent before the end of the month? If so, how long do you wait for a video render?
Where is the audio?
@@fabianoperes2155 … errrm. Should be there 🤷♂️
Can this generate anime?
Problem is Runway costs money, and a LOT of it.
It's not free anymore.
You put in 15 USD for 600 credits that are gone in less than a day, as you need to do 5-6 generations for each 10-second clip you apply this to until you get it right.
Am I late? 2024 is not finished ;) Still happy to join, sent you an email!
It's cool, but a short film would be too hard and expensive to make.
@@Twizzforsberg 💯. But that’s kinda good to know, that a decent short film still requires effort 😀🤷♂️
@@AIAnimationStudio I have the script. Tried different ones, but AI is still not trained on what a human canon looks like.
@@Twizzforsberg It's not really difficult to use this feature in Runway for a short film that's really short, like 3 minutes; it's just a little time-consuming, but that's the case even when making a normal short film. Since Runway only allows ten seconds, you'd have to cut your film into 9- or 10-second segments, upload them one at a time, and also use the same seed number and prompt for each clip to get the same consistent look across clips. We've gotta think outside the box a bit to create something very cool and get consistent results.
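A rough sketch of the split-and-stitch part of that workflow, assuming ffmpeg is installed and the restyling itself is done by hand in Runway with the same seed and prompt for every chunk (file names and the 10-second length are just illustrative):

```python
# Split a source film into ~10-second chunks for upload, then later
# concatenate the restyled chunks back into one film. Assumes ffmpeg is
# on PATH; the restyling step itself is done manually in Runway.
import subprocess
from pathlib import Path

def split_into_chunks(source: str, out_dir: str, seconds: int = 10) -> None:
    Path(out_dir).mkdir(parents=True, exist_ok=True)
    # With stream copy, cuts land on keyframes, so chunk lengths are approximate.
    subprocess.run([
        "ffmpeg", "-i", source,
        "-c", "copy", "-map", "0",
        "-segment_time", str(seconds),
        "-f", "segment", "-reset_timestamps", "1",
        f"{out_dir}/chunk_%03d.mp4",
    ], check=True)

def stitch_chunks(restyled_dir: str, output: str) -> None:
    # ffmpeg's concat demuxer wants a text file listing the clips in order.
    clips = sorted(Path(restyled_dir).glob("*.mp4"))
    list_file = Path(restyled_dir) / "clips.txt"
    list_file.write_text("\n".join(f"file '{c.resolve()}'" for c in clips))
    subprocess.run([
        "ffmpeg", "-f", "concat", "-safe", "0",
        "-i", str(list_file), "-c", "copy", output,
    ], check=True)

if __name__ == "__main__":
    split_into_chunks("my_short_film.mp4", "chunks")       # upload each chunk to Runway
    # ...restyle every chunk with the SAME seed and prompt, download the results...
    stitch_chunks("restyled_chunks", "restyled_film.mp4")  # reassemble the downloads
```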
@@FilmSpook Thanks
It’s not for the brokies yet, just for the big companies
Give me the solution for ... experiencing high demand
@@factinfo1970 … hopefully they’ll scale up to more available systems soon. Or demand will balance out once the new wave of hype diminishes. 🤷♂️
Runway is way, way too expensive.
Runway and Kling and Luma, etc. are really meant for filmmakers, not your average person. They're professional tools. Runway is more reasonable in price than most of the others, since you can get unlimited generations on their top plan, and it pays for itself for someone who consistently uses it to create monetizable content.
Runway does not follow your prompts, so the credits get consumed as fast as coke.
Love from Pakistan
I'm from Pakistan. It's not showing me the video-to-video option. Thank you for this tutorial nevertheless.
Seafood festival! Yum
It's available now and it's shite. I'm on an unlimited plan, but it's been 25 mins since it started generating a 10-second clip with no progress whatsoever. So good luck using it. Runway releases new stuff before they even have the servers to support it.
Is it free ??
@@mr.music.89 No, it's around 50 credits for 5 seconds, 100 credits for 10 seconds.
1000 credits is $10.
Plus they have an unlimited plan.
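Taking those figures at face value (50 credits per 5-second clip, 100 per 10-second clip, 1000 credits for $10, and the 10-second cap per generation mentioned above), a back-of-envelope sketch, assuming every chunk works on the first try:

```python
# Rough cost math using the per-clip figures quoted in this thread; real
# pricing may differ, and this ignores failed or repeated generations.
usd_per_credit = 10 / 1000            # 1000 credits for $10
cost_5s = 50 * usd_per_credit         # about $0.50 per 5-second clip
cost_10s = 100 * usd_per_credit       # about $1.00 per 10-second clip

# A 3-minute film restyled in 10-second chunks (the current per-generation cap):
chunks = (3 * 60) // 10               # 18 chunks
one_clean_pass = chunks * cost_10s    # about $18 if every chunk works first time
print(cost_5s, cost_10s, chunks, one_clean_pass)
```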
For the monthly cost I want more credit 🗣️🗣️🗣️😤😤😤‼️‼️‼️‼️‼️‼️‼️‼️‼️‼️‼️‼️‼️‼️‼️‼️it’s good otherwise
Runway's subscription is way too much; you can literally burn through the lower-tier credits with about 6 videos and end up with nothing usable.
This is the future of creative arts. Amazing and cheap.
Someone tell Runway to allow negative prompts in Gen-3. They simply won't listen. Tell them, pls.
Not worth the price; by the time you get the effect you're looking for, your credits are gone faster than expected. Even the simplest algorithm that Gen-2 offers to generate animations from images doesn't do well: only about one in every 4 or 5 animations is usable, the rest go in the bin. When I asked on Discord if they could improve this algorithm, they told me no, but I can buy a more expensive subscription lmao. That's making a laughing stock of the customer. If Runway is the best on the market, then imagine how rubbish the competition must be.
Argh! No anime style.
@@Apuat … you can write your own text prompt to achieve something akin to anime.
@@AIAnimationStudio Ahhh I was not sure. Thank you for confirming that this is possible.
Another useless AI platform built off of the uncredited, uncompensated work of countless artists. Channels like these are terrible.
Useless? Are you regarded?
Well shit.
Stupid crap video this time. This tool is not free at all, it's fully paid, so I don't need it.