Latest Text-to-Video Advancements Are Here to Blow Your Mind!
- Published Aug 20, 2024
- RunwayML Gen-2 and Pika Labs have introduced new features for their AI-based video systems. RunwayML Gen-2 now has updated camera controls that allow selective zooming and let you influence the direction of camera movement. Pika Labs has recently introduced a very similar camera feature that lets users control the camera's motion and zoom.
▼ Link(s) From Today’s Video:
✩ RunwayML: www.futurepedi...
✩ Pika Labs: www.pika.art/
✩ RunwayML Tweet: / 1701580692541341851
✩ Nick's Runway thread: / 1701221304555200625
✩ Dave's Runway thread: / 1701406814209032392
✩ Madaro Pika Labs: / 1701354686664556843
► MattVidPro Discord: / discord
► Follow Me on Twitter: / mattvidpro
-------------------------------------------------
▼ Extra Links of Interest:
✩ AI LINKS MASTER LIST: www.futurepedi...
✩ General AI Playlist: • General MattVidPro AI ...
✩ AI I use to edit videos: www.descript.c...
✩ Second Channel: / @matt_pie
-------------------------------------------------
Thanks for watching Matt Video Productions! I make all sorts of videos here on YouTube! Technology, Tutorials, and Reviews! Enjoy your stay here, and subscribe!
All Suggestions, Thoughts And Comments Are Greatly Appreciated… Because I Actually Read Them.
-------------------------------------------------
► Business Contact: MattVidProSecond@gmail.com
Thanks for the non-clickbait title! rare in the AI space
"This ai replaced my girlfriend" it's crazy nowadays
@swolgan3527 😂😂😂😂
But now I don’t know how to make $32487 per day with ai 😢
True 😂😂
Hey Matt, appreciate your commitment to the AI craft and sharing our art and work. 🙏🏼
What happens when this tech is perfected? There won't be any craft left, human input will matter less and less until human input becomes irrelevant.
@joemarklin The human role in the creation process will always be there, although the way it manifests may change. Currently, it may seem overwhelming that tasks which once required extensive manual labor are becoming increasingly simpler. However, our aim is to further empower our creativity. Additionally, this advancement will eliminate obstacles for individuals who struggle with drawing or found filming to be prohibitively expensive and challenging. It is a straightforward outcome, and believe me, you still need to learn how to use these tools, so I consider it a fair trade-off.
@QuanTonyka In time you won't have to learn much to use these tools, and the human input will mean less and less as the tools become more powerful. In the near future, if you simply prompt "make me a sci-fi short film," the AI will make a well-written one on its own, and you won't be able to distinguish a human-made one from an AI-made one. Once that happens, the human input becomes irrelevant.
⭐⭐⭐⭐⭐10/10 -OY3AH!
I think we're still 1-3 years away from practical video generation that's indistinguishable from the way we produce video and VFX now, but once it's perfected, media as we know it will change, because by then image, music, and AI overall will have advanced so much that everything will be at near-indistinguishable levels.
At that point we will be seeing truly amazing, futuristic sci-fi things like personal movies, shows, ads, and even video games for each person. And probably even things similar to the early stages of Star Trek holodecks once you combine this with projectors.
I only hope society is ready for video that can simulate anything in a convincing way.
I was just thinking about how in addition to making our own scifi, we can remake the last season of game of thrones, and make it finish properly.
1-3 years? 😂 At least 6 years my friend.
OpenAI is working on video data right now. GPT-5 will be video. That's next year. We're there already, friends.
@a.thales7641 Uh, no. 1-3 years is a far, far more reasonable estimate.
This day next year we'll look back at this like it was the stone age lol
Great comparison, agreed with what you’ve said too - Outputs seem similar. I’ve not really used Runway, but Pika is impressive. Both improving quickly, exciting times. Thanks so much for the shoutout / feature on here too, appreciate being included in the video!
Pika is also free; Runway is too expensive, and the paid version is literally equal to a free account's free trial. So you can just create multiple accounts and get as much time as a paid account.
You can change the motion speed in Pika Labs with -motion 1, 2, 3, or 4
Soon we can explore entire worlds just by moving in a video, truly amazing.
Far better than the old days of an AI video of a teddy bear painting. Which was only last year! 😮
there we go boys, finally getting some motion with this tech
I wonder if video-generative AI will only really start to excel as 3d-generative AI gets good. The problem is that image and video AI models don't understand objects or reality, so they do things that are inconsistent (like too many fingers). But once we are working with 3d AIs and then feeding that through additional AI filters, then we can start to get more consistency. Or maybe it won't be 3d specifically, but some other internal model of how objects and the world work.
I really want to try this. I've been in Pika Labs a lot, created a couple of shorts
Best channel for catching up on the latest developments
Thank you!
On your assessment in quality at the moment, I agree with you Matt. Thanks for another great video and thanks for not generating lemons this time! Keep rockin in the free world. edit: also I dig the natural backlighting in this one.
Quality is going up. Nice work Matt
thanks for the great videos!
Imagine this as a fly through video game.
Great Content as usual !!!!!!!
AI movies are gonna be awesome.
Ahoy! Looks like AI video is advancing quickly! This improves workflow speed and is great for dynamic angle control
Early days but looking good
That's a typo for the Gen-2 standard plan on that splash screen - if you go into the pricing, you will see it's 625 credits (1 credit works out to about 1 second)
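For anyone doing the math on those credits, here is a minimal sketch using only the figures quoted in this thread (625 credits on the standard plan, roughly 1 credit per second of output). The clip length used in the example is an assumption for illustration, not an official Runway limit:

```python
# Rough Runway Gen-2 credit arithmetic, based on figures quoted in the
# comments: 625 credits/month on the standard plan, ~1 credit per second.
MONTHLY_CREDITS = 625       # per the pricing page mentioned above
SECONDS_PER_CREDIT = 1      # "1 credit works out to about 1 second"

def seconds_per_month(credits: int = MONTHLY_CREDITS) -> int:
    """Total seconds of video the monthly credit allowance covers."""
    return credits * SECONDS_PER_CREDIT

def clips_per_month(clip_seconds: int) -> int:
    """How many fixed-length clips fit into one month of credits."""
    return seconds_per_month() // clip_seconds

print(seconds_per_month())   # 625 seconds, i.e. a bit over 10 minutes
print(clips_per_month(4))    # 156 four-second clips
```

At roughly ten minutes of raw output a month, it is easy to see why several commenters below feel the cost adds up fast once you account for discarded generations.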
You CAN change the speed of the motion in Pika using the ... wait for it... -motion parameter! - (0-4 in integers)
Pika has -motion 0-4 and there is also fulljourney for video creation
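Since several comments mention the -motion flag: Pika Labs at this time runs as a Discord bot, where parameters are appended after the prompt. The command shape below is illustrative from memory of the bot's usage (the /create command and the example prompt are assumptions; only the -motion 0-4 range is confirmed by the comments above):

```text
/create prompt: a biplane flying over mountains at sunset -motion 3
```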
We need director mode for image generation in A1111
Should try the orangutan zoom experiment with zooming in on the eye, since that could be reversed to get the zoom-out, and it'd start with the full face including the eye.
I'm just waiting for the point I can feed an AI series of my favorite shows and get entirely new episodes rendered based on it.
this could make great backgrounds for a super stylized animation
All these feel like you could achieve with Stable Diffusion and some compositing in After effects or Resolve Fusion.
You should cover Gaussian splatting. It's like NeRF, but without the neural network.
in two decades we could probably generate a whole show from a single prompt.
4:10 The camera does zoom in, but the plane's propeller is not rotating...
Thank you
Well, I thought I was the only person who didn't blink... I'm not alone
The quality is similar to a bootleg movie you downloaded that was filmed inside the theater.
They really need to Lower their price. 5 cents a second adds up fast!
I think this is the maximum that this approach gives. We should praise them, but it's time to completely change the framework
Money grab I mean Token grab 😂
What if we are a part of one of these universes?
Nice.
But I’m still not sold yet.
Here's an AI app suggestion: baazart. It has some cool AI features.
The future is now
WHY ARE YOU YELLING!?
Yessir
It's not there yet; still waiting on the new AI chips to get there. Also, you should host a contest with AI and post the top 3 on the channel to give it more life
Don't you think that Pika Labs is still better than Runway at this moment? What is your opinion about Kaiber?
why no talky about Kaiber?
Anyone notice watermelons don't grow in trees?
I'm just one comment, so do with it what you will. I will say, IMO, having the vague/open-ended thumbnail and video title does disappoint me for some reason when viewing the video. I kind of assume this is a multi-update going over several topics from the past week/month. But instead it's a more specific showcase. I still think the video is useful, but again, IMO, having the title explain that it's a video on the topic of moving still images would set my expectations for what's in the video and how to do it.
Obvs I watch all your videos regardless, but new viewers may have similar experiences(or perhaps this comment will get downvoted if others disagree lol) ❤
Yeah he left it too vague
Hi Matt.
I want video to video.
I want to watch old cartoons in live action.
There is no need to generate videos from scratch. Video AI can work alongside image generators and 3D engines. Generating animations plus 3D assets gives more content to work with, plus a human can fix the final result. Also, these concepts are closer to AI's capabilities at this time.
👌
_Pika Labs_ looks noticeably better when it comes to realism, at least to my eyes.
I find that _Gen-2_ looks a bit cartoonish.
Day 7 of asking for a Splittic update
Why does his voice sound like an AI voice 😂
I'm sorry, but Gen-2 is just way too expensive for its random and very imperfect generations... and on top of that, 125 seconds max.
I think I'll wait a little longer.
hi
Bro, what have we become... I can't imagine the cringe some real directors are feeling watching these tools
It's just technology advancing, I bet real producers could only have dreamed of this
Bro, these generations look terrible, to be honest. Not worth $15 a month. And what would this be used for? I wouldn't approve any of these clips for B-roll.
Runway is beyond dreadful.
Not trying to belittle this in any way - but who are the folks that are actually paying for these kind of services?
Sure, these generators will get much better in the next couple of years, but at the current level they are just utterly useless for any real-life application scenario 🤔
I’ve paid for it, mainly just to experiment with the tech
People who make content on them, and enjoy messing with them
It's all garbage as long as you can't download and run it locally. SaaS will always be trash
Still useless. For now.
My thoughts exactly.
The work you have to put in is not worth the results you get out.
But someone has to test it out.
I'll wait a year from now where hopefully by then it will be more practical.