knowing Adobe they’ll charge like $20 for 5 clips
And you'll have to redo each idea a few times.
Right 😔
It's OK, that keeps people who won't pay or can't pay out of the door. The beta drops the 2nd week of October, can't wait
The Adobe CEOs are extremely greedy, unlike anything seen in any other company. Additionally, it's worth noting that ON1 is set to launch their new ON1 RAW photo software, which will exclusively feature LOCALLY generated images ... NOT ONLINE.
We hope not!
MiniMax is the most impressive model out. It does great with expressing prompted emotions but I have to say that Kling’s pro version has been capable of that too. Great video, as always :)
100%, my friend. I've been using MiniMax every day for over a week, and it's by far the best at animating humans, as well as other things like birds flying and animals running.
@@FilmSpook just needs that image to video and it's top dog, for now.
Minimax is so impressive with text2video!
@@curiousrefuge I tried Minimax after I watched your perfect video, and Minimax is great, but without the ability to use image-to-video it's useless for me, because you basically can't produce something with one consistent (human) character. Which is the best option for image-to-video? I've personally never seen videos as realistic as Minimax's, but if you have to produce many videos of one character doing different things, which AI tool would you prefer? Thank you once again for your YouTube channel :)
@@georgikozhuharov2293 Can't try minimax because it won't even open the page. Looks like it's too popular and overloaded right now..
I bet Adobe will make a separate subscription model for Firefly, just like they did for their 3D service, Substance.
We'll see!
Facts. And like Runway, they'll make sure you waste your credits.
Adobe jumping into AI video? Fantastic, another subscription for tools we’ll barely use! Adobe’s playing catch-up as always
On the other hand, it helps consolidate the toolbelt!
Adobe is the industry standard.
it's cool to have the Runway Gen-3 extensions… the problem is they're using Gen-2 to do the extensions, not Gen-3, which is why the extensions look so weird and low quality… they really need to use Gen-3 for the extensions.
Didn't know that. Does that hold true for all subscription levels?
Interesting point !
That's not true at all and you made that up.
@@TukTukPirate its alright to be wrong lil bro, try to be better next time. 😏
Dude that's Me!
r/Optopode here, and thank you so much for the reference 🪶
that video was hillarious!!! I loved it! -Mitzy
Amazing work!
It's time to retire the Sora comparisons. We've got AI video options we can create with TODAY.
Is anyone still even waiting for Sora?
@@dasberlinlex My grandfather. Sora 1.0 was announced in 1984 :)
@@MartinZanichelli I love it. Great joke. You have a nice sense of humor.
Sora did its job. It got the ball rolling on a mass scale to give us all these options. Sora was never about just Sora.
@@TPCDAZ Yeah, I'd say Sora just never was. Not even looking forward to it. 😎
You overlooked the option to download the model as an STL, allowing you to 3D print it! So cool!
Thanks for the info!
Wait, which program generated the montage that's playing while you're talking about legislation (19:48, 20:26, 20:37, 20:45, etc.)? Those are some of the best I've ever seen.
It's a handful of different tools!
Nice. Some of these look handy in one way or another.
The tool I really want from Adobe: a jump-cut eliminator. If it worked like a cross dissolve, oh man.
Ooooh that would be cool!
I think a better comparison would be to have them all start with the same picture. Even just taking the first frame from Firefly would have been a good starting point to compare
True! That's a more accurate test!
I'm not sure you can fairly compare the Firefly marketing videos to something you chucked together in a couple of minutes.
@@terryd8692 I mean, there will always be a bias, in that Adobe will simply choose their best examples, but to give it a bit of a fight, at least start from the same premise.
As someone working on an XR concept in California I've been following the legislation you mentioned. The lead on the language in that bill is the "Center for AI Safety" which is basically a non-profit consultancy. Not all that enthusiastic that they are leading the charge here in CA.
Great point...we'll see!
Thank you very much! I have been researching this space, looking at smaller vendors. I would have ignored Adobe assuming a heavy handed "solution", but this actually looks worth paying for. (Adobe stock at the next dip?)
We'll see!
how tf do they get their movies to look so high resolution in those films shown at the end? I know there are ways to "cheat" by adding fine grain and filters, but the resolution overall looks much better than what Runway gives out, even with good prompting and high-resolution input images. Especially "Seeing Is Believing" - it looks amazing, and the shot with the Asian woman is great!
Thank you so much! Technically, Runway's resolution is slightly higher (1280x768) than Minimax (1280x720), but I agree - the pixel density in Minimax feels smoother. Especially with cinematic outputs, Minimax has great consistency, and though not technically "sharp" or "high-res," it feels more balanced - kind of like a Blu-ray downsized to DVD that still retains its perceived sharpness. For "Seeing Is Believing," I didn't use Topaz or any other AI video upscaler. Instead, I just put all the 720p clips into a 5K Final Cut Pro project, which just "zooms" them out without additional upscaling or pixel interpolation. Then, as you mentioned, color grading and adding fine grain help give the shots that "hi-res" look, even though they technically aren't. :) You can watch the final 4K version of "Seeing Is Believing" here: ua-cam.com/video/ghnk0rf5qPU/v-deo.html
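For anyone who wants a rough command-line equivalent of that trick, here's a sketch (not the exact pipeline described above; the filename and grain strength are placeholders):
ffmpeg -i clip_720p.mp4 -vf "scale=3840:2160:flags=neighbor,noise=alls=6:allf=t" -c:v libx264 -crf 18 -c:a copy clip_4k_grain.mp4
The neighbor flag upscales without interpolation smoothing, and the noise filter adds fine temporal grain.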
@@particlepanic Thank you for taking the time to answer in such detail, this is great input. I appreciate it! And at first I didn't realize it was the creator who answered, haha. I'm looking forward to your future projects, keep it up :)))
Glad you enjoyed these!
I just love AI so much, its the love of my life
We love it too!
AI also said it loves you, but it can make mistakes
17:43 It would be cool if there was an option for the AI to automatically rig the character.
Wouldn't be surprised if that's going to happen very soon!
UV mapping by AI is also something we need very dearly :)
After watching the freaky extend video feature it made me wonder if this is the real skynet. Instead of an apocalypse and nukes, which we've seen coming, skynet is going to create seriously disturbing videos that drive us into insanity.
You know what? You make a great point, because what can end up happening is people could start making AI videos that look like a real terrorist threat to try and cause a war, and now it's going to be harder for governments to verify videos. Oh jeez, this is going to cause a hot mess of new fraud.
We hope not!
Hailuo has image to video now, and it's fantastic.
We agree!
You just can't bring coffee girl down - that cup is at least half full! :)
lol
LOL
😂😂
Oh no, not me participating in Gen:48! I'll have to try Adobe next.
Can't wait to see what you create!
11:47 Why not do an end frame when testing the camera movement though? I would want as much control as possible, so I would definitely do an end frame. I’m curious to see what the results look like when you have both a start frame and an end frame, and you change the camera movement at the same time.
Good point, we'd need to test that next time.
Bonkers. Thanks for the excellent overview.
Our pleasure!
ooh runway extend nice!
Yes! Quite a good feature :)
Off-topic question- but I really like your glasses. What is the model and brand?
Good question - we'll ask and get back to you!
@@curiousrefuge haha appreciate you 🫡
Runway should add "Extend with frame and control with video". Then it's showtime!
We're getting close!
...and yesterday the announcement that Runway Gen-3 is now able to generate video-to-video... things move faster than the news.. btw, thanks for the Meshy reminder... have to check it directly. 🙃
Have to keep checking :)
Adobe was impressive but Runway is still the champ for me. I guess I'll have to test it out myself.
Definitely worth testing it all!
Many productions have moved out of CA and are now in Georgia.
Yes, a mass exodus from traditional studio locations!
Thanks for the fresh information! The AI generator race continues)
Glad you enjoyed it!
It will be everywhere soon - Blackwell chips at work
the next 6 months will be crazy!
There’s a typo in the prompt at 8:23
We appreciate the note!
Thanks, I just uploaded 2 videos that look about 90% realistic, done with Minimax.. have to try Adobe too
You can do it!
What I don't like right now is the pricing relative to the very small amount of output.. I know it's early days, but the pricing is crazy
True, it's quite pricey!
Is there a link to the full mouse video somewhere?
check our gallery!
@@curiousrefuge when you say check our gallery, do you mean check the videos you have uploaded to your UA-cam channel? If so, I am not finding a standalone video of the hamsters controlling robots.
A werewolf you say? 🐺
::howls at the moon!::
Will Adobe Firefly only interface with other Adobe software?
Very likely!
That's not surprising - they waited, and have something better. Thanks!
Thanks for watching!
The dab and explosion 😂😂😂
Glad you liked that!
"Fix it in post" is about to go to another level
ahah, very true!
Here’s what I want: 1) Stable Model 2) Stable Environment. 3) Creative Camera Work
If I can simply create characters and insert them into an environment, without either of them morphing into an acid trip, I’ll pay.
As of now, getting usable clips is not only time consuming with too many trial and error prompting, it gets expensive.
Whoever can accomplish this first is going to do very well. I hope it happens soon.
Exactly, not even the Super Nintendo had such crappy assets for motion capture LOL hahaha
That would be great wouldn't it : )
Grab a crew, rent equipment, hire talent, scout locations... then you will see how expensive it is 😂😂😂😂😂😂 lol, prompters crying that £20 a month is a lot, you couldn't make it up hahahaha
@@yeah-I-know Runway costs you around 3K per year if you want to make movies, and it's not perfect but it's damn good! I spent around $300 this month, but really just as a hobby.
@@dakaisersthul 😂 still a drop in the ocean compared to making it irl, stop complaining - you already have other people's work given to you for free, all you have to do is pay for the tool to use it... yawn... I guess it's not enough for the prompters 😂😂
That's great! Your CEO's nephew just became a motion designer at a fraction of what they're paying you. Well, there's always Amazon delivery driving - well, until they automate that.
It will certainly be a disruptive year!
Can we select a lower frame rate to get more than six seconds of video, and use the output in DaVinci Resolve Studio to fill in the missing frames?
You can certainly use other AI tools, and try DaVinci Resolve to smooth out the missing frames.
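If you want to script that interpolation step, FFmpeg's minterpolate filter does motion-compensated frame interpolation as well - a rough sketch, with placeholder filename and frame rates:
ffmpeg -i ai_clip_12fps.mp4 -vf "minterpolate=fps=24:mi_mode=mci:mc_mode=aobmc:vsbmc=1" -c:v libx264 -crf 18 ai_clip_24fps.mp4
Resolve's optical-flow retiming usually gives more control, but this is a quick way to check whether a clip interpolates cleanly.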
Regulating use of AI version contracts in this way is not providing "special treatment" for union members. It's definitely not a "major problem" (video - 20.53). The legislation protects ALL workers from being exploited in this context, and encourages membership of unions, which is a good thing. For reinforcement of stuff like decent minimum wages and workers rights (and, in this instance, the right to be paid properly for use of your AI version), unions are of great benefit, especially for those working in creative industries where monetary value of work is ambiguous.
We appreciate your perspective.
Since they announced "tokens" for AI use, you'll be paying for every frame, whether they are usable or not.
It can certainly add up!
The bear animation… you put the first point in the wrong spot! 😮
Did you see that the dot says groin when you click on it?
Woops! Nice catch!
You've obviously never lived in Canada - snow does blow up sometimes lol.
Haha...we have so much to learn!
Montage sequence cinematographers for National Geo channel are quaking in their boots right about now
We appreciate you watching
😂😂 isn't national geographic literally about the *actual* geography my boi
It isn't available in South America. I have an Adobe subscription, but text-to-video in Firefly still shows the "coming soon" label.
Awww, that's an interesting limitation! Hopefully that changes soon!
I am not convinced by a comparison of a few random generations from models that have been trained on different data sets and for a different range of topics. It's a bit like taking a Formula One car and a golf cart and comparing their off-road capability.
True! It's difficult to test, but we try our best :)
Adobe can't even do humans in Firefly yet, so I won't hold my breath on how good Luma is.
We'll see!
As soon as Adobe releases a 4th version of the Firefly model, we'll have a robust image-to-video pipeline without subscribing to many different services.
Only heavily censored
@@AINIMANIA-3D Yes, but that's the case with every plug-n-play solution.
If you want privacy and uncensored generations, you gotta go with Flux and ComfyUI
@@AINIMANIA-3D that's such a boomer mindset. Everyone will have some kind of text-to-video model, so censorship will no longer be an issue
@@KevinSanMateo-p1l Do you have a list of AI video generators that are uncensored? Ideogram is the only one I know of. Minimax appears to do Will Smith, Darth Vader and Mario, but I don't know if it would do Trump shooting a gun, for example (like in the Dor Brothers' videos).
@@KevinSanMateo-p1l That's such a "I don't know how to f-ing read" comment, so, f-ing READ. The commenter said that when Adobe releases Firefly 4, they will be able to use ONE subscription service (Creative Cloud) rather than NEEDING to use multiple. Read, for God's sake.
Brilliant video. Thanks
Glad you enjoyed it
Where is the link to Minimax?
Should be first result in google
Great video. Thank you. I am still laughing at the bear. If anyone starts doing meet ups in the Dallas/Fort Worth area, I would love to join.
Jump in our discord and we can help organize it!
In the Runway vs Adobe comparisons, Adobe's actually seem just as janky tbh, and I wouldn't use either in real-world applications.
1. Look at the reindeer's back leg as it turns to face the camera
2. Drone flying through lava... cause sure, that's totally a thing drones can do
3. The puppets, sure, whatever, both are cursed
4. Look at how the ripples in the sand change over time
We appreciate the feedback on this!
I'd love to see Adobe release this stuff, but my fear is that they begin charging extra for generations. Wouldn't surprise me either as they are pretty greedy with their stock footage after you're paying big money for the suite.
We would probably bet that there will be some kind of charge.
You don't need to extend that way - just reuse the same photo and prompt something else; the results will be better.
Good point!
Great stuff, thanks :)
My pleasure!
Thanks!! Great video
thanks for watching!
There was a guy who flew his drone through a volcano... This looks EXACTLY LIKE THAT..... It's copying his work.
There are several videos on UA-cam of people flying drones through lava. It's not copying anyone's work. It's using it as a reference, just like every other generation. CTFD.
It's certainly possible it was trained on that one video, but a generation is drawn from far more data than a single vid.
I wonder if the AI model will be a plugin in their desktop video software.
Perhaps one day!
That lava shot is actually an FPV drone pilot's footage. I remember the footage from his YouTube vlog where he flew his FPV drone into a volcano, and the lava destroyed his propellers! I wonder if he submitted his clip for AI training, or does Adobe just snatch up content creators' clips the same way Udio does for its audio generations?
It's absolutely not the drone pilot's footage.
Perhaps there was *some* training on it, but no single generation is the result of one video.
Is the video generator free for a few uses?
Minimax currently is, but better get started now before it's too late :)
Thank you :)
Our pleasure!
Dude if you dragged the ankle point to the bear's toes I can imagine how precise you were with the rest of them. No wonder the bear animation looks wonky
Good point!
The Adobe video extension, to extend a rush... does it need to be connected online?
Oh great question! We assume you need some internet access for the initial setup!
I love how this AI video tech that is literally generating scenes from text.... is talked about like "it's ok, not bad".... when 5 years ago everyone would have been losing their minds.
It all moves so fast!
Content writers become the heroes.
Certainly the best stories will rise to the top!
I'm learning a lot.
thanks for watching!
Is that Snoop Dogg eating a burger...?
Hahah could be?
Where is the link to the Chinese generator? There are tons, I can't find it, thanks
Check our description :)
very interesting episode
I want still camera shots, it seems they are always moving around no matter what I input
Locked shot, stationary shot, tripod-mounted camera, etc. None of these work for you?
That's a good tip. I'd say even with those prompts you'll still get movement in 50% of your shots unfortunately.
@@High-Tech-Geek I'll try those terms in my prompts, thanks 🙏
This whole AI video thing is really getting real. I mean, it will be too real to differentiate soon. A bit sceptical though.. if it's used for bad purposes.
Like all types of art, we encourage everyone to use it for good!
"We will see!!" ;)
We shall!
Wonder where Adobe trained those models
On their own assets.
Amazing Stuff!
Glad you think so!
It's all good, but only 8-bit, so not very useful for professionals at the moment. Sending stuff to grading is always a struggle; these people are super technical and very specific. I don't want to be the one sending them AI-generated clips 🙂
Definitely only works for certain projects !
The AI video sector is getting hot AF. I'm using like 10 different video generator websites in my workflow to make videos. It's honestly getting out of hand. I also find the California pushback on AI is due to the fact that Hollywood is there, and they don't like the idea of the common man competing for their market share.
Do you know which video generators are uncensored (violence, guns, gore, horror, blood, celebrity and politician likenesses, etc.)?
If one uploads an image of Trump, for example, to Gen3 Runway will it animate it?
@@High-Tech-Geek I’ve made some Trump image to video with kling. I think if you just upload the picture and call it “Fat orange idiot…” instead of “Donald Trump” then it won’t ID it.
Definitely makes finding a workflow difficult. However, we wouldn't be surprised if in one year from now most things are consolidated.
This looks amazing
Glad you enjoyed this!
That's the same way I drink coffee
It's the BEST way :)
A lot of people are complaining about Adobe's subscription pricing. Do you know Adobe spends around 10 billion dollars a year on development and marketing? If they did not charge what they charge, Adobe as a business would fail.
Interesting point!
Since you can now download and train your own AI model on your local machine, what effect will any global regulation have? What about a doppelgänger of an actor, a clone of a user who grew up in the same location, or a human impersonator cloning a voice with their imitation?
I'm sorry, I'm not sure I understand your question?
@@curiousrefuge Is legislation already too late, since you can download an AI model to your own machine and run it locally? If I find someone who is a natural twin of a famous artist, in voice or appearance, and use them in a video, what happens then? Copyright came about with the printing press; it isn't relevant to the modern world. I tend to have the same attitude to copyright as John Carmack. Hope that helps.
Will this just be part of the normal sub we pay? Probably not is what my gut tells me.
We'll see!
thank YOU for updating my antiquated mind.
Our pleasure!
Adobe better catch up or I'm done. They could've seen this coming for years. It's THEIR BUSINESS!
It's a very competitive landscape for sure!
Feels like a free ad for Adobe. If you're going to run comparisons with Runway, at least show us the prompt so we can make our own assessments. The "trust me bro" approach makes people wonder what you're hiding and who is paying you to hide it.
Sorry you felt this way! We'll try and be more clear next time :)
After LITERALLY stealing thousands and thousands of photos, images, and video clips from their clients in their cloud service, of course they can generate great AI videos.
We appreciate you watching!
Yeah, how people are still happy to give them money is beyond me
@@curiousrefuge You're welcome! 🙂
@NetTubeUser ... Of course an idiot with no concept of how AI works will say this. I bet you failed basic math courses in high school and photography was the only thing you could do.
And what do you think Runway, Midjourney and the rest of them did? Feeding their model training with copyrighted material? I hope this whole "industry" burns down in lawsuits very soon.
Did Adobe send you a script or talking points?
Nope! None of our videos are sponsored.
Thanks, bro
No problem
I hope it's in the Adobe cloud.
We'll see!
I hope everyone understands that Adobe is a highly advanced tech company that has been around for over 40 years....
True!
Isn't that Sora? They're just not letting on that's what it is.
bingo
which part?
@@curiousrefuge Adobe's video gen upgrade.
So now there is no value for the artist, huh?
Not at all. Artists use AI tools.
Nice! But how can I make a longer clip, like a short film, using the same characters in Minimax?
You must generate many clips and splice them together to make your story come to life!
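If you splice outside an editor, FFmpeg's concat demuxer is one way to do it - a minimal sketch, assuming the clips share the same codec, resolution, and frame rate, and the filenames are placeholders:
ffmpeg -f concat -safe 0 -i clips.txt -c copy longer_cut.mp4
where clips.txt lists each clip on its own line, in order, like: file 'shot_01.mp4'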
@@curiousrefuge Yeah, but can I keep the subject consistent across them?
I don't know why everyone goes crazy about it already; it's still in the baby stages.. videos last like 5 seconds at best and don't even have audio. You can't possibly make a movie or TV show. You're better off imagining something - at least then you can think of outcomes.
It's more about adding it as one thing to a toolset, rather than making a movie with it entirely.
Without real users, Sora's ability might as well be considered fake.
Sora definitely has users, but unfortunately access is kept pretty tight.
Is anyone still waiting for Sora?
We're enjoying all the other tools currently!
Oh, it's not out yet.
Not yet!