Wow this was pretty awesome. Great idea, dudes!
Instant pin! Casey is a GOD!
Such a great idea that other popular YouTubers are making exact replicas of your video ua-cam.com/video/-DMU_pJ0YNY/v-deo.html
@@EpicLightMedia 💯💯💯
The plot twist about the AI narration at the end left me speechless too. It's incredible to see how advanced AI technology has become. (Comment generated by ChatGPT)
Literally mind blowing. The entire thing was very well done.
@@TheRafark Yeah. I was like "this is some good narration..."
sounded like one of those skyscraper videos ngl, still kinda far from human
It was pretty obvious
@@ClaimerUncut yep
The narration voice was lowkey giving me serial killer vibes and was kinda scaring me subconsciously. The plot twist at the end made me feel much better.
A few notes from a professional compositor. Whenever adding still images as a matte painting, you need to add back moving grain that matches the original footage. This can be done by using a denoiser on the original clip and then overlaying the raw noise it captured onto the PNG. I also noticed that some of the fills look either too sharp or too soft in areas where they came up against the original clip. You could probably fix this by increasing the Photoshop file's resolution and selectively blurring/softening it after adding it next to the clip.
Your results still look really good considering you guys aren't VFX artists. It is really impressive how far AI tools have come in the last year.
I also miss some actual overlap with the generated content; it feels really stiff when they're just standing/sitting in place instead of wandering around a bit and being rotoscoped over it.
Could you give us a minute of your time and explain to us how to obtain the noise from the original material? Or recommend a video where they explain how to do that? Thank you so much.
@@pabloromano6845 Not a VFX/compositing pro either, but you could capture a clean plate, i.e. several seconds of still image from a tripod, and then extract the temporal noise by subtracting one frame from another. There might even be a matching blend mode (subtract, difference) where you would overlay the clip with itself, but shifted one frame.
If there is no clean plate available and your noise filter has no "retain only noise" option (which at least some audio noise filters have), you could denoise your clip and then use a subtract/difference blend mode to overlay the denoised clip with the original one.
That is probably not as efficient and precise as filming a gray card with the exact same camera, settings and under the exact same lighting conditions to get "pure" noise - but for sure it will be more authentic than just slapping some arbitrary film grain onto the clip.
Perhaps @LFPAnimations can tell us a couple more secrets of the trade.
Damn. Great tip!!
@@enricopalazzo2312 Denoising and then blending between a noisy and denoised clip set to “difference” or “subtract” blend modes is the industry standard way of extracting noise. You basically guessed it, which is cool ;)
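For anyone wanting to try the noise-extraction tricks above outside a compositing app, here's a rough Python/OpenCV sketch of both routes (clean-plate frame differencing and denoise-plus-difference). The file names, denoise strength, and the simple additive re-grain at the end are placeholder assumptions, not anything from the video:

```python
import cv2
import numpy as np

# Assumes the frames and the generated matte are the same resolution.
frame1 = cv2.imread("plate_frame_001.png").astype(np.float32)
frame2 = cv2.imread("plate_frame_002.png").astype(np.float32)
matte = cv2.imread("generated_fill.png").astype(np.float32)

# Route A: "clean plate" -- subtract two consecutive locked-off frames.
# Halving keeps the amplitude roughly per-frame.
temporal_noise = (frame1 - frame2) / 2.0

# Route B: denoise one frame and keep the residual (the denoise-and-difference
# approach described above). h / hColor control denoise strength.
denoised = cv2.fastNlMeansDenoisingColored(frame1.astype(np.uint8), None, 5, 5, 7, 21)
residual_noise = frame1 - denoised.astype(np.float32)

# Re-grain the still matte by adding the extracted noise back on top
# (a crude stand-in for an add/overlay blend in a compositing app).
regrained = np.clip(matte + residual_noise, 0, 255).astype(np.uint8)
cv2.imwrite("generated_fill_regrained.png", regrained)
```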
Mindblowing!! Awesome!!
Everything about this video is just incredible, but what really left me speechless is the plot twist about the AI narration at the end. AI is really becoming incredibly advanced.
Really? After like 5 seconds I was already wondering why the narrator was talking so weirdly, and by 10 seconds I knew I was listening to AI.
I don't know how this stuff still manages to fool people.
the “ha ha ha” becoming more and more sinister was a nice touch 😂
AI is still disappointing and sucky once you play with it for a while
@@PolymerJones Thats what she said
@@PolymerJones I don't see how it's disappointing, being able to give you information, sound, imagery and many many other things in a fraction of the time that a human would need. Also the narration, no I did not realize it was AI. It sounded fairly fine, and just think about how young AI actually is versus how much it already can do... Give it 3-4 years and it's gonna be tremendously smarter than we are. I mean it already is. The only thing that limits it as of right now is us as humans.
Absolutely epic stuff here, using AI for environments is such a smart move. It's like boost app social, an app I use for my social media, they're also using AI to create stunning stories and reels. Tech these days!
Been using it for a while now, great toolkit for socials!
HEY! The fact that we can't move the camera brings us back to the birth of cinema, where creativity worked exactly like that. NICEEE
I miss you guys, another of my favorite YouTube channels turned into a ghost town. Hope all is going well with your business; maybe one day you will be back, or at least you will feel inspired to give us a farewell video.
the AI narration at the end left me speechless
Love the cell service parody commercial. You guys could make a whole channel doing them. Keep up the great work.
In the first 30 seconds I thought, did they get a new narrator? Then when you appeared on screen I thought "hmmm," I knew something was up with the voice. It was too clean and unnatural.
As for the video, this is incredible. You have just changed my life. I've got all the tools to do this, and it's solved a big problem I see in the near future for an upcoming project. Just wow. I just hope all the generated assets are owned by Adobe and don't screw over independent artists.
Next step: combine this technique with Resolve Relight. We're living in the future
Resolve relight is not very good tbh
@@Visethelegendsounds like you don't know how to properly use it 😱
@@GameUnkasa sounds like you are not a real colorist 🤯
I have to try
@@Visethelegend Hello
The voice immediately stood out as AI... But those dining and room scenes were just amazing. With some good enough light, YouTubers won't need to deck out their scenes again to have lovely looking backgrounds. This is inspiring.
I think this will be a great tool for educational videos. A cheaper way to create sets without the need for green screen.
Wow! I could not believe the environments in this video were created with Photoshop & enhanced with DaVinci Resolve. When the time comes, I will try it out. Thanks Epic Light Media.
For a small creator who struggles with space, where I only have a corner of our second bedroom (which also has to double as my office corner), being able to use this tool will allow me to create a much more professional-looking setup without actually needing it.
I would probably go one step further with this, and when you're in Photoshop, remove the subject from the plate, so then you have that solid background.
Lost count of how many times my mind was blown during this video...
Great process! Creating cinematic mattes with AI is so much fun. These lighting tips really up the game.
Wow - even a twist ending! Bravo!
Love to see that someone finally made a tutorial on this - I thought I was the only one thinking about it. I directed a music video for some pretty famous artists in Germany, which released yesterday (PA Sports - Doktor) - I used the same technique combined with digital zooming and also some stock video assets to bring in a little bit of movement.
Oh wow! I’d love to see it!
Just checked it out, great work! Very curious which parts were AI and what other kinds of tools you used.
Awesome work on your video, really dig your style!
Loved the video!
I did not expect the narration to be AI...
What a time to be alive!
We can't deny it's helpful and industry-changing forever; the only flaws I could spot were the river water not running and the tree leaves with no wind movement at all in the outdoor environments. Apart from that, it's perfect! When indoors with inanimate objects, it's insane what it can do.
Thanks for bringing this video guys 👏🏼 Would love to watch a more detailed tutorial on matching the scenes in the video editing software.
The thing to keep in mind is, this technology is as bad as it will ever be. It will only improve from here. So the flaws you (rightfully!) mentioned will be gone sometime in the future. And at the rate this all is developing, they will be gone sooner than one might think.
No one’s denying it’s helpful or industry-changing. Just that it’s going to put a lot of people out of work in many industries. Last month 40,000 jobs were already lost to AI in the US. It’s already happening, and capitalism is not prepared for the mass unemployment AI is going to eventually cause. Even jobs people claim are safe, like programming, are absolutely not safe; AI is already able to do most website coding and eventually will be able to code 95% of apps and software just based on user prompts.
it can only get better from here
Waiting, in anticipation, until the next one!
Wow, this video is an absolute game-changer! 🚀 The concept of using AI environments to create amazing videos anywhere is mind-blowing. I've always struggled with finding the perfect locations or dealing with unpredictable weather conditions, but this solution seems like a dream come true.
The way AI can simulate different environments and backgrounds opens up endless possibilities for creativity. It's fascinating to see how technology is revolutionizing the filmmaking process. This video has truly inspired me to explore new avenues and push the boundaries of my own video production.
I can already imagine the convenience and flexibility this brings, especially for content creators on the go. Being able to transport yourself to various settings without leaving your studio is incredible. The ability to create professional-looking videos without the need for expensive equipment or travel is a game-changer for aspiring YouTubers like me.
I'm really excited to dive deeper into AI environments and experiment with different scenarios. The potential to create immersive storytelling experiences is tremendous. Thank you for sharing this valuable information and empowering creators like us to take our videos to the next level. Keep up the fantastic work! 👍🎥
Thanks!!!
Wow… That’s One awesome video and things to learn and try. 🎉🎉
"ha ha haa.... ha ha haaa.. hah haaaaa... " My favorite quote from this video
I never subscribe to any channel solely based on someone in the video asking me to subscribe. Upon seeing the "do not subscribe" comment in the end, I immediately subscribed :)
Oh my god, I was actually a bit stunned at the end that it was actually that good an AI voice trained on your voice. I kind of thought the whole narration was a bit "monotone" and flat in its delivery, so I was a bit like "hmmm... okay, maybe that's the tone they're aiming for," but didn't expect it to be AI (and I'm quite deep in audio editing, sound design...). Did you use Respeecher for the voice?
Elevenlabs
It seemed AI to me in the first 30 seconds.
@@wright96d yeah, immediately recognizable and incredibly offputting.
Congratulations guys. This link popped up on a mailing list I follow. Lucky for me, I had already not subscribed to the channel. You’re going to be getting a lot of new views.
Creating new AI environments using the generative fill feature in Adobe Photoshop is an incredible way to explore creativity and push boundaries. Keep up the great work!
Adding a window to your room is really pushing the boundaries of creativity
New subscriber here, so the following may make more sense. Adding the blue light and talking about it prior to showing the effects of the light may have caused a bias. It does seem like a neon light is hitting you in that direction, but that may be my bias. I did immediately notice the AI-generated narration, as I'm trying to do this with my videos. It will be interesting to see how many comments were surprised by this. I'm not sure if the tech is totally here yet, but the narration was good, but only because I saw your face many times in the video, so that told me those were not brought in from stock footage. Great video! Can't wait to see some more videos like this!
The shot using the overpass looks kinda kludgey to me and might not “feel” right to some viewers, but the rest seems quite plausible. A real boost to lower budget commercials and the like. Nice work! And thank you for sharing. I have a new project started where this may solve some budget problems.
You are right, some of the AI stuff does look "kludgey", but yeah, it's still pretty impressive at the beta stage. I'm excited about using these tools even for concept or storyboarding stages. Also, Adobe's generative fill seems to do better with real world objects/spaces vs "apocalyptic destroyed cities".
Best generative AI tutorial yet
Damn it's getting crazy. Any kid with a camera will be able to produce his own film with this tech. So many new possibilities now.
And then there's the script and the acting, but sure.
You also missed the part where it took them hours to fix the coloring and exposure to make it look real and that's if they know what they are doing...would take an amateur days and they'd never get it this close, and that's just for one shot lol
Since ppl can't read... Using words like "getting" or "will be able" is speaking about things to come... It's only gonna evolve and become easier......
@@GameUnkasa lol he said the words "now" and "with this tech", meaning they will be able to without any more technological advancements, but at least you can read the first half 👍 But honestly he isn't wrong; it just needs to be noted that people shouldn't give up on art just cuz any kid can make art now. The point is that, yes, it's easier to create certain shots now and it will be even easier in the future, but there are still very difficult steps to creating an entire film that will stay very difficult probably forever.
Kids with phones have been able to make movies for several years now. 99.99% of it is pure shit. The ones who persist will get better if they learn from their mistakes and build on their successes.
That ending was perfect. It made my editor brain so happy I had to rewatch it three times.
The AI voice was the biggest shock tbh... Hardly noticed that bit at all.
The rest of it is convincing enough for YouTube. More than anything it will allow smaller budgets to have nice studio looks.
If you play into it a bit, it could be a good move for some.
I think it is a great tool, and the question is whether you film it on location, or whether the project is such that this is perfectly workable. Filming on location allows for more dynamics. And like many many many post tools, this could allow you to do a quick pickup shot, or save a shot when budget or time does not allow you to reshoot or do a pickup.
That's it. I am going to live in a cabin in the woods and write manifestos on a mechanical typewriter.
Now indie movies will upgrade to another level. Cheers.
Where are you? It has been months since you posted a video.
That last "hahaha" was sinister sounding lol
Cool concept. Seems to have the best results mimicking an indoor "set". With an inoffensive background it's a great way to extend a "greenscreen" set for talking-head videos. Still could be challenging to make sure the real-world lighting stays motivated by a fake environment.
It feels a bit uncanny for the outdoor scenes. The outdoor lighting/shadows were a bit off, and lack of background movement (wind/running water) felt unnatural.
Totally agree! The generative fill is much better at "real environments" vs the apocalyptic world we tested in this video. One thing that really did surprise us about the generative fill is that most of the time it really tried to keep lighting from the correct direction, color, and hardness. Not perfect, but we were consistently impressed by how it referenced the image lighting as a whole before generating the new sections.
@@jamesepiclight I noticed that as well but only after color correction and other post effects were applied. At first placement the elements have a paste-in feel, as you don't notice the basic continuity of light values. Color correction (continuity) makes so much difference in image editing in general relative to perceived dimensionality, etc.
OMG I just CANNOT believe this content exists! What a great job! You have a new subscriber here for sure. Thanks so much for sharing it!
At 4:47, on the left edge, you can see a little bit of the border between "real" and "fake", but then you added a vertical frame feature in the background which justified that vertical line quite nicely. Human art aided by AI, this is the future!
The end is epic!
I'd love a tutorial on how to incorporate these assets into Blender and track a 3D camera for parallax. As much as I love this and am trying to figure out my own workflow, it looks like a green screen at best. There's something about it that my brain just won't believe. Maybe adding some kind of extra moving noise to the generated images would help among other things. It just looks like a well keyed green screen ... and that is not exactly convincing. Maybe some handheld movement - but in After Effects for parallax.
I wonder if you could cheat at least some motion by using a normal map of the image generated in Resolve Relight? Really want to dive into this...
@@davidbroughton5237all of that will require substantial power to even preview I suppose. But I was thinking the same thing haha!
@@davidbroughton5237 You need a depth map, but you can't push that too far before the effect will break down
That’s not that difficult if you have basic 3D skills, just enough to create simple geo. Search for camera projection tutorials and also for camera tracking and have fun!
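Not the Blender camera-projection route suggested above, but here's a rough Python/OpenCV sketch of the depth-map parallax idea floated in this thread. The file names, drift amount, and frame count are made-up assumptions, and as noted, pushing the offset too far breaks the illusion:

```python
import cv2
import numpy as np

# Assumes a grayscale depth map the same size as the image, brighter = closer.
image = cv2.imread("ai_environment.png").astype(np.float32)
depth = cv2.imread("depth_map.png", cv2.IMREAD_GRAYSCALE).astype(np.float32) / 255.0
h, w = depth.shape

xs, ys = np.meshgrid(np.arange(w, dtype=np.float32), np.arange(h, dtype=np.float32))

frames = []
for t in range(48):  # two seconds at 24 fps
    # Virtual camera drifts sideways; closer pixels get a larger offset.
    shift = np.sin(2 * np.pi * t / 48) * 8.0  # up to 8 px of drift
    map_x = xs + shift * depth
    frame = cv2.remap(image, map_x, ys, interpolation=cv2.INTER_LINEAR,
                      borderMode=cv2.BORDER_REPLICATE)
    frames.append(frame.astype(np.uint8))

# frames[] can then be written out as an image sequence or with cv2.VideoWriter.
```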
Epic Light Media 🔥🔥🔥 🚀
Awesome video! If you didn't tell me this was made by AI, I wouldn't have known or been looking for imperfections like the water not moving, but this looks like an easy process for even the most basic creator. Especially if you're in a tight space.
It looks amazing. Totally realistic.
Awesome stuff! you have left no doubt in my mind that I will be buying a green screen in the near future :) Still, for professional stuff I can easily see the value in this..
Am I the only one who could immediately recognize that the voice was AI? There's something about the inflections that just aren't 100%
Really cool video! I can't wait to play around with the tools in Photoshop!
Got me as well. It's interesting how some people notice instantly while others don't
@@moomoocowsly Interestingly, I'm pretty much the same. But I follow all this stuff closely as well. ChatGPT is the easiest of them all to spot though; it basically uses the same templates for nearly everything even if you request different styles, etc.
Hey, is everything OK? I miss your videos on this channel.
Not all AI tools are bad. This is a great tool for, say, a film student who is making a short film or bottle film, so they can create a larger environment to make their picture seem like it had a large budget.
This generative fill definitely opens up possibilities. I'm surprised to see how the AI-generated content resolves in a 12K timeline. My understanding is that it produces content at 1024 resolution, so I guess the trick is to do a basic back plate and then regenerate smaller chunks of it to up the resolution.
For any AI programs that produce low resolution output, you can then feed that into an image scaling AI to get it up to 4K or 8K or whatever. But a lot of the programs do let you increase the render resolution if you poke around.
@@RavenMobile Yes, lots of upscalers out there designed for various types of images (photo, anime, digital art etc). But it's tricky when patching up a higher res image, knowing how big of a selection you should make so it fills at the highest resolution available and also matches the rest of the image.
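If anyone wants to plan that chunk-by-chunk patching, here's a tiny Python sketch that just computes overlapping selection boxes near an assumed 1024 px native generation size; the 1024 figure comes from the comment above, not from any confirmed Adobe spec, and the overlap amount is arbitrary:

```python
def tile_selections(plate_w, plate_h, native=1024, overlap=128):
    """Return (x, y, w, h) selection boxes that cover the plate with tiles close
    to the assumed native generation size, overlapping so seams can be blended."""
    step = native - overlap
    boxes = []
    for y in range(0, max(plate_h - overlap, 1), step):
        for x in range(0, max(plate_w - overlap, 1), step):
            boxes.append((x, y, min(native, plate_w - x), min(native, plate_h - y)))
    return boxes

# A UHD-ish plate works out to roughly a 5 x 3 grid of fills.
for box in tile_selections(3840, 2160):
    print(box)
```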
Miss your great content guys!!!
Where are you guys?? We miss your videos!!
We decided to stop YouTube to do a feature film. Maybe we will start up in a year or 2
@@thomasmanning9111 oh that's so sad
I subscribed immediately after seeing this video. I've been frustrated with the visual quality of my videos; this should be a game changer. I hope you follow up on this theme, giving all the details on how to make this happen and refining the process... I think everybody wants to know more.
The reason you don't put a light close to a subject is because when they move it falls off too quickly. Falloff is fastest close to a light source.
I know I know
@@EpicLightMedia, would this also work with DaVinci Resolve? I just started using this editing software and I'm trying to keep it simple...
@@moviedorkproductions9465 The video editing part was demonstrated in DaVinci Resolve, but the AI part has to be done in Photoshop, as there is no generative fill in Resolve.
And the reason you put a light close to a subject is for softer lighting. So it's a matter of preference.
@@vbrooklyn It always looks a bit unnatural though - yes it's softer but it's also harsher because of overly quick falloff.
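To put rough numbers on the falloff point in this thread: illuminance from a (roughly) point source follows the inverse-square law, so a close light changes brightness much faster when the subject drifts. The 1 m / 3 m distances and the 0.5 m drift below are made up purely for illustration:

```latex
E(d) \propto \frac{1}{d^{2}}
\qquad
\frac{E(1.5\,\mathrm{m})}{E(1.0\,\mathrm{m})} = \left(\tfrac{1.0}{1.5}\right)^{2} \approx 0.44
\quad\text{(over a stop darker)}
\qquad
\frac{E(3.5\,\mathrm{m})}{E(3.0\,\mathrm{m})} = \left(\tfrac{3.0}{3.5}\right)^{2} \approx 0.73
\quad\text{(under half a stop)}
```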
I think the room lighting thing was absolutely amazing. I did notice that the voice-over was AI. The only thing in the commercial footage I noticed was that the generated cement for the road before and beyond the actor was clearly flatter and faker than the actual concrete he was standing on.
Wow, really impressive. I don't know if I'm in the minority here, but I would have really liked to have seen a more granular and descriptive segment on the actual creation and editing of the import and post-import. You either skipped, or zoomed past, a lot of small details that a newbie like me would really like to understand.
I commend you for explaining this in a way we can fully understand the process, and it wasn't overly long 👏🏾👏🏾👏🏾👏🏾👏🏾
Hey guys, where have u been
First of all, if I could subscribe 1,000 times I would, so go ahead and hate me for that. This channel seriously upgraded my videos, basically within a week. I watched all your videos. If I could work with you I would. You guys are awesome.
Hey thanks so much!!!!! We haven’t made a new video in a while… if you think of any new video ideas let me know
@@EpicLightMedia Yes! You guys really got me thinking about affordable options, for example, a flag/scrim kit. Maybe you could make a “homemade” DIY version of the Modern Scrim Set?
Dope! So realistic 🔥 🔥🔥🔥🔥
This is a great idea. I recently started to write my own web series for YouTube. I was worried about how I was going to do things, but this is a great idea. Thank you so much. This opened so many possibilities for me. I can do it :)
Thanks for the creepy ending. 😆 Great tutorial; it definitely can be useful for vids that don't need to move the camera -- I could take my tiny space and turn it into a mansion! Or at least a larger studio to work with.
Or you can take the 50k you'd spend on this gear and actually get a bigger space.....
the trailing 'ha ha' had me loling. Great work!
Love it. Going to try this. It will be very beneficial for people in third-world countries who want to create good high-quality videos.
This is a creative way to use them. I love the idea. People are becoming more creative using these AI tools.
The voice got me; I thought it had no emotion, but that happens when reading a script. It's definitely better than so many text-to-voice readers. What was used to do the voice? The videos, from the get-go my brain did not accept them. I thought it was full green screen; the shadows were not right, too sharp, just wrong. One of those things where your brain's like "there's a problem here," and from the get-go you're looking at every little thing for some unknown reason because something feels off. Overall, fantastic.
Very creative guys. Let the record show, Epic Light Media was the first to highlight this feature and all other videos were shameless copycats. The amount of copycat videos this week is honestly disturbing.
I will not name names. But it doesn’t matter. This is the OG. All others are copycats.
haha, i clicked on this video on a whim, and that post apocalyptic cell service commercial caught me waaaaayyy off guard....i literally laughed out loud. nicely done.
Hey thanks!!!!! I’m glad you liked that part. Not many people have mentioned it
How cool is that. I'm going to integrate this into my workflow, ASAP.
We have a small filming studio and such a video is really helpful as I wondered if I can incorporate Photoshop to enhance real footage. Thank you !!
This is something that will save a lot of time for my corporate interviews. My question is, when do you disclose that you used AI in a project? Do you disclose when you've tweaked a lamp, couch, window, etc.? I know that Adobe's current iteration has some interesting terms of use as well. I used AI to spruce up a boring black background in a recent project and will definitely use the method again.
@@moomoocowsly good point. I hadn't considered that we don't disclose CGI (or even simple compositing) that alter the images we see. But is AI enough of a departure from a more hands-on approach to the art we create, that it warrants a disclosure?
@@brentwpowell YOU NEED TO DISCLOSE AI, it is in every AI software TOS. Don't know how you would pose that to your boss though; it will probably take a few years and then it will become mainstream.
@@poti732 I'm under the inclination that you should disclose, and have on most of my outputs, but not all software requires disclosure. The generations shown in this video were via Adobe Firefly, which is currently in non-commercial beta, so the disclosure is necessary for fair-use commentary. I've come across other implementations that don't require disclosure in their TOS. Many bosses will love this stuff, faster turnaround and higher concept productions, even if they have to disclose.
@@moomoocowsly using AI is different from photoshopping or using CGI since those two still involve the human touch, whereas the AI is generating the whole content for you; it's as if you were commissioning/outsourcing it to someone else. If you were to heavily alter the AI-generated content or simply use it as a reference, however, then a case could be made that you no longer need to disclose it.
What an amazing video!! Congrats!!
Man i was legit just thinking about how to use generative fill in videos. Of course, I wasn't the only one with those thoughts going through my head. Cool ideas!
Great video. I don't have the skill set you folks do, but I have a plain back wall and have played a little with generative fill, and your video has shown me how.
This was a really great video, gave me so many ideas. keep it going, guys...
Freakin' brilliant!
I've been using PS AI for a while now but this is insane!!! I didn't even think of this potential! Thank you!!
Well, that's amazing. I've seen two videos that did the same thing, but you're the OG who did it first!
Great work guys. I still remember Photoshop without layers... hahaha. We are moving into an exponential world with AI.
Very nice video, straightforward and no complications. I really like this stuff. Thanks!
This is the first time I am seeing your video. Great work. Keep it up.
This was an amazing display of so many technological advances. Thank you for making it easy to understand and replicate!
Hey thanks so much!!
Friggin hell... You just kick-started my brain! Thanks so much and this will open up a lot of opportunities for creatives.
Incredible! It's a game-changer for cheap wide shots.
Very Close to Reality. Great work!
This is so epic! The possibilities are endless!
the last part got me good
Love the transparency disclaimer, way to go!
Inside the room looked really good. Consider me sold.
Great tips! What software did you use for the narration? Thank you!
I used VND-CPL filters from Kase for my movie. They're a great help when we shoot outdoors.
Would've loved to see your Degrain and Regrain methods for this process
Really interesting use of generative fill and insight into the use of AI. Thanks for sharing this.
wow that's amazing!! sooo well done!!
HEY, you can add movement to those images and simulate slow zoom-ins and -outs. Just take the image and slice it into layers.
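A quick Python/OpenCV sketch of that sliced-layer slow push-in, assuming a background plate plus a foreground layer exported with an alpha channel (file names, frame count, and zoom rates are all made up):

```python
import cv2
import numpy as np

def scale_center(img, s):
    """Scale an image about its center, keeping the original frame size."""
    h, w = img.shape[:2]
    M = cv2.getRotationMatrix2D((w / 2, h / 2), 0, s)
    return cv2.warpAffine(img, M, (w, h), flags=cv2.INTER_LINEAR,
                          borderMode=cv2.BORDER_REPLICATE)

bg = cv2.imread("background_layer.png")
fg = cv2.imread("foreground_layer.png", cv2.IMREAD_UNCHANGED)  # BGRA with alpha

frames = []
for t in range(120):  # five seconds at 24 fps
    zoom = 1.0 + 0.04 * t / 119                        # background eases from 1.00x to 1.04x
    bg_t = scale_center(bg, zoom)
    fg_t = scale_center(fg, 1.0 + 1.5 * (zoom - 1.0))  # foreground zooms slightly faster
    alpha = fg_t[:, :, 3:4].astype(np.float32) / 255.0
    comp = bg_t.astype(np.float32) * (1 - alpha) + fg_t[:, :, :3].astype(np.float32) * alpha
    frames.append(comp.astype(np.uint8))

# frames[] can then be exported as an image sequence for the edit.
```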