When it's publicly available, the first thing we are obligated to do is create Will Smith eating spaghetti.
They are trying to ban the use of celebs in their AI, which is arguably good.
@oxxal7357 but not even eating spaghetti is sad
@oxxal7357 Meme aside, I was hoping that was the case
Arguably lame @oxxal7357
@oxxal7357 is DALL-E banned from celebs? MJ allows them
Waiting for Netflix series AI-generated from text
Black Mirror next season
well, won't be that much worse...
Netflix’s proprietary AI is actually quite advanced and is used for a lot of things.
Bring on the day where they generate specific content for us, so we don’t have to scroll for days.
They're using it to write their scripts all the time
this scares me a lot, because now I can't trust any image, video, or audio file. And you can't use them as proof in courtrooms either.
Soon cameras will digitally sign every image or video they take to prove their authenticity.
That's a good point, that does seem like a necessary step to help ensure the authenticity of images and videos
@CarpenterBrother unless the camera's digital signature is compromised or the camera itself is compromised
You're one of the few YouTubers who's actually added thoughtful commentary and not just described what we can all see with our own eyes, with a few oohs and aahs mixed in
Some people think AI is a passing gimmick just like NFTs were.
Nope.
NFTs never gave my productivity a 3x bump the way this has; no other tool I used before did either
No, no crypto is web 3. Haha Lmao.
Get rugged.
Who would have guessed digital abundance would win over digital scarcity.
WHO is making these dumb predictions?
@dv_interval42 people who just look at the hype around these tech trends and nothing more
This is not AI.
For me as a filmmaker and visual artist, this is devastating - I'm really scared of the future
Why? It will make your life much easier; now you don't need all that equipment to make your content
yeah, it will just enable creative people like you to produce far more, far more quickly, and the traditional camera stuff will still be needed for capturing real events and places
@luisfernando5998 There is no need to hire visual artists like me when you can get it for free and in seconds... that's the scary thing.
Don't feel like that.
These models have censorship.
Some clients will need results that these AIs cannot make because of their policies.
I am also scared as I am currently 15 years old and I don't know if I will be able to get a good job.
What would be insane is if you could take a novel and feed it to AI, and it would generate a movie from it. I would love that!
I'm actually working on this very problem. I have the beginnings of a platform that leverages various GenAI APIs for artwork, sound effects, music, and narration. Right now, with the state of the art, it's still very labour-intensive, with countless retries to get meaningful content. With Sora, things will only get easier. I can't wait for this to be available!
In the future, you will go watch a movie in which AI replaces the lead with you, or your dad, or your GF... or whoever you want, and you'll be the main character!
That would be awesome
CliffsNotes on steroids.
here comes the Lord of the Rings remake
I feel like there will be no generational shared experiences anymore 😊 since entertainment will be highly personalized by AI
13:11 Maybe that is just what homemade mobile phone videos will look like in 2056
and giant people will exist
beat me to it! ;)
Damn Sora was playing 4D chess...
More like 2026
Reality is getting close to game over.
Great video. Really enjoyed the detailed commentary on artifacts and difficulties in animation that I would not have noticed or realized how difficult they were to do well. Thanks. @OpenAI Get this guy a demo!
16:00 that's one of the main issues I have when using AI creatively. It's easy to get AI to write a scene of a chapter, but it's really hard to iterate on that specific scene and ask for changes, because the AI will always fall back into its patterns halfway through. It's like the AI somehow loses its context and just moves along from where it started. It doesn't remind itself of what it was doing and doesn't re-check whether it is doing what it is supposed to be doing.
A bit like people with ADHD (like me), where a new impression or a new thought can send me miles away from what I was doing within a few seconds.
But just reminding myself of how "Will Smith eating spaghetti" looked a year ago, I'm pretty confident we will see a similar jump in quality over the next year.
You know why that happens? Because it's not AI.
The 2nd-gen AI models, like GPT, Gemini, DALL-E, Midjourney and so on, don't have short-term memory; they forget everything by the next instruction, just like 80s PCs.
3rd gen will be able to remember who you are and how you treat them
@celestemtz587 I didn't specify, but I was talking about GPT-4. When you send a prompt, it also gets the most recent conversation as context, and even with that context it has a hard time changing course, because it falls back into its patterns.
Your analysis helped, like, calm my mind. Something about all this has been screwing with my head: I'll watch nature footage that used to bring me joy and peace, but now my first thought is "Well, it's probably not real, probably AI-generated", even though I know it isn't. I can feel the wiring in my brain shifting to a place where I feel like I can't trust that anything on a computer or TV is really real, and that's sort of scary. I fear the day when, if there's a horrific event and someone shows me the news coverage on their phone, I'll just look at them and go "Well, we don't know if that's real, it could be fake". I really think this is going to mess with people's mental health and sense of reality.
Look man, it's hard, but you're never supposed to trust what's on TV or the internet anyway. 90% of the info out there is a lie. This AI thing will teach people to check, think, and use critical thinking. You need to calm down and get prepared for the future ahead. Trust me, your first reaction should always be "it's fake till I prove it's not" and not the other way around.
@Airbender131090 The average person is not going to think that way; they're not going to meditate on it and suddenly become a critical thinker. Humans are reactionary, emotional creatures. What you're saying it will teach people is a utopian fantasy that has a 0% chance of happening. And yeah, when we can no longer tell whether what we're hearing or seeing is real, that's a problem, a huge problem. And wanting to assume everything is fake instead of real is incredibly nihilistic; that's not a world humans can live in for very long.
The bane of all AI: misalignment.
If only we had tools that let us mess with the latent space of the generators to tweak the output in a meaningful way, instead of rolling dice and hoping to roll a nat 20 on cohesion.
Heh, maybe one more paper down the line...?
Not totally the same thing, but we're using more numerous, smaller, specialized models in a composition to try to improve, among other things, observability. At first it was a way to save on training complexity while achieving similar results, but we started realising it might also make it easier to observe and tweak what the larger composition does, because we can control how the specialized smaller models communicate with each other. The difficulty is in making the smaller models flexible enough despite their size, and it's a pretty horrible balancing act, but it shows a lot of promise.
Yeah, and after that paper they're going to say "It really is AI now", even though it won't be 😄
What a time to be alive!
once you see it you cannot unsee it: the woman in the first one switches legs midstride, similar to the treadmill guy later
It is a common failure mode for these models. The much worse google model from last year had it too.
Yeah, her left leg passes over her right leg and then becomes her right leg.
I don't see it, could you timestamp pls
3:56 for example looks like a shuffle step, but if you go to the linked page you can see it happening midstride, and also, when her right foot is in front, it switches to her left and moves backwards again (4:18). It can be noticed in pretty much all the walking videos, human or animal. As noted in a comment above, it's inherent to the current model: it does not (yet) understand anatomy, it's just very good at mimicking moving images
But the image-generating models had these problems too, and now they're gone
It's amazing how far this tech has gotten in a relatively short period of time.
and scary
Short? This has been around for many years; lol, if 20+ years is short then idk
@JEsterCW ya but these latest advancements have only been in the last 3 years
@Zeragamba Maybe there is research progress that doesn't reach clickbaity social media with a few demos and wild guesses about the future
It's not even relatively short. It's literally short. From nothing to ChatGPT and Sora in 20 months
Just wanted to say thanks for putting out such great content. It was one of the inspirations for me to start posting. You rock!
It'll be interesting to see the first completely AI-generated persona/actor that gets "famous". They could completely make up a person and put them in movies, TV shows, and commercials. Create a whole story for them to cater to specific groups of people. The next 10 years are going to be insane.
So next year we can do this with an open source model.
People already ignore verified truths and choose to believe discredited ideas; I don't imbue these kinds of things with the "terrifying" power of influence that Theo speaks of.
People have done this since forever, by the way.
So yes, some people may be influenced, but they always have been.
I really liked this video going in-depth. Really fascinating stuff. =]
Thank you so much for the right time and right place👌
An insane amount of data is required here! Wild times, people! It's wild times!
Actually the Lagos clip is accurate because in 2056 phones will be able to fly.
I am sure flying phones have already existed since 2018. All you need is a quadcopter drone big enough to hold a smartphone. Not exactly useful though, as most hobby drones already have high-resolution cameras for that purpose.
I've been bawling since I got up... I feel like my time here is over, my job's at risk now, and I'm crushed!
I agree, I'm about to go into post-secondary for 3D and now my parents want me to reevaluate my career choice.
> AI blockbuster before 2027
> Cost 1/10 of an organic one
> Never-before-seen scenes and animations
> "10/10 wtf"
> Everyone freaks out about job security
> "yeah but we had to do many edits"
> Everyone is calm again
shorts and 20s generated videos....
i hope that doesn't become a deadly combination...
I'm curious how it would handle cartoons or anime. Creating anime battles on demand would be cool.
Attack on Titan: I don't even want to make a different ending, I just want it animated in WIT's style, so AI can do it
I wonder what the price per second/frame/clip of this will be? Maybe the 7 trillion dollars is to be able to properly scale this thing.
Yeah, I was thinking about pricing last night. I've seen various podcasts now with critiques on the Sora videos, and it does have many limitations (as wonderful as it looks). Based on this, I'm thinking pricing will be reasonable.
24:57 Totally missed it lol
😂
Also with the grandma one, I think the main problem was that the candles didn't go out.
@il804 and the direction of the flames wasn't consistent.
Still waiting for Star Trek's Holodeck to be real. We're getting close to it with videos and photos, but I want to be immersed into a virtual reality holodeck and interact physically with my created prompts LMAO.
The papercraft video does not look bad. It looks amazing.
The thing that's most sketchy about it is the training data.
To get this level of quality in the output you would need massive amounts of diverse data, and when I say massive I mean petabytes.
There is no way they licensed every bit of it. Not only would that be expensive, but extremely impractical.
So I am assuming a large part of the training data was just YouTube videos. I mean, where else would you get the same video in multiple resolutions, in multiple languages, with captions, across multiple genres, and basically anything you could ask for? When you think about it, YouTube is just a massive unorganised training dataset if you have the resources.
If you had a popular video on social media, then chances are you contributed to training Sora, whether you like it or not.
I would hope there's some regulation in place that prevents these companies from just straight up stealing the data to train their models, or that someone would strong-arm OpenAI into being _open_ about their data, but with Microsoft behind them, it's just wishful thinking.
Wondering if anyone else's eyes felt weird when looking at these videos? I felt visually confused, even though what's on the screen looks very realistic.
when it becomes open source, IT IS GOING TO BE CHAOS. I am not prepared for that yet
I didn't know Theo likes Flume, that's so cool!
Cool video, I agree with almost all your points. However, I'd like to point out that it's pretty easy to do screen replacements for the TV scene, and text replacement on the car. Especially the car: the text is always in shadow, so you don't have to worry about animating shadows, and the job is almost entirely just tracking the new text onto the car. I would be surprised if that wasn't trivial to do in DaVinci Resolve or After Effects, and it's still quite possible in Blender with its new-ish corner-pin planar tracks
I can sense video AGI.
One of the worst times to be alive.
On the 1950s TV screens example: I believe it could be possible to generate a kind of UV-map-like grid on the screens and then update each part of the grid.
This would be more like inpainting, a technique we will definitely see in AI-generated videos.
The biggest issue would be how to handle depth in scenes. However, this could improve over time, similar to the advancements from GPT-3 to GPT-4.
Video is somewhat different, but I could imagine it gaining all the capabilities that Stable Diffusion has, such as inpainting, outpainting, various models and styles, motion capture/posing, mixing, etc.
It's impressive how fast and how good these things are getting.
The grandma with the cake: she was not looking at it, and she can't blow out the candles.
Great video dude.
Soon some device will be able to read our thoughts and visualize them with this kind of tech.
AI-generated anime? Personalized just for me. The future is looking good
In the same way a single person can now make an entire music album, it may be possible for a single person to generate an entire 24-minute, 12-episode series in the future.
This will be crazy
This would be revolutionary if it creates a realistic video based on a script 😘
For the AI-voiced videos and still-image videos getting millions of views, it's a gold mine for the creators. For the audience, it's a galactic pile of trash. We'd better standardize labelling these soon, or do something else
What would be really cool is if it could have a concept of layers, so, for example, in the case of the bunch of TVs, you could bring it into your editing software, OBS, etc.
I can see these uncanny videos becoming some sort of niche, a sort of vaporwave-like thing.
I already don’t like the dark side of this, although this is amazing. The propaganda that can be created with this is…
1) Short prompts usually give better and higher-quality results, in my experience.
2) The AI usually loves to add random watermarks, since most of the reference images and videos used to create the model are watermarked/signed, etc.
That’s the next production milestone - chunking for re-prompting segments
Imagine Sora using literal dream footage as an input.
I've been messing around with Runway to create videos; I cannot wait for this to get better and better
Kingdom heaaaaarts lets goooooo
i find it funny that you guys are looking at this and pointing out what looks odd (which of course it does; it's glaringly obvious), despite the fact that the overall quality of these videos is already beyond comprehension. i was stunned when i learned about Sora and i'm still trying to process it. if it didn't output anything incoherent at all, i'd be concerned OpenAI already has AGI: a model would need to fully understand the human perspective to fool us with every single generated video
finding out theo is a flume fan is such a nice surprise
i wanna input the prompt: shopping on Black Friday, with a voice.
liked the AI commercials
It might not be that hard to do post-editing on that. Someone managed to create a Gaussian splat from generated video; with that, you can do pretty much anything using Unity or the like.
ua-cam.com/video/hYsYZHhefgk/v-deo.html
i didn't even think of that, thanks for sharing
These films resemble our thoughts: they are not perfect or full of detail. You can see this perfectly when, for example, you want to draw what you are thinking and catch yourself missing details
My new rule: if I didn't see it happen right in front of me, it is fake until I prove otherwise.
18:19 what's the point of comparing this to Copilot? Of course Copilot is easier to edit; it's working with text. This is video!
it's really amazing when you think about how our minds work here.
When seeing a real video shot of real people, we say "look at her, what is she doing", as in "what the actual F is she doing". We see that person as real.
But now we have generative AI giving us a video, and we still talk about the representations of people as real people, saying "Look at granny" or "What is the girl in the background doing".
The mental and psychological side of it is fascinating
5:51 - There _are_ tools out there that will actually take a prompt and use it to edit existing video as you see fit. I heard of it recently but I can’t remember the name of it for the life of me, unfortunately.
Maybe in 2056, cell phones will levitate without the need for drones, and that's how you will shoot home videos
Bro really missed what was wrong in the treadmill video 🤣🤣
AI blockbuster in 2025 beats Titanic and Avatar adjusted for inflation
In 10-20 years, kids are going to look back at us and laugh, wondering why you would watch a movie or play a game someone else made up. You'll prompt your own movie or game and be in your own imagination
The clip at 17:00 would be a thing kids would listen to all day. Edit some Audio to it and BOOM
Man, he talked about fps forever without saying the word
im sure they will make a movie where a guy is in an AI dimension and he needs to find the coherent parts to survive, like in The Matrix, but this time with generative AI, and they will use actual generative AI mistakes to scare you, cause that's some seriously cursed stuff
26:23 people in the background are sooooo creepy.
New movie genre: "AI generated liminality".
It's so weird to watch you immediately pick it apart for every little flaw as if that was insightful in any way. You missed the forest for the trees. You're supposed to look at where we were a year ago, see this, and shit your pants, because every little (and big) flaw you found will be moot by next year.
Eventually, instead of inputting text to get an image, you will be inputting directions for a movie and getting full-fledged films, actors, directors, and scenes from a single sentence, and you could watch any movie, with any style and prompt. Wild times!
great, even more crappy movies on the horizon! truly wild times
I'm not sure if this is new, cuz Stable Diffusion has been doing video. I didn't realize this was such a revolution.
About this 'danover': simply put, they don't want to be sued or jailed; the model has that much power nonetheless. Bias check? Will they do that from both sides of the fence? ;)
It's only getting worse for the "Hollywood" guys. Actors and actresses are about to be a part of the "days of yesterday." We're getting closer and closer to the perfect robot (or demi-god).
At this point in time, at least just regulate AI so it doesn't cause too many problems.
6:05 this isn't the first apparent error. In the Tokyo one, at one point the woman suddenly switches legs (left becomes right and vice versa). Still really impressive though
Does anyone know where we can put the prompts? They talk about what's on the page and how to use Midjourney, but no one says anything about where the Sora prompts should go. Is it available?
lol, I am not exactly sure what the correct word for it is in English, but was that a low-key knock at Prime at the start?
if u mean it in like a negative way it could be a "diss" "slight" or "jab". if u mean it in a positive way it could be a "nod" or another word im forgetting
OpenAI, let @Theo try your tool.
"This is the worst its going to be"
It's over for realitycels
Well, Sora is actually capable of editing videos; look at the latest article on OpenAI's research index
It's capable of way more than people realize. And it's just the beginning
25:02 did he not notice the backwards treadmill?
The porn implications alone are staggering. This is going to ruin our sex lives.
Just a heads up, a long sentence doesn't mean it's a run-on sentence. Run-on sentences are actually barely a thing; it was more of a shorthand for teaching kids not to write long sentences
Marcel Proust approves of this comment.
thank u
I think we should be here to admire an attempt at a new generation of mind-blowing tech, not to examine and criticize it like some factory's old, already-shipped product
stock footage sales 📉
On the other hand: stock footage value as training data 📈
What's the name of the plugin he is using, the one which brings up a Google search bar anywhere?
29:17 what the hell is Theo saying? Theo is generated by AI
)))
I am curious how they are doing it??
so not available to the public?
I guess stock image and video sites are deader than dead, killed by AI trained on their own images
They can't do celebrities, but what if you are the celebrity and just think this would be easier than doing it yourself?
I really appreciate this sort of AI coverage, which avoids both the hysterical moral panic and the uncritical boosterism that you find elsewhere.
More please.
Just like everything else on the internet, this will definitely be used for an immense amount of porn.
Am I missing something? I'm using the SORA ChatGPT plugin and it just generates images, not video.
It's not open to the public yet
Making the world a worse place, one tool at a time
They are blocking adult-oriented video creation, but given the track record of exploitation and abuse of young women in the industry, they really should allow it.
How did you generate Sora's animation?
What surprises me is that DALL-E 3 doesn't seem as good as Sora.
You would think their image generation would get better first, before they were able to make video of this quality.
They're limiting its true potential.
They fear the use some people could make of it.