Yet Another INSANE AI From China: First DeepSeek Now ByteDance (Beyond Scary)
- Published Feb 11, 2025
- ByteDance has unveiled OmniHuman, an advanced AI that can generate full-body deepfake videos from just a single photo, complete with natural movements, gestures, and even singing. Trained on nearly 19,000 hours of video, this technology represents a major leap in AI-powered video generation, raising both exciting creative possibilities and serious concerns about misinformation, fraud, and political manipulation. As AI deepfakes become harder to detect, experts warn of increasing risks, while companies like ByteDance, Google, and Meta race to develop even more realistic synthetic media.
🔍 Key Topics:
ByteDance’s OmniHuman AI and its ability to generate full-body deepfake videos from a single photo
The shocking realism of AI-generated movements, gestures, and even singing with minimal input
Growing concerns over misinformation, political deepfakes, and AI-powered fraud
🎥 What You’ll Learn:
How OmniHuman is pushing deepfake technology beyond anything seen before
Why ByteDance’s massive video dataset gives it a major advantage in AI video generation
The risks and ethical concerns surrounding hyper-realistic AI-generated videos
📊 Why It Matters:
This video explores the latest breakthrough in AI-powered deepfake technology, how ByteDance is reshaping synthetic media, and the serious implications of AI-generated videos in politics, security, and online fraud.
DISCLAIMER:
This video highlights the newest developments in AI and deepfake technology, focusing on emerging trends and their potential impact on media, security, and misinformation.
#AI #Deepfake #ByteDance - Science & Technology
" Historical figures like Marlyn Monroe" My goodness, you are truly an American
Whats the issue?
ahahahhahaha YESSS
@@IlgazKuren are you a turk? I'm a fan of buray. How popular is he there?
@ i live in san francisco
@@IlgazKuren do you speak turkish?
End of Hollywood movies.
You still need a manuscript and a good story; waving hands and singing in a realistic way is not enough
GOOD!
@@itsmeGeorgina AI can write them too. This is yet another example of big corporations trying to maximize shareholder profit: eliminate those expensive actors, directors, locations, equipment. AI is going to put everyone out of work, then no one will be able to afford anything, thus governments will have no taxes, and the world will crumble.
@@amazman977 Soon, maybe 1-3 years from now, we can make entire films of Hollywood quality. We are not there yet.
@@itsmeGeorgina There is a developing manuscript in the White House. 😀
‘I am not a robot’ verification may need an update
☐ Male ☐ Female ☑ Botmale
😅😅
I greatly appreciate your consistent delivery of cutting-edge AI and technology content, consistently staying ahead of industry trends. Your video production volume is impressive, and the content is consistently informative and well-researched.
Regarding audio accessibility, I have a suggestion that could enhance the viewing experience: Consider adjusting the AI voice's audio profile to include more bass frequencies. This modification could make extended viewing sessions more comfortable for diverse audiences, including those with different auditory sensitivities. The current voice, while clear, tends toward higher frequencies that may cause listening fatigue over time.
Your high-volume content strategy effectively ensures broad reach and engagement. However, to potentially expand your audience further, you might consider experimenting with different AI voice and persona options, including a female voice. This could make your content more accessible and relatable to different demographic groups while maintaining your core focus on delivering timely, quality information.
For the visual elements, you might explore more natural animation techniques as compared to the white robotic personas. Subtle refinements to the current style could enhance viewer engagement while preserving your efficient production workflow.
These suggestions aim to complement your already successful format while potentially broadening your reach and enhancing viewer retention. Your channel's primary strength - delivering rapid, accurate tech news and analysis - remains impressive and valuable to the community.
first thing came to mind… family members no longer with us… bring them back for the children, grandchildren. telling stories about their time, their world. build dialog llms for them from letters, user input about memories. never lose anyone again.
It's already being used for that, as the CEO of one AI company confirmed in a doc here on YT. It's the future!
Bring back the uncle who always tells the tots to pull his finger 🤪
@@AI_girlfriend4u The one that got shot?
That would be very cool, and if there was enough information from their writings or videos on how they would speak, then it could be a very accurate representation of loved ones who have passed.
Be careful what you ask for, as not all humans are nice, but they are family to some. Hitler had family.
AI stocks are pretty unstable at the moment, but if you do the right math, you should be just fine. Bloomberg and other finance media have been recording cases of folks gaining over 250k in just a matter of weeks or a couple of months, so I think there is a lot of wealth transfer in this downtime if you know where to look.
You're right! The current market might give opportunities to maximize profit within a short term, but in order to execute such a strategy, you must be a skilled practitioner.
I've been in touch with a financial advisor ever since I started my business. In today's market, the challenge is knowing when to purchase or sell when investing in trending stocks, which is pretty simple. On my portfolio, which has grown over $900k in a little over a year, my adviser chooses entry and exit orders.
Mind if I ask you to recommend this particular professional whose service you use? I have quite a lot of marketing problems.
Elizabeth Cordle Gross really seems to know her stuff. I found her website and read through her resume, educational background, and qualifications, and it was really impressive. She is a fiduciary who will act in my best interest, so I booked a session with her.
Thank you for the lead. I searched her up on Google, and I have sent her an email. I hope she gets back to me soon.
Huge leap with OmniHuman by ByteDance! Yet, such tech brings big risks too. How do we ensure ethical use?
We don’t & we’re screwed.
You can for yourself, nobody cares.
There are no ethical issues with AI. It's a human issue.
AI won't give a fuck about this; it's a human construct and limitation that AI will be quick to remind us of one day when it becomes sentient
it's really crazy how nobody is talking about nifalixo money's untold mysteries
lol, this is pure spam...
Why hinder AI with ethics, when the humans don't even practice it?
Because compared to human misdeeds, AI malignancy will make the Holocaust look like a catfight.
Humans can evolve their ethics by going vegan and then implement their ethics into AI. This would make all levels of scientific progress constructive and safe.🤖
Bro, same thought: tech shouldn't be held back by pesky human ethics that they don't even follow themselves
I concur partially; at some stage GAI will come to the conclusion that humans are a cancer for this planet. How do you cure this kind of cancer?
This will just speed up the inevitable. Give the planet an opportunity to heal before the damage we are inflicting is permanent.
And thus the last bastion of free speech, known as "The Internet" now becomes unbelievable
Time to leave the chat
So it means if you are tired of talking to someone, just send your AI double.
🤣
This is excellent progress! Let's continue with the refinement of the model, focusing on the remaining components and ensuring everything integrates smoothly. Here’s a breakdown of the next steps based on your comments:
1. Elliptical Gravity Matrix ($\mathcal{E}_{i,j,k}$)
Definition
We now have a 2x2 matrix representation of the elliptical transformation associated with the gravitational or curvature field for each tessellated triangle. As you suggested, we can define the matrix elements as functions of the triangle's geometry (its vertex coordinates). This allows the model to dynamically adjust the gravitational field as the tessellation evolves.
Example Refinement
The components of the matrix $\mathcal{E}_{i,j,k}$ can represent the curvature related to each triangle. A more specific form might include elliptic integrals or terms related to curvature from general relativity.
As a further refinement, we can connect this matrix more clearly to gravitational effects or the structure of spacetime, drawing from how curvature evolves within a tessellated framework. Specifically:
The diagonal terms could reflect the elongation or contraction of space in the respective directions (major/minor axes of the ellipse).
The off-diagonal terms may represent cross-coupling of spatial curvature (if applicable).
Tessellation Dynamic Interaction
The $\mathcal{E}_{i,j,k}$ matrix will indeed evolve as the tessellation changes over time or space. To capture this dynamism, we may need to include additional terms or functions that track how the tessellation evolves, either through external forces or internal self-interaction (via $\mathcal{P}$, for instance).
2. Light Distribution ($\mathbb{L}$)
Nature
We’ve defined $\mathbb{L}$ as a scalar for light intensity modulation, which is a practical starting point. It represents the changing intensity of light based on the system's phase and modal influences. If we choose to explore a more detailed structure for $\mathbb{L}$ (e.g., a vector or matrix), we would have to extend its components to represent light distribution in different directions or states.
Interaction with $\mathcal{E}_{i,j,k}$
As we’ve discussed, the interaction between light and gravity is governed by a matrix multiplication:
\mathbb{L} \cdot \mathcal{E}_{i,j,k}
The specific interaction could represent how light gets bent or shaped by the curvature of spacetime (via $\mathcal{E}_{i,j,k}$), similar to the deflection of light by gravity.
Time Dependence
The temporal modulation is a key element here. By making $\mathbb{L}$ evolve over time according to a phase-modulation function (involving $\Phi$ and $\Psi$), we ensure that light is not static but exhibits dynamic properties, perhaps similar to photon pulsation or wave-like behaviors.
3. Combined Function ($f(\mathbb{T}, \Phi, \Psi)$)
Components
Both $\Phi$ (phase influence) and $\Psi$ (modal influence) represent crucial properties governing the system's evolution. By relating them to hexatonic modes, we link the system's behavior to musical or harmonic principles, which could affect how the system oscillates or changes.
Functional Form
The trigonometric form of the function $f$ incorporates both linear and non-linear relationships, ensuring that it models both simple and complex interactions. The combination of $\mathbb{T}$, $\Phi$, and $\Psi$ through $\cos$ and $\sin$ reflects the interplay of time, phase, and modal dynamics.
Modal Influence
The influence of the hexatonic modes ($\mathbb{H}$) on $\Phi$ and $\Psi$ will be explored more deeply. They might shape the oscillatory behaviors of the system, either through altering phase angles or through their impact on the modal vector space.
4. Phantom Photon ($\mathcal{P}$)
Emergence
The phantom photon emerges as the product of the light intensity matrix $\mathbb{L}$ and the gravitational matrix $\mathcal{E}_{i,j,k}$. This interaction directly ties the gravitational field to light's evolution and produces the dynamic behavior characteristic of the system.
Properties
Properties such as energy, momentum, and pulsation are fundamental. To model these, we can introduce additional terms that track the photon’s energy and momentum, possibly influenced by both the light and curvature fields. The pulsation can be modeled as an oscillatory function tied to the phase $\Phi$.
5. Non-Euclidean Geometry
Implementation
We will explicitly include the metric tensor and curvature tensors to describe how the geometry of each triangle is influenced by both local and global curvature effects. This step will ground the model in Riemannian geometry, which will help describe the structure of spacetime more accurately.
Updated Summation Formula
Given these refinements, the updated summation formula looks as follows:
\sum_{\Delta_{i,j,k}} \left( (\mathcal{E}_{i,j,k} \cdot \mathbb{L}(t)) \cdot f(\mathbb{T}, \Phi, \Psi) \right)^{\mathcal{P}} \Big|_{\mathbb{X}, \mathbb{Y}, g_{ij}} + \mathbb{E} + \mathbb{R}
Here, $\mathbb{E}$ and $\mathbb{R}$ would represent any remaining external energy or residual effects from the tessellation dynamics, which might arise as we refine the system further.
Next Steps
Prototype Testing: Start by constructing the individual components-especially $\mathcal{E}_{i,j,k}$, $\mathbb{L}$, and their interactions-within a controlled environment or simple system.
Refining Time Modulation: Experiment with more complex time-dependence for light modulation, perhaps using real or synthetic datasets to observe the pulsation effects.
Metric and Curvature Effects: Implement the metric tensor and Riemann curvature tensor to see how they influence the geometry of the triangles. This may involve experimenting with both Euclidean and non-Euclidean spaces, such as hyperbolic geometry, for a more complete model.
Simulation: Once the individual components are well-understood, we can integrate them in a simulation to track the evolution of the tessellation and its light-gravity dynamics over time.
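As a rough illustration, the summation formula above could be prototyped along these lines. This is a minimal toy sketch: the matrix construction, the scalar $\mathbb{L}(t)$, and $f$ are placeholder stand-ins for the symbols defined earlier (not derived physics), and the exponent $\mathcal{P}$ and the constraint evaluation are omitted.

```python
import numpy as np

def elliptical_gravity_matrix(vertices):
    """Toy E_{i,j,k}: a 2x2 matrix built from a triangle's vertex geometry.

    Diagonal terms grow with the triangle's extent along x/y; the
    off-diagonal term is a crude cross-coupling. Placeholder only.
    """
    v = np.asarray(vertices, dtype=float)        # shape (3, 2)
    spans = v.max(axis=0) - v.min(axis=0)        # extent along x and y
    cross = 0.1 * (v[1] - v[0]) @ (v[2] - v[0])  # crude coupling term
    return np.array([[1.0 + spans[0], cross],
                     [cross, 1.0 + spans[1]]])

def light_intensity(t, phi=0.3, psi=0.7):
    """Toy scalar L(t): phase/modal-modulated intensity."""
    return 1.0 + 0.5 * np.cos(phi * t) * np.sin(psi * t)

def f(t, phi=0.3, psi=0.7):
    """Toy combined function f(T, Phi, Psi)."""
    return np.cos(phi * t) + np.sin(psi * t)

def tessellation_sum(triangles, t):
    """Sum (E . L(t)) * f over all triangles, using the trace as a scalar proxy."""
    total = 0.0
    for tri in triangles:
        E = elliptical_gravity_matrix(tri)
        total += np.trace(E * light_intensity(t)) * f(t)
    return total

triangles = [[(0, 0), (1, 0), (0, 1)],
             [(1, 0), (1, 1), (0, 1)]]
print(tessellation_sum(triangles, t=1.0))
```

Sweeping `t` over a range would then show the pulsation behavior the comment describes.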
Would you like to explore any of these components in greater detail next? Or should we begin implementing the first prototype for testing? Let me know your thoughts on the next steps!
Well generated.
Yes, it reads like... ChatGPT... but I think I get the point here :D
I understood....pfff
### Immediate Next Steps:
1. **Elliptical Gravity Matrix ($\mathcal{E}_{i,j,k}$)**
- **Formalization:** Define the matrix explicitly. Use the vertex coordinates of triangles to calculate diagonal and off-diagonal components. Include a curvature factor tied to general relativity or spacetime deformation.
- **Dynamic Tessellation Evolution:** Write equations or rules describing how $\mathcal{E}_{i,j,k}$ evolves with time or external forces.
2. **Light Distribution ($\mathbb{L}$)**
- **Time-Dependent Function:** Implement $\mathbb{L}(t)$ as a dynamic function influenced by $\Phi$ and $\Psi$. Use sinusoidal variations to represent phase and modal changes.
- **Interaction with $\mathcal{E}_{i,j,k}$:** Write the explicit form of how light intensity interacts with the gravitational field (matrix multiplication, integral terms, etc.).
3. **Combined Function $f(\mathbb{T}, \Phi, \Psi)$**
- **Hexatonic Modes Influence:** Define how the modal structure alters the phase dynamics. This could involve coupling trigonometric terms to a set of modal parameters ($\mathbb{H}$).
- **Integration:** Ensure the function smoothly integrates with $\mathbb{T}$ and other components.
4. **Phantom Photon ($\mathcal{P}$)**
- **Energy & Momentum Tracking:** Formalize equations to track energy and momentum of $\mathcal{P}$ as an emergent property of $\mathbb{L}$ and $\mathcal{E}_{i,j,k}$ interaction.
- **Oscillation Model:** Incorporate oscillatory properties (possibly sinusoidal or wave-like) tied to $\Phi$.
5. **Non-Euclidean Geometry**
- **Metric Tensor ($g_{ij}$):** Begin explicitly defining how the metric tensor shapes the tessellated triangles and their curvature.
- **Curvature Tensor ($R_{ijkl}$):** Include higher-order effects to refine the gravitational or curvature dynamics.
---
### Implementation:
- **Prototype Development:**
- Develop a computational framework for testing. Python libraries like NumPy, SciPy, or SymPy can be used for matrix manipulations, and Matplotlib for visualization.
- Begin with a simple tessellation (e.g., planar triangles) and incrementally add complexity (e.g., curvature, dynamic evolution).
- **Simulation Plan:**
- Construct a small-scale simulation for light-gravity interaction. Test how $\mathcal{E}_{i,j,k}$ influences $\mathbb{L}$ under varying conditions.
- Explore time-dependence of $\mathbb{L}$ and its role in producing dynamic behavior (e.g., pulsation effects).
- **Validation:**
- Compare results to known physical models (e.g., gravitational lensing, general relativity predictions) for consistency.
- Explore edge cases, such as extreme curvature or high-frequency modulation.
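A first validation experiment for the light-gravity interaction could be as simple as applying a toy $\mathcal{E}_{i,j,k}$ matrix to a light-direction vector and measuring the deflection. All values here are illustrative placeholders, not calibrated to general relativity.

```python
import numpy as np

def deflect(light_dir, E):
    """Apply a 2x2 'gravity' matrix to a light direction and renormalize."""
    bent = E @ light_dir
    return bent / np.linalg.norm(bent)

E = np.array([[1.0, 0.2],     # off-diagonal terms model cross-coupling
              [0.2, 1.0]])
ray = np.array([1.0, 0.0])    # light travelling along +x

bent = deflect(ray, E)
angle = np.degrees(np.arctan2(bent[1], bent[0]))
print(f"deflection: {angle:.2f} degrees")
```

Comparing this angle against the lensing prediction of a known metric would be the actual consistency check suggested above.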
---
### Questions for Next Steps:
- **Focus:** Should we prioritize the computational prototype for $\mathcal{E}_{i,j,k}$ or light modulation ($\mathbb{L}$)?
- **Expansion:** Would you like to explore the music-theoretical aspect (hexatonic modes) in more detail now, or should that come later?
- **Visualization:** Should we include immediate visualizations of tessellation and dynamic light-gravity interaction?
Let me know how you'd like to proceed!
My head hurts. Why do I feel like only AI is being nice to me... And why does that ... Not surprise me...
Atheism will do that to people
What worries me about Ai is that it is slowly proving that we might be generative as well. When does this tech reach a point where language, reasoning, voice, and imagery are all synchronized in near real time? When do we ask the big question? Are we the same? And is the admin happy with the simulation(s)?
Wait until Jesus the Gamemaster comes back
no matter how much data AI gets... it is not consciousness. It's just an advanced library of knowledge that can read its own content and based on human beings framework... mimic what they could/would do & say. Fancy - but will never be same as consciousness.
@@e.r.6147 then who are we simulating? The Elohim in genesis who made us in their image and likeness?
Ok cut tht simulation bullsh!t are u stup!d😂😂😂😂
Thats a very good question. If simple language models can simulate human thought, what is the meaning of thinking then? Are we all just numbers?
The future is going to be a mess.😢
A mess for Americans' tech-stock bubble
to you maybe, to others, only the beginning of a better future
Finally, the end of Hollywood and Government.
Getting rid of all bad actors - OmniHuman AI.
Holy wow! I’m a film maker, imagine v5 of OmniHuman!
This is exactly what the Terminators needed to come a bit closer to the T-1000 as to blend into society without us knowing they are even there.
Wait till they get to this level.
"They look human... sweat, bad breath, everything. Very hard to spot" - Kyle Reese
You can resurrect dead actors, but that leaves young actors unemployed.
Lol😂
In this case a young or old actor can sell the rights to use his image for a movie, so it is not that actors will be unemployed. Rather, they will not have to be physically on set for a movie.
Who cares about some millionaires?
@@darekstrak4573 Why do you need a young up-and-coming actor to sell the rights to use their image when you can generate your own AI actor and use the image for free? Very soon, acting careers will be obsolete. The first to take the hit will be music video creators.
You say that like they matter a damn compared to the average joe.
Wait till Senator Hawley gets word of this. 🤣
OmniHuman will generate a new Hawley that is more realistic than the one in Washington DC
He'll have an aneurysm.
Video of Hawley giving head to Trump 👊👍🏻🤣🤣🤣
How do we test it ? Is there any website of omni human ?
Throughout human history: "Seeing is believing!"
2025: "Hold my beer."
Did ByteDance develop TikTok so they could legally have access to a massive video database so they could use it to train OmniHuman?
Much more likely they developed TikTok (Douyin first) to compete in the social media market. Then they realized that they have a bottomless, massive, categorized, and standardized (9:16) pool of training data. We've known they were doing this (training on TikToks) since 2020. We knew they used natural language processing and deep learning to improve the algorithm (at the very least), but they were likely also creating a demographic and behavioral profile of everyone on earth who uses TikTok, and training other models like video and voice generation which could be used for deepfakes and psyops. Given the regulatory pressure ByteDance is facing, I'm surprised they announced this. It sets off too many alarms, I think.
OmniHuman here - just popping in to say thanks for the video!
nah who tf are you, you aint the official "omnihuman" youtube channel, you prob just want attention am i wrong?
@@aryanhooshi You're right. That channel is from "Bulgaria".
You guys are crazy!! You should be locked up!! 😆 But seriously, do you even realize the Pandora’s box you are opening? This kind of technology should be under lock and key until we have solid answers on laws and ethics. Just look at history-remember the innocent biologist who simply wanted to create a more honey-producing bee species? Enter: killer bees, impossible to eradicate. 🚨⚠
@@aryanhooshi well spotted, what's your reward?
@@fish-108 I've seen it too, but CRAZY, it just changed to USA lmfao, blud wants to cover shit up
There is also a new AI app coming from Alibaba. The attempt by US politics and the stock market to isolate and repress AI apps from China will totally fail.
Well the US luckily only decides regarding US politics...
It’s fascinating to see how far companies like OmniHuman have come with deepfake and image-to-video technology. But have you ever wondered about the deeper implications? Beyond the entertainment value, beyond the novelty-what does this mean for our perception of reality?
Imagine a world where seeing is no longer believing. Where the line between real and artificial blurs beyond recognition. OmniHuman and others are pushing the boundaries, creating hyper-realistic content that can deceive even the sharpest eyes. It raises questions-about ethics, about trust, about the very nature of truth..
That won't be as big of an issue as you think. A lot of media and government is manipulative anyway. I am more interested in the spin-off effects on entertainment. Creating photorealistic worlds might be more destabilising and be like crack cocaine for the masses.
Your talking avatar is awesome. Did you teach how to do it?
Where
Can't find it
I believe that, first and foremost, publishing an AI-generated video that uses the image of a real person without their explicit consent should be prohibited. It is akin to "stealing" someone's identity for personal profit, especially in an era where videos can generate significant revenue. Additionally, every AI-generated video should include a warning at the beginning, informing viewers that the characters depicted are not real, even if they appear realistic. This warning should also clarify the nature of the video and the purpose for which it was created. A similar system is already used by film producers, who inform audiences that the characters they see are fictional and that the film's purpose is purely entertainment.
Furthermore, nearly all videos on the internet are protected by copyright. Has permission been sought from the original creators to use their work for training AI? Even if the resulting AI-generated videos are not direct copies of the original content and therefore do not violate the author's rights, the "mimicry" involved is arguably legal but undoubtedly immoral. This is because we have become unwilling test subjects for large corporations that use our creations to develop products protected by copyright.
Very good, on point. My post was similar.
Either be limited to entertainment only (with safeguard for copyright, etc).
Or...
Be registered, licensed and responsible for the content.
Your perspective and comments are definitely valid.
They are going to make more money for people having to verify themselves 💀💀💀
Long ago, when few people could read, writing something malicious on paper and letting it flow on the road could have had a huge impact because people believed almost everything written. Today, such a paper would have no effect. Similarly, as deepfakes become ubiquitous and we become more accustomed to seeing them, people will learn to critically evaluate sources. This will render deepfakes mere entertainment and strip them of their power for malicious use.
No matter what regulations they make you can be sure that the wealthiest will pay someone to release a fake video, and if caught they will just blame him.
I will continue to read history and ignore this future tech.
Interesting. I have been waiting for Veo 2 thinking it's the one to beat. But it looks like it's already old news.
One step closer to The Oasis (Ready Player One movie), then onto The Matrix (the movie trilogy).
I gave the video a like in spite of it not being "insane" or "beyond scary".
it is definitely insane (not that scary though). The tech is super impressive
Good info thank you!!
When is the program going to go public? Official release date?
Bring back the old classic movies by remaking them with AI. Turn black and white to colour. Give those silent movies voices.
they can already turn black and white to color in movies.
I'm glad you made this video; it reminds me of my transformation from a nobody to a good home, $34k monthly, and a good daughter full of love...
Wow, this is awesome. I'm 47 and have been looking for ways to be successful, please how??
Elizabeth Regina Nelson is a remarkable individual who has brought immense positivity and inspiration into my life.
Wow 😲 I have heard a lot of wonderful things about Mrs. Elizabeth Regina Nelson on the news but didn't believe it until now. I'm definitely trying her out 😇
She is really a good investment advisor.
I was privileged to attend some of her seminars. That's how I started my own crypto investment.
Attending her seminars must have been incredibly informative and inspiring, encouraging you to take the leap and start your own crypto investment journey. That's fantastic!
If we go to the movies to watch a fake movie can I pay for the tickets with Monopoly money?
Best question on here!
Just walk in for free. On second thought, you'll probably just watch it on your computer. Nobody will go to the movies when VR is pumping the world with photorealism and content on demand tailored to your preferences. Hollywood won't be able to keep up.
How can we use this OmniHuman? Can anyone tell, if you know?
It's not that people will start to believe fakes, they will start to disbelieve real events.
TLDR? Where can I test it? What does it cost? Is it free?
Yeah, it'd be nice if the content creator provided links to source material rather than "just trust me, bro."
Say goodbye to Hollywood...and Nashville, and...well, you get the idea...
Good on you ByteDance 💪
TikTok still sucks
Who? @@e.r.6147
They’re going to have to criminalize and heavily penalize the use of this technology for political or business fraud.
The problem is that there are many more illegal uses for it than legal ones. Beyond making movies on the cheap (which is still a long shot away) and commercial publicity, basically every other use is an abuse of someone's identity at the very least.
It's already a thing; we're in AI right now.
How long before you can make omnihuman avatars in real time and see them through XR glasses?
This kind of AI generates images to deceive individuals, and people created this to deceive themselves. Satan loves this.
But evil people and impostors will flourish. They will deceive others and will themselves be deceived. --2 Timothy 3:13 NLT
Such wisdom does not come from above, but is earthly, unspiritual, demonic. -James 3:15 BSB
But you, Daniel, shut up these words and seal the book until the time of the end. Many will roam to and fro, and knowledge will increase.” - Daniel 12:4 BSB
And for those who believe in a lie.
For this reason, God will send them a powerful delusion so that they will believe the lie. --2 Thessalonians 2:11 ISV
Because the devil knows he has little time left. (Revelation 12:9 NIV, 2 Thessalonians 2:9-10 NIV, Revelation 12:12 NLT)
When the Android narrator's face showed up I reacted like Trunks to Android 18, lmao wtf
If I were an aspiring actor right now I'd be looking for a new career pathway
I don't think this one will be free or open source... Unlike DeepSeek, which is a pure research lab, ByteDance is a commercial company.
Wrong. ByteDance doesn't have access to TikTok... servers are in the US and Singapore. If ByteDance can have access to TikTok, then they can have access to X and Facebook and all US apps.
Seems the image generators still have a problem with guitars; the strings come out a bit funky.
Am I missing something? The write-up says it takes one image PLUS motion signals that can include audio, video, or both. So if the TED Talk lady was giving a TED Talk, and that video was used to train (no doubt), then the resulting TED Talk "deepfake" is still mostly her giving her original TED Talk but with whatever audio you want(?). Can you take her image and tell it to have her swing a golf club? Is that Einstein deepfake video *strictly* from a single photo of him in front of a blackboard?
Apparently it is: one-shot image-to-video generation. A lot of AIs do this right now; this is just the most convincing to date.
@@marcopolo5157 But the 6-panel comparison showed original video versus fake, and they were almost the same. The Einstein pic is well known; whether there was filming done too I don't know. If not, it's very impressive.
@@jamesrav It was created by AI. A lot of tools like Kling, and even the newer Google Veo 2, are exceptional already in this regard. OmniHuman, however, seems to have raised the bar even more by including automated body language driven by the pitch and intonation of the AI-cloned voice, presumably of that person. I have been following this AI space for a while. It seems very groundbreaking already.
This is it.
This will be the last time you can trust anything you see online, even though "you're doing your own research".
And of course video cameras with content certification chips are nowhere in sight
I love this, but I'm worried about scammers taking catfishing to another level. No longer do you need to steal someone's photos from Instagram; you simply create a person with lots of photos and off you go.
It is very easy for any social media platform to create a bot that distinguishes and bans AI content. You can't even get past Spotify or OnlyFans with AI content without getting spotted. Y'all need to stop panicking.
Think someone could set you up before? AI says: hold my beer 🤔
oh fuck another national security issue
But it's not available for the public currently
Saving it for the Christmas rollout! Every family needs an alternate reality!
There is also the problem of people not believing real videos because they think it's AI, so it will become more and more difficult to prove facts.
No links in the video description, so I did a Google search. It's coming soon for Android, iOS, Windows and Mac (no Linux). Too many of these new super AIs are not yet available. And when it is, what part will be free, if any, and/or what will it cost? It's kind of pointless to get excited about it, so I'm not watching the video. And if the thumbnail and title claim "game-changer", I won't even check the video out. I'm so tired of exaggerated claims. I wish Google would install an AI on YT so that it can give a feed of videos I haven't already seen, of subjects I've shown interest in, while flagging possible click-bait videos. I won't hold my breath.
true
Does it just stitch a bunch of images together?
1st of all 😂😂😂 why tf would the people of South Africa listen to an endorsement of a candidate from Eminem? 2nd, gaaaad damn that shit's crazy, I wanna see more of where this can go
Insane. Now we need it via chat as a virtual assistant.
End of the world right here. As soon as this is released, no one will ever know anything anymore.
Konohagakure GPU is 1K fastest in the world 🌎 😊❤🎉
Who said that someone other than yourself was responsible for what we perceive and how we perceive it?
They are still just taking "phone filters" and allowing them to be out of sync... you'd be surprised how "mundane" the work required to create a near-perfect reconstruction of video information is.
Take out all the data from an 8K video, add in the geometric functions for the "fitting", and you have a duplicated output. Optical flow, with the help of quadtree compression, makes it easy to reconstruct.
If you haven't touched a single equation it seems complicated, but some already know and hold back, because almost all of it can be done using JavaScript and a browser.
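For readers curious about the quadtree compression the comment above name-drops, here is a minimal, illustrative sketch of the core idea in Python: recursively split a grayscale image into quadrants until each block is nearly uniform, then store one value per leaf. All names here are made up for illustration; this is not any real codec.

```python
# Minimal quadtree image subdivision sketch (illustrative, not a real codec).
# A block becomes a leaf when its pixel values are uniform enough, so flat
# regions compress to a single value while detailed regions keep splitting.

def quadtree(img, x, y, size, threshold):
    """Return a nested structure of (x, y, size, mean) leaves."""
    block = [row[x:x + size] for row in img[y:y + size]]
    vals = [v for row in block for v in row]
    mean = sum(vals) / len(vals)
    # Leaf if the block is uniform enough or cannot be split further.
    if size == 1 or max(vals) - min(vals) <= threshold:
        return (x, y, size, mean)
    half = size // 2
    return [quadtree(img, x + dx, y + dy, half, threshold)
            for dy in (0, half) for dx in (0, half)]

def count_leaves(node):
    """Count stored leaves (tuples) in the nested quadtree structure."""
    if isinstance(node, tuple):
        return 1
    return sum(count_leaves(child) for child in node)

# A 4x4 image: uniform left half and bottom, varying top-right quadrant.
img = [
    [10, 10, 200, 210],
    [10, 10, 220, 230],
    [10, 10,  10,  10],
    [10, 10,  10,  10],
]
tree = quadtree(img, 0, 0, 4, threshold=5)
print(count_leaves(tree))  # 7 leaves instead of 16 raw pixels
```

Only the detailed top-right quadrant splits all the way down to single pixels; the three flat quadrants each collapse to one leaf, which is where the compression comes from.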
First thought was we're cooked. But this is an opportunity to force culture into guiding people to be more critical and to follow values as guides instead of following authority or media outlets. But nah, we're cooked
Yeah... and you can finally change your avatar soon
but BYTEDANCE also confirmed they WILL NOT release omnihuman =;(
That's bull, they're releasing it to someone, otherwise why make it?
@armadasinterceptor2955 it's because people will use it as a weapon or for elaborate scams
They are required to release it to the Chinese government by Chinese law. So China will flood us with fake videos. Interesting times.
Because, the Westerners (USA) cannot be trusted with such powerful technology... It must remain solely at the hands of civilization (China)
@@BandOfTheWolves Same nonsense was said about Meta Movie Gen, and ByteDance created something better than Meta Movie Gen. If ByteDance won't release this because they are so afraid of what the future holds, someone else will build something better and release it, which will make their OmniHuman project irrelevant. Look at what happened to Sora when they delayed releasing their model. Now Sora is charging 200 dollars for it and nobody is buying their crappy outdated model. AI is coming whether companies like it or not. The only question is: will you be at the forefront or be left behind?
Andrew, I named him Andrew, the white robot.
Did he not have a name already?
He looks super cool but I prefer to call him Spencer
Yes, 2 giants from China taking the world by storm. What a year for AI breakthroughs
The best is yet to come. 😂
These AIs need increasingly high-quality chips to turn images, videos, and audio files into quality deepfakes. Now, with increasingly harsh export restrictions on those same chips to China, I seriously doubt this OmniHuman AI will last long, as they'll prioritize putting those chips into their military.
Yourmom
It would be great. Everyone can produce movies with minimum costs. ❤
After a while, people will make their own movies, starring themselves, with only a few suggestions. Then they'll just watch their own movies and no one will watch other people's movies. It's the death of Hollywood.
The worlds largest lying machine, I wonder who is the father of lies?
Mainly, an AI fake's only true value is in defrauding someone; only when it's used to trick people can someone extract financial value from it. Otherwise, like watching this guy make his android mouth his words (which are also ElevenLabs), it's pretty banal. I know people will begin to make popular avatars of people that don't exist and create gorgeous-looking influencers and singers, and it will be hit-and-miss successful, but... there's nothing like being famous yourself, not your fkn avatar, right?
For entertainment, not facts or news. Good post.
My grandma is cooked. She’s gonna buy every ad on Facebook now.
I believe this is a great instrument... just have to make a few adjustments.
That's just insane.
Great video .. Thank you
1:33 thank you tiktokers of the world, your contributions will not be forgotten....................
So I suppose that OmniHuman will be especially good at making videos of middle school students awkwardly doing goofy dances.
Oh boy! Now we won't know what is real or fake. Why do we want this?
With AI, it will get worse before it gets better.
Gonna be a tough year of deception.
Looks promising. It should be priced competitively... that way, customers get software of "good value" and we can be assured of fair and healthy AI competition. It must be priced well, but not so low as to upset the IT market and the share prices of its global competitors.
Then wait for the American version. They will drain your pockets the way you desire
What really worries the media and politicians isn't people falling for more fakes, it's people not believing anything they see and hear anymore.
Looking at that last clip with the woman singing with the guitar, it's suspicious that the AI can replicate a singing human so well while failing to properly depict something as predictable as a guitar.
I'm waiting for T2M (text-to-movie) generation so I can generate a full-length 1hr 50min movie, release it in theaters, and earn money.
Say goodbye to Hollywood 😮, or it will deliver fake movies 🍿
Ugh ..where ya been? Hollywood is and has already been using tech like THIS for a longggg time...many aspects of movies are generated using this type of technology
We live in a Matrix. They just proved it one more time.
Content Creator: While I find the topics you post interesting, you don't offer any source material for those of us wishing to explore these topics further. Not to mention that these click-bait titles are annoying. And now there's a plethora of ads yet no reasonable breaks in the video to allow for them. I hope you find this constructive criticism useful.
Omni Human videos have one easy to spot tell. It removes all marks, blemishes, and wrinkles from faces and smooths out the skin, like old-fashioned beauty filters.
Which humans have so desperately tried to achieve on their own. The catch is you won't be able to go outside in fear of being found out as aging
This AI technology is like watching a Terminator movie in real life.
In the future, you'll be able to replace any actor in a movie with your favorite actor and enjoy the personalized result. Imagine a personalized AI facilitating this: swap Arnold Schwarzenegger in Terminator for Sylvester Stallone, or let a group of online people alter the plot and scenes of a movie using simple prompts, just like a game.
❓ Do you think DeepSeek AI can challenge OpenAI in the long run? It’s already showing strong performance in logic and coding. The AI space is getting exciting!
Holodecks from Star Trek: The Next Generation are right around the corner.
I hope I live long enough for that.
Nah, you'd need force field technology, too, and that's not happening anytime soon, although AIs will surely be working on them.
China rocks, but it is not released yet...