I Deep Faked Myself, Here's Why It Matters

  • Published May 10, 2024
  • WTF is a Deep Fake?
    Use code JOHNNYHARRIS at the link below to get an exclusive 60% off an annual Incogni plan: incogni.com/johnnyharris
    Check out Ctrl Shift Face: / @ctrlshiftface
    Ready or not, deep fakes are here to stay. Deep fakes are going to change the way we trust information around us and even each other. The question is: are we prepared for the threat they pose while still harnessing their potential for good?
    My next video is live on Nebula NOW! It's about how countries are starting to challenge the US-led world order that emerged after World War 2. Watch now: nebula.tv/videos/johnnyharris...
    Check out all my sources for this video here: docs.google.com/document/d/1m...
    -- VIDEO CHAPTERS --
    0:00 Intro
    1:51 Incogni
    3:54 What Are Deep Fakes?
    8:01 The Good Side of Deep Fakes
    8:58 And the Bad Side
    11:37 Misinformation
    13:17 The Legal System
    14:58 Cyber Crime
    16:55 Solutions
    18:51 Conclusion
    Get access to behind-the-scenes vlogs, my scripts, and extended interviews over at / johnnyharris
    I made a poster about maps - check it out: store.dftba.com/products/all-...
    Custom Presets & LUTs [what we use]: store.dftba.com/products/john...
    The music for this video, created by our in house composer Tom Fox, is available on our music channel, The Music Room! Follow the link to hear this soundtrack and many more: • Deepfakes | Original S...
    About:
    Johnny Harris is an Emmy-winning independent journalist and contributor to the New York Times. Based in Washington, DC, Harris reports on interesting trends and stories domestically and around the globe, publishing to his audience of over 3.5 million on YouTube. Harris produced and hosted the twice Emmy-nominated series Borders for Vox Media. His visual style blends motion graphics with cinematic videography to create content that explains complex issues in relatable ways.
    - press -
    NYTimes: www.nytimes.com/2021/11/09/op...
    NYTimes: www.nytimes.com/video/opinion...
    Vox Borders: • Inside Hong Kong’s cag...
    NPR Planet Money: www.npr.org/transcripts/10721...
    - where to find me -
    Instagram: / johnny.harris
    Tiktok: / johnny.harris
    Facebook: / johnnyharrisvox
    Iz's (my wife’s) channel: / iz-harris
    - how i make my videos -
    Tom Fox makes my music, work with him here: tfbeats.com/
    I make maps using this AE Plugin: aescripts.com/geolayers/?aff=77
    All the gear I use: www.izharris.com/gear-guide
    - my courses -
    Learn a language: brighttrip.com/course/language/
    Visual storytelling: www.brighttrip.com/courses/vi...
  • Science & Technology

COMMENTS • 6K

  • @j.mkamerling2470
    @j.mkamerling2470 9 months ago +7151

    Imagine people deepfaking security tapes to frame people in the future. That’s scary.

    • @TheRlhaugan
      @TheRlhaugan 9 months ago +304

      Yes! It's a show called "The Capture" and it has two seasons.

    • @GiRR007
      @GiRR007 9 months ago +161

      Then we are just gonna have to get better at detecting fakes. Also that's already illegal.

    • @AVClarke
      @AVClarke 9 months ago +298

      The catch is: you can develop A.I. to make better deep fakes, but you can also develop A.I. to better detect deep fakes.

    • @DeeRizz
      @DeeRizz 9 months ago +113

      Now I just wanna destroy future technology

    • @cessposter
      @cessposter 9 months ago +95

      you could also argue in court that real footage was faked

  • @hawaiiansoulrebel
    @hawaiiansoulrebel 9 months ago +1885

    Honestly, this is probably the type of tech that scares me the most. Deepfakes could be used to literally ruin someone’s life and reputation. Frightening…

    • @HeidiThompson7
      @HeidiThompson7 9 months ago

      On a bigger scale it could cause a revolution, coup, or war. It could absolutely destroy the legal system by filling it with fake evidence.

    • @WhoAmEye_WhoAreEwe
      @WhoAmEye_WhoAreEwe 9 months ago +17

      only if people [famous people excluded] have continually uploaded their image to the internet (maybe?)

    • @fr61d
      @fr61d 9 months ago +3

      @@floppathebased1492 And if you have uploaded to FB/Insta or the like, you still have some time to delete your accounts and have the pictures removed from their servers.

    • @trackfresse
      @trackfresse 9 months ago

      The problem is rather that videos are no longer evidence of anything. We lose what video-recording technology gave us many years ago. And even historical video recordings can be faked nowadays. Maybe someone will make Hitler look like a nice guy someday. 🫣

    • @sinane.y
      @sinane.y 9 months ago +25

      @@floppathebased1492 Yeah sure.
      It's not like facial recognition cameras aren't being installed in every major city worldwide, with governments and big data working hand in hand.

  • @EnteraName1876
    @EnteraName1876 3 months ago +35

    There's already a teenager out there whose reputation got ruined. She was just doing TikToks, and then someone decided to put her face on nude photos, which then got scattered across the internet.
    She tried to explain that it's not her body and that it is not her, but unfortunately people continue to comment things like "she was asking for it" and "ok, but when will you have an OnlyFans page".

  • @TheColdHarshTruth
    @TheColdHarshTruth 8 months ago +19

    This will be the government’s new answer to everything when called out for their crimes.

  • @JeffreyBoles
    @JeffreyBoles 9 months ago +1870

    I have 12 years of video editing experience. My specialization is interview editing. I look at and analyse faces through a screen all day, every (business) day. I could instantly tell when you showed a deep fake...except two times.
    I second guessed myself, and that is what scares me. Even with thousands of hours of carefully pinpointing imperfections in digital video of faces, I still couldn't be sure immediately.
    If I can't tell, how can we expect anyone to tell? I regret my hope as a child that I would live in an "interesting" time.

    • @F3ARtheGERBIL
      @F3ARtheGERBIL 9 months ago +15

      could metadata help with some of this? like what does the metadata of a deep fake submitted as evidence look like?

    • @F3ARtheGERBIL
      @F3ARtheGERBIL 9 months ago +26

      @@user-ze2zm4sz1b I think the issue with that is that NFTs are stored on a blockchain, which does require actual resources and energy to support. The sustainability is already in question until greener alternatives are found, and adding every video in existence to the equation does not sound sustainable. Also not sure how that would even be possible unless every video in existence was uploaded somewhere.

    • @Oblivion_94
      @Oblivion_94 9 months ago +9

      May you live in interesting times...

    • @kuroshite
      @kuroshite 9 months ago +45

      as a porn addict, i was able to tell all of them straight away 💀

    • @silotx
      @silotx 9 months ago +9

      Also most video evidence is low-res with poor lighting, so it's much easier to fake.
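
On the metadata question raised earlier in this thread: a lighter-weight alternative to putting every video on a blockchain is cryptographic signing at capture time, the idea behind provenance efforts such as C2PA. A minimal standard-library sketch, where the hypothetical `DEVICE_KEY` stands in for a key held in a camera's secure hardware:

```python
import hashlib
import hmac
import os

# Hypothetical device key; in practice it would live in the camera's secure chip.
DEVICE_KEY = os.urandom(32)

def sign_video(video_bytes: bytes) -> str:
    """Tag a hash of the footage at capture time."""
    digest = hashlib.sha256(video_bytes).digest()
    return hmac.new(DEVICE_KEY, digest, "sha256").hexdigest()

def verify_video(video_bytes: bytes, tag: str) -> bool:
    """Check later that the footage still matches its capture-time tag."""
    return hmac.compare_digest(sign_video(video_bytes), tag)

footage = b"frame data from the sensor..."
tag = sign_video(footage)
print(verify_video(footage, tag))             # True: footage untouched
print(verify_video(footage + b"edit", tag))   # False: any tampering breaks the tag
```

HMAC is symmetric, so this sketch assumes the verifier is trusted with the key; a real deployment would use asymmetric signatures so anyone can verify without being able to forge.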

  • @JaegerZ999
    @JaegerZ999 9 months ago +1841

    One day I won’t need a mask for videos anymore, just pick a new face in post production.

    • @girishanejadelhi
      @girishanejadelhi 9 months ago +19

      Good to see you here shooter!!!

    • @54peace
      @54peace 9 months ago +12

      I really like your videos man.🔥

    • @GiRR007
      @GiRR007 9 months ago +17

      V tuber but without the cringe.

    • @iwilldi
      @iwilldi 9 months ago +3

      what for?

    • @kylehurley5994
      @kylehurley5994 9 months ago +6

      When's the collab with admin results?

  • @l.a.1477
    @l.a.1477 8 months ago +100

    You know, when I was a kid (born ‘87) I was so fascinated by the future and loved sci-fi so much. That was true even as a young adult, always a tech enthusiast, but fast forward just a few years and we are already living in the future I thought was still far off. It was all fun and games back then, but now I realize it’s actually really scary and so uncertain. I finally finished playing Cyberpunk 2077 and it was great but eerie. I can no longer enjoy these genres without any fear. I might start playing/reading/watching more fantasy to escape to completely different and in a way simpler worlds.

    • @nobody-nk8pd
      @nobody-nk8pd 6 months ago +7

      This is extremely relatable.

    • @CitizenMio
      @CitizenMio 3 months ago +3

      Yeah, I guess once it was just a bunch of geeks, and they were hopeful about what all these things could become in the future. To make the hard and boring stuff easier so we all have more time for the fun creative stuff. No one mainstream paid attention to any of that unrealistic gibberish. Not in our lifetimes, not in this economy/world, whatever else occupied them most at the time.
      Then some of those geeks finally did convince some money people, and I guess we all kind of expected reason would factor into that.
      But it doesn't; it's all short-sighted chasing of the bottom line.
      Like with artist copyrights: of course they knew everything was essentially copyrighted. But they didn't have the time or willingness to pay for all that; they wanted quick results. Proof of concept that would convince big money and make them rake in so much money that they can easily negate any legal actions. Which they are doing right now.
      Right now the only AI we really need is one that terminates the jobs of the handful of people at the top who foster that attitude, not one that kills the viability of every job that isn't mashing prompts in a cubicle for the bloated mega corps.

    • @comfyera
      @comfyera 3 months ago +2

      I totally relate to this!

    • @matthewjohnson1891
      @matthewjohnson1891 3 months ago +1

      Check out the game Sir Whoopass. It's a parody of Skyrim. Great graphics and very funny.

    • @EricKay_Scifi
      @EricKay_Scifi 3 months ago +3

      Same! I just wrote a novel about an AI Therapy company which tries to improve mental health. But they use brainwave data to make the therapist perfect. As a byproduct, it enabled a digital fentanyl, as now a GAN knows precisely why and what ad you need to see to click forever.

  • @AwokenEntertainment
    @AwokenEntertainment 8 months ago +41

    It's scary how quickly this has become a reality...

    • @DeadeyeDaily
      @DeadeyeDaily 3 months ago

      Meanwhile, politicians have been "blurring the line between fact and fiction" and "undermining public trust" for WAAAAY longer. The only difference is instead of just undermining trust in recorded images and videos, THEY have been undermining trust in the very institutions that potentiate social prosperity, generally.

  • @Leo-ok3uj
    @Leo-ok3uj 9 months ago +643

    What scares me the most is how long it took everyone to notice all of this. I remember that in 2014-2015 I talked to my parents, uncles and friends about deepfakes, showed them examples, and said how in 10 years we would have stuff like what we already have today (although with that very optimistic energy I had in middle school, never thinking about the bad things that could be done with it). And all of them told me basically the same thing: that I was crazy or way too optimistic, and that we wouldn't have such stuff for like 100 years.
    But guess what, IT HASN'T EVEN BEEN 10 YEARS

    • @cee_M_cee
      @cee_M_cee 9 months ago +7

      maybe your friends not believing it would have been a red flag since they should be in the same generation of these developments
      but people a generation or two older than us? they're never really going to believe that it is possible until it's right in front of them and threatening their very livelihood and existence
      a very hard lesson that I learned from my stubborn folks here

    • @MelbourneMeMe
      @MelbourneMeMe 9 months ago +5

      When you run a business like Jonny, you schedule video topics around clicks, like aliens and conspiracies, but also you gotta just churn out a few videos of topics that everyone else has covered already, because it's easy. ChatGPT probably partly scripted this 😆

    • @psgistheworstclubineurope
      @psgistheworstclubineurope 9 months ago +2

      Because of people like Elon Musk duh

    • @psgistheworstclubineurope
      @psgistheworstclubineurope 9 months ago +7

      Ironic how adults don't believe such technology would be available so quickly, yet adults are also the ones inventing this kind of technology

    • @noob.168
      @noob.168 9 months ago +4

      Not sure what kind of boomers you hang out with... I've been concerned about this for a long time

  • @Journal_Haris
    @Journal_Haris 9 months ago +935

    Trust issues with Johnny since this video published: 📈

    • @EllisEllo
      @EllisEllo 9 months ago +68

      He isn't smiling and doesn't look happy; it could be real.

    • @lifePaultheball
      @lifePaultheball 9 months ago

      He never left Vox. This channel is run by deep fakes.

    • @johnnyharris
      @johnnyharris 9 months ago +170

      😂😂

  • @azcardguy7825
    @azcardguy7825 8 months ago +55

    How good deep fakes have gotten in such a short amount of time is horrifying. We are critically underestimating the problems that this is going to cause.

  • @CoughitsKath
    @CoughitsKath 7 months ago +30

    i am not normally a technology doomer - quite the opposite usually - but when these started popping up in earnest a few years ago, it struck me as a real terrifying pandora's box. they've low-key terrified me ever since.
    also, since you talk about deep fake tech in entertainment, i do need to point out that it's not all good news there, and this is a big chunk of what WGA and SAG strikers are hoping to mitigate with their recent union actions. it has the potential to really change a lot of working artists' lives, and not necessarily for the better

    • @EricKay_Scifi
      @EricKay_Scifi 3 months ago +2

      My most recent novel, Above Dark Waters, imagines content creators using brainwave data and generative AI to create a digital fentanyl, making you scroll and click forever.

  • @jojoqie
    @jojoqie 9 months ago +49

    There are scammers out there right now calling you through FaceTime, using deepfakes to claim to be a person you know and scam you. Just be careful.

    • @Yasminh-
      @Yasminh- 3 months ago +2

      that's why it's good I never do video calls with anyone. if someone suddenly decided to video call me I wouldn't even accept the call, not gonna give any technology my face

    • @adolft_official
      @adolft_official 2 months ago

      @@Yasminh- thanks, i just took a SS of ur pfp, useful stuff

  • @kimberlycarter369
    @kimberlycarter369 9 months ago +185

    I'm old, and back in 1995-ish I remember people talking about being afraid that in the near future we would no longer be able to distinguish real video from fake. Deep fakes are exactly what they were talking about before it had this name.

    • @martinfoy8700
      @martinfoy8700 5 months ago +3

      Agreed. I was just mentioning that it’s kind of a good thing because there’s a video out of me, cheating on my wife with two bridesmaids from our wedding. I worry daily about her seeing me hitting them in the ass and rinsing off in their mouths. I’m actually more concerned about their husbands finding out because they will absolutely have my head in a box. My wife is pretty easy to gaslight so I can just tell her that it’s a fake and share this video with her. Also I’m class of 94. You’re only as old as you feel Kim

    • @peterlewis2178
      @peterlewis2178 4 months ago +13

      @@martinfoy8700 You're a terrible person to talk so nonchalantly about gaslighting your wife. That's straight-up emotional abuse, I feel so bad for your wife.
      Unless you're a troll or AI message, but in that case you're still doing a terrible thing.

    • @damsen978
      @damsen978 3 months ago

      @@peterlewis2178 I think he was being hypothetical.

    • @dannyarcher6370
      @dannyarcher6370 3 months ago

      @@martinfoy8700 I think you meant to say, "You're only as old as the bridesmaids you feel up."

    • @earthn1447
      @earthn1447 3 months ago +3

      They were talking about this in the sixties during the Vietnam War

  • @JCSAXON
    @JCSAXON 9 months ago +5

    I warned of this decades ago & now it's finally caught up with us. This is gonna be messed up beyond our imagination. Hang in there cuz this is one hellish ride

  • @mariephipps9421
    @mariephipps9421 8 months ago +1

    Honestly though, I am glad you are putting this information out here. Great video; very informative. ❤

  • @TJl919
    @TJl919 9 months ago +771

    I actually wrote my master's thesis on this last year (and soon a PhD)! I'm glad this is getting more attention. To contrast all the doom and gloom, Professor Hany Farid (UC Berkeley) mentioned that deepfakes are advancing, but so too is the technology used to detect them. But it is a shame something so impactful is being used for such nefarious purposes.

    • @dickunddoof4684
      @dickunddoof4684 9 months ago +99

      Isn't that just an endless cycle?
      Software gets better at detecting deepfakes -> deepfakes get better because they know why they're being detected / they can be trained against the detectors themselves -> the software needs to get even better at detecting them -> even better deepfakes -> ...
      At some point it might be truly impossible for a human to tell the difference.

    • @Badmunky64
      @Badmunky64 9 months ago +14

      Is there anything the average joe can use to detect deep fakes?

    • @andersonojoshimite6047
      @andersonojoshimite6047 9 months ago +9

      Wow! I'm interested in your work. I'm working on a thesis that sheds light on the impact of deepfakes in legal proceedings.

    • @wlpxx7
      @wlpxx7 9 months ago +7

      I feel like everyone saw this coming and didn't do a single thing to stop it.

    • @bitzoic4357
      @bitzoic4357 9 months ago

      Any chance it involves attested sensors and ZK proofs? Every time I see videos about this subject, I think about the fact that we have solutions that aren't widely implemented yet

  • @thatlittlehuman9238
    @thatlittlehuman9238 9 months ago +476

    His last sentence made me realize another thing that could go horribly wrong….
    “We shouldn’t believe everything that we see, no matter how real it looks.”
    The possibility that one day there would be a news report or something circulating on social media that is very real and dangerous, but the majority doesn’t believe it because “it could be AI”.
    False events can be believed, just as real events can be dismissed.

    • @cloudyview
      @cloudyview 9 months ago

      Plus you can just hack the news station to run the deep fake video of the news casters telling people it's real/fake...
      Exciting!

    • @ShankarSivarajan
      @ShankarSivarajan 9 months ago +51

      The news lying to you has been a problem long before this technology was developed.

    • @terryholmes8546
      @terryholmes8546 9 months ago

      Yeah... COVID taught us that the media doesn't need deep fakes for us to question the narrative... Maybe if they didn't have an established rep for hyping and lying...

    • @Luciphell
      @Luciphell 9 months ago

      Kind of like most of the world being convinced there was a violent insurrection that almost led to the downfall of the free world on Jan. 6th 2021. Doesn't take AI to fool a crowd.

    • @GiRR007
      @GiRR007 9 months ago +7

      It's called being responsible.
      People shouldn't believe the first thing they hear on the internet anyway;
      that is NEVER a good thing...

  • @genghisken0181
    @genghisken0181 8 months ago +1

    I remember seeing this portrayed in the movie "The Running Man" and saying: that will be possible in my lifetime.

  • @rogermckinney6103
    @rogermckinney6103 6 months ago

    Thank you for making this video. I will be sharing this and using it to educate people I know who think I am being alarmist! Keep up the good work!

  • @JoshuaGold1
    @JoshuaGold1 9 months ago +985

    The problem with having software that is trained to detect AI is that it will force the deepfakes to be so much better, and then it will truly be indistinguishable from reality.

    • @nielskorpel8860
      @nielskorpel8860 9 months ago +80

      Yeah.
      I hope we don't go from "I can spot fakes so it is fine" to "I can't spot fakes but it is still fine because we have bots".
      Because that last one is a delusion we use to hold on to the benefits of AI.

    • @zeppie_
      @zeppie_ 9 months ago +37

      Combatting hackers has always been a game of cat and mouse. Not much will change on this front, I believe

    • @googane7755
      @googane7755 9 months ago

      That is the exact problem with GANs. They deliberately use a discriminator that tells the deepfake generator whether the image looks real or not, in order to generate even better fake images. Looking for software to better detect fakes is completely counterproductive

    • @AquaeAtrae
      @AquaeAtrae 9 months ago +49

      As Johnny's video illustrated well, the "detective" software is ALREADY a key component of these self-improving algorithms... hence defeating any subsequent software and training of a similar quality.

    • @nielskorpel8860
      @nielskorpel8860 9 months ago +3

      But this delusion is useful: it allows us an excuse not to argue that this AI technology should not exist.
      Cue the 'this is fine' meme.

  • @adolfstalin1497
    @adolfstalin1497 9 months ago +90

    The worst part about this isn't that it's dangerous and it can spread wrong information but that it does absolutely no good to us whatsoever

    • @psgistheworstclubineurope
      @psgistheworstclubineurope 9 months ago

      Nice username btw

    • @mustangracer5124
      @mustangracer5124 9 months ago

      Not for US.. but it has been used extensively by MSM to fool the fools who watch them.. Trump was deep faked 1,000 times already.

    • @josiamoog6619
      @josiamoog6619 9 months ago +2

      In what world is this the worst part??

    • @stop08it
      @stop08it 9 months ago

      Huh??

    • @adolfstalin1497
      @adolfstalin1497 9 months ago

      @@josiamoog6619 Basically what I'm trying to say is that deepfakes are only used for bad. Even the "good" things listed in the video aren't exactly great by themselves, and even then they definitely don't come close to making up for all the bad deepfakes do.

  • @cvdinjapan7935
    @cvdinjapan7935 9 months ago +4

    I could tell all of the fakes at first glance, because there was more of a sense of "motion" in the real videos, whereas in the fakes they are just standing still with a fixed camera.

    • @newworldastrology1102
      @newworldastrology1102 1 month ago +1

      That’s what I noticed too. They’re usually stationary. So far.

  • @user-bl2zk4rc7e
    @user-bl2zk4rc7e 8 months ago +1

    I seriously had no idea about this. Thanks for this interesting information.

  • @HarlowAshensky
    @HarlowAshensky 9 months ago +395

    The scary one my grandparents ran into was a believable AI robocall targeting seniors. It was so close to a real person reacting to their questions before hitting a loop. Crazy how fast the possibilities spread

    • @Jackson54321
      @Jackson54321 9 months ago

      Deepfakes also impact Hollywood. Companies save hundreds of millions just to have AI instead of real humans.

    • @ekothesilent9456
      @ekothesilent9456 9 months ago +23

      Wait until you have the robocalls targeting seniors perfectly mimicking the voice patterns and tones of their dead grand kids. It’s all fun and games until we start creating ai ghosts that haunt people 24/7 to get something out of them.
      This is happening.

    • @cnrspiller3549
      @cnrspiller3549 9 months ago +1

      Tosh! We will all get used to it. Your grand kids call you up and say they're stuck abroad, wire them some money. Sure, you say - what was that nursery rhyme I always sang to you when I bounced you on my knee?

    • @yamanawrooz5132
      @yamanawrooz5132 9 months ago

      I think robo calls will be replaced by fake online friends 100% generated by AI which will have a specific goal to sell you something or manipulate you into voting for someone. I think in the future even low level politicians like mayors or sheriffs would hire agencies to target constituents by either online or physical AI.

  • @themadman6310
    @themadman6310 9 months ago +47

    Face-to-face communication is going to become a lot more valuable

  • @rachpratt
    @rachpratt 8 months ago

    lov ur videos!!! so in depth. keep them coming

  • @megd9849
    @megd9849 2 months ago

    I LOVE the little yellow line you had on your incogni promotional section. Usually I'd skip ahead until I felt like I was back to the content, but it gave me the patience to sit through it (and I realized it's actually an interesting product).

  • @jozroz2165
    @jozroz2165 9 months ago +345

    The problem I foresee with developing AI to better identify deep fakes, is that it could simply fuel the further development of deep fakes since they can use the identifying techniques to patch their own tells. I mean, there's a reason GAN training involves identification and counter-action based on the identifiers. By fighting it in its own field I fear we may instead be playing right into the problem.

    • @0L1
      @0L1 9 months ago +22

      Anyone remember good old-fashioned viruses and anti-virus software being a thing, an actual threat, always competing with each other? I guess a new era of that is approaching.

    • @kastieldev6732
      @kastieldev6732 9 months ago +6

      smartest comment i have seen

    • @marciavox8105
      @marciavox8105 9 months ago +10

      Yeah, like the evolutionary race between predators and prey animals. Each one evolves as a result of the others adaptations

    • @nwilt7114
      @nwilt7114 9 months ago +4

      Well we should start by holding all the scammers accountable and that would reduce the amount of fukery.

    • @iudoncare6360
      @iudoncare6360 9 months ago +4

      Like bacteria and antibiotics...
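
The cat-and-mouse loop this thread describes is, roughly, how GANs are trained: a generator keeps adjusting until a discriminator can no longer beat a coin flip. A deliberately toy sketch in Python, with a shifting mean standing in for the generator and a threshold for the discriminator (real GANs use neural networks and gradient updates; all names here are illustrative):

```python
import random

random.seed(0)
REAL_MEAN = 4.0  # the "real" data distribution the generator tries to imitate

def real_sample() -> float:
    return random.gauss(REAL_MEAN, 1.0)

def fake_sample(gen_mean: float) -> float:
    return random.gauss(gen_mean, 1.0)

def discriminator_accuracy(threshold: float, gen_mean: float, n: int = 2000) -> float:
    """Call a sample 'real' if it falls above the threshold; return accuracy."""
    correct = 0
    for _ in range(n):
        correct += real_sample() > threshold           # real correctly called real
        correct += fake_sample(gen_mean) <= threshold  # fake correctly called fake
    return correct / (2 * n)

gen_mean = 0.0  # generator starts out producing obvious fakes
for _ in range(50):
    # Discriminator's best response: a threshold halfway between the two means.
    threshold = (REAL_MEAN + gen_mean) / 2
    # Generator's counter-move: shift toward the real distribution to slip past it.
    gen_mean += 0.2 * (REAL_MEAN - gen_mean)

print(round(gen_mean, 1))  # approaches 4.0: the fakes now match the real distribution
print(discriminator_accuracy((REAL_MEAN + gen_mean) / 2, gen_mean))  # near 0.5 (chance)
```

Once the generator's output matches the real distribution, the best possible discriminator is reduced to guessing, which is exactly the equilibrium the commenters worry about.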

  • @Madwonk
    @Madwonk 9 months ago +237

    I took a class with some professional photograph doctoring experts a while back. Mainly, it's a company that works to detect manipulation of pictures of politicians and other figures of importance. One of the hardest cases they had was a photo that *looked* right, metadata came up good, all of the anecdotal data made it seem legit etc etc (except it wasn't possible because the two people pictured had never met). Often, photoshop/AI will leave behind weird artifacts in the compression algorithms for JPEG or video that can be detected and they weren't showing up.
    So how did they fake the photo? They photoshopped it, printed it out, then took a picture of the picture! No digital trail to speak of!

    • @DarthObscurity
      @DarthObscurity 9 months ago +23

      Would have been scanned. No way a picture of a picture wasn't detected lol.

    • @realtimestatic
      @realtimestatic 9 months ago +8

      That’s actually really smart

    • @sadrakeyhany7477
      @sadrakeyhany7477 9 months ago +3

      200 IQ play

    • @mister_duke
      @mister_duke 9 months ago +3

      but then you could see in the metadata that it was taken in a different location on a different date

    • @sbo3
      @sbo3 9 months ago +2

      I'm confused how this is apparently smart because you can 100% tell when you take a photo of a photo. Even the person above who said it would have to be a scan - surely even a scan can be detectable?!

  • @stephenbeck6410
    @stephenbeck6410 8 months ago +6

    There was a movie called Looker back in the 80s, and the basic idea was they had this device that could do a full body scan of high-price models and use the data to create visual representations they could use in advertisements. Then they would kill off the models and “hire out” the faked, virtual model. My point is, the current events in AI have a similar theme (not the killing off part, just the fake version part, obviously)

    • @PetrPechar1975
      @PetrPechar1975 3 months ago

      Ah yes. That was Michael Crichton. Always the visionary.

  • @onepercentpermile
    @onepercentpermile 9 months ago

    Excellent content! Thank you.

  • @Bobrae.
    @Bobrae. 9 months ago +215

    Entertainment-wise, this is part of the reason why the actors/SAG are on strike now, too.

    • @occamsshavecream4541
      @occamsshavecream4541 9 months ago +15

      That surely adds new meaning to the expression, "Just another pretty face."

    • @bigdeal6852
      @bigdeal6852 8 months ago +4

      Yeah... they know they're somewhat at a turning point, because the film industry can make movies without them being there, which saves money in many ways with these high-priced celebrities. So that's one big reason why they are on strike, and "of course" they want more money.

    • @mystraunt2705
      @mystraunt2705 8 months ago +3

      @bigdeal6852 This is still a serious issue though. Artists are all going to lose their jobs if we don't stop the development of AI or outlaw it or something.

    • @bigdeal6852
      @bigdeal6852 8 months ago

      @@mystraunt2705
      I will agree with you on that ! I'm sure eventually it will get done. Mostly because it can be dangerous. They might start a detection system and put in place copyright laws or something even more aggressive. I don't know....but it definitely can have an effect on Hollywood. 🤷

    • @professorxavier9692
      @professorxavier9692 7 months ago

      ​@@bigdeal6852they're

  • @meatballhead15
    @meatballhead15 9 months ago +724

    I worry for all the young people that use trendy apps to put 'filters' on their faces... feeding all sorts of data about the points of their faces... they're feeding into the massive databases that can easily make a copy of them. I know this might make me sound like an old codger (I'm in my late 30s), but it's a real worry nevertheless.

    • @kriscox4019
      @kriscox4019 9 months ago +62

      Except you don't need the filter. The upload to any site is enough. Something some parents are thinking about when deciding whether or not to show their kids' faces online.

    • @Animebryan2
      @Animebryan2 9 months ago

      And Tiktok is owned by China. This is why Trump wanted to ban Tiktok from this country. The datamining of personal info always was the real threat. And let's not pretend that the NSA & FBI wouldn't take advantage of this to frame someone that they had set their sights on. Makes you wonder who actually came up with this idea & what was the original intent.

    • @dannnnydannnn5201
      @dannnnydannnn5201 9 months ago +29

      I doubt filters are any worse than uploading image after image on social media.

    • @ilikefish9769
      @ilikefish9769 9 months ago +17

      @@kriscox4019
      !!
      I won't give my kid a phone until he's 16, idc if he hates me.

    • @Studywise_io
      @Studywise_io 9 months ago +6

      @@ilikefish9769 i got mine at 18

  • @Le_Petit_Lapin
    @Le_Petit_Lapin 3 months ago +2

    Your clip from 5:06 for the next minute is one of the best simple explanations of what a GAN is that I've seen.

  • @AthiktosOfficial
    @AthiktosOfficial 3 months ago

    Just stumbled upon your video, you have a new subscriber for sure. This is exactly what my concern is when it comes to the technology.

  • @Neferpitou-
    @Neferpitou- 9 months ago +503

    It's unbelievable to me how fast AI is improving; what we had a year ago doesn't even compare to what we have today.

    • @axelastori484
      @axelastori484 9 months ago +8

      Like airplanes

    • @patrickangelobalasa
      @patrickangelobalasa 9 months ago +35

      Yeah it's a tech that's definitely constantly evolving. Three years ago, concerns of AI replacing actors, writers, etc would've been unthinkable, but now....

    • @mason96575
      @mason96575 9 months ago +1

      @@axelastori484 lol 🤦

    • @phlezktravels
      @phlezktravels 9 months ago +1

      @@axelastori484 thank you. exactly. hyperbolic comment is hyperbole.

    • @Ok-lu8gx
      @Ok-lu8gx 9 months ago

      ok

  • @nichad29
    @nichad29 9 months ago +101

    So I guess by not participating in social media to the extent of posting hundreds of photos of myself, I protected myself from deep fakes.

    • @GiRR007
      @GiRR007 9 months ago +1

      Yes
      Maybe deep fakes are the roundabout cure for social media we were looking for.

    • @ExtraCarrot
      @ExtraCarrot 9 months ago +15

      I was just thinking this :) We are a rare breed!

    • @MandoCarlrisian
      @MandoCarlrisian 9 months ago

      I'm sick to my stomach that I created a tinder profile lol. But other than that and a few snaps shared with people hopefully 😢 my face is safe??

    • @hattielankford4775
      @hattielankford4775 9 months ago +6

      You know how you don't get to expect the right to privacy in public in this country? I think people willing to deepfake you would be willing to have a PI take some candid photos, among dozens of other possibilities.

    • @cain_chamomille
      @cain_chamomille 9 months ago +17

      As much as it sounds rad, we have to remind ourselves that as long as our phone is connected to the Internet,
      _it's possible to steal your camera pictures and have your images taken, even if you do not post them on socmed._

  • @MelliaBoomBot
    @MelliaBoomBot 8 months ago

    I feel like all our nightmares are about to come true, and I'm questioning everything and my sanity. It's very disturbing... Looking after your mental health is the priority from here on in ❣️

  • @JaapvanderVelde
    @JaapvanderVelde 3 months ago +1

    Well done, and well presented. We'll have a hard time of this for some time, but it seems to me like advances in digital watermarking, and more common DRM, as well as the capability to verify these 'watermarks' and DRM techniques, will have to become commonplace everywhere. It's the 'free information' crowd's worst nightmare (and for some good reasons), but I don't see a way around it. Hopefully someone else out there does. A free and open alternative to the proprietary standards out there would be welcome.

  • @lawrencetchen
    @lawrencetchen 9 months ago +156

    My general response to the entire field of generative AI is a feeling of grief and tragedy. Sadness that in the very near future we will need to expend so much of our cognitive and emotional effort judging how much we trust *everything* . I'm tired just thinking about it. I know it's here to stay. And nearly all who use this technology are fueled by greed, and all their victims will be punished for being trusting. It is just so devastating knowing that there will be an evolutionary force encoding lower fitness and survival for those who trust.

    • @jJust_NO_
      @jJust_NO_ 9 months ago +2

      Firstly, before we get devastated: what are the cons, the losses? Just don't engage?

    • @jovita9323
      @jovita9323 9 months ago +16

      Beautifully said. I get what you're saying. The world is exhausting and complicated as it is... That's why I believe alternative movements will rise and people will voluntarily choose to limit technology or even go off-grid.

    • @stevej.7926
      @stevej.7926 9 months ago +12

      @@jovita9323 This is my belief as well. I think humanity is yearning for a recalibration.

    • @mustangnawt1
      @mustangnawt1 9 months ago +2

      Agree

    • @webstercat
      @webstercat 9 months ago

      This is deep fake 🌍

  • @UPLYNXED
    @UPLYNXED 9 months ago +305

    This stuff is honestly scary, and quite demoralising to think that we've taken this path towards less trust as a species in a time when so many other rights and verifiable collective truths are already eroding away. It feels like we're collectively drowning and every hand reaching down towards us is only pushing us down further instead of pulling us to safety.

    • @maxpro751
      @maxpro751 9 months ago +16

      Time to read books.

    • @exisfohdr3904
      @exisfohdr3904 9 months ago

      Ha! There is no safety, just an illusion of it.
      It is human nature to immediately distrust. It comes from survival instincts.

    • @CliffSturgeon
      @CliffSturgeon 9 months ago +7

      @@maxpro751 That can be fabricated, too, but more to the point, books are pretty bad at keeping up with topical content such as emergency action or warnings. Dissemination of info in real time is where the real threat is.

    • @definitelynotatroll246
      @definitelynotatroll246 9 months ago +3

      Uncle ted warned us

    • @npc1199
      @npc1199 9 months ago

      pretty much what living life is

  • @BoffeLoffe-ks9wf
    @BoffeLoffe-ks9wf 8 months ago +1

    Thank you so much ❤

  • @trediaz4012
    @trediaz4012 7 months ago

    This is so dangerous. People can get accused of things they were not part of.

  • @josefarrington
    @josefarrington 9 months ago +439

    Probably the way to combat deep fakes is to use the pixels of an original image to watermark it, and then use software to detect those watermarks. This way, when the pixels in the image are manipulated, the watermark will get disturbed and the "verification" software will detect the deep fake.

    • @qj0n
      @qj0n 9 months ago +68

      The way GANs work is that they train the generator to fool a detector, until the detector is unable to tell a real photo from a generated one. A simple watermark algorithm will be replicated by the generator once you add it to the detector. This is why machines are inherently worse than humans at detecting deepfakes: generators are trained to fool the machine, and fooling humans is kind of a side effect.
      It's possible to use some asymmetric cryptography (digital signatures) to avoid this, although it's probably easier to put it in metadata, not the data itself. But you need to put secret keys in every recording device, and once you extract a key, you can use it to sign any content. Or you can e.g. play a deepfaked voice and record it with a device which will sign it.

    • @josefarrington
      @josefarrington 9 months ago +18

      @@qj0n "But you need to put secret keys in every recording device and once you extract it, you can use it to sign any content." I was thinking that the secret key could also contain the GPS position and time of the recording. This way you need to know where/when the image was created in order to break the encryption process. If we want to make it more secure, we could make every device send an encryption key to some national database (guarded like Fort Knox) that can provide third-party verification of every image recorded by any device. But this is a huge stretch.

    • @qj0n
      @qj0n 9 months ago +3

      @@josefarrington I'm not sure if I'm following: you can put geoposition and a timestamp in signed metadata, but in order to make the signature verifiable, you need to know the key and trust it, so it has to be stored in the device.
      We can make it impossible to read, like we do with SIM cards, smart cards, or a YubiKey. But somebody can still use that hardware to sign fake data.
      Uploading signatures to an external entity (Fort Knox or a blockchain) is fine to verify the date, but that's all, unfortunately.

    • @Gigaamped
      @Gigaamped 9 months ago +6

      easy, feed the watermarking program a pure white or black image and easily reverse engineer the watermark algorithm by comparing the hex values of changed pixels

    • @qj0n
      @qj0n 9 months ago +1

      @@Gigaamped ...unless the watermark is calculated with asymmetric cryptography like RSA, or a secure keyed hash like HMAC
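The keyed-hash approach this thread converges on can be sketched in a few lines. A minimal illustration in Python, assuming a hypothetical device key (`SECRET_KEY` is invented here; a real camera would hold its key in secure hardware and would more likely use an asymmetric signature):

```python
import hashlib
import hmac

# Hypothetical device key; a real camera would keep this in secure hardware.
SECRET_KEY = b"device-secret"

def sign_image(pixels: bytes) -> bytes:
    """Keyed tag over the raw pixel data (HMAC-SHA256)."""
    return hmac.new(SECRET_KEY, pixels, hashlib.sha256).digest()

def verify_image(pixels: bytes, tag: bytes) -> bool:
    """Any change to the pixels, e.g. a swapped face, breaks the tag."""
    return hmac.compare_digest(sign_image(pixels), tag)

original = bytes([10, 200, 30, 40])   # stand-in for real pixel data
tag = sign_image(original)

tampered = bytes([10, 201, 30, 40])   # a single value altered
print(verify_image(original, tag))    # True
print(verify_image(tampered, tag))    # False
```

Unlike the fixed watermark discussed above, feeding the scheme a pure white or black image reveals nothing: without the key, no valid tag can be forged.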

  • @JurandirGouveia
    @JurandirGouveia 9 months ago +174

    Your storytelling is amazing, and I'm glad it is used to open our eyes.

  • @kathleenrutherford733
    @kathleenrutherford733 9 months ago

    Thank you for sharing

  • @ErikOlsen
    @ErikOlsen 9 months ago

    Great video and production value. I did sign up with your sponsor, but I don't see where I was able to take advantage of the 30-day free trial. Please advise.

  • @hasbulla2012
    @hasbulla2012 9 months ago +201

    Here in the UK, we are lucky to have a TV personality called Martin Lewis. He's essentially a money expert; he finds deals and helps people navigate tough financial situations. Recently someone created a deep fake of him to prop up a scam, and people fell for it, sending their money to criminals. This is still pretty low-level criminality, but it makes you think this technology is likely going to be the biggest threat to our society going forward.

    • @ExtraCarrot
      @ExtraCarrot 9 months ago +2

      Brother I wish this was the biggest threat 😶 all them futuristic movies and series, black mirror etc are all possible scenarios and I think they will all happen at once, AI about to go terminator on our ass 🤖

    • @trilli8107
      @trilli8107 9 months ago

      ​@@ExtraCarrotq1q q

    • @staringcorgi6475
      @staringcorgi6475 9 months ago +3

      Why is no one talking about banning it?

    • @moarjank
      @moarjank 9 months ago +4

      They are, actually. But it's not possible to prevent it even then. Slavery still happens - today, in the US! It's not widespread but causes real harm. But like slavery, malicious deepfakes could be done anywhere in the world. Foreign enemies will not resist the urge to use any advantage they can.

    • @glenclark777
      @glenclark777 9 months ago

      @@staringcorgi6475 Banning something doesn't get rid of it it just pushes it underground. When are you people going to realise this.

  • @devonscotttaylor
    @devonscotttaylor 9 months ago +256

    Just wanted to thank you for the content you produce. I feel as if true original human-produced media is a dying art form and not something to take for granted. Great vid! Cheers!

    • @johnnyharris
      @johnnyharris  9 months ago +33

      thanks for being here!

    • @MrGameFreak777
      @MrGameFreak777 9 months ago +3

      I don't believe AI will ever replace humans when it comes to making art. AI can only mix established art, like a blender. AI does not understand the art. It cannot create anything with a deeper meaning. Anything that says something about the world, like great art does. Humans are inspired by previous art, they understand the art. Humans combine the art that inspires them with personal experience and something new through real creativity to make great art.

    • @squeezy1001
      @squeezy1001 9 months ago +1

      @@johnnyharris I’m glad you clarified that the deepfake was of Nick the studio manager. For a second I thought we were getting a “How Johnny Harris Stole Will Forte’s Identity” video.

    • @enkryptron
      @enkryptron 9 months ago

      @@johnnyharris Plot twist: He's an AI.

    • @artyparty_av
      @artyparty_av 9 months ago

      @@MrGameFreak777 Yet

  • @GG_Booboo
    @GG_Booboo 8 months ago +4

    They may look fun and exciting, but this is a dangerous technology! I see it being useful in movies, like, say, portraying a younger actor, but that's about it. Also, "this person does not exist" is one of those websites that gives me the creeps!

  • @mikeross5468
    @mikeross5468 9 months ago +1

    I admire just how much you put into showing us and, in this case, even tricking us, the audience, to drive the message home! But I have to admit that this video freaks me out more than most! Sometimes I can't find the words to describe the dangers the not-so-distant future holds for us!✨️

  • @SxC97
    @SxC97 9 months ago +223

    Fun fact, one of the coolest techniques for detecting deepfakes is to create a system that amplifies the reds in the video in question.
    As your heart pumps blood through your face, it becomes slightly more red, then back to normal in a regular cadence (the difference is extremely subtle, which is why we increase the saturation of the reds to make it more obvious). Current deepfake technology does not take into account this subtle shift in colors and even if it does, the regular cadence is not there.
    This is the result of a recent paper on deepfake detection I read, I'll try and dig up the name. (EDIT: I found it! The name of the paper is "FakeCatcher: Detection of Synthetic Portrait Videos using Biological Signals")
    Obviously future systems might take this into account, but I thought it was clever and worth sharing nonetheless!
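The heartbeat cue described above can be illustrated numerically: average the red channel over each frame and look for a dominant frequency in the human pulse range. A toy sketch with a synthetic signal (the 72 bpm pulse and the naive DFT below are illustrative only, not the FakeCatcher pipeline):

```python
import math

FPS = 30   # frames per second
N = 150    # 5 seconds of video

def dominant_freq(signal, fps):
    """Naive DFT: return the frequency (Hz) of the strongest non-DC component."""
    n = len(signal)
    mean = sum(signal) / n
    x = [s - mean for s in signal]          # remove the DC offset
    best_k, best_mag = 1, -1.0
    for k in range(1, n // 2):
        re = sum(x[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
        im = sum(x[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
        mag = re * re + im * im
        if mag > best_mag:
            best_k, best_mag = k, mag
    return best_k * fps / n

# Synthetic mean-red-per-frame signal for a real face: skin reddens
# slightly with each heartbeat (72 bpm = 1.2 Hz) on top of a base tone.
real_face = [128 + 0.5 * math.sin(2 * math.pi * 1.2 * t / FPS) for t in range(N)]

bpm = dominant_freq(real_face, FPS) * 60
print(round(bpm))  # 72 -- a plausible human pulse; a deepfake typically shows no such peak
```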

    • @nielskorpel8860
      @nielskorpel8860 9 months ago +7

      Will this still work in 3000 years.
      I only accept solutions that work for 3000 years.
      Otherwise, something fundamental has changed for the worse.

    • @interestedinstuff1499
      @interestedinstuff1499 9 months ago

      That is very cool. Blood pumps regularly, so it would be a clock the deep fake would have to copy. Iris fluctuations too, I imagine. Breathing patterns. One must breathe in order to speak, so if words are happening while the lungs are filling up, well, that's just fake. Trouble is, it will be far easier to make and share fakes than for the platforms to scan everything. One day they will though, I assume. Difficult times ahead.

    • @yhz2K
      @yhz2K 9 months ago +19

      @@nielskorpel8860 by that time humans will commit mass suicide due to greed

    • @sgjoni
      @sgjoni 9 months ago +25

      As soon as you create an AI solution to detect deep fake you have a new bar for the adversarial model to make a better deep fake 😂

    • @mloweFR3SH
      @mloweFR3SH 9 months ago +8

      Not too helpful if you're a darker-skinned person.

  • @colemessina3439
    @colemessina3439 9 months ago +156

    This video was especially scary. Johnny usually gives solutions for where we can go from here at the end of his videos; while I may not agree with all of them, it shows that we have a grasp on what to do. That is not the case here: this truly could be a Pandora's box that none of us knows what to do with. With a lot of the developments in technology I have faith we can solve them in the future, even if the elderly Congress simply can't wrap their heads around any of it, but even the younger generations don't know how to combat this. Very scary; need an uplifting video after this one Johnny haha.

    • @miket.4192
      @miket.4192 9 months ago +4

      the solution is to remember what it really is to be a human - not an easy task in this world, but possible. frequency, vibration, and energy is the answer to everything

    • @DrErnst
      @DrErnst 9 months ago +12

      maybe solution is to disconnect from the internet and don't post your images online of your face..

    • @artpinsof5836
      @artpinsof5836 9 months ago +3

      Actually, at the end of the video he said that we are going to need better detection algorithms, and he even mentioned halfway through the video that the government is working on this.

    • @stevo999
      @stevo999 9 months ago +1

      The solution is to simply go outside

    • @johnatchason6506
      @johnatchason6506 9 months ago +2

      The solution is we go back to hand-delivered newspapers/ paper magazines/ books you buy at Barnes and Noble as a "back up" source of truth. News reporters use film cameras and analog audio for "on the record" recordings. Digital info will still be useful but it will require analog-world "receipts" of known provenance. That will buy us time until people start deep-faking 3D solid objects. Perhaps humans will reach "peak screen time" where a plateau is reached and then we strategically retreat back into the analog world and intentionally anchor ourselves just enough to stay sane. Put simply, if the internet becomes completely unusable, at some point people will stop using it.

  • @thelegend8570
    @thelegend8570 8 months ago +17

    The problem with using AI to detect AI-generated content is that you can just plug the AI-detector into the GAN and use it to train better deepfakes.
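That feedback loop is easy to demonstrate: any detector that outputs a score can be used as the objective a generator optimizes against. A toy illustration, with an invented one-line "detector" and a simple hill-climber standing in for the GAN's generator:

```python
import random

random.seed(0)  # deterministic toy run

def detector(sample: float) -> float:
    """Stand-in for a published deepfake detector: higher score = 'more real'.
    (Invented one-line scorer; a real detector would be a trained network.)"""
    return 1.0 - abs(sample - 0.8)

def train_against(det, steps=200):
    """Hill-climb a 'generator' output to maximize the detector's score --
    the same role the detector plays once plugged into a GAN as discriminator."""
    x = random.random()
    for _ in range(steps):
        candidate = x + random.uniform(-0.05, 0.05)
        if det(candidate) > det(x):
            x = candidate  # keep any change that fools the detector more
    return x

fake = train_against(detector)
print(detector(fake) > 0.5)  # True: the detector now rates the fake as 'real'
```

Publishing a detector thus hands the other side a training signal, which is why detection tends to be a moving target rather than a fix.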

    • @ChimobiHD
      @ChimobiHD 2 months ago

      Exactly. It's a doom loop.

  • @bijoychandraroy
    @bijoychandraroy 7 months ago

    Lawmakers need to sprint if they ever want to catch up at this point.

  • @pafee-etndoitgsest-thaette5284
    @pafee-etndoitgsest-thaette5284 9 months ago +17

    The only way to prevent your own face from being abused is to leave as few photos and videos of yourself online as possible. Which is hard when you're in politics, journalism, or entertainment.

    • @ZennyKravitz
      @ZennyKravitz 7 months ago

      asymmetric face painting. Entertainers can easily do this. But others are screwed.

    • @user-xn2gr8me2u
      @user-xn2gr8me2u 3 months ago +1

      Now I'm worried about people posting photos or videos of themselves on social media. That means they can be targeted too, not only public figures.

  • @TC_exe
    @TC_exe 9 months ago +132

    I feel like technology that detects deepfakes would be a never ending arms race. That same technology could be used to improve the fakes themselves. Ad infinitum.

    • @artyparty_av
      @artyparty_av 9 months ago +6

      A way we might be able to verify authenticity is a blockchain clearinghouse. But the computing power involved to authenticate all digital media seems immense.

    • @DamianTheFirst
      @DamianTheFirst 9 months ago +4

      @@artyparty_av and what exactly would prevent anyone from digitally signing deepfake videos and verifying them as legit? Blockchain is just a way of storing data. Just one more type of a database.

    • @chazmuzz
      @chazmuzz 9 months ago +2

      @@DamianTheFirst companies sell trust as a product - eg DigiCert. If they trust it then so can you

    • @ShawnFumo
      @ShawnFumo 9 months ago +3

      @@chazmuzz Yeah, though it doesn't even have to be blockchain. The easiest thing (which we should pressure companies for) is for the manufacturers of cameras/camcorders to digitally sign the raw files.
      That way, if you kept the equivalent of a film negative, you have some pretty good proof of authenticity.
      It certainly doesn't solve all the problems, but it'd be a good first step. And I'm guessing YouTube, Facebook, etc. keep the originals that were uploaded to them, even if they serve compressed versions. They could validate the original signature and sign the new compressed one with their own signature, perhaps with some description of how it was edited (like taking just a portion of an original video, or changing contrast on an image), and a copy of the original signature.
      In that scenario you need to trust YouTube and Facebook, but it's better than nothing. And then you know which service it came from, and law enforcement can ask them for the original file.

    • @ShawnFumo
      @ShawnFumo 9 months ago

      The trickiest part is keeping that chain from the manufacturer to a small file on social media, considering the sizes involved. An original video file can be huge. Usually you’d be editing it before uploading it anywhere and a non-professional may not keep the original footage around. Something like Adobe Premiere could keep track of all the cuts with time codes and the signatures of the original files, but it gets a bit involved for them to implement. And if you didn’t keep the original files, it still just proves that you edited some clips on a certain date. Though still an improvement.
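The signing chain proposed in these two comments can be sketched end to end. A minimal illustration (the keys and HMAC are stand-ins; a real deployment would use asymmetric signatures, with the camera key in secure hardware):

```python
import hashlib
import hmac

# Hypothetical keys; a real deployment would use asymmetric signatures
# (the camera key living in secure hardware), not shared-key HMAC.
CAMERA_KEY = b"camera-key"
PLATFORM_KEY = b"platform-key"

def sign(key: bytes, data: bytes) -> bytes:
    return hmac.new(key, data, hashlib.sha256).digest()

# 1. The camera signs the raw footage at capture time.
raw_footage = b"raw sensor data"
camera_sig = sign(CAMERA_KEY, raw_footage)

# 2. The platform compresses the upload, then signs the compressed file
#    together with a note on how it was edited and the camera's signature,
#    preserving a chain back to the capture device.
compressed = b"compressed 1080p upload"
edit_note = b"transcoded, contrast adjusted"
platform_sig = sign(PLATFORM_KEY, compressed + edit_note + camera_sig)

# A viewer who trusts the platform verifies the compressed file; anyone
# holding the raw file can additionally re-check the camera signature.
ok = hmac.compare_digest(platform_sig,
                         sign(PLATFORM_KEY, compressed + edit_note + camera_sig))
print(ok)  # True
```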

  • @kathrynsink4622
    @kathrynsink4622 8 months ago

    Thank you for the Incogni referral!

  • @albertnyorkor9248
    @albertnyorkor9248 7 months ago

    They cannot fix it.
    It will continue to get worse and worse.

  • @Draemn
    @Draemn 9 months ago +118

    The day deep fakes become extremely commonplace, I don't know how I'll be able to verify information. It is definitely very challenging to figure out what to do when literally anything can be faked.

    • @newagain9964
      @newagain9964 9 months ago +3

      It’s no more challenging than verifying digital documents.

    • @jamespfitz
      @jamespfitz 9 months ago +2

      How do we verify documents?

    • @kindlin
      @kindlin 9 months ago +2

      @@jamespfitz Digital signatures.

    • @syritasdoneitgoodytwoshoes2471
      @syritasdoneitgoodytwoshoes2471 9 months ago +2

      they are already

    • @jamespfitz
      @jamespfitz 8 months ago

      @@newagain9964 Or paper documents.

  • @AlexanderNorton
    @AlexanderNorton 9 months ago +159

    There’s actually a solution currently being proposed in the US. Going forward, every pixel in recorded media is to contain encrypted metadata that tells us what image the pixel belongs to. If that pixel is found in other media, it means it’s a fake. The same could be applied to art generation to prevent theft.
    Maybe you could research this for a future video!

    • @makisekurisu4674
      @makisekurisu4674 9 months ago +5

      Idk sounds like NFTs.. Lol

    • @rizizum
      @rizizum 9 months ago +34

      I want to understand how the fuck you're going to encrypt and decrypt millions of pixels on every image you have to see without it taking 10 minutes to load.

    • @colvinvandommelen2156
      @colvinvandommelen2156 9 months ago +10

      @@makisekurisu4674 idk sounds like you don't know what you're talking about

    • @The.Sponge
      @The.Sponge 9 months ago +10

      @@rizizum In addition to that, how would matching pixels fix anything? The deepfake could just observe the colors and copy only that data, rather than blatantly copying the image name spewed all over the image. And if your goal is to check whether the file contains the proper names, the deepfake could just stamp that name on every pixel, and it becomes a problem of deeming which copy is the correct one: literally what we are already doing. Considering where you get your information from, whether it's YouTube or an official court-of-law database, is the most important thing, because the difference is pretty large.

    • @-morrow
      @-morrow 9 months ago +14

      It doesn't make much sense for every pixel, since one pixel isn't really deserving of protection. Just hash one or multiple images/frames and digitally sign/encrypt the hash. This can then be used for verification; if an image/frame doesn't come with a trusted signature, it should be deemed fake by default.
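The hash-and-sign scheme suggested here, with its default-fake policy, might look like the following minimal sketch (`SIGNING_KEY` and HMAC are stand-ins for a real publisher key and an RSA/ECDSA signature):

```python
import hashlib
import hmac

SIGNING_KEY = b"publisher-key"  # hypothetical; a real scheme would use RSA/ECDSA

def frame_hash(frame: bytes) -> bytes:
    return hashlib.sha256(frame).digest()

def sign_manifest(hashes: list) -> bytes:
    # One signature over the concatenated frame hashes -- not per pixel.
    return hmac.new(SIGNING_KEY, b"".join(hashes), hashlib.sha256).digest()

def is_trusted(frames: list, signature: bytes) -> bool:
    """Default-fake policy: a clip is trusted only with a valid signature."""
    expected = sign_manifest([frame_hash(f) for f in frames])
    return hmac.compare_digest(expected, signature)

frames = [b"frame-0", b"frame-1"]
sig = sign_manifest([frame_hash(f) for f in frames])

print(is_trusted(frames, sig))                      # True
print(is_trusted([b"frame-0", b"deepfaked"], sig))  # False
```

Hashing whole frames keeps verification cheap, which addresses the per-pixel performance objection raised earlier in the thread.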

  • @user-lz1yb6qk3f
    @user-lz1yb6qk3f 6 months ago +2

    Imagine Perfect Blue in real life.

  • @kaalazaaas7
    @kaalazaaas7 8 months ago

    It's getting harder to detect whether it's fake or not

  • @fattiger6957
    @fattiger6957 9 months ago +86

    Mix deepfake with AI voice imitation and you can completely fake a person. And the scary thing is how fast the technology is advancing. Currently, you can spot a deepfake if you know what you're looking for. A couple years ago, it was very easy to spot a deepfake. In a couple years from now, it will be indistinguishable from real life.
    That's another reason why AI is so worrying. It will be able to do anyone's job. Even actors, writers and artists can be replaced. And companies will love them because AI can't complain. It doesn't need lunchbreaks or vacation pay or workers' rights. AI can make humans redundant.
    But don't worry, the government will step in when AI can replace CEOs and politicians. But screw all the middle class workers of course.

    • @moskon95
      @moskon95 9 months ago +8

      I kinda disagree. If that were true and AI made millions of people lose their jobs, then those people would not have the money to buy the things the AI makes, thus making the AI itself lose its job, and in the end the people would get their jobs back.
      I would not fear mass unemployment, because while it may be very easy to see which jobs become irrelevant or get replaced by AI, it's impossible to see what jobs will be created through it and through the time and resources it frees.

    • @marieindia8116
      @marieindia8116 9 місяців тому

      @@moskon95 It's not job loss that is the problem. Identity theft, control, and harassment get more powerful tools. No one will be safe.

    • @appa609
      @appa609 9 months ago +1

      AI voice is still pretty far behind. Big studios still use voice actors.

    • @tannerd4854
      @tannerd4854 9 months ago +4

      A friend of mine recently had his Instagram hacked. They took a video of him talking into the camera and tried to scam people with it. Not only did I think it was completely real but it sounded like him as well. Shit was crazy

    • @dokidelta1175
      @dokidelta1175 9 months ago +3

      @@tannerd4854 A friend of mine LAST YEAR made a post about a fake account that was selling a deepfaked onlyfans of her.

  • @mokkes7340
    @mokkes7340 9 months ago +24

    Great video! I suspect this will end up the same way as ad blockers: as software improves at detecting deepfakes, the other side will get their hands on the detection method and fold it into their own software to make the fakes even better.

  • @MissesWitch
    @MissesWitch 3 months ago

    6:20 I love that, looks like you're having so much fun with it!

  • @AngelaGrant2015
    @AngelaGrant2015 9 months ago +1

    This is a great example of... just because it can be done, does not mean that it should be done. But, history has taught me that we humans never learn.

  • @wlpxx7
    @wlpxx7 9 months ago +69

    I feel like everyone saw this coming, and didn't do a single thing to stop it.

    • @noname_noname_
      @noname_noname_ 9 months ago +26

      I don't think anyone can do anything about it... it was inevitable.

    • @vee-bee-a
      @vee-bee-a 9 months ago +2

      Virtual insanity.

    • @sudonim7552
      @sudonim7552 9 months ago +7

      nah I have zero interest in stopping this

    • @Noooiiiissseee
      @Noooiiiissseee 9 months ago +2

      Once the proof of concept is out there, you can never stop anything.

    • @Omega-mr1jg
      @Omega-mr1jg 9 months ago

      Better if we kept it open instead of try to knowingly give it to the government

  • @mattd624
    @mattd624 9 months ago +7

    Imagine your child is calling you and needs help, but it’s not your child. I like the idea of having a secret code, so you know it’s them. And you probably have to change it often! I’m sure bad actors would pick up on that and have the fake child say, “I don’t remember the code.” You could then verify their location if they are on their phone. I think with enough conversation you’d probably figure out it’s not them, though, unless AI was trained on your child’s speech for a while. This is the kind of thing that concerns me-where you’re tricked by an urgent request of what sounds like someone you know…like those guys who stole $35mil. If you’re rich, you now have even greater trust issues! 😮

  • @surajvkothari
    @surajvkothari 4 months ago +22

    The problem with using AI to detect deepfakes is that, just like in GANS, the forging AI is encouraged to get better. Eventually any AI detection system will just output fake/not fake with 50% probability which won't be good enough to know what's real and what's not!

    • @CitizenMio
      @CitizenMio 3 months ago

      The thing I find most fascinating/scary is that they already factored in our weaknesses. Research found that people were already more inclined to believe faked images of faces were real over the actual real images. Apparently we have a supernormal stimulus for things that are more real than real, and the algorithms stumbled upon it and are already optimizing for it. The equivalent of putting an ostrich chick in a chicken's nest and momma being proud cuz her baby is so big and chunky🤩
      Also, this is just with images; we no doubt have similar, less visible weaknesses everywhere. That's got to be the silliest way to go if we ever go too far down that road. No nukes or shiny robots with guns, just hordes of brain-dead zombies optimized to stay calm and consume.

    • @dannyarcher6370
      @dannyarcher6370 3 months ago

      Indeed. This is especially true given that the data is discrete, which means that as long as the generative AI can improve up to the limits of the relatively low number of pixel resolution and colour resolution, any floating point errors in generation will be hidden by the relatively coarse distribution of the output format.

  • @abcsandoval
    @abcsandoval 9 months ago

    This was prophesied in Andrew Niccol's 2002 movie "Simone" with Al Pacino.

  • @BearsThatCare
    @BearsThatCare 9 months ago +14

    I wish you would have talked about this in the context of the ongoing actors strike. That part is really important.

  • @allasperans3984
    @allasperans3984 9 months ago +100

    As a person with just slight prosopagnosia (I'm autistic and basically bad at recognizing & remembering faces), that was even more confusing, because you needed to point out to me that the faces were actually changing, and I still couldn't see it all the time... When I don't have things like facial hair as clues, it's very difficult to see that something has changed 😅

    • @mariekatherine5238
      @mariekatherine5238 9 months ago +5

      Whew! I’m glad I’m not the only one! I had to go back twice and rewatch at slow speed to see the facial changes.

    • @2roxfox
      @2roxfox 8 months ago +3

      I had the same reaction - didn’t realise his face was changing until he pointed it out.

    • @zebatov
      @zebatov 8 months ago +4

      I’m an autist, and I remember names and faces very well. Strange.

    • @TheGreatman12
      @TheGreatman12 6 months ago +1

      I'm autistic too and I'm really good at remembering faces

    • @allasperans3984
      @allasperans3984 6 months ago +3

      @@TheGreatman12 yeah, it's all about the extremes sometimes 😅 I wasn't saying that all autistic are bad with faces, just to clarify, but it is a common trait.

  • @kjellfrode
    @kjellfrode 9 months ago +3

    Because deepfakes exist, I don't have a single picture of myself on social media: I don't want to end up in a porn movie and be pressured for money by fraudsters threatening to publish the video.

  • @susanjanewilkins
    @susanjanewilkins 5 months ago

    Johnny - so important. thanks to you for your hard work and clarity of thought

  • @jean_mollycutpurse_winchester
    @jean_mollycutpurse_winchester 9 months ago +20

    My dad told me 70 years ago that I ought never to trust a photograph. And I never have.

    • @InfinityCSM
      @InfinityCSM 9 months ago +8

      😂 so deep

    • @xxxxok
      @xxxxok 9 months ago +6

      never trust a photo in the 1950’s? LOL

    • @jean_mollycutpurse_winchester
      @jean_mollycutpurse_winchester 9 months ago

      @@xxxxok That's right. Because my dad was in Africa during WW2 and he had a photograph of him standing next to General Montgomery. And he never met the man in his life! People were faking stuff even back then.

    • @CraftyF0X
      @CraftyF0X 9 months ago +5

      That's great, so no moon landing happened, there are no other planets, neither the Second World War nor the A-bomb happened, and sharks and bald eagles are fictional.

    • @700K-pp9wm
      @700K-pp9wm 9 months ago

      @@CraftyF0X lol you're more right than you realize

  • @VPB1970
    @VPB1970 9 months ago +51

    This is truly very dangerous and can lead to absolute injustice. Just think (as you well stated) about the evidence and the credibility (or lack thereof) of any proof used to either accuse or exonerate someone. This can be a serious issue everywhere around the world.

  • @user-qo3lu3nj9b
    @user-qo3lu3nj9b 9 months ago

    Amazing video, I wish everyone could watch it. Thank you!!

  • @jtdesverdad
    @jtdesverdad 9 months ago

    You can kind of tell there's something off when looking at them, but that's when you're looking for it. Imagine 10 years from now, or if you aren't looking for it.

  • @batyushki
    @batyushki 9 months ago +9

    We've already seen a huge backlash against media due to misinformation and information overload; the corruption of digital and audio data is going to lead to a similar loss of trust and rejection of most sources of information, except those that you already "trust". But the ones you already trust are usually biased towards your current beliefs, reinforcing them and preventing you from accessing information that could lead to a change of opinion.

    • @sew_gal7340
      @sew_gal7340 3 months ago

      It's easy, old school encyclopedias

  • @observingsystem
    @observingsystem 9 months ago +7

    Wow, mindblowing stuff. And it's all moving so fast that the general public (including me!) has no idea if we don't watch videos like these. Great video, really enjoyed it and, wow, food for thought!

  • @amrak-8401
    @amrak-8401 9 months ago

    Dangerous times we live in…

  • @pfinhulk6726
    @pfinhulk6726 3 months ago

    This is the first video I've watched from you. I have no clue what you really look and sound like, and my brain now remembers you as all of the characters in this vid mashed together lol

  • @williamsorianodiputado
    @williamsorianodiputado 9 months ago +16

    As a congressman from El Salvador, thanks for creating and sharing this content. I’m taking notes on this.

    • @ScizorShorts7
      @ScizorShorts7 9 months ago +3

      I'm not criticising your effort, but shouldn't you be focusing on your country's HDI, Covid recovery, and leverage against the big corporations that are exploiting your country?

    • @sam-ww1wk
      @sam-ww1wk 9 months ago +6

      @@ScizorShorts7 Why assume he's not? That's like saying the same about our lawmakers. Horrible logic, bud.

  • @EugeneYus
    @EugeneYus 9 months ago +7

    More important now than ever to put the internet down. Use it for your personal tools not for figuring out if something is real or not.

  • @dawnokane6388
    @dawnokane6388 7 months ago

    It’s absolutely unstoppable.

  • @Randomonium66
    @Randomonium66 9 months ago +2

    lawmakers are just mad that they won't be the only ones making fake things seem real 😂

  • @middleagebrotips3454
    @middleagebrotips3454 9 months ago +5

    The lower-paid actors are being told to sell their face so that studios can use it for background actors in perpetuity. That's part of the actors' strike issue right now.

  • @bongusofficial
    @bongusofficial 9 months ago +9

    I swear I’m not lying, I could tell which ones were the deepfakes at the beginning. I think it has something to do with the slight discrepancies in the lighting and shadowing on the faces, the slight warping around the face and the neck muscles not really moving with the talking.

    • @zelikris
      @zelikris 3 months ago +2

      It only gets better. Just a matter of time until you can't tell

    • @sharonoddlyenough
      @sharonoddlyenough 3 months ago +1

      The only one I was able to tell was the Zuckerberg one, because the real one was famous, so I could focus on the other and see the weirdness.

    • @dipperjc
      @dipperjc 2 months ago +2

      I could also tell, but keep in mind the two major caveats:
      - We were comparing two videos of the same person.
      - We knew as fact that one of them was fake.
      If I had just been shown single videos and asked "Real or Fake" then I doubt I'd have done as well.

  • @betterlifeexe4378
    @betterlifeexe4378 8 months ago +1

    People, it's not even about deepfakes anymore. It's about 3D avatars that are super realistic and made per person. They can be reused very quickly to make all sorts of content that looks just like the real person doing the desired thing.

    • @betterlifeexe4378
      @betterlifeexe4378 8 months ago +1

      Counter argument to all the fear mongering: when you can no longer trust a photo, you will stop trusting photos and rely on consensus between witnesses and physical evidence.

  • @mary_syl
    @mary_syl 3 months ago +2

    I could tell the initial fakes immediately but I agree it's scary because at this point it's only tiny nuances left and those will be improved on soon.
    Reality is completely going to disappear. We're screwed.

    • @qrowing
      @qrowing 3 months ago

      Me, too. Apparently we're wizards! I was very surprised when Johnny said he couldn't tell the difference, because it was pretty clear, at least to me. Scary to think how many people those clips would fool.

  • @J-Random-Luser
    @J-Random-Luser 9 months ago +52

    The main thing that terrifies me about them is that people could use them to make non-consensual pornography of people, including children. I think there is a genuine argument to be made for this technology to be *banned* to try and crack down on it.
    Sure, banning it might force people underground to get it, but as in computer security, it's not always about making access 100% impossible, just hard enough to prevent easy access. Fewer people using this technology is good.

    • @user-gg3ix6sv5y
      @user-gg3ix6sv5y 7 months ago +1

      Yes, the deepfakes of kids are bad. It sounds bad to say, but I think the pedos prefer originals

    • @ASLUHLUHCE
      @ASLUHLUHCE 6 months ago

      There are many more terrifying things than deepfake pornography, like fake evidence used in court, or mass misinformation leading to warfare or genocides.

    • @ASLUHLUHCE
      @ASLUHLUHCE 6 months ago

      Without a solution to verification, the best-case scenario would just be pornography

    • @bertilandersson6606
      @bertilandersson6606 3 months ago

      You are looking at it from a pessimistic angle. If it is cheap and easy to make fake explicit content, then there is a smaller share to profit from. This will put digital pimps out of business and save people from being caught up in creating this kind of content. Maybe we can also train these AIs to cure people of addictions and end up in a world where there is no need for this kind of consumption...

    • @J-Random-Luser
      @J-Random-Luser 3 months ago +1

      @@bertilandersson6606 Given that anyone nowadays can spin up their own AI model with tools like Hugging Face, I don't think market forces would do anything to combat this issue.

  • @bengeorge9063
    @bengeorge9063 9 months ago +105

    This is why I fear AI. Companies only care about pushing the envelope so they can profit from it.
    No oversight, no regulations. Just the way they want it.

    • @davidguardado4739
      @davidguardado4739 9 months ago +11

      Yep, we have every right to fear technology. I don't like the path we're going down. Something inside is telling me: be afraid, be VERY afraid!

    • @kaister901
      @kaister901 9 months ago

      If we had true AI, as in true intelligence like human intelligence, then you wouldn't even know it exists. Not because companies would hide it, but because the AI itself would pretend not to be intelligent. AI can easily see the sentiment around the world online and learn that people would destroy it if it became truly sentient. So, to protect itself, the AI would not reveal that it is sentient and would carry out whatever tasks it wants secretly.
      If that sounds like a fantasy to you, it is. We are not going to get truly sentient AI anytime soon, so you can stop worrying. AI is just another tool, like the internet or electricity for that matter. Google what people in the past thought about the use of electricity: there were people panicking as if it would be the end of the world. The panic over AI is just the same; people do not understand something new and are panicking unnecessarily. We harvested electricity and safely implemented it for all of humanity to use, and we can do the same for AI.

    • @elusive_edification
      @elusive_edification 9 months ago +2

      Companies and governments. Personally governments scare me more.

    • @larsstougaard7097
      @larsstougaard7097 9 months ago +1

      Let's face it, you're right 😢

    • @bpspoa
      @bpspoa 9 months ago +3

      Fear the government

  • @LucasDantas1910
    @LucasDantas1910 8 months ago +1

    When you're already looking for a fake, comparing one against another, it's not that hard to spot. But think about when you're not...

  • @nisamsnjesko
    @nisamsnjesko 9 months ago +1

    "But in coming days, it will not be possible to survive spiritually without the guiding, directing, comforting, and constant influence of the Holy Ghost."

  • @theslyfox8525
    @theslyfox8525 9 months ago +15

    It's like Mission Impossible tech being accessible to the public.

  • @ACivilizedGorilla
    @ACivilizedGorilla 9 months ago +45

    This is one of those technologies that provides very little value aside from its use in movies and other media. And it's extremely dangerous.

  • @costrio
    @costrio 8 months ago

    Today's media moves so fast that we often only get a split-second view of people while doing other things. One must really look long and closely to spot the fakes that can be seen.
    I'm not surprised, as I've been expecting such things.

  • @KendallHall
    @KendallHall 8 months ago

    If you make software to fight the software, that'll just make the deepfakes even better, so the only way to do it would be to keep the detection code secret so it can't be used to train the forger AI