This Threat to Free Software is Worse than I Thought...

  • Published 10 Nov 2023
  • 💸💸 Help me contribute to KDE and do these videos: 💸💸
    Paypal: paypal.me/niccolove
    Patreon: / niccolove
    Liberapay: liberapay.com/niccolove
    Ko-Fi: ko-fi.com/niccolove
    This video as an article: blog.nicco.love/eu-chat-contr...
    Relevant links:
    www.patrick-breyer.de/en/post...
    edri.org/our-work/most-critic...
    chatcontrol.dk/en/
    edri.org/our-work/why-chat-co...
    itpol-dk.translate.goog/hoeri...
    www.internetsociety.org/resou...
    kinderschutzbund.de/wp-conten...
    ioc.exchange/@matthew_d_green...
    docs.google.com/document/d/13...
    techcrunch.com/2023/05/09/eu-...
    proton.me/blog/eu-chat-control
    proton.me/blog/eu-chat-contro...
    mullvad.net/en/chatcontrol/st...
    Stay in the loop: t.me/veggeroblog
    My website is nicco.love and if you want to contact me, my telegram handle is [at] veggero.

COMMENTS • 421

  • @jameshughes3014
    @jameshughes3014 6 months ago +160

    In the US, every grab for power starts with a false "think of the children" argument. And the laws that get passed almost never actually protect children. It's such a common tactic that whenever I hear it, I know something fishy is going on.

    • @birdbeakbeardneck3617
      @birdbeakbeardneck3617 6 months ago +11

      I read it in Mental Outlaw's voice

    • @Pxartist
      @Pxartist 6 months ago

      It's almost as if...
      They never cared about the children to begin with and actually want to surveil the masses.
      Heck, I'm starting to wonder if the database even has actual bad imagery to begin with.

    • @mgord9518
      @mgord9518 6 months ago +15

      Easiest way to get knee-jerk support for whatever authoritarian bullshit you want to pass

    • @giannismentz3570
      @giannismentz3570 6 months ago

      That's true, and it's the same everywhere, not only in the US. It's not always about kids; when you see governments doing something "for your own good", it usually isn't. Most of the time, anyway; you need to think for yourself, depending on circumstances. In this case, yes, messages over all those chat services are not private at all, so they have a point. Then again, the same can be said for SMS and everything else, any comms infrastructure. Governments like to monitor everything, and nothing would dissuade them from doing so. It's power you do not have; they like it, they abuse it, and there is nothing you can do about it. If they ban those internet chat services, then their problem is with the US government. What they are really saying is: look, we want to spy on you, but we want to be the only ones doing so. All this "think of the children" is absolute bullshit. Not only that, but what he says in the video is the truth. Sometimes they try to pass policies that do the opposite of what they are supposed to do, either because of the legislators' incompetence or out of malice, as in intending to pass something nasty under the guise of protecting your rights.

    • @charleshines2142
      @charleshines2142 5 months ago +6

      While the thing with children is understandably disgusting, it would seem they mention it so that people vote for the laws being proposed. Sometimes they even use a rather horrible event that took place 21 years ago. They are just using things like that so that we are more likely to let the new laws pass rather than petition against them. When an unfavorable law is proposed, it is possible to petition against it, and maybe that would stop it. That is, until they present it differently the next time, hoping we don't realize it is basically the same thing they tried to pass before, now with a hot-button topic mentioned in it. It is like the infrastructure bill: there are things in there that really don't need to be there. They want to go so far as to put speed-limiting technology in your car that will disable it if you speed too many times. I am not sure how much room over the limit, if any, such a vehicle would give before warning you; I think up to 5 MPH over would be OK in most situations. Some roads have limits that are way too low anyhow. I wonder what else Biden hid in that bill to make our lives miserable.

  • @danielhalachev4714
    @danielhalachev4714 6 months ago +309

    Solving crime is a huge concern for our society. However, crime shouldn't be combatted by violating our privacy, but by improving society, so that people don't have a reason to become criminals in the first place.

    • @CaedmonOS
      @CaedmonOS 6 months ago +19

      While I agree we should improve society, I don't think it would stop pedos from having a reason to offend

    • @codelinx
      @codelinx 6 months ago +4

      Yeah, this is a slippery slope. They violate "innocent" users' privacy and rights to save others, but it's the lesser of two evils. I would gladly give up some of my privacy to save people or help reduce/prevent acts like this.

    • @ButWhyMe...
      @ButWhyMe... 6 months ago

      EXACTLY! @@Watcher4361

    • @null7936
      @null7936 6 months ago +23

      This has nothing to do with crime; it's just an excuse.

    • @Plantlet-dk6ww
      @Plantlet-dk6ww 6 months ago +20

      @@codelinx ...except it won't solve anything, only give companies and agencies more rights to violate people's privacy... Do you really think they actually care about the children or anyone else? They only care about and see people as data.

  • @DucDigital
    @DucDigital 6 months ago +39

    You could have mentioned the court case of December 8, covered by La Quadrature du Net.
    The French DGSI accused 7 people of conspiracy to commit terrorism because they used privacy-focused tools like Tor, Signal, and Protonmail. They may be right or wrong, but the way using Signal is equated with terrorism in the wording of the accusation is extremely scary.
    Not only that, France has constantly pushed for censorship bills that undermine privacy, like the SREN law, which allows online censorship of any content deemed unfit for audiences in France and aims to end anonymity online.
    I am tired of those incompetent politicians who have no ideas other than to "ban" and to "punish" those who seek privacy.
    Everyone should vote against the CSAR bill in the EU, and please send an email to your EU representatives to say no to this law!

    • @DucDigital
      @DucDigital 6 months ago +6

      And as we all know, as long as those mechanisms are in the law, there is always a possibility of misuse (or, may I say, intentional use) to punish dissent.
      Look no further than China, where netizens have resorted to writing cryptic messages and using emojis to avoid censorship…
      This could be France or the EU if we are not careful.

    • @FalkonNightsdale
      @FalkonNightsdale 6 months ago +4

      Meanwhile in Czechia🇨🇿🇪🇺, government clerks and officers are soft-pressured into using Signal and Tor as part of cyber-security training.
      And that pressure is slowly increasing as DDoS and phishing cyberattacks from Russia become more and more frequent, sometimes even successfully penetrating individual PCs…

  • @AQDuck
    @AQDuck 6 months ago +127

    I would just like to formally apologize for the actions of a single person in my country who pushed this idea forward to the EU.
    We hate her too.

  • @act.13.41
    @act.13.41 6 months ago +38

    If they could go as far as to monitor our thoughts, they would.

    • @samm9196
      @samm9196 6 months ago +6

      Neuralink is on its way...

    • @mgord9518
      @mgord9518 6 months ago +6

      Unfortunately, it'll probably be possible eventually

    • @elder_guardian
      @elder_guardian 5 months ago

      They're working on it. Thought crimes, lol. Stay classy, political psychopaths.

  • @BendyLemmy
    @BendyLemmy 6 months ago +19

    This brings back an old argument: "there's nothing to fear if you've nothing to hide". So why not have everyone install live webcam feeds in their bedrooms and bathrooms to prove there's nothing illegal going on?
    Big Brother is getting very scary.
    Before the Internet took over, we used to communicate using Amigas and Atari ST computers over packet radio... so it seems quite basic that any criminals you really want to catch will simply avoid the newly compromised end-to-end chats and move to more secure lines of communication.
    The impact would fall heaviest on innocent people, who already sound more like criminals when they ask for privacy than people who simply don't expect to be spied on.

    • @boslyporshy6553
      @boslyporshy6553 5 months ago

      If you were born into a world where eyes were always on you, people wouldn't care. That's a cultural matter, given how much people willfully broadcast.
      A plus of the cultural side of it, though, is that anything can be used as a vehicle of privacy, given the relative nature of action. Nonsense or babble becomes advantageous in such a world.

    • @TSteffi
      @TSteffi 5 months ago

      You kind of got the point.
      The real CSA happens almost exclusively on the darkweb, and none of this legislation is going to affect them at all.
      There may be some lazy ones who don't know about the ways to stay safe, but those are few and far between.
      If you are running a specific version of Linux, there is no network traffic whatsoever that is not wrapped in multiple layers of end-to-end encryption sent over the darkweb. Client-side scanning isn't going to work there, and neither is server-side surveillance.
      And worse, the people crafting those laws know exactly that they will not work for that purpose. The whole CSA thing is just a smoke screen. The reality is, they are after political opposition: NGOs, protesters, environmentalists and so forth. We now live in a time where industry is actively destroying our ecosphere, and within a few decades this planet will no longer be able to sustain life. But change would impact profit. So our wise leaders have decided to eradicate the whole human race, together with all life on earth, for the sake of shareholder value. It is inevitable that more and more resistance will arise as conditions worsen. And they are now preparing to suppress any resistance.
      This is NOT about protecting children. It is about protecting capitalism.

  • @insu_na
    @insu_na 6 months ago +77

    What truly annoys me about this kind of legislation is that privacy-invading mechanisms are a huge focus of it, but when you actually look at enforcement you'll find that most countries don't even properly enforce the laws they already have, and the police agencies that are supposed to be dedicated to tracking down CSAM are incredibly understaffed. 99 times out of 100 it's much more effective to contact the web host of a CSAM forum and have them delete all the data and remove access than it is to contact the police.
    A German TV station once ran an experiment: they tracked down CSAM sites, reported them to the police, and followed up on the results 12 months later. Out of 100 sites they reported to the police, I think 3 or 4 were taken down. At the same time they also contacted the web hosts of other such sites, and literally all of those reported websites were deleted within a single week.
    What does the legislature think will happen if it gives the executive even more work for those already understaffed and underfunded departments? Certainly not better results for the "intended" goal. But of course spy agencies will love it.

    • @kuhluhOG
      @kuhluhOG 6 months ago +8

      Also: if you actually ask the policemen (meaning the ones who actually do the work, not the higher-ups), they, surprisingly enough, are also against it.
      While they say it would be useful in some cases, even in their opinion it doesn't outweigh the disadvantages.

    • @Flynn217something
      @Flynn217something 6 months ago +13

      Because it's not about "protecting ze keeds". It never has been.

    • @vocassen
      @vocassen 6 months ago

      That sounds really damning; what's the source / TV station? Kinda don't feel like randomly searching for that

    • @insu_na
      @insu_na 6 months ago

      @@vocassen Ctrl+F

    • @72Yonatan
      @72Yonatan 5 months ago

      Common sense works. Thank you for sharing this.

  • @mx338
    @mx338 6 months ago +15

    My device is not supposed to surveil me in any way; this is just not acceptable.

  • @mx338
    @mx338 6 months ago +29

    The abuse itself takes place within the family or with close relatives in the majority of cases; this just isn't something you can legislate away by fighting a single symptom.

    • @thomaslechner1622
      @thomaslechner1622 6 months ago

      Their goal is not child protection, but total control and enslavement. It is about time for people to understand this!

  • @Zantorc
    @Zantorc 6 months ago +52

    This is like legislators insisting that providers come up with a solution to an NP-hard problem.

    • @ArbitraryCodeExecution
      @ArbitraryCodeExecution 6 months ago +12

      yeah, just casually solve P=NP, what's so hard about that

    • @kuhluhOG
      @kuhluhOG 6 months ago +11

      @@ArbitraryCodeExecution fun fact: to this day we have proved neither P=NP nor P≠NP,
      and both sides of the argument have quite well-known proponents

  • @bobclarke5913
      @bobclarke5913 6 months ago +15

    It's easy to catch the crims if you arrest everyone

  • @Maric18
    @Maric18 6 months ago +22

    These people have their own FTP servers and methods to avoid detection (the way YouTubers say someone "gameovered" themselves so they don't get demonetized), so what makes anyone think that inconveniencing them helps more than it erodes the freedoms of individuals?
    Having investigators search for it, and in general keeping track of children's situations through schools and counsellors actually believing children, would do way more.
    Why do we need to inspect every chat message when victims who go to a teacher either get told that things are tough sometimes, or their parents get called, informed of what their child said, asked if it's true, and if they say no, the child gets sent back to them?

  • @Mantorp86
    @Mantorp86 6 months ago +21

    In this world, where businesses say the data is safe and then process it for business purposes, where people write the most hideous messages but are "normal" in real life, and where governments say they need your data to find criminals but then use it for their own personal gain, it's just better to delete all social apps and go back to static, non-interactive web pages like in '98…

    • @YarikTH
      @YarikTH 6 months ago +6

      Someday it will be illegal to host static non-interactive web pages without a tracking JS module in them

    • @Vicorcivius
      @Vicorcivius 6 months ago

      @@YarikTH there will be a way, until they install the chips in everyone's brains.

    • @user-id8pv7oz9u
      @user-id8pv7oz9u 6 months ago +1

      @@YarikTH Gopher to the rescue! :)

  • @Xankill3r
    @Xankill3r 6 months ago +38

    Given Glaze, and soon Nightshade, for messing with generative AI training, there is absolutely no doubt that any AI scanning for CSAM would simply end in an arms race. An arms race where all we end up with is reduced privacy.

    • @nullvoid3545
      @nullvoid3545 6 months ago +4

      Glaze and Nightshade are too little, too late.
      The best defense against peeps stealing your work, AI or not, is to use a good watermark.
      Even now, AI models generate watermarks that are being used as evidence in court.
      Nightshade- and Glaze-treated images will be filtered from training pretty easily, but a tool that hides a visible watermark in your work could taint the model, so that the risk of accidentally generating one of those watermarks in a way that can be caught later becomes too much of a risk for anyone publishing AI images with money to lose.

    • @Xankill3r
      @Xankill3r 6 months ago +2

      @@nullvoid3545 the watermark thing will become useful only once we get actual courts or lawmakers deciding that using data to train generative AIs without compensation violates IP laws. Till then, the threat of poisoning will at the very least keep companies from using our art without permission, or force them to spend time and money developing auto-detection strategies.

    • @nullvoid3545
      @nullvoid3545 6 months ago +1

      @@Xankill3r Fair enough.
      But while generating a model off of technically public data is not illegal, making use of copyrighted material in the generated images still creates the same liability any human illustrator would face if they drew someone else's IP, such as Mickey Mouse.
      This means that if you create a clear watermark, maybe even trademark it for greater legal ownership, and then hide it inside your images in such a way that you would only spot it if you knew exactly what the mark was and that it was supposed to be there, then any image generated with your hidden watermark could be identified as using your work. Then it's just a matter of proving that the use is too close to your own use and therefore hurts your business, which precludes fair use.
      Also, just to clarify, I think IP rights never should have existed and should not exist.
      I just also think that the ways stock image sites are currently making the most headway toward legally legitimized claims of IP theft through AI diffusion, using their watermarks as indicators to make claims against companies training these models, are some of the only ways AI can be held liable in the instances where it currently does make use of a specific work in its training data to an extent that does not constitute fair use.
      I'd argue this can be compared to human illustrators tracing other drawings, in the sense that it's not always easy to notice, and because of that, traced drawings which are not caught often pass as original, but only if the work they're traced from is not directly compared.
      I think a lot of large AI diffusion models currently mostly make images that, even when held next to their greatest influences in their training data, would clearly be considered original to the point of being entirely non-derivative. I do not think that can be said for every diffusion model, though, when some are intentionally fine-tuned to bias toward a smaller selection of images, often taken from the same illustrator or of the same subject. In this case, having concrete evidence through watermarking that an image takes great influence from your own illustration may be enough to get past the first legal barrier of whether the work is derivative at all, making it a case of whether the work is transformative enough to constitute fair use, which in part considers the effect on the financial value of the original to its owning party. This means that if someone makes a model copying your style, a watermark could act as a defense against claims that the model was actually trained on some other similar illustrator, or a mix of them.
      Then, if you can prove it hurts your ability to sell your work, you win!
      Theoretically.
      Wow, this is long, whoops.

  • @SvalbardSleeperDistrict
    @SvalbardSleeperDistrict 6 months ago +27

    People who, to the wider public, seem needlessly preoccupied with digital privacy and security - with their experiments with FOSS software and privacy/security-focused hardware - while everyone is happily using their Google services and Apple phones, will soon start seeming like common-sense individuals.

    • @Caellyan
      @Caellyan 6 months ago

      They won't; it takes 1984 in full swing for that to happen, and then those same people are the ones who will snitch on anyone trying to free them just to gain some favor, even though they're working against themselves. They can't understand reason, as they're born with a sheep mentality. You can't explain it to them either, because they'll laugh it off and stay conformist, deluding themselves that someone else will take care of it.

    • @OcteractSG
      @OcteractSG 6 months ago +1

      Amen

    • @lua-nya
      @lua-nya 6 months ago +4

      I would like to hope that you're right, but I honestly don't think I have enough hope for that.

  • @Qyngali
    @Qyngali 6 months ago +57

    How about actually doing undercover police work instead, eh? No false positives, no invasion of everybody's privacy without cause.
    The resources it would take to screen all the flagged messages for false positives could be used to fund actual classic police work instead. Just think of the power bill for the AI...

    • @gabrielbarrantes6946
      @gabrielbarrantes6946 6 months ago +3

      Yeah, it's an excuse to charge more taxes... As usual...

    • @freedustin
      @freedustin 6 months ago

      @@gabrielbarrantes6946 why do they need taxes when they can just take ultimate power over all?

    • @mgord9518
      @mgord9518 6 months ago

      >no invasion of privacy without cause
      Yeah, that comes with the assumption that police are actually doing their job correctly. In reality, police are humans, some of which are evil.
      Any power provided with the intent of stopping crime WILL be used to strip as many rights from citizens as possible.

    • @mgord9518
      @mgord9518 6 months ago

      @@gabrielbarrantes6946 Less than half of which will actually go to the program it was budgeted for

    • @1p2k-223
      @1p2k-223 6 months ago

      Since the AI would run locally, it would just drain batteries, and it could probably be reverse engineered too

  • @bennyjensen1
    @bennyjensen1 6 months ago +5

    There would be two additions to this kind of system required for me to have any trust in it: 1. automatically send me a notification whenever someone looks at my information (with at most a 1-year delay), and 2. the ability for me to punish (sue) anyone looking at my information.

  • @iclonethefirst
    @iclonethefirst 6 months ago +8

    What I also see as a threat in this law is that some EU states already don't represent the values of democracy, and ironically not the values of the EU itself. So this law could give them the excuse to build these technologies and then abuse them.

  • @PrezVeto
    @PrezVeto 6 months ago +13

    Kiddie porn today. "Misinformation" tomorrow.

    • @erubianwarlord8208
      @erubianwarlord8208 5 months ago

      More like 5 minutes after they get their jackboot in the door

  • @pettycrimesandmisdemeanors
      @pettycrimesandmisdemeanors 6 months ago +12

    Sweden should disown Ylva Johansson

  • @skyde72
    @skyde72 6 months ago +22

    as an EU citizen I will immediately flee the EU to Serbia

    • @louisfifteen
      @louisfifteen 6 months ago +6

      Hahahahahaha, good luck with that. Or try Hungary. I'm sure Orban will welcome you.

    • @bleack8701
      @bleack8701 6 months ago +1

      I don't believe you even for a second.

    • @aqua-bery
      @aqua-bery 6 months ago +3

      Please, do not flee to Serbia, it's not great there.

    • @aronkvh
      @aronkvh 6 months ago +2

      ​@@louisfifteen this would still come into effect in Hungary, but with much less oversight

    • @louisfifteen
      @louisfifteen 6 months ago

      @@aronkvh yeah, and Hungary is not even a democracy. It should be kicked out of the EU, but we're so busy inviting every criminal East European country to join the EU.

  • @kelvinnkat
    @kelvinnkat 6 months ago +10

    The only way to meaningfully train an AI to scan for unknown CSAM is to give it a bunch of known CSAM to look at. And a stash of CSAM, especially one big enough to train a model on, is something that no one, including a government, should have.

    • @elder_guardian
      @elder_guardian 5 months ago +1

      Also, wouldn't that belong to the victims, not the government? Imagine being abused so that AI models could be trained. Horrible.

    • @liforra
      @liforra 5 months ago

      Well, you could use the CSAM material that's being made by terrible people and save it as hashes, aka PhotoDNA
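The hash-matching idea in this reply can be sketched in a few lines. A minimal toy, using a cryptographic hash for simplicity (an assumption for illustration: real systems like PhotoDNA use proprietary perceptual hashes that tolerate resizing and re-encoding, which this does not implement; the byte strings are made up):

```python
import hashlib

# Hypothetical blocklist of hashes of known images (illustrative bytes only)
BLOCKLIST = {hashlib.sha256(b"known-bad-image-bytes").hexdigest()}

def is_flagged(image_bytes: bytes) -> bool:
    """Exact-match lookup: flipping a single byte evades it entirely."""
    return hashlib.sha256(image_bytes).hexdigest() in BLOCKLIST

print(is_flagged(b"known-bad-image-bytes"))   # exact copy: True
print(is_flagged(b"known-bad-image-bytesX"))  # one byte changed: False
```

This is the trade-off the thread keeps circling back to: exact hash matching has a near-zero false-positive rate but is trivially evaded, which is what pushes proposals toward looser, error-prone matching.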

    • @kelvinnkat
      @kelvinnkat 5 months ago

      @@liforra Well, that's just the same problem then, isn't it? All that changes is the program the computer runs to assess what it's being fed. It still requires feeding the AI a bunch of CSAM, and with how much CSAM there is out in the wild, it would need a hell of a lot of it to make the tiniest of dents in the issue.

    • @liforra
      @liforra 5 months ago

      @@mystic_scythe So they can never send a picture to anyone anymore, because their face is in the picture?

    • @liforra
      @liforra 5 months ago

      @@mystic_scythe idk, the idea is okay, but my problem is mostly that any chat control will violate privacy, and the thing with the face would probably have too many false positives

  • @qlum
    @qlum 6 months ago +29

    The problem with false positives is not to be understated.
    You would need ridiculously low error rates. Let's say that when a match is found, the image is 1M times more likely to be child pornography than a randomly selected image; that seems like a pretty good rate.
    But if only 1 in 10M messages contains child pornography, that would still mean about 90% of what you detect is not. For a hashing algorithm with a known database, matching only exact copies, that is doable, but for a looser match it really is not.
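The base-rate arithmetic behind this comment can be checked directly. A quick sketch using the comment's own hypothetical numbers (a 1-in-10M base rate and a 1M:1 likelihood ratio; neither is a real measured figure):

```python
prior = 1 / 10_000_000        # assumed: 1 in 10M messages contains CSAM
likelihood_ratio = 1_000_000  # assumed: a match is 1M times more likely on CSAM

# Bayes' rule in odds form: posterior odds = prior odds * likelihood ratio
posterior_odds = (prior / (1 - prior)) * likelihood_ratio
p_true_positive = posterior_odds / (1 + posterior_odds)

print(f"{p_true_positive:.1%}")  # → 9.1%: roughly 9 in 10 flags are false
```

Even a scanner this accurate would flag about ten innocent images for every real one, which is the commenter's point about why anything looser than exact matching is dangerous.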

    • @Deathbunnygirl
      @Deathbunnygirl 6 months ago +4

      A lot of these hashes are built from the ICSE database, which private companies like Microsoft have supported poorly. Hashes are a very basic level of detection, built from often older image datasets, and they miss duplicates, edited copies, and the like. AI is a potential solution but currently not fit for service when it comes to CSEM.

    • @vocassen
      @vocassen 6 months ago +2

      Something there is not right; you're saying that 1/1000000 = 1/10.
      But I get the sentiment: there WILL be false positives, and as long as law enforcement is unable to deal with them quickly without making an innocent person's life hell, it is almost as morally questionable to go forward with it as not to.
      We do need a solution, but as long as law enforcement is understaffed, trying to push flaky algorithms isn't going to do any good, just harm.

    • @jamesphillips2285
      @jamesphillips2285 6 months ago

      @@vocassen Look up Bayesian probability. Combining a very low false-positive rate (1:1,000,000) with an even lower base rate (1:10,000,000 CSA rate) gives counter-intuitive results.
      If you look at 10,000,000 images, 1 will be a true positive, while roughly 10 other hits will be false positives.

    • @vocassen
      @vocassen 6 months ago

      @@jamesphillips2285 Heads up, your comment is hidden; I just see it as a reply.
      So I reread the OP, and it's just super easy to misread...
      "it's 1m times more likely to be X than any random image"
      I read that as "it's 1m times more likely to be X than not (instead of being a random image)", whereas to make sense it should read as "it's 1m times more likely to be X than any randomly selected image".
      Now that's a troublesome sentence.

    • @jamesphillips2285
      @jamesphillips2285 6 months ago

      @@vocassen I meant that even if the flagging software is 99.9999% reliable, false positives still dominate when the CSAM images are very rare.

  • @OcteractSG
    @OcteractSG 6 months ago +5

    Client-side scanning in messaging apps accomplishes nothing. All the criminals have to do is use two end-to-end encrypted messengers: encrypt a zip file and send it over one messenger, then send the password for the zip file over the other. People who are trying to stay out of prison will go to that level of effort, but normal people never will.
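The two-channel workaround described above is easy to sketch. A toy illustration only, using a scrypt-derived XOR keystream so it runs on the standard library alone (an assumption for the example; a real evader would use proper tooling like GPG or 7-Zip AES, and every name and byte string here is made up):

```python
import hashlib
import secrets

def xor_crypt(data: bytes, password: str, salt: bytes) -> bytes:
    # Stretch the password into a keystream as long as the data, then XOR.
    # Symmetric: applying it twice with the same inputs decrypts.
    key = hashlib.scrypt(password.encode(), salt=salt,
                         n=2**14, r=8, p=1, dklen=len(data))
    return bytes(a ^ b for a, b in zip(data, key))

archive = b"bytes of the zip file's contents"
salt = secrets.token_bytes(16)
password = secrets.token_urlsafe(16)

ciphertext = xor_crypt(archive, password, salt)
# ciphertext + salt go over messenger A; the password goes over messenger B.
# A scanner watching either channel alone sees nothing it can match.
assert xor_crypt(ciphertext, password, salt) == archive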

    • @prometheus9096
      @prometheus9096 6 months ago +2

      This is exactly what I was thinking. It's not that hard to circumvent this system.

  • @kaz49
    @kaz49 6 months ago +9

    Some time in the near future, Tor might be the only option for actually private messaging.
    Dang, that's dark.

    • @gabrielbarrantes6946
      @gabrielbarrantes6946 6 months ago +5

      Not quite; you can send private messages through any channel, you just need to perform the encryption manually...
      For example, an app could be developed that creates a private and public key, sends the public key to your recipient using WhatsApp or whatever, encrypts the message, and sends it using WhatsApp or whatever... On the other end, the same app can handle the decryption...
      This would work as long as they don't take full control of what you can and cannot run/install on your device... Oh wait, lol...
      Jokes aside, there is always a way: you can root your phone, not buy anything from Apple or Microsoft, or just run it on a Plasma/Linux device... Hopefully they will never enforce restrictions on what you can do with your hardware...
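The key-exchange app sketched in this reply boils down to Diffie-Hellman over an untrusted channel. A toy version with a deliberately small prime (assumption: a real app would use X25519 through a vetted library; this math-only sketch offers no actual security):

```python
import secrets

P = 2**127 - 1  # small Mersenne prime: fine for the math, useless for security
G = 3

a = secrets.randbelow(P - 3) + 2   # Alice's secret, never transmitted
b = secrets.randbelow(P - 3) + 2   # Bob's secret, never transmitted
A = pow(G, a, P)                   # sent in the open over the chat app
B = pow(G, b, P)                   # sent back in the open the same way

shared_alice = pow(B, a, P)        # Alice combines Bob's public value
shared_bob = pow(A, b, P)          # Bob combines Alice's public value
assert shared_alice == shared_bob  # same secret on both ends
```

Both sides end up with the same shared secret while the chat service only ever sees A and B, which is exactly why scanning the transport cannot reach the plaintext once users encrypt before sending.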

    • @seedney
      @seedney 6 months ago +4

      Tor is not private if you are using compromised relays. You can also be correlated, etc. There isn't any 100% safe methodology for anything... You aren't even safe from falling just because you have good legs...

    • @1p2k-223
      @1p2k-223 6 months ago +1

      Yep

    • @paulojose7568
      @paulojose7568 6 months ago

      @@gabrielbarrantes6946 If they take over the hardware, we're fucked. Makes me worry about Microsoft pushing TPM

  • @RaresCelTare
    @RaresCelTare 6 months ago +9

    WhatsApp uses E2EE? I call BS on that.
    How can one verify such a claim? From the fact that it's a Meta-owned company? From the fact that the source code for the app, and how it encrypts messages, is definitely open?
    And even if they do have E2EE, how can you tell that the app doesn't send the encryption keys to the server alongside the messages?
    Leave Meta while you still can. Signal is one of the few good options when it comes to private messaging.

    • @Mark-kt5mh
      @Mark-kt5mh 6 months ago +3

      If you didn't compile the app yourself, it's unreasonable to believe the E2EE claim

  • @anon_y_mousse
    @anon_y_mousse 6 months ago +6

    We all know that these measures aren't about protecting children; they're entirely about maintaining control over the populace. However, I would just get a Linux phone if I actually used my phone for more than phone calls and playing that colored-dot connection game. In fact, the only pictures they'd find if they searched it would be screencaps of 99%-scored colored-dot screens, taken so I could hit the refresh button and ensure a 100% perfect score, and pictures that might help them diagnose my bowel health.

  • @dandeeteeyem2170
    @dandeeteeyem2170 6 months ago +17

    Won't this just drive abusers to make new, unique videos each time they share? That would increase both supply and demand. How about law enforcement spend less time investigating drug crime and other low-hanging fruit, and invest in resources to physically investigate these types of crimes? Did you know they also changed the law to allow police to "remotely alter" a target machine? So they can now plant evidence on a target machine, for political or any other reasons they see fit.

    • @ButWhyMe...
      @ButWhyMe... 6 months ago +3

      Pretty much.

    • @dandeeteeyem2170
      @dandeeteeyem2170 5 months ago

      @@mystic_scythe no sh1t? :/
      I have heard they changed the law to lawfully allow police to "alter files on a target's computer"... So, legally allowed to plant evidence?
      All I see is more and more erosion of the safeguards we had in place to protect us from the few bad apples. They mess with privacy laws under the guise of "not committing a crime? Then what have you got to hide?"
      Oh, I don't know; ask Julian Assange. Ask Andrew Wilkie. Ask people in the witness protection program whose identity and location get revealed by corrupt officers.
      There's so much more to lose when we give up privacy. We lose the ability to hold government to account.
      All I see is misuse of these laws for draconian control that we can never undo. They point to the numbers of arrests of child predators as evidence it's being put to good use, but when you look at the rates of convictions, rather than arrests, it's clear they need to direct more resources to old-fashioned investigative methods.
      They spend far too much on drug crime. Drugs don't make people break other laws; rather, people who don't respect laws are likely to use drugs too. The vast majority of drug users would never break another type of law, and the police don't see those people in society because they don't run into them. Legalise drugs and organised crime loses its income model overnight, while taxing them would pay for the rehab centres and healthcare necessary to help people with problematic use.
      Violent crime, CSAM, assault, rpe, homicide, theft: those are the crimes police should be focused on to have the most positive impact on society, and it's becoming increasingly obvious why they target the low-hanging fruit instead. Allocate the resources where the public wants their taxes spent; then we'd see real positive change.

  • @yasir_7530
    @yasir_7530 6 місяців тому +2

    16:20 Why did I hear him say "big chungus" 😟

  • @Dr-Zed
    @Dr-Zed 6 місяців тому +14

    Everyone should start using Matrix instead of proprietary messengers.

    • @niccoloveslinux
      @niccoloveslinux  6 місяців тому +19

      Just because a project is open source doesn't mean it's not affected by this legislation!!!

    • @Dr-Zed
      @Dr-Zed 6 місяців тому +1

      ​@@niccoloveslinuxTrue, but at least it'll be known what exactly the software does.

    • @guss77
      @guss77 6 місяців тому +7

      ​​@@Dr-Zed That's incorrect. I'm assuming that whatever chat-scanning software gets installed on a Matrix server will not be open source. The service provider gets the copyleft chat software, then installs the government-mandated proprietary snoopsoft on top, and that does not breach the copyleft license, as long as the provider doesn't then take the combined software and distribute it to other people, which they have no reason to ever want to do. Copyleft will not save you from not knowing what the snoopsoft is doing. But that said, Matrix isn't even copyleft: it's APL, which most definitely allows people to add non-FLOSS components and then sell the combination without providing the sources.
      I'm guessing this will be a successful business model in Europe: "want to use open source chat software? Buy law-compliant software from us!"

    • @Dr-Zed
      @Dr-Zed 6 місяців тому +1

      @@guss77 If I am in control of the Client, which is always possible with Matrix, it doesn't matter what the server does. That's the whole idea of e2e encryption. The server can try to scan my encrypted messages all day long, why should I care?

    • @thetechguychannel
      @thetechguychannel 6 місяців тому

      @@guss77 The best solution is to host a forked "black-flag" version of Matrix on a server one controls. This is what I do to keep secrecy in a very small group of people without any federation ability. It's a closed loop where only we can talk with encryption and time-to-death on each message. Reverse proxy through onion routing ensures that the server's location cannot be super easily ascertained, and no records are kept after a few days. If we don't chat for a while, the server is completely a clean slate. That way, if one member appears in court about another member's statements, that person only has their word to go on it.
      Maybe this server may become illegal in the future in the EU, but you can't get services taken down when those services are outside the EU. You gotta do what you gotta do when you have to organize. Remember, they did this to coffee houses in the 1700s because people were chatting about stuff that displeased the powers at that time. The black-flag servers are very much akin to those coffee houses.

  • @enkiimuto1041
    @enkiimuto1041 6 місяців тому +5

    The irony is that by having those hashes stored on phones, it's just a matter of time before people make content that avoids those hashes, because they have a whole database of which hashes to avoid.
    It is pointless.

    • @erubianwarlord8208
      @erubianwarlord8208 5 місяців тому +1

      and/or creates a poisoned hash out of an innocent image and drops it into some poor schmuck's lap to ruin their life with a false positive

  • @PWingert1966
    @PWingert1966 6 місяців тому +9

    Microsoft's commenting policy is very arbitrary and suppressive as well. I can't mention the name of the place I live, company names, the names of software packages, or certain verbs and nouns such as gun, knife, and hunt. This can be exceedingly difficult if you play online games such as CS:GO or other weapon-based online games.

    • @ShoMinamimoto314159
      @ShoMinamimoto314159 6 місяців тому +5

      such as being unable to say "he's in the basement" because it contains "semen".

    • @PWingert1966
      @PWingert1966 6 місяців тому +5

      @@ShoMinamimoto314159 Oh you mean the cement block caisson for framed timber structural support of a residential dwelling where the person being discussed is currently residing??🤪

    • @stephenwalker6823
      @stephenwalker6823 6 місяців тому

      @@ShoMinamimoto314159 Residents of Scunthorpe had problems for years.

  • @Beryesa.
    @Beryesa. 6 місяців тому +7

    Now if only the EU would start laying the groundwork for a credit system 😬

  • @genstian
    @genstian 5 місяців тому +1

    Also, regarding secret algorithms on client devices: "everything is open source if you read assembly".

  • @Beryesa.
    @Beryesa. 6 місяців тому +13

    Imagine Meta/WhatsApp keeping the hashes for such ok/no/yes/never phrases to make more sense of the metadata and sell more ads :|

  • @ferd9438
    @ferd9438 6 місяців тому

    Wow, such a good job! Thanks so much for this breakdown and deep dive.

  • @IamTheHolypumpkin
    @IamTheHolypumpkin 6 місяців тому +4

    And even if they used a completely secret CSS hash function, how long until the function is reverse engineered and published? I would give it a month or two. You can't keep code a secret!

  • @igorgiuseppe1862
    @igorgiuseppe1862 6 місяців тому +5

    If the scan is done locally, then someone with technical knowledge (e.g. a hacker) can scan any file they want to make sure it won't get detected before sending it onward. In that case it will be quite easy to avoid detection, which makes it useless anyway.

  • @DeanHoover
    @DeanHoover 6 місяців тому +2

    Keep doing what you're doing Nicco!

  • @eldaria
    @eldaria 6 місяців тому +5

    Easy to make sure you can read everything: don't hash messages, hash each individual word. Then you have a "translation" and can easily revert whatever was sent. Make sure to also do fuzzy hashing so that your database can link hashes of similar words. This would be very easy to do with AI. So as soon as this is in place, all privacy is gone.

    • @jakubrogacz6829
      @jakubrogacz6829 5 місяців тому

      Except all it takes is someone using an asymmetric cipher to pre-encrypt their message, and then you're looking at someone sending random blobs of 0s and 1s.
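The per-word hashing weakness described above can be sketched in a few lines. Everything here is hypothetical (a toy wordlist, plain SHA-256 as the per-word hash): it simply shows that hashing individual words amounts to a substitution cipher, which a precomputed lookup table reverses instantly.

```python
import hashlib

def word_hash(word: str) -> str:
    """Hypothetical per-word hash as described in the comment above."""
    return hashlib.sha256(word.lower().encode()).hexdigest()

# Whoever holds the hashes builds a reverse lookup table from a wordlist once...
dictionary = ["meet", "me", "at", "the", "station", "tonight", "hello"]
rainbow = {word_hash(w): w for w in dictionary}

# ...and then any stream of per-word hashes decodes straight back to plaintext.
intercepted = [word_hash(w) for w in ["meet", "me", "at", "the", "station"]]
recovered = " ".join(rainbow.get(h, "?") for h in intercepted)
print(recovered)  # meet me at the station
```

A real attacker would use a full dictionary plus fuzzy matching, but the principle is identical: the smaller the hashed unit, the easier the reversal.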

  • @dontmindme8709
    @dontmindme8709 5 місяців тому

    Thanks for this video! I've been following these developments closely and have reached out to politicians. It's tough though, because decisions and proposals have been delayed, which has made it difficult to know whom to contact and when.
    Also, I'm happy to see you talking about the election. This comes at the perfect time to deal with this and get lasting change! I hope the Pirate Parties will do well.

  • @Khytau
    @Khytau 6 місяців тому

    Congrats!
    That's very well researched and explained, especially how legal assumptions most often lead to absolutely no technical guarantee of privacy; as long as governments, providers, and private actors technically can, they most likely will anyway.
    And there are indeed so many levels of wrong with these hash methods.
    We should indeed question who is in charge of building child abuse material datasets in the private sector, which joins the broader question of platform moderation labour exploitation, which only makes harmful content invisible to the average consumer and limits its propagation to underground channels.
    Techno-solutionism is not the solution.
    More surveillance tools will eventually be defeated by the so-called targeted "bad guys", but citizens will end up paying the price and being constantly scrutinized.
    We don't save children with privacy-endangering proprietary software; we just keep making the problems more hidden, untold, and hard to investigate.
    If we really wanted to achieve child protection, we wouldn't go after people's phones; we would go against pedocriminal culture and toward education and prevention, but it's taboo because it's unsettling.
    In the meantime we protect pedocriminals, especially here in France, where there's quite a culture of it: from famous writers to famous TV hosts and guests, to school directors and teachers, church representatives, sports trainers. We've had our share of scandals in the past couple of years alone, and not a single decade without its own high-profile affair.
    It's fucking depressing how politicians' statements and actions don't care about victims at all and just want to profit off selling mass surveillance solutions right when fascism is making a huge comeback.
    Sorry for my explicitly heated comment, feel free to moderate it if needed, but I had to get this out. Thanks again for your work!

  • @FredPilcher
    @FredPilcher 6 місяців тому

    An excellent analysis - thanks Nicco!

  • @JoeJoeTater
    @JoeJoeTater 6 місяців тому +4

    I don't know how perceptual hashes work, but I wonder if there's a way to forge perceptual hashes, kind of like how Glaze forges image classification. Of course, it's concerning that people could use it to hide CSAM. More concerningly, people could use it to send regular images with a forged CSAM perceptual hash to a victim, so the victim gets flagged as having (or even sending!) CSAM on their device.

    • @niccoloveslinux
      @niccoloveslinux  6 місяців тому +5

      I literally talk about this in the video!

    • @JoeJoeTater
      @JoeJoeTater 6 місяців тому +1

      @@niccoloveslinux ye, I was typing while watching. :P
      Well, I recall that you mentioned actual CSAM could be altered to change its perceptual hash enough to avoid detection (cause a false negative). I don't recall you talking about whether an arbitrary image could be altered to match the perceptual hash of a target image (cause a false positive).

  • @Mightydoggo
    @Mightydoggo 6 місяців тому +3

    European govs be like: "Hey, if the States and China can legally spy on you as much as they want, we want to join in, too!"

  • @docopoper
    @docopoper 6 місяців тому +16

    I really wish I could vote for the Pirate Party. Unfortunately they have no candidates in Ireland.

    • @iclonethefirst
      @iclonethefirst 6 місяців тому

      And had some huge problems with sexual assaults

    • @OcteractSG
      @OcteractSG 6 місяців тому +3

      Your country is surrounded by water, but you have no pirates. Is that really good fortune, or is it just irony? :P

    • @docopoper
      @docopoper 6 місяців тому +2

      @@OcteractSG That is kind of a funny point actually.

    • @1p2k-223
      @1p2k-223 6 місяців тому

      @@iclonethefirst Did you click reply to the wrong section? (I can't see the connection between the 2 topics)

  • @DavidAlsh
    @DavidAlsh 5 місяців тому +3

    I agree on the necessity of combating crime, but forcing chat services to reveal user messages only really affects non-criminals.
    User-space encryption (like PGP) would easily bypass this, has been used successfully by criminals for decades, and I'd imagine the remaining criminals would simply adapt.
    Do they have any evidence that this regulation would accomplish anything?
    In my view, they should address crime at the source rather than focusing on regulating our general means of communication.

  • @betapacket
    @betapacket 6 місяців тому +3

    8:58 You could just not send the request, because, remember, it's the client™. Although the server could add a check that the client sent the request, that the hash of the image matches one of the attachments, and that all attachments have also been sent to the image-scanning endpoint.

  • @seedney
    @seedney 6 місяців тому +4

    If hashing happens on-device, can't we just compile the software without that functionality? So we get yet another junk bloatware process running, and criminals will just use other software?

  • @OcteractSG
    @OcteractSG 6 місяців тому +1

    This legislation demonstrates a lack of understanding regarding the process of stopping crime. When you implement laws that define what is to be searched, criminals know immediately not to operate there.
    This kind of crime cannot operate in a vacuum-it must be communicated between individuals who place some level of trust in each other. That trust is the weak point to be exploited. The inability to trust is what will deter crime.
    How do we abuse criminals’ trust and scare them out of crime? Undercover police work. When the dark world of CSAM is bristling with undercover agents (or even AIs working on their behalf-finally a good use for the technology!), we will see this problem diminish.

  • @SirSomnolent
    @SirSomnolent 6 місяців тому +7

    I am sure the impetus for this was completely legitimate. It had nothing to do with said governments experiencing record unrest and political turmoil during that time. It's about the children.

    • @thomaslechner1622
      @thomaslechner1622 6 місяців тому +2

      I guess you're being sarcastic, right?

    • @1p2k-223
      @1p2k-223 6 місяців тому

      @@thomaslechner1622 I think so too

  • @RobMoerland
    @RobMoerland 6 місяців тому

    Thanks

  • @mx338
    @mx338 6 місяців тому

    When they talk about hosting providers, do they mean people who host Nextcloud or WordPress servers, or companies like Hetzner and AWS, who just provide compute and block storage?

  • @pablogonzales6194
    @pablogonzales6194 6 місяців тому +1

    Hi, do you know if they also ship the PineTab2 with the RK3399 to Italy?

  • @RandomGeometryDashStuff
    @RandomGeometryDashStuff 6 місяців тому

    18:40 Is a popup like this the solution?
    "The message you're about to send will be sent to the recipient AND to the app creator."
    [send to both] [don't send]

  • @RandomGeometryDashStuff
    @RandomGeometryDashStuff 6 місяців тому

    11:32 Does Launchpad (Ubuntu PPAs) count as an app store? It's more like an app store store,
    assuming app store = repository.

  • @ndchunter5516
    @ndchunter5516 6 місяців тому +1

    ...and then we keep hearing about intelligence agencies, or just the police, who already had information about planned terrorist attacks and did nothing with it. The problem isn't having the info, it's how bad agencies are at following up on it due to lack of staff or sheer incompetence (in ability or permissions). Also, running image classification on mobile devices eats up battery life in record time; I don't think it would be feasible to have reliable detection on those devices without completely bricking them. Also, nothing would stop me from running a homebrew OS on my phone...

  • @sjoervanderploeg4340
    @sjoervanderploeg4340 6 місяців тому +3

    The funny thing is that these rules are made up by people who have no clue what they are talking about, and it is all driven by fear without any reasoning.
    If you want to combat grooming, you do not need to screen everyone's messages at all.

  • @JustArion
    @JustArion 6 місяців тому +1

    I might add that some privacy-preserving solutions already exist: some companies create an encryption layer on top of cloud storage for you, be it end-to-end or encrypted-at-rest on your behalf. That means that when you view your Google Drive you'll only see encrypted data; you then use your decryption software to decrypt and use the data. The issue is that you'll need to go through this third party's apps to use the drive. Ideally this adds a layer of abstraction that large mega-corporations can't track. This layer should be open source, and you should hold the sole means of encrypting/decrypting. You can back those keys up anywhere you see fit, be it behind a master password or simply in a "super-important-password.txt".
    Politically speaking, one should take an absolute stance on privacy, since over time government can erode even your most basic rights. It might be controversial to say, but by making sure that privacy is protected even where it implicates CSAM, you'll know for a fact your fate and future are not in the hands of a corporation or a government where your only say is but a fraction of hundreds of millions.

    • @prometheus9096
      @prometheus9096 6 місяців тому

      Or you just run your own cloud with a raspberry pi at home. It's not that hard.

  • @MrBluelightzero
    @MrBluelightzero 6 місяців тому +3

    If two people are having a private encrypted conversation and one of them decides to hand the conversation to law enforcement:
    Is it possible to prove that the messages were in fact sent by the other person (as opposed to the informant's phone being manipulated to make it look like they were)?
    Does WhatsApp store the information needed locally to prove message origin?

  • @niewazneniewazne1890
    @niewazneniewazne1890 6 місяців тому +1

    Boys, it was nice knowing y'all; it's time to get locked away forever in a psychiatric ward after my psych finds out

  • @m4rt_
    @m4rt_ 5 місяців тому

    Also, if the hash is not checked on your device, they could just arbitrarily tell your client to send over the unencrypted messages, and we wouldn't know whether it was a false positive or abuse (since they would need some way of telling the client to send content for manual review).
    Also, if hackers modify where the hash gets sent, they could abuse this to get all your messages instead of just the random ones that happen to match their modified database.

  • @Alice_Fumo
    @Alice_Fumo 5 місяців тому

    Oh yeah, btw, I'm pretty sure that if I were getting hashes of all messages, I would be able to reverse engineer the messages just from the statistical frequency of hashes I've seen, putting them in the context of other hashes I've already reverse engineered.
    Maybe not everything, but all common phrases. Then I could also make guesses as to the contents of a message in between a bunch of already-known messages, so I'd know roughly what was said and AI-generate a bunch of ways to phrase the in-between until I get a matching hash.
    I mean, if a single modern GPU cracks 12-character, somewhat-high-entropy password hashes in a few hours, giving out hashes of anything shorter than a wall of text is entirely insecure.
    If the implementation sends hashes on a per-sentence basis, you might as well just send plaintext at that point.
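The guessing attack described above is easy to sketch. Assuming (hypothetically) that the scanner reports a plain SHA-256 per message, short and predictable messages fall to simple enumeration; the phrase templates below are invented for illustration:

```python
import hashlib
from itertools import product

# Hash the scanner would report for the victim's (unknown) message.
target = hashlib.sha256(b"see you at 8").hexdigest()

# The attacker enumerates plausible short messages and compares hashes.
openers = ["see you at", "meet me at", "call me at"]
times = [str(h) for h in range(1, 13)]

for opener, t in product(openers, times):
    guess = f"{opener} {t}"
    if hashlib.sha256(guess.encode()).hexdigest() == target:
        print("cracked:", guess)  # cracked: see you at 8
        break
```

A real attack would use phrase corpora and GPU hash rates (billions of guesses per second), which is why a deterministic hash of short text leaks essentially as much as the plaintext.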

  • @Jad2410
    @Jad2410 6 місяців тому +2

    Time and time again, the protection of children falls back on their parents. From smartphones to gaming consoles, they all have parental controls that allow parents to monitor and restrict what a child can see and do. The only government action that should be taken is that parents are required to implement those controls and penalized for not doing so. If you're a grown adult with kids and you can't figure out parental controls, you shouldn't be giving your kids access to any type of tech.

    • @ButWhyMe...
      @ButWhyMe... 6 місяців тому

      I disagree.

    • @Jad2410
      @Jad2410 6 місяців тому

      @@ButWhyMe... Part of the reason the government keeps getting involved is that parents push the blame onto others instead of taking responsibility for their kids. For example, if an adult game comes out, parents blame the developers instead of restricting their kids' game console access. Also, predators online are only able to target kids because parents aren't watching what their kids are doing online.

    • @ButWhyMe...
      @ButWhyMe... 6 місяців тому

      @@Jad2410 I don't really feel like debating bruh. Predators wouldn't even exist if it wasn't for parents abusing youths.

    • @Nina-cd2eh
      @Nina-cd2eh 6 місяців тому +1

      But saying that "protection of children falls back on their parents" doesn't actually provide any solution. This is vague gesturing. Saying "parents should do X or shouldn't do Y" doesn't actually address CSA or any issue at all. What specific policy would help, is what you should try to think about instead.

    • @Jad2410
      @Jad2410 6 місяців тому

      @@Nina-cd2eh Parental involvement in kids' digital lives, and in their lives in general, is a better mitigator of all these issues. A lot of parents today just put their kids in front of the TV, give them an electronic device, or send them to school willy-nilly without sitting down with them and explaining boundaries. As a parent you should know who your kids are friends with and whether their family values align with yours; you should know what type of digital content they like and the risks that come with access to that content; and you should know what your kid is being taught at school by asking them on a daily basis. If you let the government get too involved in how you raise your kids, you can't complain when the government does things you don't agree with or takes freedoms from you and everyone else.

  • @seedney
    @seedney 6 місяців тому +2

    So... if hashes can match different images, what about putting people in jail when we don't have 100% proof of abuse?

    • @ShaLun42
      @ShaLun42 5 місяців тому +2

      It should be easy to do.
      1. Find some person who is 18 years old but looks younger.
      2. Record a video with them, where they first show their face and passport, then show their cleanly shaved private parts. This video will prove YOUR innocence if you are accidentally caught with the photos on your devices.
      3. Take a frame of this video where they show their private parts.
      4. Use an AI model to mutate the image in such a way that the visible changes are minimal, but the hash changes to that of known CSAM.
      5. Plant the image on some server. The server should serve this image only to the specific IP of your victim; everyone else gets a completely different image. It's achievable via a simple PHP script.
      6. Send the link to the victim. Automatic link scanning will show nothing (because the scanning bot uses a different IP than the victim's).

  • @CatMeowMeow
    @CatMeowMeow 5 місяців тому

    If you were to have the hash of a text message, I imagine it would be quite easy to crack. Text messages usually contain only a very small set of words in predictable combinations. Combine that with a GPU hash cracker, a few hours, and government access to huge amounts of compute, and most text messages would likely not stand a chance imo.

  • @brokegamer353
    @brokegamer353 6 місяців тому

    Can we use different themes on different virtual desktops in KDE Plasma, and if so, how? Also, after changing the taskbar to look like a dock, can we set it so that when any app opens in a maximized window, the dock also expands and looks like a taskbar, without changing the taskbar apps' location?

  • @azraelvrykolakas157
    @azraelvrykolakas157 6 місяців тому

    I'm still surprised that they don't have some sort of online age verification card that would be sold in the cigarette aisle and would function similarly to those gift cards you can get for your game console.

  • @tacticalassaultanteater9678
    @tacticalassaultanteater9678 6 місяців тому +1

    A more fundamental issue here is that these approaches antagonize open platforms. Even if alternative clients remain legal for the time being, they will become the latest scapegoat for the child pornography epidemic. It's not clear that this is a slippery slope, but seeing the rise of authoritarian governments across Europe I'm worried that it might be.

  • @DamjanDimitrioski
    @DamjanDimitrioski 6 місяців тому +1

    If they force us to let them read our messages, after each message to a friend I will say how much I hate the government :D.

    • @SuperPupperDoggo
      @SuperPupperDoggo 5 місяців тому +1

      If we all say we hate the government, that would sure be interesting for them.

  • @ronaldcoleman9370
    @ronaldcoleman9370 5 місяців тому +1

    This is the sole reason Apple has put neural engines and machine learning into their computers!!! It will also end end-to-end encryption!!!!

  • @user-wj7sm9bm6p
    @user-wj7sm9bm6p 6 місяців тому +2

    Part of having a free society is accepting you are going to have a slightly more violent one.

    • @sarthakgothalyan8952
      @sarthakgothalyan8952 6 місяців тому

      Actually it might be the opposite. If others can speak their minds, we don't have to speculate about their intentions and be anxious, so there'd be less violence.

    • @Nina-cd2eh
      @Nina-cd2eh 6 місяців тому +1

      A more free society, by definition can't be a "more violent one". Unfree societies are the ones that rely on violence, whether it's to oppress or to revolt. True freedom is not having to worry about violence, but for individuals to be prosperous, or simply put, happy.

  • @jandraelune1
    @jandraelune1 6 місяців тому +2

    This will just end up being another member in the war on ads, trackers and scripts.

  • @christianmartin8751
    @christianmartin8751 5 місяців тому

    All these complex regulations and technologies, yet they introduce such a simple loophole!
    With hash checks on your own device, you just have to use a private encryption system to encrypt images and text and then send the encrypted data through the app. Et voilà !
    Your system, the app provider, and the government will all think you are only doing legal things, since the hash of your encrypted data will never be recognised.
    Once again regulators do not understand anything about what they are doing: only law-abiding people will be spied on, and real criminals will get away with it. What a failure.

  • @modellking
    @modellking 6 місяців тому

    Why no CSS on the receiving side as opt-in?

  • @false_positive
    @false_positive 6 місяців тому

    Who needs those encryption and hashing shizzlewizzles when your phone's AI sees your screen and reports content to the center?

  • @tothandras8160
    @tothandras8160 6 місяців тому

    background music is quite annoying.
    otherwise, great video. thank you!

  • @dustysoodak
    @dustysoodak 4 місяці тому

    I bet there is a department somewhere that publishes and/or spreads illegal content specifically for “think of the children” appeals.

  • @m4rt_
    @m4rt_ 5 місяців тому

    Technically, if the age verification were done directly in the Play Store, you could avoid some of it by using a third-party app store.

  • @JoeJoeTater
    @JoeJoeTater 6 місяців тому +1

    Does this legislation only apply to proprietary software? It seems like it would be easy to get around this by simply using open-source, peer-to-peer chat software.

  • @nir8924
    @nir8924 6 місяців тому +2

    If such rules are approved, run and invest in Kodak and Fuji 😊

  • @lvmlvm1729
    @lvmlvm1729 6 місяців тому +1

    On-device AI scanning means you need a powerful device with "AI accelerators", i.e. a top-of-the-line expensive phone.
    So what is the solution for older or cheaper phones that cannot run the AI scan because of hardware limitations?
    Make them illegal to own?
    Prohibit installing chat apps on them?
    Force them to send all your messages to some company to scan?

  • @alexisandersen1392
    @alexisandersen1392 5 місяців тому

    "It should be impossible to produce the same hash value entering different input."
    Actually, if a hash function produces hash values smaller in size than the data being hashed, there will inevitably be overlap. It is impractical to use a hash as a unique identifier. This is something I've seen misunderstood all over the place, and it's alarming. The politicians do not comprehend how encryption and hashing work. This fundamental misconception will lead to ALL of these arrests based on hash values being called into question. A hash function is similar to a checksum: it's actually really easy to make a false positive. If you know the hash function and you know the hash you're trying to match, you can trivially generate junk data to fit. This is why cryptographic hash values still need to be treated as secured secrets.
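The pigeonhole argument above can be demonstrated directly by truncating a real hash to 32 bits: distinct inputs collide after roughly 2**16 attempts (the birthday bound). The truncation below is deliberately artificial; a full 256-bit hash makes such a search infeasible, but the principle that more possible inputs than outputs forces collisions holds for any hash.

```python
import hashlib

def short_hash(data: bytes) -> str:
    """Artificially truncated hash: keep only 8 hex chars (32 bits)."""
    return hashlib.sha256(data).hexdigest()[:8]

# Birthday search: hash distinct inputs until two share a 32-bit hash.
seen = {}
collision = None
for i in range(500_000):
    h = short_hash(str(i).encode())
    if h in seen:
        collision = (seen[h], i)
        break
    seen[h] = i

print(collision)  # two different inputs, same 32-bit hash
```

The same search against a full 256-bit output would need on the order of 2**128 attempts, which is why hash length, not the pigeonhole principle alone, determines practical collision resistance.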

  • @aronkvh
    @aronkvh 6 місяців тому +1

    everything is worse than we thought

  • @seedney
    @seedney 6 місяців тому +1

    If only we could have legislators who understand technology...

  • @christianknuchel
    @christianknuchel 6 місяців тому +1

    And, of course, Zensursula is involved. If this continues, her role in history will be that of a small Hindenburg: one of the people who paved the way for the next fascist dictatorship.

  • @rudde7251
    @rudde7251 5 місяців тому

    You do know you can extract the gist of the original picture from a perceptual hash? Not at full quality, of course, but a perceptual hash is essentially a tiny thumbnail of the picture, often cropped and in grayscale. So no, it's not possible to scan pictures in a way that keeps E2EE intact.

  • @LViruz
    @LViruz 5 місяців тому

    Imagine if you could just pack files in one compressed file and slap a password there.. Oh dear. Luckily we cannot do it.

  • @technomad9071
    @technomad9071 5 місяців тому

    Privacy isn't safe; they want to put a trojan on my phone.

  • @DamjanDimitrioski
    @DamjanDimitrioski 6 місяців тому

    Imagine you send a picture of your cat, it matches that database :D, and you go to jail.

  • @JoeJoeTater
    @JoeJoeTater 6 місяців тому

    11:12 test "accuracy" isn't a thing. Specificity tells you how good it is at returning true negatives. Sensitivity tells you how good it is at returning true positives. You need both.

  • @Maxtraxv3
    @Maxtraxv3 6 місяців тому

    WhatsApp doesn't use end-to-end encryption, despite saying it does, and the government can already check anything you do on it.

  •  14 днів тому

    Child porn is subjective in the case of a child sharing pictures with another child, or with an adult close in age to the child, e.g. a 16-year-old and an 18-year-old adult.

  • @habromania8778
    @habromania8778 6 місяців тому

    Hey, this is out of context: there's a brainstorm going on regarding a new Plasma logo (KDE Discuss). Could you look into it and share your thoughts? I think there needs to be a proper brief for people to work from, rather than arbitrarily pasting some shapes.

    • @niccoloveslinux
      @niccoloveslinux  6 місяців тому

      I've seen it, but I really don't think we will be changing logo at all :)

  • @igorgiuseppe1862
    @igorgiuseppe1862 6 місяців тому +1

    15:50
    "ilegal material in general"
    like piracy

  • @prometheus9096
    @prometheus9096 6 місяців тому +1

    And couldn't all of this be easily avoided by, e.g., sending an encrypted zip file from one end to the other? You could compress pictures and videos together with a bunch of other files.
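That loophole is trivial to demonstrate: hash matching only sees the bytes in transit, so any reversible transformation (a password-protected archive in the comment's example; a toy XOR stream here, which is not real encryption) produces a hash the blocklist has never seen.

```python
import hashlib
import os

def xor_encrypt(data: bytes, key: bytes) -> bytes:
    """Toy reversible transformation (NOT secure encryption), used only to
    show that any byte-level change yields a completely different hash."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

image = b"...pretend these are image bytes..."
key = os.urandom(16)

plain_hash = hashlib.sha256(image).hexdigest()
wrapped_hash = hashlib.sha256(xor_encrypt(image, key)).hexdigest()

print(plain_hash != wrapped_hash)  # True: a known-bad hash never matches
```

The recipient reverses the transformation with the shared key and recovers the original bytes, which is exactly why scanning proposals end up targeting the endpoints rather than the transport.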

  • @taukakao
    @taukakao 5 місяців тому

    The idea of a hash that both preserves visual proximity and cannot recreate the original image is absurd.
    You could easily train a neural network on a million pictures with their hashes and get back the original.
    Because if you have two pictures whose hashes are close, you can just take a mix of the two and get back an approximation of the hashed image. Probably in a horrible resolution, but still.

  • @aonodensetsu
    @aonodensetsu 6 місяців тому

    I mean, 1) Signal is open source, so it doesn't need to be checked;
    2) pretty much anyone can use DDNS and a Raspberry Pi to run an unchecked server.
    So this really doesn't hinder actual criminals at all, it just reduces privacy.

    • @niccoloveslinux
      @niccoloveslinux  6 місяців тому

      Note that it doesn't matter whether an app is open source, it only matters if they earn from it

    • @aonodensetsu
      @aonodensetsu 6 місяців тому

      @@niccoloveslinux signal does accept donations, but i don't think that counts, since it is neither part of the app, nor required to use the app in its full capability

  • @gambinante
    @gambinante 6 місяців тому

    People have a right to privacy be it online or offline.

  • @speculi
    @speculi 6 місяців тому +2

    Great content and very informative, but:
    please stop using those camera zoom-in effects; they make the video way more annoying to watch, imho.