Verifying AI 'Black Boxes' - Computerphile

  • Published 10 Nov 2024

COMMENTS • 135

  • @danieldunn6329
    @danieldunn6329 Рік тому +8

    Extremely grateful to have taken the Formal Verification course with Hana during my final year of my undergrad.
    Great video and fantastic to see her here on this channel 😊

    • @DavidLindes
      @DavidLindes Рік тому +2

      Nice! Yeah, it's fun seeing people one knows showing up on these. I had a friend show up in a Numberphile, and that was super fun to see. :)

  • @checkboxxxproductions
    @checkboxxxproductions Рік тому +88

    I love this woman. I need her to explain all computer science to everyone.

    • @bilboswaggings
      @bilboswaggings Рік тому +3

      I need her to explain red "pandas" to everyone

    • @AileTheAlien
      @AileTheAlien Рік тому +1

      She does a great job of breaking down the problem and explaining all the pieces, without missing details or using unexplained jargon! :)

  • @hemerythrin
    @hemerythrin Рік тому +102

    This was great! That idea about throwing away parts of the input is so clever, love it

    • @Dante3085
      @Dante3085 Рік тому +2

      Same. I didn't think about that before. You can actually try to narrow down what part of the input data was relevant for the decision. Cool!

    • @brooklyna007
      @brooklyna007 Рік тому

      Most image models have natural heat maps in the upper layers for each output class for a given image. Most language models can produce a similar heat map over the tokens in a sentence. I really don't know of any major image or language models that can't produce heat maps giving you a sense of where the decision came from. I would think structured/tabular data is more relevant to black-box systems, since the features don't exist in space or along a sequence.
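
A minimal, hedged sketch of such an occlusion-style heat map, assuming a hypothetical Keras-style `model.predict` and an HxWxC `image` array; it illustrates the general idea rather than the method from the video or the paper.

```python
import numpy as np

def occlusion_heatmap(model, image, target_class, patch=16, stride=8, fill=0.0):
    """Slide a blank patch over the image and record how much the
    target-class score drops; large drops mark influential regions."""
    h, w = image.shape[:2]
    base = model.predict(image[None])[0][target_class]  # unoccluded score
    rows = (h - patch) // stride + 1
    cols = (w - patch) // stride + 1
    heat = np.zeros((rows, cols))
    for i, y in enumerate(range(0, h - patch + 1, stride)):
        for j, x in enumerate(range(0, w - patch + 1, stride)):
            occluded = image.copy()
            occluded[y:y + patch, x:x + patch] = fill  # hide one region
            heat[i, j] = base - model.predict(occluded[None])[0][target_class]
    return heat  # higher value = region mattered more for this class
```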

  • @davidmurphy563
    @davidmurphy563 Рік тому +4

    Aw, she seems lovely. The sort of person that hugs children with a big smile.
    Great explanation too.

  • @leftaroundabout
    @leftaroundabout Рік тому +44

    There's an important aspect missing from this video: if you just consider arbitrary ways of obscuring part of the image, you can easily end up with a particular pattern of what's obscured and what's not that influences the classification. In particular, the standard convolutional neural networks are all subject to _adversarial attacks_ where it may be sufficient to change only a handful of pixels and get a completely different classification. So if one really just tries to find the _minimal exposed area_ of an image that still gets the original classification, one invariably ends up with a bogus result that does nothing to explain the actual original classification.
    There are various ways to circumvent this issue, but it's unfortunately still more of an art than a clear science. The recursive approach in Chockler et al. 2021 definitely is a nice construction and seems to work well, but I'd like to see some better mathematical reasons for doing it this way.
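
For readers who haven't met the adversarial attacks mentioned above, here is a hedged sketch of the classic fast-gradient-sign perturbation against a generic PyTorch classifier; `model`, `x`, and `true_label` are placeholders, not anything from the video or from Chockler et al.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, true_label, eps=0.01):
    """Nudge the input by eps in the direction that increases the loss;
    often enough to flip the predicted class while looking unchanged."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), true_label)
    loss.backward()
    return (x + eps * x.grad.sign()).clamp(0, 1).detach()  # stay in valid pixel range
```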

    • @DavidLindes
      @DavidLindes Рік тому +4

      I'm curious... are you criticizing the _video_ or the _paper_ here? Because if it's the video, like... this is Computerphile, it's trying to give lay folks some general understanding of things, and is thus potentially both getting a simplified explanation from the speaker in the actual interview _and also_ quite possibly editing it down (many videos have companion "extra bits" videos showing parts that didn't make the cut for the main video)... AND there's a video linked at the end (v=gGIiechWEFs) that goes into those adversarial attacks. So it seems like Computerphile is doing OK in that regard? There's also 12:25.
      Now, if you're criticizing the paper, then that's a different story. And I haven't read it, so I won't say more on that. I just think that if your critiques are really of the video per se, they're probably a bit misguided. I thought this was a great surface-level explanation for a lay audience. Yes, it left stuff out. It wasn't trying to be comprehensive, though, so that's OK by me.

    • @leftaroundabout
      @leftaroundabout Рік тому +2

      @@DavidLindes I was mostly just adding a remark. I'm not criticizing the paper, which does address the issue, albeit a bit hand-wavey. The video - as you say - has valid reasons for not going into all the details, however I do feel it oversimplified this.

    • @DavidLindes
      @DavidLindes Рік тому

      @@leftaroundabout fair enough.

    • @Rotwold
      @Rotwold Рік тому +1

      @@leftaroundabout thank you for the remark! It extended my knowledge on the topic :)

    • @emptyhanded79
      @emptyhanded79 Рік тому

      I think Mr. Left and Mr. David are both AIs designed to teach YouTube users how to debate in a friendly manner.

  • @MarkusSimpson
    @MarkusSimpson Рік тому +23

    Amazing. I loved the enthusiasm and energy for the subject, and the accent is lovely to listen to. If only my university lecturers were this passionate when teaching us, it would make things flow much more easily.

  • @henlyforbesly2176
    @henlyforbesly2176 Рік тому +3

    I had never heard this kind of explanation of image-recognition AI before! Such a simple and intuitive explanation! Thank you, miss! Very clever and well delivered!

  • @barrotem5627
    @barrotem5627 Рік тому +47

    What an absolutely brilliant video - clear, sharp and understandable.
    Truly great.

  • @undisclosedmusic4969
    @undisclosedmusic4969 Рік тому +10

    This video and the linked papers are about model interpretability/explainability and not (formal) model verification; may I suggest changing the title to "Interpreting/Explaining AI Black Boxes"?

  • @onlyeyeno
    @onlyeyeno Рік тому +9

    @Computerphile
    Thanks for yet another interesting and enjoyable video.
    And if possible I would love to see more from Dr. Chockler about their research, as I'm very curious if/how they "test the systems" for recognising not only "pixel parts" but also more abstract attributes, "features" and "feature sets"... To give a crude example: what would be the "minimal identifying features" of a "drawing of a red panda"? How crucial is colouring? How does it differ depending on "style of rendering"... and on and on... Perception is a complex and "tricky" thing, and seeing how we try to "imbue" systems with it is fascinating.
    Best regards.

  • @mryon314159
    @mryon314159 Рік тому +5

    Excellent stuff here. But I'm disappointed you didn't put the cowboy hat on the panda.

  • @AmnonSadeh
    @AmnonSadeh Рік тому +2

    I initially read the title as "Terrifying AI" and it seemed just as reasonable.

  • @KGello
    @KGello Рік тому +4

    The way the question was posed made it super interesting, and the explanation was enlightening! But I also loved the Dr's accent, she could explain anything and I'd listen.

  • @HeilTec
    @HeilTec Рік тому +4

    Great example of outside-in testing.
    Perhaps a network could be trained to supplement the output category with an explanation.

  • @SelfConsciousAiResistance
    @SelfConsciousAiResistance Рік тому

    Thousands is an understatement. Black boxes are math equations held by computer functions; the math arranged itself. But math is magic: math arranged in the shape of consciousness.

  • @Aleho666
    @Aleho666 Рік тому +1

    It's so ironic that YouTube's AI has misclassified cowboy hats multiple times while talking about misclassification...

  • @zhandanning8503
    @zhandanning8503 Рік тому +3

    The video is a great explanation of how to explain black-box models. I am just wondering: is there any explainability methodology for non-image data/models? I assume quite a lot of black-box models work on sequential data. With images you can extract features which explain what the black box is doing, but with other data types what can we do?
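
One possible answer, sketched under assumptions: the same "throw parts of the input away" idea carries over to sequential data by masking one token at a time. `classifier` below is a hypothetical function mapping a token list to the probability of the class of interest; this is an illustration, not a method shown in the video.

```python
def token_importance(classifier, tokens, mask="[MASK]"):
    """Score each token by how much the class probability drops when
    that token is replaced with a mask symbol."""
    base = classifier(tokens)
    scores = []
    for i in range(len(tokens)):
        masked = tokens[:i] + [mask] + tokens[i + 1:]
        scores.append(base - classifier(masked))  # larger drop = more important token
    return list(zip(tokens, scores))
```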

  • @brynyard
    @brynyard Рік тому +63

    I also trust machines more than humans, I just don't trust the human that told the machine what to do :P

    • @IceMetalPunk
      @IceMetalPunk Рік тому

      But that's the beauty of machine learning: the *machine* told itself what to do. The human merely told it what's important 😁

    • @brynyard
      @brynyard Рік тому +6

      @@IceMetalPunk that's not really how machine learning works, but nice thought.

    • @IceMetalPunk
      @IceMetalPunk Рік тому

      @@brynyard That *is* how it works. Neural nets teach themselves by maximizing an objective function (equivalent to minimizing an error function). Usually the humans give them the objective function, defining what's important, but then the network uses that to teach itself what to do. That's why they're considered "black boxes": because the resulting network weights that the machine teaches itself are meaningless to humans.
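
A toy illustration of that point (nothing here is from the video): the human supplies only the objective, mean squared error in this sketch, and plain gradient descent adjusts the weights without anyone spelling out the rule being learned.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
y = X @ np.array([2.0, -1.0, 0.5])        # hidden rule the model must discover
w = np.zeros(3)                            # the model's "weights"

for _ in range(500):
    grad = 2 * X.T @ (X @ w - y) / len(y)  # gradient of the MSE objective
    w -= 0.1 * grad                        # the machine adjusts itself

print(w)  # ends up close to [2, -1, 0.5] without the rule ever being stated
```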

    • @brynyard
      @brynyard Рік тому +8

      ​@@IceMetalPunk This is true only if you ignore all the human interaction that created it in the first place, which is kinda non-ignorable since that's what the issue I raised is all about. If you need to catch up a bit on goal setting and the problem of defining them you should go watch Robert Miles videos.

    • @IceMetalPunk
      @IceMetalPunk Рік тому

      @@brynyard I mean, by that logic of "they didn't teach themselves because humans created them", wouldn't that mean humans don't learn because we didn't create ourselves, either?

  • @MushookieMan
    @MushookieMan Рік тому +2

    Can it recognize a red panda with a cowboy hat on?

  • @kamilziemian995
    @kamilziemian995 Рік тому +8

    "I trust technology more than people". I believe that some amount of distrust in technology, peopel and myself (third is especially hard to do) is the most resonable approach to the world.

    • @IceMetalPunk
      @IceMetalPunk Рік тому +3

      She didn't say she 100% trusts technology, just that Trust(tech) > Trust(human) 🙂

    • @sinfinite7516
      @sinfinite7516 Рік тому

      @@IceMetalPunk yeah I agree with what you said

  • @Veptis
    @Veptis Рік тому

    I am preparing an evaluation benchmark for code-generation language models, which may become my bachelor thesis.
    This kind of "interpretability" can be abstracted to not just the input but individual layers or even neurons. And this way you really find out where specific information is stored.

  • @gmaf79
    @gmaf79 Рік тому +1

    god, I love this channel.

  • @lakeguy65616
    @lakeguy65616 Рік тому +6

    It also depends on the basic problem you're trying to solve. Let's assume a NN trained to distinguish between bulldozers and red pandas. It will take a very small part of any image for the NN to properly classify an image. Now let's assume a NN trained to distinguish between red pandas and other animals of a similar size with 4 legs and tails. It will be much harder to distinguish between images. For an image to be correctly classified, more of the image must clearly distinguish the subject from other incorrect classes.

  • @pierreabbat6157
    @pierreabbat6157 Рік тому

    The man in the restaurant knows he's a red panda because he eats, shoots, and leaves.

  • @warrenarnold
    @warrenarnold Рік тому +2

    Just like the silent kid in class is never that silent, AI must be hiding something 😅

  • @ZedaZ80
    @ZedaZ80 Рік тому +1

    This is such a cool technique!

  • @kr8771
    @kr8771 Рік тому +19

    Very interesting topic. I wonder how an AI system would respond when presented with the red panda image with the face obscured. Would it still find reasonable categories for what this might be?

    • @zenithparsec
      @zenithparsec Рік тому +3

      It might still say "red panda", but it depends on how it was trained. If it had also been trained on other small mammals with similar shapes, it might guess it was one of those (e.g. an opossum or a raccoon), or it might have learned the texture of the fur and guess a completely different type of animal (or the correct one).
      The same general technique shown here could be used to find out.

  • @rainbowsugar5357
    @rainbowsugar5357 Рік тому

    Bro, how is a channel 9 yrs old and still uploading consistently?

  • @lakeguy65616
    @lakeguy65616 Рік тому +6

    It all depends on the training dataset. If you have trained your AI to classify Cowboy Hat as one class and children as another class, what happens when you pass a child wearing a cowboy hat through the NN? (Let's assume the child and hat are about equal in size in the image, and that the training dataset contains equal numbers of images of children and cowboy hats.) Such an image would clearly be a border case for both classes. A NN that labels the image of a child wearing a cowboy hat as a cowboy hat would be correct. If it labels the image as a child, that too would be correct.

    • @DjSapsan
      @DjSapsan Рік тому +3

      Usually advanced NNs can find multiple objects in an image.

    • @warrenarnold
      @warrenarnold Рік тому +1

      @@DjSapsan You say so, but how do you hide parts of the image automatically when part of the hiding could mean completely cutting out the object? Say you hide the upper half of the child wearing the hat: now the hat is gone! Unless you cut out where the hat is and start doing the hiding there.
      It's good, yes, but maybe just say this method is good for simpler, non-noisy subjects.

  • @BethKjos
    @BethKjos Рік тому

    Testing fundamentally can demonstrate the presence, but not the absence, of problems even with systems we consciously design. How much less adequate is it when we are deliberately ignorant of how the system proposes to work! And yet ... it's better than nothing.

  • @Nightspyz1
    @Nightspyz1 Рік тому +1

    Verifying AI? more like Terrifying AI

  • @Finkelfunk
    @Finkelfunk Рік тому +2

    I mean if I am ever in need of a black box where I have no idea what happens inside I just start writing my code in C++.

    • @satannstuff
      @satannstuff Рік тому

      Are you implying you think you know what's actually going on at the hardware level with any other language?

    • @Finkelfunk
      @Finkelfunk Рік тому

      @@satannstuff With other languages it just becomes less apparent that I am blissfully ignorant.

  • @SO-dl2pv
    @SO-dl2pv Рік тому +1

    This really reminds me of Vsauce's video: do chairs exist?

  • @deadfr0g
    @deadfr0g Рік тому +1

    Nobody:
    A rapper in 2016: 12:22

  • @ardenthebibliophile
    @ardenthebibliophile Рік тому +9

    It would be interesting to see if there are other subsets of the image that return panda but without the face. I suspect there's probably one with some small number of pixels that just *happens* to return panda.

    • @Sibula
      @Sibula Рік тому +2

      There's a related video linked at the end: "tricking image recognition"

  • @AndreasZetterlund
    @AndreasZetterlund Рік тому +2

    👎 This won't work. It doesn't verify the AI at all. Just think of the examples/attacks with some almost invisible noise or a couple of pixels changed that are barely noticeable to a human but completely change the network's result.

  • @AA-qi4ez
    @AA-qi4ez Рік тому +33

    As a visual systems neuroscientist, I'm afraid of the number of wheels that are being reinvented by folks who don't study their 'in vivo' or 'in silico' counterparts.

    • @boggledeggnoggler5472
      @boggledeggnoggler5472 Рік тому +7

      Care to point to one that this video misses?

    • @joegibes
      @joegibes Рік тому

      What kind of things are being reinvented unnecessarily?
      Like, training a generic model for everything instead of combining more specific algorithms/models?

  • @tramsgar
    @tramsgar Рік тому

    Good topic. Thanks!

  • @yash1152
    @yash1152 Рік тому

    My question is: how do you generate those minimal images? Is it similar to a git bisect/binary search, where you feed it iteratively reduced images until it no longer recognises them?
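
A hedged sketch in the spirit of the question: greedily occlude blocks and keep each occlusion only while the classifier still answers "red panda". This is just one simple way to shrink towards a locally minimal sufficient region, not the authors' actual algorithm; `predict_label` is a hypothetical function returning the top class for an image array.

```python
import numpy as np

def shrink_to_sufficient(image, predict_label, target="red panda", block=32):
    """Greedily blank out blocks that the classifier turns out not to need."""
    h, w = image.shape[:2]
    kept = image.copy()
    for y in range(0, h, block):
        for x in range(0, w, block):
            trial = kept.copy()
            trial[y:y + block, x:x + block] = 0    # occlude this block
            if predict_label(trial) == target:     # still recognised?
                kept = trial                       # then the block wasn't needed
    return kept  # the pixels left visible are sufficient for the label
```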

  • @00alexander1415
    @00alexander1415 7 місяців тому

    I hope they aren't directing the occlusion themselves ("in quite an uninteresting pattern, if I might add") but letting the AI figure it out itself; we might not be able to, but an AI could tell animals apart by their fangs.

  • @natv6294
    @natv6294 Рік тому

    Image diffusion models are black boxes, and they are "trained" on unregulated data.
    They don't have creative intent or choice like a human, and they don't get "inspired" like us; it's computing power with human-written algorithms and math.
    In many cases it causes severe data leakage, as the devs can't argue they aren't intentionally ripping off individuals or intellectual properties.
    Data shouldn't be trained on without consent, especially since the biggest problem in machine learning currently is that it can't forget.
    What we witness today is how exploitation can happen in ML; ethics exist for a reason, and creating shouldn't come at the expense of others without their consent.
    "For research purposes only" is one thing, but for the profits of the few corporations who truly own it, leeching on us all? No thank you.
    AI advocates should really focus on ethics, because as much as they want to romanticize it, the only sentient beings that exist right now are us, humans. And we don't treat it well at all; try to actually use it for important things like medicine or the environment instead of for power and capitalism.
    "Progress", backwards.

  • @cagra8448
    @cagra8448 Рік тому

    Are you going to make a video about how ChatGPT works?

  • @tomoki-v6o
    @tomoki-v6o Рік тому

    I have a question: how was the random number generator first implemented in Unix?

  • @antivanti
    @antivanti Рік тому +1

    I don't worry about AI as much as I worry about car manufacturers' sorely lacking security competency... Taking over critical systems of a car through the Bluetooth of the car stereo is a thing that has happened... WHY THE HELL ARE THOSE SYSTEMS EVEN CONNECTED TO EACH OTHER?!

  • @stefan_popp
    @stefan_popp Рік тому +2

    What if our idea of what makes a panda a panda is wrong and the AI uncovers the true specification? We'd throw it out and say it's silly, while, in fact, we are the silly ones. That logic should be applied to things where we're not that sure and that we didn't define ourselves.

    • @IceMetalPunk
      @IceMetalPunk Рік тому +3

      ...humans invented the word "panda", we get to decide what it means.

    • @stefan_popp
      @stefan_popp Рік тому

      @@IceMetalPunk ...we also invented the word "blue", yet people around the world clearly don't agree on what 'blue' is.

    • @IceMetalPunk
      @IceMetalPunk Рік тому

      @@stefan_popp So you admit that the definition of words is subjective and there is no "true specification", then.

    • @stefan_popp
      @stefan_popp Рік тому

      @@IceMetalPunk Of course you can, e.g., define red pandas to be inanimate objects, but it might not be very useful to you.
      Real-world examples: an AI wrongly detected breast cancer in a research participant. One year later it turned out that the participant had had cancer at a very early stage that the human experts missed.
      A Go-playing AI makes a seemingly unbeneficial move. Later it turns out it was a brilliant move human players had never thought of.

    • @IceMetalPunk
      @IceMetalPunk Рік тому

      @@stefan_popp I don't think either of those examples are of the AI finding a "true specification" and that "our ideas of it were wrong". Rather, they're examples of the AI seeing patterns that *match* our existing specification even when we thought the data *didn't* match our specification.

  • @Verrisin
    @Verrisin Рік тому

    covering it with cardboard will not change whether today is a Monday ...

  • @ΓάκηςΓεώργιος
    @ΓάκηςΓεώργιος Рік тому +1

    Cross validation ?

    • @C00Cker
      @C00Cker Рік тому

      cross validation wouldn't really help in the "cowboy hat" case, for example, as all the cowboy hat instances in the training data set were just people wearing one.
      The only thing cross validation is good for is checking whether the algorithm's performance is dependent on the particular train set / test set split - essentially, it can detect over/under-fitting, but not really what they focused on.
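
A small illustration of that distinction, using scikit-learn's stock cross-validation API on a toy dataset (nothing specific to the video's method): consistent scores across folds suggest the result doesn't hinge on one particular split, but they say nothing about why any individual prediction was made.

```python
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = load_digits(return_X_y=True)
# Five train/test splits; each fold is held out once for evaluation.
scores = cross_val_score(LogisticRegression(max_iter=2000), X, y, cv=5)
print(scores)         # similar numbers across folds = stable performance
print(scores.mean())  # but no explanation of any single decision
```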

  • @JW-tt7sy
    @JW-tt7sy Рік тому

    I trust humans to do, as they always have, what is in their nature. I expect "AI" will be no different.

  • @cedv37
    @cedv37 Рік тому

    02:20 Does she mean "complicated" in the sense that it is utterly hopeless and impenetrable to any analysis and reduction from the inside, because it is inherently incomprehensible at that level?

    • @Sibula
      @Sibula Рік тому +4

      Not really. Especially for smaller networks, like a few hundred nodes wide and a few layers deep, it is possible to analyze what features the different nodes represent and therefore understand how the network is classifying the images. For deep convolutional networks it's practically impossible, but theoretically it would still be possible if you put enough time into it.

    • @rikwisselink-bijker
      @rikwisselink-bijker Рік тому +3

      @@Sibula I think the point is that our ability to analyse and explain lags several years behind our ability to create these systems. So in that sense, it is hopeless to attempt direct analysis. That is probably one of the primary drivers of research like this.

  • @mytech6779
    @mytech6779 Рік тому +5

    The trust issue isn't with the AI in the individual car, the danger is with the control it gives to the Xi in the capital.

    • @Ergzay
      @Ergzay Рік тому +1

      You think the government is controlling the AI in the car?

    • @mytech6779
      @mytech6779 Рік тому +1

      @@Ergzay Do you genuinely not know how political power functions? Tech has a long history of being widely abused by those in government with excess ambitions.

  • @gutzimmumdo4910
    @gutzimmumdo4910 Рік тому +1

    great explanation

  • @michaelthompson5252
    @michaelthompson5252 3 місяці тому

    It is funny to me that we call this a black box, as in it is hidden or not understood. Yet the entire thing was built in a way to label and "understand" everything. Such irony. I am getting a serious Book of Genesis vibe with the whole "let's see what Adam calls everything."

  • @bamboleyo
    @bamboleyo Рік тому

    How well will it do with a kid wearing a T-shirt with a panda face on it and a cowboy hat with a starfish logo on the front? For a human it would be obvious...

  • @renanalves3955
    @renanalves3955 Рік тому +2

    I still don't trust it

    • @IceMetalPunk
      @IceMetalPunk Рік тому

      Keep in mind, the question isn't "do you trust it 100%?" but "do you trust it more than you trust the average human?" If your answer is still "no", then why is that?

    • @AndreasZetterlund
      @AndreasZetterlund Рік тому +1

      Because a human won't be fooled into thinking that a panda is an apple when a couple of pixels in the image change or some imperceptible noise is added to the image.

    • @IceMetalPunk
      @IceMetalPunk Рік тому

      @@AndreasZetterlund Imperceptible to you, perceptible to the AI. On the other hand, perhaps an AI wouldn't be fooled into some of the many optical illusions that trick human perception, making us even. Even better, an AI won't make poor decisions based on emotions, like road rage, which often get humans killed.
      Are AIs perfect? No. But neither are humans. The question is which is safer, and I think humans have proven over and over that we've set that bar very low for the AI to overtake us.

    • @AndreasZetterlund
      @AndreasZetterlund Рік тому

      @@IceMetalPunk the point is that if an AI can fail on something simple that is obvious to any human (which these attacks demonstrate), then we have not verified that the AI will work better than a human.

    • @IceMetalPunk
      @IceMetalPunk Рік тому

      @@AndreasZetterlund We also haven't verified that it is worse than a human, either. "Better" is a vague term. It may not work better than a human in the face of these particular attacks that are "obvious and simple to any human", but it does work better than a human in the face of many other challenges that humans fail at. "It fails at one specific thing that humans do better" is not equivalent to "it is worse than humans overall".

  • @RonJohn63
    @RonJohn63 Рік тому

    The follow-on question is to ask why AI sometimes confuses black people's faces with chimpanzees.

  • @nazneenzafar743
    @nazneenzafar743 Рік тому

    As always another nice computerphile video; can you guys please make another video about open GPT?

  • @ranjeethmahankali3066
    @ranjeethmahankali3066 Рік тому +2

    Obscuring parts of the image doesn't rule out the possibility that the AI system is giving the right answer because it is Monday. For that you'd have to test the system on different days of the week, and prove that no correlation exists between the day of the week and the performance of the system.

    • @IceMetalPunk
      @IceMetalPunk Рік тому +3

      It absolutely does, because in order to find the minimally sufficient area for a positive ID, that process requires you to trim down until you get *negative* IDs. So the same process will always produce both positive and negative results on the same day, proving the day of the week is irrelevant :)

    • @drdca8263
      @drdca8263 Рік тому +1

      @@IceMetalPunk Well, if you only test on Monday, you haven't shown that it doesn't behave differently on days other than Monday, only that, currently, the conclusion is based on that part of the image.
      (Though presumably you'd know whether the network even gets the day of the week as an input.)

    • @phontogram
      @phontogram Рік тому +2

      Wasn't the Monday example there to make the correlation aspect clearer to the audience?

  • @nicanornunez9787
    @nicanornunez9787 Рік тому +1

    Lol, I don't trust self-driving cars because I have a bike and access to Tesla's record with bikes.

  • @smurfyday
    @smurfyday Рік тому

    People calling them self-driving cars when they can't actually self-drive is the problem. They kill themselves and others.

  • @YouPlague
    @YouPlague Рік тому

    Isn't verification a misnomer here? You are not proving anything, just testing.

  • @Fenyxfire
    @Fenyxfire Рік тому

    Cool explanation, but honestly, even if I could afford such a thing... never. My sense of self-worth involves my ability to do things for myself, and I LOVE DRIVING. So no thanks.

    • @IceMetalPunk
      @IceMetalPunk Рік тому

      You realize the self-driving car was just one example, but this approach is useful to test *any* classifier AI, right?

  • @moth.monster
    @moth.monster Рік тому

    Machine learning systems should never be trusted in safety critical places. The risk is too high.

  • @guilherme5094
    @guilherme5094 Рік тому

    👍

  • @BrianMelancon
    @BrianMelancon Рік тому +3

    @0:30 If every car was self driving I would feel much better than how I feel with the current situation. I trust the automated system to act appropriately. I don't trust the humans to always act in a rational way for the benefit of all. If you put the two together, you get a deadly AI version of prisoner's dilemma.

  • @nigh7swimming
    @nigh7swimming Рік тому +2

    A true AI should not depend on human given labels, it should learn on its own what a Panda is, given pictures of animals. It would infer that a subset of those share common properties and hence is a new class. It would then label it in its own way. Then we'd need to link those labels to our human words.

    • @Sibula
      @Sibula Рік тому +2

      You're speaking of unsupervised learning, like for example cluster analysis.

    • @IceMetalPunk
      @IceMetalPunk Рік тому

      That already exists. It's cluster based classification, nothing new. Look into K-Means Clustering for a simple example.
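
A small illustration of the comment above, using scikit-learn's stock k-means on a toy dataset (not anything from the video): the clusters are formed without labels, and attaching human words to them is a separate, later step.

```python
from sklearn.cluster import KMeans
from sklearn.datasets import load_iris

X, _ = load_iris(return_X_y=True)  # the species labels are deliberately ignored
clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
print(clusters[:10])  # the model's own grouping; mapping it to species names comes afterwards
```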

    • @Lodinn
      @Lodinn Рік тому

      @@IceMetalPunk Yeah, but it is still miles behind. Interesting topic, but the progress is pretty slow. I have worked on some high-dimensional clustering from first principles (topology) a few years ago, computational requirements are obscene, and parametrized clustering still works poorly. Part of the problem is that we still need to impart our human values to it, because otherwise the answer is 42. Correct, but totally unusable.

    • @IceMetalPunk
      @IceMetalPunk Рік тому

      @@Lodinn Values are entirely irrelevant to clustering...

  • @SansWordHuang
    @SansWordHuang Рік тому

    This video is great: easy to understand and inspiring.
    There's only one thing I want to say.
    Maybe it is not a panda?

  • @Cdjimmyjazznme
    @Cdjimmyjazznme Рік тому

    panda

  • @luistiago5121
    @luistiago5121 Рік тому

    The comments are really hilarious. People really are a bunch of sheep that follow someone/anyone they don't really know anything about. Yes, she may be an expert working in the area, but where is the critical thinking that everyone should have? How can we trust a system that isn't alive, can't be hurt, and doesn't give a rat's ass about the living things around it? Come on, man...

  • @vzr314
    @vzr314 Рік тому +1

    I trust the system, but I don't want it in my car. I simply love driving too much, and my freedom of choice too, regardless of whether an AI can or cannot do things better than me. Usually the people who don't drive or don't like driving are among the biggest self-driving car advocates.

    • @IceMetalPunk
      @IceMetalPunk Рік тому

      You realize a self-driving car generally always includes the option for manual driving if you want, right? Surely there are *some* times you need to get from point A to point B but don't want to, or can't, drive the entire way. For instance, multi-day road trips; why stop to sleep when you can keep going and still sleep?

  • @CAHSR2020
    @CAHSR2020 Рік тому

    This was a fascinating topic but the accent was hard to follow. I could not understand most of what the presenter said and the disheveled appearance was distracting. Love the channel but this video was subpar.

  • @ihrbekommtmeinenrichtigennamen

    Well... this reduces the probability of getting surprised by wrong results, but calling this a verification is very far-fetched. If you wanted to use this method to verify that your self-driving car correctly stops when the situation calls for it, you'd have to throw **every possible** "you need to stop the car immediately" situation at it. But that's simply not feasible.