Geoffrey Hinton: What is wrong with convolutional neural nets?

  • Published 27 Dec 2024

COMMENTS • 8

  • @RickeyBowers 2 years ago

    I'm fairly certain the audio does not match the video. The audio is from another lecture I've seen on YT.

  • @teckyify 4 years ago +6

    Why does he have slides when he doesn't use them 🙄

  • @mritunjaymusale 3 years ago +4

    The only bad thing about this video is how horribly it's recorded.

  • @ProfessionalTycoons 5 years ago +1

    great lecture

  • @primodernious 4 years ago +2

    They will never get Skynet this way. Actually, I think the layer model is right and wrong at the same time: right in principle, but not in the way it works. The network is supposed to do all its thinking in the outer layers and use the input layer only to store the memory of everything it thinks, with the outer layers arranged as a hierarchy. The network must have a way to store its memories permanently and to organize how information is wired in the outer layers by doing the actual thinking there.

    My idea is that the outer layers do the raw thinking about how to extract data from the input layer. If the outer layers form a hierarchy, each layer after the first exerts control over a much smaller subset of nodes in the layer below before passing data on, so the hierarchy of nodes behaves like generals above generals. You simply limit how many input-layer nodes can send a combined sum to a given node in the next layer (the same node each time, but not the same arrangement of pieces). In other words, the outer layers decide which parts of the input-layer data to combine and find the best fit between the pieces. You split the input data into small sections and feed each section into each perceptron one by one until every node in the input layer saturates to an optimum value, then let the rest of the network work out how to process that data further.

    What Google is doing does not work this way. They do something similar but mess it up by shifting the weights in the outer layers, which corrupts the input. Instead of letting the network decide which parts of the retained input data to merge, they pass partially processed input into the next layers, modify it there, and then backpropagate error corrections into the input layer. That process leaves the network stuck merely mimicking the brain: it can only be used for a specific purpose and does not work the way our brains do.
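
A minimal sketch of the "generals above generals" idea in the comment above, assuming it means fixed local fan-in: each node in the next layer receives a combined sum from only a small, fixed window of nodes in the layer below, rather than connecting to every node. All names, sizes, and the tanh nonlinearity are illustrative choices, not taken from the lecture or from any real system:

    import numpy as np

    def limited_fanin_layer(x, weights, fanin):
        # Each output node (a "general") sums a fixed, non-overlapping
        # window of `fanin` nodes from the layer below -- the "combined
        # sum" from a limited set of input nodes described above.
        n_out = len(x) // fanin
        out = np.empty(n_out)
        for j in range(n_out):
            window = x[j * fanin:(j + 1) * fanin]
            out[j] = np.tanh(window @ weights[j])  # one node per window
        return out

    # Illustrative sizes only (hypothetical):
    rng = np.random.default_rng(0)
    x = rng.normal(size=16)              # input layer holding the "memory"
    w1 = rng.normal(size=(4, 4))         # 16 inputs -> 4 nodes, fan-in 4
    w2 = rng.normal(size=(1, 4))         # 4 nodes   -> 1 node,  fan-in 4
    h = limited_fanin_layer(x, w1, fanin=4)   # first tier of "generals"
    y = limited_fanin_layer(h, w2, fanin=4)   # general above the generals
    print(h.shape, y.shape)              # (4,) (1,)

With separate weights per window this is the "locally connected" variant of a layer; sharing one weight vector across all windows would turn it into a convolution, the very structure the lecture critiques.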