Cascade Correlation

  • Published 3 Dec 2024

COMMENTS • 5

  • @aminemohamedaboussalah6169 • 6 years ago +6

    Hi Martin, thanks a lot for your video. I really appreciate it.
    Actually, I have one comment. I think the input weights, I mean the ones between a candidate hidden unit and all the inputs along with the previous hidden units, are learned by maximizing the correlation between the activation of the candidate unit and the residual error of the net, by training all the 'input links' of that candidate unit. Once this step is done, we choose the candidate that maximizes this correlation and freeze its input weights. Then we connect it to the output layer and train all the output connections again until the error of the net no longer decreases, and so on. This is at least what I understood :-) (this candidate-training step is sketched in code after the comments)

  • @Now842 • 8 years ago +1

    Now I finally got the concept! Thank you!

    • @MartinThoma • 8 years ago +1

      +Now842 You're welcome :-) When I have some time, I will try to get the equipment for a higher-quality video. In the meantime I have understood the "correlation" part too, so I can improve some details.

  • @nnvskh8269 • 3 years ago

    thank you so much, that helped a lot
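
The candidate-training step described in the first comment can be written down compactly. Below is a minimal NumPy sketch, assuming a single tanh candidate unit and toy data; the function name train_candidate, the learning rate, the epoch count, and the toy inputs are illustrative assumptions, not taken from the video. In the full algorithm a pool of candidates is trained this way in parallel and only the one with the highest score S is frozen and installed.

import numpy as np

rng = np.random.default_rng(0)

def train_candidate(X, residual_error, epochs=200, lr=0.1):
    # Gradient ascent on one candidate unit's input weights to maximize
    # S = sum_o | sum_p (V_p - mean(V)) * (E_{p,o} - mean(E_o)) |,
    # i.e. the covariance between the candidate's activation V and the
    # residual error E of the current network, summed over output units o.
    n_patterns, n_inputs = X.shape
    w = rng.normal(scale=0.1, size=n_inputs)
    for _ in range(epochs):
        V = np.tanh(X @ w)                                   # candidate activation per pattern
        Vc = V - V.mean()                                    # centered activation
        Ec = residual_error - residual_error.mean(axis=0)    # centered residual errors
        cov = Vc @ Ec                                        # one covariance per output unit
        S = np.abs(cov).sum()                                # score to maximize
        # dS/dw_i is approximately sum_{p,o} sign(cov_o) * Ec_{p,o} * tanh'(net_p) * x_{p,i}
        fprime = 1.0 - V ** 2
        delta = (Ec * np.sign(cov)).sum(axis=1) * fprime
        w += lr * (delta @ X) / n_patterns
    return w, S

# Toy data: 4 patterns, 3 inputs plus a bias column, residual errors of 2 output units.
X = np.hstack([rng.normal(size=(4, 3)), np.ones((4, 1))])
E = rng.normal(size=(4, 2))
w, S = train_candidate(X, E)
print("frozen candidate input weights:", w, "score S:", S)

After the winning candidate's input weights are frozen, its activation is fed as an extra input to the output layer and the output weights are retrained until the error stops decreasing, exactly as the comment describes.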