There are NO input layer Neurons!

  • Published Jan 16, 2025

COMMENTS • 12

  • @MrWeb-dev
    @MrWeb-dev 5 months ago +1

    This is correct, except that using "full neurons" for the input layer will still work just fine. You just fix the transfer and activation functions (e.g. set both to the identity).

  • @salmanshah-ci3yr
    @salmanshah-ci3yr 7 months ago +2

    To be fair, all the circles are placeholders. The hidden layer neurons don't do any calculations either. The calculations happen on the edges (i.e. the weight matrix), so the original diagram is more correct than your diagram with boxes.
    However, I agree that the diagram can be misleading. It's better to look at the math instead. Your neural network can be broken down like this:
    First hidden layer: h_1(x) = ReLU(W_1x + b_1), where W_1 is a 5x3 matrix, b_1 is a length 5 vector
    Second hidden layer: h_2(x) = tanh(W_2x + b_2), where W_2 is a 5x5 matrix, b_2 is a length 5 vector
    Output: y(x) = ReLU(W_3x + b_3), where W_3 is a 1x5 matrix, b_3 is a length 1 vector
    Then, your neural network is a function f composed from h_1, h_2, and y, like this: f(x) = y(h_2(h_1(x)))
    So given an input x = [x1, x2, x3], you get the output of the neural network by plugging x into f(x).
    If we try to explain the diagram using these equations, then
    - the orange circles represent x
    - the blue circles on the first layer represent the output of h_1
    - the blue circles on the second layer represent the output of h_2
    - green circle represents the output of y
    - The lines between x and h_1 depict W_1 (5x3 = 15 lines)
    - The lines between h_1 and h_2 depict W_2 (5x5 = 25 lines)
    - The lines between h_2 and y depict W_3 (1x5 = 5 lines)
    But where are the biases and activations represented? Who knows. That's why it's not a perfect representation.
    However, the circles and lines (i.e. vertices and edges) have their roots in graph theory, and linear algebra and graph theory have a lot of connections. For example, let's construct a function g without the biases and activations (i.e. without the non-linear functions). In other words, g(x) = W_3(W_2(W_1x)). This function is a direct equivalent of the graph depicted in the diagram, and while it's not an accurate representation of your neural network, it gets pretty close.
    In graph theory, vertices are always drawn as circles, so when you replace that with boxes, you're disregarding some of the roots of the diagram, which, in my opinion, makes your diagram with boxes even less representative of a neural network.
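    The composition described in this comment can be sketched in a few lines of NumPy. The weights and biases below are random placeholders (assumptions, not values from the video); only the shapes follow the comment, with W_3 taken as 1×5 so that W_3x yields a length-1 output:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Placeholder parameters; shapes match the comment's breakdown.
    W1, b1 = rng.standard_normal((5, 3)), rng.standard_normal(5)  # 5x3, length 5
    W2, b2 = rng.standard_normal((5, 5)), rng.standard_normal(5)  # 5x5, length 5
    W3, b3 = rng.standard_normal((1, 5)), rng.standard_normal(1)  # 1x5, length 1

    def relu(v):
        return np.maximum(0.0, v)

    def f(x):
        """f(x) = y(h_2(h_1(x))), the composition from the comment."""
        h1 = relu(W1 @ x + b1)       # first hidden layer, ReLU
        h2 = np.tanh(W2 @ h1 + b2)   # second hidden layer, tanh
        return relu(W3 @ h2 + b3)    # output layer, ReLU

    x = np.array([1.0, 2.0, 3.0])    # input x = [x1, x2, x3]
    y = f(x)                         # length-1 output vector
    ```

    All the computation happens in the matrix products and activations; the "circles" of the diagram only appear here as the intermediate vectors h1 and h2.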

  • @nicogsplayground
    @nicogsplayground 7 months ago

    This simply isn’t true. If you argue that the diagram is incorrect, then there is absolutely no difference between the circles in the hidden layer and the input layer: the W and b are applied from one circle to another along the connecting line. That’s why this diagram is used to visualise a neural network the way we are used to in graph theory.
    That’s where you are mistaken: you assumed that the purpose of the diagram was to visualise neurons, while the purpose is to visualise the network. As in every network in graph theory, all edges are given by lines connecting the nodes… not neurons… nodes. Just because the name is “Neural” network does not mean that all nodes in the graph should be neurons.

  • @suvammitra6242
    @suvammitra6242 8 months ago +1

    But sir, if we have a feedforward model where the input of a neuron comes from the output of another neuron (as biological neurons work, with the dendrite of one connecting to the axon of the other), wouldn't it be right to call the input layer neurons too?

    • @thinking_neuron
      @thinking_neuron  8 months ago +1

      Hey Suvam!
      What you described happens between two hidden layers within the ANN.
      As per the definition, a neuron multiplies the weights with its inputs and passes the result to the transfer function. This is why it should be called out explicitly that the input layer units are not neurons: they do not perform any calculations. Refer to the introduction section, page 2, of the book 'Neural Smithing' from MIT Press:
      mitp-content-server.mit.edu:18180/books/content/sectbyfn?collid=books_pres_0&fn=9780262181907_sch_0001.pdf&id=4937
      Hope this helps!
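      The definition in this reply (multiply weights with inputs, then apply a transfer/activation function) can be sketched as a single hypothetical neuron. The weights, bias, and tanh activation below are illustrative choices, not taken from the book:

      ```python
      import numpy as np

      def neuron(inputs, weights, bias, activation=np.tanh):
          # A neuron per the cited definition: weighted sum of the
          # inputs plus a bias, passed through a transfer function.
          return activation(np.dot(weights, inputs) + bias)

      # The "input layer" is just the raw values; no computation happens there.
      x = np.array([0.5, -1.0, 2.0])
      out = neuron(x, weights=np.array([0.1, 0.2, 0.3]), bias=0.0)
      ```

      The input vector x is consumed by the neuron but performs no calculation itself, which is the distinction the reply is drawing.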

  • @prasadmukkawar7388
    @prasadmukkawar7388 9 months ago

    Finally you are back!

  • @akashpanigrahi9136
    @akashpanigrahi9136 9 months ago +1

    Please launch a course or make more videos. I understand there are not many views to motivate you to work on the channel, but your videos are gems yet to be explored by many ❤

  • @hsiaowanglin9782
    @hsiaowanglin9782 7 months ago

    I understand all of this now; that’s algorithmic ethics.

  • @hsiaowanglin9782
    @hsiaowanglin9782 7 months ago

    I have done many things this morning, but I forgot I owe my hero Elon Musk an important piece of work. ❤
