LLM Chronicles #2.1: Neural Networks and Multi-Layer Perceptrons

  • Published 18 Nov 2024

COMMENTS • 15

  • @josepha133
    @josepha133 11 months ago +12

    I'm feeling a little lost in my deep learning course at university right now and I'm so glad I've found your channel! Your videos are VERY helpful!

    • @donatocapitella
      @donatocapitella  11 months ago +4

      Thank you so much, I'm so glad this is helping! Feel free to share the channel around, I'm trying to reach students like yourself who might find this useful 😀

    • @josepha133
      @josepha133 11 months ago

      @@donatocapitella I will! 😊

  • @DausnArt
    @DausnArt 6 months ago +5

    Donato,
    I just wanted to express my gratitude for the learning experience you provide. Your ability to explain complex processes in such an accessible and nice way is remarkable. You make learning a joy, and I feel fortunate to be your humble online student.
    Thank you for all that you do. You are bringing knowledge into my life.
    With warmest regards.

  • @ReflectionOcean
    @ReflectionOcean 9 months ago +1

    Understand how neural networks function, focusing on artificial neurons and multi-layer perceptrons 0:10
    Learn about the role of perceptrons and their basic operation, including their use of weights and bias terms for processing inputs 0:55
    Recognize the importance of activation functions in neural networks and familiarize yourself with common types like sigmoid, tanh, and relu 2:00
    Comprehend the concept of stacking perceptrons to form multi-layer perceptrons (MLPs) and their structure, including input, hidden, and output layers 3:18
    Grasp how MLPs with non-linear activation functions can approximate any continuous function, enhancing their problem-solving capability 5:00
    Identify the process of modeling inputs for MLPs, including feature selection and data conversion 6:02
    Differentiate between classification and regression tasks for neural networks and the setup of output layers accordingly 8:05
    Learn about the efficiency of vectorization in implementing neural networks and how it utilizes matrices and tensors for operations 10:18 (see the sketch after this list)
    Understand the advantage of using GPUs and TPUs for parallel computing in deep learning, especially for handling tensor operations 11:40
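
    For readers following along, here is a minimal NumPy sketch of the ideas listed above: each neuron computes a weighted sum of its inputs plus a bias, a non-linear activation (sigmoid, tanh or relu) is applied, layers are stacked into an MLP, and the forward pass is vectorized so a batch of inputs is processed as a single matrix. The layer sizes, weights and activations below are made up for illustration and are not taken from the video.

    ```python
    import numpy as np

    # Common activation functions
    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def relu(z):
        return np.maximum(0.0, z)

    # One fully connected layer: weighted sum plus bias, then a non-linearity
    def layer(x, W, b, activation):
        return activation(W @ x + b)

    rng = np.random.default_rng(0)

    # Illustrative MLP: 3 inputs -> 4 hidden units -> 2 outputs (sizes are arbitrary)
    W1, b1 = rng.normal(size=(4, 3)), rng.normal(size=(4, 1))
    W2, b2 = rng.normal(size=(2, 4)), rng.normal(size=(2, 1))

    # A batch of 5 input vectors, one per column (3 features x 5 items)
    X = rng.normal(size=(3, 5))

    # Vectorized forward pass: every column of X flows through the network at once
    H = layer(X, W1, b1, relu)     # hidden layer activations, shape (4, 5)
    Y = layer(H, W2, b2, sigmoid)  # output layer, shape (2, 5)
    print(Y.shape)                 # (2, 5): one output column per batch item
    ```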

  • @donatocapitella
    @donatocapitella  1 year ago +2

    🚨 Clarifications & Corrections 🚨
    1. Bias in Diagrams: In the sections discussing multi-layer perceptrons and vectorization, the bias values are not depicted in the diagrams for simplicity. However, it's crucial to remember they are typically present. One common way to implement the bias in neural networks, especially in matrix multiplications, is to treat the bias as an additional weight. This weight is connected to a "dummy" input neuron that always outputs a constant value, typically 1. This way, during the forward pass of the neural network, the bias gets added to the weighted sum of the inputs, just as if it were another weighted input (a short sketch of this trick follows this list).
    2. At 1:41, the second term of the weighted sum is missing a negative sign; it should read -32, and the result should thus be −30.618.
    3. Batch Representation: Around the 12:12 mark, I mention "each row corresponding to an item in the batch". However, upon reviewing the calculations, it's evident that each column corresponds to an instance of the batch, not the rows. Apologies for the oversight; the confusion arose because, in the example shown, the first row and column of the output vector coincidentally have identical values.
    4. At 11:35 it should say "Multi-dimensional array", not "dimentional".
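
    To make point 1 (and the column convention from point 3) concrete, here is a small NumPy sketch of the bias-as-dummy-input trick: the bias vector is appended as an extra column of the weight matrix, a constant 1 is appended to every input column, and a single matrix multiplication then yields the weighted sum plus bias. The numbers are invented purely for illustration.

    ```python
    import numpy as np

    # Weights for a layer with 2 neurons and 3 inputs, plus one bias per neuron
    W = np.array([[0.2, -0.5,  1.0],
                  [0.7,  0.1, -0.3]])
    b = np.array([[ 0.5],
                  [-1.0]])

    # A batch of 4 inputs, one per column (point 3: columns are batch items)
    X = np.array([[1.0, 2.0, 0.0, -1.0],
                  [0.5, 0.5, 1.0,  2.0],
                  [3.0, 0.0, 1.0,  1.0]])

    # Standard form: weighted sum plus bias (the bias broadcasts across columns)
    Z = W @ X + b

    # Dummy-input form: append b as an extra column of W and a row of ones to X
    W_aug = np.hstack([W, b])                         # shape (2, 4)
    X_aug = np.vstack([X, np.ones((1, X.shape[1]))])  # shape (4, 4)
    Z_aug = W_aug @ X_aug

    print(np.allclose(Z, Z_aug))  # True: the two formulations are equivalent
    ```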

    • @DausnArt
      @DausnArt 6 months ago

      Continuous improvement; Bravo!

  • @teixeirah1643
    @teixeirah1643 1 year ago +1

    Awesome explanation!! Keep up the good work!

  • @stefanotuv
    @stefanotuv 1 year ago +1

    Impressively well-explained. Good stuff!!

  • @cmthimmaiah
    @cmthimmaiah 10 months ago

    Amazingly clear, thank you

  • @noomondai
    @noomondai 9 months ago

    Thank you!