Epoch in Neural Network | Neural network example step by step | Neural network end to end example data

  • Published 15 Sep 2024
    #NeuralNetworkEpoch #NeuralNetworkBackPropagation #UnfoldDataScience
    Hello All,
    Welcome to Unfold Data Science. This is Aman, and I am a data scientist. In this video I explain every step of a neural network with data: how forward propagation and backward propagation work, and how weights are adjusted using gradient descent. The following questions are answered in this video:
    1. How does a neural network work?
    2. Neural network example, step by step
    3. How forward propagation works in a neural network
    4. How backward propagation works in a neural network
    5. How a neural network adjusts its weights
    6. What is a neural network epoch?
    About Unfold Data Science: This channel helps people understand the basics of data science through simple examples. Anybody without prior knowledge of computer programming, statistics, machine learning or artificial intelligence can get a high-level understanding of data science through this channel. The videos are not very technical in nature, so they can be easily grasped by viewers from different backgrounds.
    Join the Facebook group:
    www.facebook.c...
    Follow on Medium: / amanrai77
    Follow on Quora: www.quora.com/...
    Follow on Twitter: @unfoldds
    Get connected on LinkedIn: / aman-kumar-b4881440
    Follow on Instagram: unfolddatascience
    Watch the Introduction to Data Science full playlist here: • Data Science In 15 Min...
    Watch the Python for data science playlist here:
    • Python Basics For Data...
    Watch the statistics and mathematics playlist here:
    • Measures of Central Te...
    Watch End to End Implementation of a simple machine learning model in Python here:
    • How Does Machine Learn...
    Learn Ensemble Model, Bagging and Boosting here:
    • Introduction to Ensemb...
    Access all my codes here:
    drive.google.c...
    Have a question for me? Ask me here: docs.google.co...
    My Music: www.bensound.c...

COMMENTS • 123

  • @omnnnooy3267
    @omnnnooy3267 26 days ago

    You are a hero! Thank you for simplifying such complicated concepts like this!

  • @anujpatel6438
    @anujpatel6438 3 years ago +3

    Just 5k views?! This video is gold... even the best YouTube channels on NN can't explain what this sir here has taught us... the level of simplicity is just too high... thank you so much for sharing your knowledge... this explanation of one entire epoch is nowhere else.

  • @skylerx1301
    @skylerx1301 3 years ago +5

    OMG!!!!!!............I can't believe I watched this video for free!

  • @martincarbonell7032
    @martincarbonell7032 3 years ago +2

    Simplest explanation I've found on how the epoch works.... Muchas gracias!

  • @sunainamukherjee4169
    @sunainamukherjee4169 9 months ago

    I'm a data scientist with 10+ years of experience, but I must say this is by far the best explanation possible. I am eagerly waiting for many more videos from you. Keep us enlightened 🙂👍👌.

  • @swathisree2618
    @swathisree2618 1 year ago

    I searched various videos related to deep learning on YouTube and didn't get a correct explanation from them. Then I came across your video... wow, your explanation is excellent and easy to understand. You are the best 👍

  • @sameertemkar
    @sameertemkar 3 years ago +3

    Hi Aman, this is the best video I have come across to explain the maths... simply brilliant!!!!!

  • @user-wg6jc5jr3r
    @user-wg6jc5jr3r 8 months ago

    Thanks so much from Ghana. I have followed from the beginning to this point, and for the first time I am able to comprehend the basics of neural networks and the maths behind them. I would highly appreciate it if you could do a video on neural network design and choosing the right architecture for a real-world problem, before we start implementing it in code. Thank you.

  • @Celius-o8m
    @Celius-o8m 2 months ago

    God bless you! I wish I could subscribe a thousand times to your channel. Best content ever on DL

    • @UnfoldDataScience
      @UnfoldDataScience  2 months ago

      Your comments are precious to me.

    • @Celius-o8m
      @Celius-o8m 2 months ago

      @@UnfoldDataScience You have the gift of knowing how to simplify things. By the way, do you have a plan on teaching Reinforcement Learning in the future?

  • @UnfoldDataScience
    @UnfoldDataScience  4 years ago +8

    At 6:51, the outputs 0.7513 from O1 and 0.7729 from O2 are obtained after passing their respective net inputs through the sigmoid activation function. I missed writing this on the whiteboard. (A short numeric sketch of this step follows below this thread.)

    • @brijkishortiwari2077
      @brijkishortiwari2077 4 years ago

      Thanks a lot for the clarification. This was the only part where I had a doubt.
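    A minimal numeric sketch of the sigmoid step clarified above; the net inputs used here are hypothetical placeholders, not values read off the whiteboard:

        import math

        def sigmoid(net):
            # squashes the weighted sum (net input) into the range (0, 1)
            return 1.0 / (1.0 + math.exp(-net))

        net_o1, net_o2 = 1.1059, 1.2249            # hypothetical net inputs to O1 and O2
        out_o1, out_o2 = sigmoid(net_o1), sigmoid(net_o2)
        print(round(out_o1, 4), round(out_o2, 4))  # ~0.7514 and ~0.7729 for these inputs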

  • @preranatiwary7690
    @preranatiwary7690 4 years ago +2

    Made simple and clear. Thank you.

  • @gamingrex2930
    @gamingrex2930 4 years ago +1

    Thank you! The video was very simple yet helpful!

  • @mathewjohnsinocruz3926
    @mathewjohnsinocruz3926 3 years ago +1

    Thanks for a very clear explanation on what epoch is.

  • @tulasik4514
    @tulasik4514 3 years ago

    Hi Aman, your explanation is very very clear and anybody can understand easily. Thanks

  • @tejaswinideshmukh8912
    @tejaswinideshmukh8912 6 months ago

    Nice and easy explanation. @Unfold Data Science, can you suggest a good book that explains all the above mathematics in an easy way?

  • @kesealberton6276
    @kesealberton6276 2 years ago

    Simple and amazing!!! Congratulations !!!!

  • @desaigeneral
    @desaigeneral 7 months ago

    First of all, great explanation - very easy to understand, thanks much! When I pondered further on the part at the end where we calculated the error, I got a question. Suppose we had 8 input neurons, 12 in the first hidden layer, 8 in the second hidden layer, and one neuron in the output layer, for a regression problem (you have a video on a similar structure to demo the capability of Keras). Let's say we took a batch of 10 rows as the first iteration. For each of the 10 rows there will be one target value in our data. The question is: how do we end up with the error value at the output neuron? Is it that after the second hidden layer we get weights and a bias as input to the output layer, and these are applied to all 10 rows to get individual predictions (how does it get back the original values at this stage?), each predicted value is compared with the actual value, and then for all 10 rows the squared errors are averaged out, which becomes the final error coming out of the output layer? Then this error is used to find the gradient, which is applied to optimize the weights at individual neurons in the hidden layers? I hope my doubt is understood.
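    On the batch-error question above, a minimal sketch of the usual approach (an assumption for illustration, not something taken from the video): each of the 10 rows gets its own prediction from the same weights, and the batch error is the mean of the squared differences; that single number is what the gradient is computed from.

        import numpy as np

        y_true = np.array([3.2, 1.5, 4.8, 2.0, 3.9, 0.7, 2.2, 5.1, 1.1, 4.0])  # 10 targets (hypothetical)
        y_pred = np.array([3.0, 1.8, 4.5, 2.4, 3.5, 1.0, 2.0, 4.8, 1.4, 3.7])  # 10 predictions, same weights
        batch_error = np.mean((y_true - y_pred) ** 2)  # one number per batch; its gradient drives the update
        print(batch_error)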

  • @abiramimuthu6199
    @abiramimuthu6199 4 years ago +1

    Thanks Aman... great explanation 🙏

  • @gokuljith
    @gokuljith 4 years ago +3

    Excellent bro. Nice and in-depth video. Kudos to you. One doubt - at the 11th minute, the gradient for W5 was calculated as 0.08, but when calculating the new value of W5 you used 0.8. Ideally, should it not be W5_new = 0.40 - (0.1 x 0.08) = 0.40 - 0.008 = 0.392? Please clarify.
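    For reference, the plain gradient-descent update this comment is checking, using the numbers quoted in the question (learning rate 0.1, gradient 0.08):

        w5_old, learning_rate, gradient = 0.40, 0.1, 0.08
        w5_new = w5_old - learning_rate * gradient  # 0.40 - 0.008
        print(round(w5_new, 3))                     # 0.392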

  • @anindian4601
    @anindian4601 2 years ago

    Superb, sir, thank you. But please take feedback and implement it; it is difficult for new people to get the concept.

  • @SerawitMamo-zj8um
    @SerawitMamo-zj8um 7 months ago

    very good teaching !!!!!!

  • @brijkishortiwari2077
    @brijkishortiwari2077 4 years ago

    Thank you very much for a great conceptual explanation.

  • @raviyadav2552
    @raviyadav2552 3 years ago +1

    simply helpful

  • @akshayrao6134
    @akshayrao6134 4 years ago

    Amazing job. Keep up the good work. Understood completely.

  • @50_shirsenduroy84
    @50_shirsenduroy84 9 months ago

    underrated channel

  • @deepakrajdev9504
    @deepakrajdev9504 3 years ago +1

    Really good explanation.....

  • @bhabeshmali3640
    @bhabeshmali3640 3 years ago

    Thanks for the video, nice explanation

  • @bidishabera3551
    @bidishabera3551 6 months ago

    Please give the calculations in detail... specifically the partial derivatives of the chain rule applied in your final minimization calculations.

  • @debjyotibanerjee7750
    @debjyotibanerjee7750 4 years ago

    Really great explanation, and helpful

  • @ksrajavel
    @ksrajavel 4 years ago +3

    Hi Aman,
    Wonderful explanation. BTW, can you please let me know the meaning of the 'num_epoch' value we give in training?

    • @UnfoldDataScience
      @UnfoldDataScience  4 years ago +1

      Hi Rajavel, it tells Python how many epochs we want while training the model; it is just a parameter we can control.
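    A minimal sketch of where such a value goes in practice, assuming Keras (the model and data below are dummies; the only point is that the number of epochs is a training argument you control):

        import numpy as np
        from tensorflow import keras  # assumes TensorFlow is installed

        X_train = np.random.rand(100, 2)  # dummy data: 100 rows, 2 features
        y_train = np.random.rand(100, 2)  # dummy targets for a 2-neuron output layer

        model = keras.Sequential([
            keras.layers.Dense(2, activation="sigmoid", input_shape=(2,)),
            keras.layers.Dense(2, activation="sigmoid"),
        ])
        model.compile(optimizer="sgd", loss="mse")

        num_epoch = 50                                 # the value the question asks about
        model.fit(X_train, y_train, epochs=num_epoch)  # one epoch = one full pass over the 100 rows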

  • @we_around_the_world
    @we_around_the_world 1 year ago

    Hi, in the parameter optimization example, for the first part, while taking the partial derivative of E_total w.r.t. Out_o1, why did you multiply by (-1), after which you got a final value of 0.7413? It is between 10:08 and 10:40 in the video. Can you please explain?
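    A short sketch of where the factor of (-1) comes from, assuming the squared-error form E = 0.5 * (target - out)^2 and a hypothetical target of 0.01 for O1:

        target, out_o1 = 0.01, 0.7513  # the target value is assumed here for illustration
        # dE/dout = 0.5 * 2 * (target - out) * d(target - out)/dout = (target - out) * (-1)
        dE_dout = -(target - out_o1)   # the inner derivative contributes the factor of -1
        print(round(dE_dout, 4))       # 0.7413, i.e. out - target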

  • @nargisirfan821
    @nargisirfan821 3 years ago

    Wow! You are really good

  • @ajaya652
    @ajaya652 2 years ago

    Wonderful explanation sir, but how is accuracy calculated for one epoch?

    • @just_eric
      @just_eric 2 years ago

      I know I'm late, but in case it helps: the accuracy is not the thing that is calculated directly. The objective of every epoch is to minimize the error until it reaches 0 or gets pretty close (the closer the error is to 0, the more accurate the model). The error is minimized when you update the weights in every epoch, so the algorithm comes to an end after completing the number of epochs you define, or when the error cost reaches 0, which is very unlikely to happen.

  • @udayjoshi5606
    @udayjoshi5606 4 years ago

    Nice explanation

  • @youssefdirani
    @youssefdirani 4 years ago

    Thanks from Lebanon

  • @dimplechutani2768
    @dimplechutani2768 1 year ago

    Hi Aman, wonderful work :)
    Quick question here: is the whole error that you have calculated for one observation, or for the entire dataset that we have fed into the network?
    Does Output 1 indicate the output of the first row, and so on? Does that mean that if we have 50 observations, we would have 50 outputs at the end? Please respond. Thanks

  • @sandipansarkar9211
    @sandipansarkar9211 3 years ago

    great explanation

  • @sonalmaheshwari8222
    @sonalmaheshwari8222 3 years ago

    Well explained 👍

  • @juliandelcastillo8378
    @juliandelcastillo8378 3 years ago +1

    Thank you for the video. Can you explain the difference between one epoch and the next? Are the weights adjusted at each iteration or at each epoch?

    • @UnfoldDataScience
      @UnfoldDataScience  3 years ago

      If it's full-batch processing, then 1 iteration = 1 epoch.
      Weights are adjusted in various iterations.

    • @sololife9403
      @sololife9403 3 years ago

      'Are the weights adjusted at each iteration or at each epoch?' - Weights will be adjusted after each batch. Am I right, @unfold data science?

    • @sololife9403
      @sololife9403 3 years ago

      'Difference between one epoch and another': the next epoch does the same thing; the difference is that it starts from the updated weights which were adjusted earlier. Am I correct, @Unfold Data Science?
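    A minimal training-loop sketch of the points discussed in this thread (a generic illustration, not the video's exact network): weights are updated once per batch, i.e. once per iteration, and the next epoch repeats the same passes starting from the weights left behind by the previous epoch.

        import numpy as np

        X = np.random.rand(100, 3)      # 100 rows, 3 features (dummy data)
        y = np.random.rand(100)
        w = np.zeros(3)                 # weights of a simple linear model
        lr, batch_size, n_epochs = 0.1, 10, 5

        for epoch in range(n_epochs):   # each epoch is one full pass over all 100 rows
            for start in range(0, len(X), batch_size):
                Xb, yb = X[start:start + batch_size], y[start:start + batch_size]
                grad = 2 * Xb.T @ (Xb @ w - yb) / len(Xb)  # MSE gradient for this batch
                w -= lr * grad          # weights adjusted after every batch (iteration)
            # the next epoch starts from this updated w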

  • @barishasdemir2
    @barishasdemir2 4 years ago

    great explanation!

  • @akashmanojchoudhary3290
    @akashmanojchoudhary3290 3 years ago

    Great explanation, Aman. Just one thing: the gradient you calculated was 0.08, but you wrote 0.8.

  • @lanaaldabbas9047
    @lanaaldabbas9047 2 years ago

    How can I calculate the gradient from the hidden layer to the input layer? Is it (dE/dOutH1) x (dOutH1/dNetH1) x (dNetH1/dW1)? I didn't find anyone explaining that.
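    A minimal sketch of that chain rule for a 2-2-2 sigmoid network with squared error E = 0.5 * sum((target - out)^2); every number below is hypothetical and chosen only for illustration:

        import numpy as np

        def sigmoid(z):
            return 1.0 / (1.0 + np.exp(-z))

        x = np.array([0.05, 0.10])    # inputs x1, x2
        W1 = np.array([[0.15, 0.20],  # row i: weights into hidden neuron h_i
                       [0.25, 0.30]])
        W2 = np.array([[0.40, 0.45],  # row k: weights into output neuron o_k
                       [0.50, 0.55]])
        b1, b2 = 0.35, 0.60
        target = np.array([0.01, 0.99])

        out_h = sigmoid(W1 @ x + b1)      # forward pass, hidden layer
        out_o = sigmoid(W2 @ out_h + b2)  # forward pass, output layer

        # dE/dw1 = dE/dout_h1 * dout_h1/dnet_h1 * dnet_h1/dw1   (w1 connects x1 to h1)
        delta_o = (out_o - target) * out_o * (1 - out_o)        # dE/dnet for each output neuron
        dE_dout_h1 = np.sum(delta_o * W2[:, 0])                 # both outputs feed back through h1
        dE_dw1 = dE_dout_h1 * out_h[0] * (1 - out_h[0]) * x[0]  # remaining two chain-rule factors
        print(dE_dw1)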

  • @himanshirawat1806
    @himanshirawat1806 2 years ago

    Can you please make one video on backpropagation?

    • @UnfoldDataScience
      @UnfoldDataScience  2 years ago +1

      Go to the playlists, find the neural network playlist, and you will find all the videos on this topic there.

  • @anirbansarkar6306
    @anirbansarkar6306 3 years ago

    Hi Aman,
    Thank you so much for explaining this concept.
    But I have a few doubts:
    What is the essence, or say the need, of an activation function? Why do we need them? What will happen if we don't use them - will whatever goes into each node as input simply come out as output?

    • @UnfoldDataScience
      @UnfoldDataScience  3 years ago +1

      Without an activation function it's a linear regression model.
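    A tiny sketch of that point: without a non-linear activation, two stacked layers collapse into a single linear map, so the network cannot fit anything beyond a linear model. The weights below are arbitrary.

        import numpy as np

        x = np.array([1.0, 2.0, 3.0])
        W1, W2 = np.random.rand(4, 3), np.random.rand(2, 4)  # two "layers" with no activation
        two_layers = W2 @ (W1 @ x)                           # output of layer after layer
        one_layer = (W2 @ W1) @ x                            # a single equivalent linear layer
        print(np.allclose(two_layers, one_layer))            # True: stacking added nothing new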

  • @sadhnarai8757
    @sadhnarai8757 4 years ago

    Very nice Aman

  • @akashsahu6271
    @akashsahu6271 4 years ago

    Hi Aman,
    So basically, what I understood from your video is that an epoch is the number of iterations the algorithm works through to optimize the weights and biases and minimize the error function for the entire training dataset.
    Please correct me if I am wrong.

    • @UnfoldDataScience
      @UnfoldDataScience  4 years ago +6

      An epoch is one completed cycle over the entire training data.
      Suppose you have 100 rows in the training data and you divide them into 10 batches.
      One iteration means covering one batch, that is, 10 records.
      One epoch means covering all 100 records, that is, 10 iterations.
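    The same counting as a tiny sketch, using the numbers from the reply above:

        rows_in_training_data = 100
        batch_size = 10  # 10 batches of 10 records each
        iterations_per_epoch = rows_in_training_data // batch_size
        print(iterations_per_epoch)  # 10 iterations = 1 epoch (all 100 records seen once)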

  • @mohammedkareem549
    @mohammedkareem549 3 years ago

    Thanks for the explanation, it was wonderful.
    What do we get if we increase or decrease the number of epochs?

    • @UnfoldDataScience
      @UnfoldDataScience  3 years ago +1

      Hi Mohammed, model performance might change as you shift the number of epochs.

  • @kalpatarusahoo6309
    @kalpatarusahoo6309 2 years ago

    Hi Aman, I am using Rasa, and during model training my epochs are taking a long time. Should I use an NPU in the server for faster training? Currently I am using a GPU, which is faster than a CPU.

  • @soumyaranjankar9718
    @soumyaranjankar9718 3 years ago +1

    Here you updated only one parameter. One Epoch = all the weights and biases are updated once. Right?

    • @UnfoldDataScience
      @UnfoldDataScience  3 years ago

      Yes Soumya, for demo purposes I showed just one parameter, but 1 epoch means all the weights and biases get updated. You are absolutely right.

  • @aimenmalik8929
    @aimenmalik8929 2 years ago

    Hello there! As per my knowledge, each epoch has a training and a validation phase. Can you explain the validation phase?

  • @siddharthchoudhury4158
    @siddharthchoudhury4158 2 months ago

    At the end, the gradient is 0.08, so why are we taking 0.8? Also, how is the step size 0.1 - can it also be 0.01? Please help.

  • @samanhci7806
    @samanhci7806 3 years ago

    I have a question. I stake Cardano in a wallet that gives me 5 percent, and it epochs my crypto; every 5 days it begins its epoch process. How does it work, and how much Cardano do I get as output if my input is 69?
    Thanks

  • @mouleshm210
    @mouleshm210 3 years ago

    Hi Aman sir,
    Can you post a video on dropout layers for regularization in deep neural networks? It will be very useful for us, I guess.
    Thanks in advance. @Unfold Data Science

  • @simo6927
    @simo6927 3 years ago

    It is not with numbers alone that someone understands a concept. You should use the analogy of speed and acceleration to explain gradient descent.

    • @UnfoldDataScience
      @UnfoldDataScience  3 years ago

      Good suggestion, I explain GD here:
      ua-cam.com/video/gzrQvzYEvYc/v-deo.html

  • @RaghavaJoijode
    @RaghavaJoijode 3 years ago

    Do the values in the input layer (x1, x2, x3...) represent features of a single data row, or the number of rows of data?

  • @sandeepvivek6134
    @sandeepvivek6134 3 years ago

    Sir, in feed-forward neural networks, how does the parameter update take place since there is no feedback loop?

  • @ritusanjay9490
    @ritusanjay9490 4 years ago +1

    So if we give the batch size for one epoch as 10, does this mean that in the total_error we will sum up all the errors?

  • @santoshprasanth6633
    @santoshprasanth6633 4 years ago

    Aman bhai, please explain the difference between the Adam optimizer and SGD, please...

    • @UnfoldDataScience
      @UnfoldDataScience  4 years ago +1

      Thanks for the feedback, Santosh. Will create a video on the suggested topic. Happy learning. Take care.

  • @karunkrishna1111
    @karunkrishna1111 3 years ago

    (1) What happens if you are training on a batch of records? Does that mean that once all the rows in the batch are completed, that is equivalent to one epoch? (2) If it is not being batched, and training is occurring on a row-by-row basis, is a full epoch then 1 row being trained?

    • @UnfoldDataScience
      @UnfoldDataScience  3 years ago

      Hi Karun,
      Good question.
      Whether you are training in batches or not, in either case one epoch is when all the records have completed forward and backward propagation once.

  • @xuejieyu
    @xuejieyu 4 years ago

    Could you explain a bit why doing more epochs could lead to overfitting?

    • @UnfoldDataScience
      @UnfoldDataScience  4 years ago

      Yes, because over many iterations the parameters may get tuned too closely to the training data, which can lead to an overfitting problem.
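    A common way to guard against this in practice, sketched here with Keras and dummy data (this is an assumption about tooling, not something shown in the video): ask for many epochs but stop when the validation loss stops improving.

        import numpy as np
        from tensorflow import keras  # assumes TensorFlow is installed

        X, y = np.random.rand(200, 2), np.random.rand(200, 1)  # dummy data
        model = keras.Sequential([keras.layers.Dense(8, activation="relu", input_shape=(2,)),
                                  keras.layers.Dense(1)])
        model.compile(optimizer="adam", loss="mse")

        early_stop = keras.callbacks.EarlyStopping(monitor="val_loss", patience=5,
                                                   restore_best_weights=True)
        # ask for many epochs, hold out 20% for validation, stop when val_loss stops improving
        model.fit(X, y, epochs=500, validation_split=0.2, callbacks=[early_stop])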

  • @Solution4uTx
    @Solution4uTx 3 years ago

    Hi sir, do you have any video on the conjugate gradient training method? If not, could you please make one?

  • @rahulhumble
    @rahulhumble 4 years ago

    One doubt I have: the accuracy we see per epoch - is it the accuracy before changing the weights or after changing them? Because an epoch first calculates the total error... I could not tell whether the reported accuracy is from before or after the change.

    • @UnfoldDataScience
      @UnfoldDataScience  4 years ago +1

      Epoch 10 accuracy is after adjusting the weights in epoch 9. Just an example.

  • @ReemaYAhmad
    @ReemaYAhmad 2 years ago

    Can you help me? I want to do prediction with an ANN in MATLAB. Please.

  • @vaishaliravi4515
    @vaishaliravi4515 4 years ago

    Can you tell me how to simulate random binary noise for random inputs and targets?

    • @UnfoldDataScience
      @UnfoldDataScience  4 years ago +1

      Not able to recollect now. Will do some research and let you know.

  • @SHADABALAM2002
    @SHADABALAM2002 3 years ago

    Hi: what is the relationship between epochs and accuracy?

    • @UnfoldDataScience
      @UnfoldDataScience  3 years ago +1

      Good question, Shadab.
      More epochs generally lead to a better model with higher accuracy, as the model has more opportunity to learn.
      That may not always be true, though. Normally it happens.

  • @meethamin427
    @meethamin427 3 years ago

    Sir, can you please guide me on the right way to begin with machine learning and neural networks if I am now in 12th grade?

    • @UnfoldDataScience
      @UnfoldDataScience  3 years ago

      Please start by learning statistics and Python coding.

    • @meethamin427
      @meethamin427 3 years ago

      @@UnfoldDataScience Thank you, sir, for replying. I already know how to code in Python.

  • @InfraUpdates-
    @InfraUpdates- 3 years ago

    Who is defining the actual value in the error, at 7:19?

  • @claymore9112
    @claymore9112 3 years ago

    What if you don't have a bias? Do you just take the sum of xi*wi then?