Thank you, ma'am!!!❣
A very nice video with abundant didactic talent.
It may be worth noting that, instead of partial derivatives, one can work with derivatives as the linear transformations they really are, and also look at the networks in a more structured manner, thus making clear how the basic ideas of BPP apply to much more general cases. Several steps are involved.
1.- More general processing units.
Any continuously differentiable function of the inputs and weights will do; these inputs and weights can belong, beyond Euclidean spaces, to any Hilbert space. Derivatives are linear transformations, and the derivative of a neural processing unit is the direct sum of its partial derivatives with respect to the inputs and with respect to the weights; this is a linear transformation expressed as the sum of its restrictions to a pair of complementary subspaces.
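A minimal numerical sketch of this viewpoint (the unit tanh(w·x), and all variable names, are my own illustrative assumptions, not anything from the video):

```python
import numpy as np

# One processing unit u(x, w) = tanh(w . x), with input x and weights w both
# in R^n. Its derivative at (x, w) is a linear map on the direct sum
# R^n (+) R^n, represented here as the row block [du/dx | du/dw].

def unit(x, w):
    return np.tanh(w @ x)

def unit_derivative(x, w):
    s = 1.0 - np.tanh(w @ x) ** 2        # tanh'(w . x)
    d_dx = s * w                         # restriction to the input subspace
    d_dw = s * x                         # restriction to the weight subspace
    return np.concatenate([d_dx, d_dw])  # direct sum of the two restrictions

x = np.array([0.5, -1.0])
w = np.array([0.3, 0.8])
J = unit_derivative(x, w)

# Applying the full derivative to a joint perturbation (dx, dw) is the same
# as applying the two restrictions separately and summing:
dx = np.array([0.01, 0.02])
dw = np.array([-0.03, 0.04])
full = J @ np.concatenate([dx, dw])
parts = J[:2] @ dx + J[2:] @ dw
assert np.isclose(full, parts)
```

The two slices `J[:2]` and `J[2:]` are exactly the restrictions to the input and weight subspaces; their sum recovers the whole derivative on the direct sum.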
2.- More general layers (any number of units).
Single-unit layers can create a bottleneck that renders the whole network useless. Putting several units together in a single layer is equivalent to taking their product (as functions, in the sense of set theory). The layer is a function of the inputs and of the weights of all its units. The derivative of a layer is then the product of the derivatives of the units; this is a product of linear transformations.
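Continuing the same illustrative sketch (a layer of tanh units; the names are assumptions of mine, not from the video), the product of unit derivatives becomes a Jacobian whose rows are the individual unit derivatives:

```python
import numpy as np

# A layer of m units, each unit_j(x, w_j) = tanh(w_j . x), is the product of
# its units: layer(x, W) = (u_1(x, w_1), ..., u_m(x, w_m)).

def layer(x, W):
    return np.tanh(W @ x)                # W: (m, n), one weight row per unit

def layer_jacobian_wrt_inputs(x, W):
    s = 1.0 - np.tanh(W @ x) ** 2        # (m,) tanh' per unit
    return s[:, None] * W                # (m, n): row j is the derivative of u_j

x = np.array([0.5, -1.0])
W = np.array([[0.3, 0.8],
              [-0.2, 0.1]])
J = layer_jacobian_wrt_inputs(x, W)

# Row j of the layer Jacobian equals the derivative of unit j computed alone,
# i.e. the derivative of the product is the product of the derivatives:
for j in range(W.shape[0]):
    s_j = 1.0 - np.tanh(W[j] @ x) ** 2
    assert np.allclose(J[j], s_j * W[j])
```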
3.- Networks with any number of layers.
A network is the composition (as functions, in the set-theoretic sense) of its layers. By the chain rule, the derivative of the network is the composition of the derivatives of the layers; this is a composition of linear transformations.
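Again as an illustrative sketch (two tanh layers with assumed shapes, not anything from the video), the chain rule turns composition of layers into matrix multiplication of their Jacobians:

```python
import numpy as np

# A two-layer network net(x) = layer2(layer1(x)); by the chain rule its
# derivative with respect to the input is the composition (matrix product)
# of the layer Jacobians, evaluated at the right intermediate points.

def layer(x, W):
    return np.tanh(W @ x)

def layer_jacobian(x, W):
    s = 1.0 - np.tanh(W @ x) ** 2
    return s[:, None] * W

rng = np.random.default_rng(0)
W1 = rng.normal(size=(3, 2))
W2 = rng.normal(size=(1, 3))
x = rng.normal(size=2)

h = layer(x, W1)                                      # hidden activations
J_net = layer_jacobian(h, W2) @ layer_jacobian(x, W1) # composition of derivatives

# Check against a finite-difference directional derivative of the composite:
v = np.array([1.0, -0.5])
eps = 1e-6
fd = (layer(layer(x + eps * v, W1), W2) - layer(h, W2)) / eps
assert np.allclose(J_net @ v, fd, atol=1e-4)
```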
4.- Quadratic error of a function.
...
---
Since this comment is becoming too long I will stop here. The point is that a very general viewpoint clarifies many aspects of BPP.
If you are interested in the full story and have some familiarity with Hilbert spaces, please search for papers dealing with backpropagation in Hilbert spaces. A related article with matrix formulas for backpropagation on semilinear networks is also available.
For a glimpse into a completely new deep learning algorithm, which is orders of magnitude more efficient, controllable and faster than BPP, search on this platform for a video about deep learning without backpropagation; its description contains links to demo software.
The new algorithm is based on the following very general and powerful result (google it): Polyhedrons and perceptrons are functionally equivalent.
For the elementary conceptual basis of NNs see the article Neural Network Formalism.
Daniel Crespin
The way you explain is amazing; we don't have such lecturers even in one of the top IT universities in Pakistan. God bless you.
Hi princess
My paper is tomorrow; I had given up all hope, but goddammit, I found you.
my princess
CLAP YOUR HANDS!!!
The way you explain is very nice. I tried learning from other videos but didn't understand; thank you for making it so easy.
One of the best lecturers, unlike the dummy lecturers 🙏
You teach very well, ma'am. My paper is tomorrow and I am fully prepared, only because of you, ma'am, only because of you :) ;)
If anybody is from Government College of Engineering Dharmapuri, like here. Exam date: coming soon...
Shaata nindu
........
I am from Krishnagiri
You have given 0.15 as w1 but mentioned 0.05; that's wrong, and the same happened with other weights and x values too. When you declare particular data you need to stay consistent with it, right? People who can't spot the mistake will follow it the same way. Irrespective of this, I love your videos; that's why I'm watching, and I'm just letting you know what's wrong here. I know it happens to everyone; it's our duty to point it out, so that followers who got confused and open the comments will feel relaxed knowing it was just written wrong. That's the only reason I'm commenting this.
Thank you ❤ for your videos.
Hello ma'am, this backpropagation is similar to the deep learning subject.
Please also make a deep learning subject tutorial.
Just wanted to say a big thanks, akka ❤️ God bless you
Your handwriting is very beautiful 😍
Where is the calculation for backpropagation? Where did you adjust the weights? This is forward propagation and can easily be done by anyone; please make a video on the backpropagation calculation.
Best explanation
Ma'am, we have our machine learning exam tomorrow. Please kindly share some important questions ☺
Good teaching as heaven ❤🔥
My suggestion is that you should have put arrows between the lines, so we know which line goes in which direction.
Excellent Mam...Superb
The only thing I needed to know in order to learn something was why we are learning it, the purpose of learning it, and how it applies in daily life.
I had these doubts during my lecturer's class.
thank you so much for such great explanation ma'am
Bro, can you explain it to me?
@@rohanlucky4810 Yeah sure, bro.
@@saisasidhar8006 Bro, can you give me your number?
video starts at
3:00
👍🏻 respect
Ma'am, could you please make a video on the XOR problem using an MLP? My exam is coming up on January 4th, and it would be really helpful. I am studying at St. Joseph's College of Engineering, which is in Chennai.
Madam thank you so much for the explanation
Thank you for the content. Additionally, it is quite funny to hear H pronounced as "hatch"...😂😂
Awesome explanation ❤
Thanks for the very clear video
Ma'am, can you please make videos on Distributed Systems for the 4-2 semester exams? Your videos are very helpful for exams, ma'am.
Please do videos on deep learning also
THANK YOU SO MUCH
WELL UNDERSTOOD
Ma'am, while calculating h2 you should use b2 (bias 2) instead of b1; please correct this.
In exams they ask questions like "error backpropagation algorithm using a multilayer perceptron network".
Should I write all three parts or not? Please tell me.
My exam is on 12th August. I need an explanation of the types of perceptron: discrete perceptron, continuous perceptron, and multi-category single-layer perceptron, each explained separately. College name: Jawaharlal Nehru University Hyderabad.
Bro u should not demand!!!,
You should request 🙂
how did it go bro?
@@Thammudu_____ ok simp
@@itv5610 ok simp
You got notes of ml?
HAE'EYH ONE. 😍
I LOVE THIS ALREADY.
Your english is good
Ma'am, we want Distributed Systems subject videos on the JNTU syllabus; please upload them as early as possible.
Sister, I need a brief on backpropagation without any examples. Please, can you make this video for me? I have exams from the 24th onwards. Please.
Did you pass?...🥲😅
Next-level explanation. 💥💥
Which university are you from?
My final end-semester exam is tomorrow morning. Could you please provide notes for the complete Convolutional Neural Networks topic? 👨🎓
Thank you so much; you explained it so easily.
Hi ma'am,
I have an ML exam tomorrow.
Until now I haven't learned a single topic; please explain it now. My college is affiliated with JNTUK.
Ma'am, please help me.
Ma'am, one doubt: how do we take the b1 and b2 values, or are they given in the question?
same doubt
Ma'am, my 4th semester exam is on July 10, so please upload videos for AIML and TOC.
What is the criterion for selecting the bias factor b?
Can you explain this question with a solution?
1. Backpropagation rule for an artificial neural network: why is it not likely to be trapped in local minima?
I have an exam on the 29th, so please upload it as soon as possible; it's urgent, ma'am, please.
Panjab university. Thank you.
Ma'am, I have an exam tomorrow afternoon at 2 o'clock.
Nice video
My exam is today, 2 hours left. Some video please; I have no idea about the syllabus. Important questions please. RGUKT Nuzvid.
Will you teach IoT and Multicore Architecture?
Why have you not explained stochastic gradient descent rule? Please do it
Ma'am, I have an exam at 10 AM today, only 8 hours left, so please upload fast.
Niceeeeeeeeeeeeeeee bro all the best for your future 👍🏽
Please help this poor guy 🙂
How was your exam
Bro, you have an exam in a few hours and you're asking her to upload videos. Do you think she is some magical robot? 😂😂 Next time when you request something, please give her some time. All the best next time 😅
😂😂
Ma'am, this is the last night; the exam is tomorrow.
Ma'am, I am a KTU B.Tech student. Our Neural Networks and Deep Learning exam is scheduled on 4th March.
Nice explanation mam ... Thank you
NICE EXPLANATION
Thankyou
Ma'am, I had a doubt: why do we take the same bias value 0.6 in both the o1 and o2 input calculations?
Tomorrow is my exam Narasimha Reddy Engineering College
Thanku mam
Madam, I'm from an Osmania University affiliated college; please make more videos on the OU syllabus.
If I take the weights as numbers like 5, 4, 5, 6, 7,
will any problem occur?
I have my ML exam today at 2:30 pm, ma'am.
Did you pass?
Ma'am, my exams are scheduled for 19 Dec. PSIT Kanpur.
You say "don't get confused", but only we know how hard it is.
And on top of that there's your part 3, oh my 🙄
Cryptography lecture videos needed, VTU 7th sem.
STM (Software Testing Methodologies) exam on 26th August; please make videos as soon as possible.
#JNTUH
Nice explanation
Exam on June 9th for 4th sem, 2nd year, on the Anna University syllabus.
Thank you so much....😍😍 I understood every topic easily in just one go.
Thank you mam. Clear 😍😍😍😍💞💞💞💞
Hi ma'am, my exams start from March 4, 2024; I request you to post the ML topics. My college is ACE Engineering College.
Thank you very much ma'am
How does the hidden layer give that output?
Can anyone explain?
Can you prepare playlist for artificial intelligence also
Ma'am, please make all the Data Mining subject videos; our exam is on 19/4/22.
Vadodara Institute of Engineering
Ma'am, can you please explain HMM concepts on or before this Friday, because we have an exam on the 22nd?
From GITAM Bangalore
Thankyou 😊
I need some help in kernel PCA and matrix completion generative models (mixture and latent factor)
Madam, are you a college lecturer? 🤔 Just asking... 🙂
4 hours to go for my exam 🥲
DGVC exam on June 6; subject name: Enterprise Computing.
Ma'am, how did you obtain the bias factor values b1 = 0.35 and b2 = 0.60?
Ma'am, I am from Tamil Nadu; can you put up a video on the Gaussian Mixture Model please?
I love this woman.
While calculating o1, why are you taking b2?
We need the backpropagation algorithm,
that is, the step-by-step process.
Can we take our own examples (like random numbers) as weights and inputs?
Hi ma'am, I'm Shwetha from UBDT College, Davangere. I have my exam on 12.10.2023... please make it according to the VTU syllabus for the M.Tech 2022 scheme, CSE.
Can you please explain the BST (Basics of Sensors and Technology) subject for CSE students? We can't find any all-in-one for that subject, and our exam is soon, i.e. 25th Aug. I'm from Kakatiya Institute of Technology and Science college,
3-2 sem.
The date of my exam is 12 July; please release an important-questions video on Machine Learning. DBATU University, Lonere.
How did you take the values of b1 and b2? Can you explain?
Ma'am, how do you calculate e to the power 0.90?
Subject Name: Machine Learning
Exam Date: the paper is tomorrow
College Name: SVPCET
Syllabus: just help me pass
Language: Hindi
Drk clg ❤
Can you please tell me what linearly separable data is? I'm not understanding it from the videos.
Rajasthan Technical University, Data Mining & Techniques; kindly make a separate playlist according to its syllabus, PLEASE.
thank you
If we write the material explained in the video, will we get marks in the JNTU exam?
Ma'am, please explain the selection sort algorithm and its pseudocode.
My exam is 12 Dec 2023,
R18 JNTUH.
Exam on Aug 21
Ma'am, I have backpropagation in my Data Mining subject; can I write this answer there?