How to Improve the Performance of a Neural Network
- Published 29 Jun 2024
- In this video, we share practical tips on enhancing the performance of your Neural Network.
Notes - drive.google.com/file/d/1FsOz...
============================
Do you want to learn from me?
Check my affordable mentorship program at : learnwith.campusx.in
============================
📱 Grow with us:
CampusX' LinkedIn: / campusx-official
CampusX on Instagram for daily tips: / campusx.official
My LinkedIn: / nitish-singh-03412789
Discord: / discord
👍If you find this video helpful, consider giving it a thumbs up and subscribing for more educational videos on data science!
💭Share your thoughts, experiences, or questions in the comments below. I love hearing from you!
✨ Hashtags✨
#NeuralNetworks #PerformanceImprovement #DeepLearningTips
⌚Time Stamps⌚
00:00 - Intro
01:11 - How to improve a neural network?
09:10 - Fine Tuning Hyperparameters
29:05 - Outro
Channel description, channel tags, video description tags, etc. Look into all of this. Did you alter any settings in your YT account settings? It is unbelievable that this channel has fewer subs than other similar, not-as-good channels. Your content is undoubtedly top tier. I think there is some problem with the YT algorithm not picking up this channel in search results.
Exactly, I also agree with this. I am liking and commenting on every video. There are definitely some issues, no doubt about that. I personally think the logo is very un-catchy; the colors are too dark for a visible logo. The color combination of red, blue, and green is too dark. I understand the logo represents a blue "C" for Campus, a red "X" for X, and a green picture of a campus, but it is completely invisible. I really want to see this channel grow.
Very informative video. I think the loss function is also an important hyperparameter; selecting the right loss function can improve performance.
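A minimal sketch of the point the comment above makes (my own illustration, not from the video): for a sigmoid output unit, the choice between mean-squared error and binary cross-entropy changes the gradient the network learns from. When the model is confidently wrong, MSE's gradient nearly vanishes while cross-entropy's stays large.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

y, z = 1.0, -4.0          # true label is 1, but pre-activation is strongly negative
p = sigmoid(z)            # predicted probability, close to 0 (confidently wrong)

# dLoss/dz for MSE  = (p - y) * p * (1 - p)  -> tiny, since p*(1-p) is near 0
grad_mse = (p - y) * p * (1 - p)
# dLoss/dz for BCE  = (p - y)                -> stays close to -1, learning continues
grad_bce = p - y

print(abs(grad_bce) > abs(grad_mse))  # True: cross-entropy keeps a useful gradient
```

This is why cross-entropy is the usual pick for classification heads, while MSE is reserved for regression outputs.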
Hats off to your hard work, sir... Thank you so much for the DL series ❤
Very good tutorial. Thank you very much for such a good tutorial.
Very good explanation..
Sir, please also add part 2 of the machine learning interview questions..🙏🙏
Please cover underfitting problems as well.
Super knowledge... Thanks.. a ton...
Bro, please update the NLP interview playlist every week
The best 👍💯 thanks🙏
Great content sir👌 👏 👍
Very well explained sir.
11:31 - please cover learning rate in detail, brother
Sir, superb video... kindly suggest some resources on hyperparameter tuning of neural networks in MATLAB
very nice explanation...
Thank you.
Well explained...........
Thank You Sir.
Amazing.....
thank you so much
Incredible!
Brother, you are great.. :)
thank you sir
you are the best
Bro, are you planning to teach the coding part of deep learning using TensorFlow, Keras, or PyTorch, so that we have a reliable source to learn from in a streamlined manner? Across the internet it's clumsy.
We are Waiting For 20K subscribers🥳
Sir, how do you approach new ML/DL topics when you are learning them for the first time?
Thanks
God level teacher ❤️🤌🏻
Really enjoyed Your tutorial
Sir, I have one doubt: how do I run all that code in PyCharm? Could you please help me with that? I am all done with the code for the IPL win probability project but don't know the PyCharm part. Please help me out, sir...🙏🙏🙏🙏
incredible
thanks sir
Why is the exploding gradient problem not seen in ANNs when a ReLU-like activation function is used?
love you sir
From pak
Doing God's work!!
thanks
best
Sir, please update NLP and DL playlists
Gotta love how all the text is in English, the intro is in English, and then it turns to Hindi 🙃
Don't leave YouTube please, don't even joke about it...🙏
finished watching
❤
Why did you stop the data science interview series, sir???
Finally hass...
Revising my concepts.
August 12, 2023😅
💙
Sir please do English videos too.. these videos are precious!!
,🙏
Good, but you missed underfitting
Bro, did you miss the exploding gradient problem?
He covered it a bit at the end of the last video but said he would cover it during RNNs.
Exploding gradients mostly show up in sequence models: as time steps accumulate, the gradients can become too large because the whole sequence is merged into one vector. To minimize it, we can use He initialization.
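The comment above suggests He initialization; another standard mitigation for exploding gradients, added here as my own illustration (not from the video), is gradient clipping: rescale the gradient vector whenever its norm exceeds a threshold, so each update step stays bounded.

```python
import numpy as np

def clip_by_norm(grad, max_norm):
    """Rescale grad so its L2 norm never exceeds max_norm."""
    norm = np.linalg.norm(grad)
    if norm > max_norm:
        grad = grad * (max_norm / norm)
    return grad

g = np.array([30.0, 40.0])            # norm 50, far too large for a stable step
clipped = clip_by_norm(g, max_norm=5.0)
print(np.linalg.norm(clipped))        # 5.0: direction preserved, magnitude bounded
```

Most frameworks expose this directly; in Keras, for example, optimizers accept a `clipnorm` argument that applies the same rescaling per update.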