One of the best teachers on YouTube.
You didn't finish the kofta example 🤣🤣🤣 The explanation is beautiful, may God reward you.
Truly great work, may God reward you. A very excellent explanation.
I graduated from the mathematics department, and I can really understand everything you said. Now I know why topology and linear algebra are so important 😄👍
Your explanation is good, mashallah.
May God reward you, my friend.
💙💙
Thank you, engineer. The explanation is wonderful.
Thank you for your effort, Engineer Mostafa.
May God reward you a thousand times over. Honestly, a tremendous explanation and effort.
Wonderful, boss ❤
9:33: The performance of an ML model should be judged against a baseline model. It's hard to say in the abstract whether 60% is good or bad; it depends on the quality of the data and on whether the extracted/selected features actually help predict the target variable (in other words, whether they are correlated with it). So 60% accuracy is not bad if the baseline performs at 50%. The model should still be improved, either by tuning its parameters or by getting more and/or higher-quality data. The point of this comment: there is no rule that says 60% accuracy is bad.
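To make the baseline point concrete, here is a minimal scikit-learn sketch; the synthetic dataset and the SVC model are illustrative assumptions, not taken from the video:

```python
# Compare a model's accuracy against a trivial baseline, so a figure
# like "60%" can be judged relative to chance rather than in a vacuum.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.dummy import DummyClassifier
from sklearn.svm import SVC

X, y = make_classification(n_samples=1000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

# Baseline: always predicts the most frequent class in the training set.
baseline = DummyClassifier(strategy="most_frequent").fit(X_train, y_train)
model = SVC().fit(X_train, y_train)

print("baseline accuracy:", baseline.score(X_test, y_test))
print("model accuracy:   ", model.score(X_test, y_test))
# The model's score is only meaningful relative to the baseline above.
```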
Thank you so much, may God give you health.
Could you explain SVR?
Could you please make an episode about support vector regression?
Same question from me; do you understand SVR?
So when should we use decision trees versus the SVM algorithm?
The course is well designed, well explained, and simplified enough to be understandable.
Art 🫡♥️
Could I get the link to the paid Python course??
ua-cam.com/video/40SamdcOZbM/v-deo.html
With respect, adding more data is not one of the techniques for solving underfitting: the model is already too simple, so more data won't make the algorithm better. Instead, you can reduce the regularization parameter, do feature engineering to increase the polynomial degree, or add more layers and neurons to a neural network to make the model more complex. Thank you for your great explanation.
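A minimal sketch of those suggestions in scikit-learn; the synthetic data and the LogisticRegression model are assumptions for illustration. Note that sklearn's C is the inverse of regularization strength, so weakening regularization means raising C:

```python
# Fight underfitting by increasing model complexity: higher polynomial
# degree for the features, plus weaker regularization (larger C).
from sklearn.datasets import make_moons
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures, StandardScaler
from sklearn.linear_model import LogisticRegression

X, y = make_moons(n_samples=500, noise=0.2, random_state=0)

# Underfit: linear features, strong regularization (small C).
simple = make_pipeline(
    StandardScaler(), LogisticRegression(C=0.01, max_iter=1000)
).fit(X, y)

# More complex: degree-3 polynomial features, weaker regularization.
complex_model = make_pipeline(
    PolynomialFeatures(degree=3),
    StandardScaler(),
    LogisticRegression(C=10.0, max_iter=1000),
).fit(X, y)

print("simple model accuracy: ", simple.score(X, y))
print("complex model accuracy:", complex_model.score(X, y))
```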
ua-cam.com/video/40SamdcOZbM/v-deo.htmlsi=tGpHgZO0WOa1egXk&t=806
With respect, the C parameter controls the misclassification penalty:
Low C → soft margin, wider margin (more tolerance for misclassification, better generalization)
High C → hard margin, narrower margin (less tolerance, higher risk of overfitting if the data is noisy)
This is because the C parameter is the inverse of the regularization term. Thank you for your great work.
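A quick sketch of that trade-off using sklearn's SVC on deliberately noisy synthetic data; the dataset and the C values are illustrative assumptions:

```python
# Show how C trades margin width against the misclassification penalty:
# low C tolerates errors (softer margin), high C punishes them (harder
# margin) and risks overfitting when labels are noisy.
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

# flip_y injects label noise, where a too-large C tends to overfit.
X, y = make_classification(n_samples=500, n_features=10, flip_y=0.15,
                           random_state=0)

for C in (0.01, 1.0, 100.0):
    scores = cross_val_score(SVC(C=C, kernel="rbf"), X, y, cv=5)
    print(f"C={C:<6} mean CV accuracy: {scores.mean():.3f}")
```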