Thank you so much sir
Most welcome
Do like, share, and subscribe.
Sir, in G2, why did you not include NO for the 5th attribute?
sir you are awesome
love and respect from NEPAL
@@MaheshHuddar sir, Vellore Institute of Technology ..really found your videos helpful
All 5 example videos use a slightly different approach while calculating G. This makes it so much tougher.
In the other examples, for a positive example we update the general boundary first, but in this example it is the reverse. Why is that?
What is the version space in this example, sir?
Why didn't you take yellow?
We proceed based on the information from the previous sets; since the previous set doesn't have yellow, we will not consider yellow (see the sketch below).
He's racist
In G2 you forgot {?, ?, ?, Y, ?}. Or am I wrong?
It's correct
Watch one more time
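To make the reply above concrete, here is a minimal Python sketch of the standard negative-example step, where G is specialized only with attribute values already recorded in S. The attribute values below are made up for illustration and are not the ones from the video; the point is that an unseen value like yellow can never appear in G2, because specializations are built only from what S already contains.

```python
def specialize_G(g, s, negative_example):
    """Minimal specializations of g that reject the negative example."""
    new_G = []
    for i, (s_val, neg_val) in enumerate(zip(s, negative_example)):
        # Only attributes where S disagrees with the negative example can
        # rule it out; values never seen so far (e.g. 'yellow') are not used.
        if g[i] == '?' and s_val != '?' and s_val != neg_val:
            h = list(g)
            h[i] = s_val  # plug in the value already present in S
            new_G.append(tuple(h))
    return new_G

# Hypothetical data for illustration:
S = ('purple', 'round', 'sweet')
g0 = ('?', '?', '?')
negative = ('yellow', 'round', 'bitter')

print(specialize_G(g0, S, negative))
# [('purple', '?', '?'), ('?', '?', 'sweet')]
```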
Thank you for this video.
In G2
What do we do in the case where the specific hypothesis (Si) accepts a negative training example?
The algorithm terminates, and you can't classify anything using it (see the sketch after this thread).
@@Doppel95 Nice man.. It makes sense, because you can't have two different classifications for the exact same data.
One more doubt: in the general hypothesis, we don't take attribute values that have not been seen yet. Is that correct? Because he did not do the same in the previous examples.
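Building on that answer, here is a minimal sketch of the failure case, again with hypothetical values: if the maximally specific hypothesis S already covers a negative example, the same data point would need two different labels, so no consistent hypothesis exists, the version space collapses, and the algorithm stops.

```python
def covers(h, x):
    """True if hypothesis h matches example x ('?' matches anything)."""
    return all(hv == '?' or hv == xv for hv, xv in zip(h, x))

def check_negative(S, x):
    # If the maximally specific hypothesis already matches a negative
    # example, no hypothesis in the space is consistent with the data.
    if covers(S, x):
        raise ValueError("Version space collapsed: S covers a negative example")

# Hypothetical values for illustration:
S = ('purple', 'round', '?')
check_negative(S, ('purple', 'round', 'bitter'))  # raises ValueError
```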
Only purple
Sir, for a +ve label, you calculated the most general hypothesis first in the previous videos, but in this one you calculated the most specific hypothesis first.
Both are perfectly fine
You will get the same output.
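A small sketch of why the order doesn't matter on a positive example: generalizing S and pruning G touch different boundaries, so either order gives the same (S, G). The boundaries below are hypothetical, and this is simplified relative to the full algorithm (which also cross-checks S and G against each other), but that check doesn't depend on order either.

```python
def covers(h, x):
    """True if hypothesis h matches example x ('?' matches anything)."""
    return all(hv == '?' or hv == xv for hv, xv in zip(h, x))

def generalize_S(s, x):
    """Minimally generalize S so that it covers positive example x."""
    return tuple(sv if sv == xv else '?' for sv, xv in zip(s, x))

def positive_update(S, G, x, s_first=True):
    # The two steps are independent, so their order is irrelevant.
    if s_first:
        S = generalize_S(S, x)
        G = [g for g in G if covers(g, x)]  # drop g's that reject x
    else:
        G = [g for g in G if covers(g, x)]
        S = generalize_S(S, x)
    return S, G

# Hypothetical boundaries for illustration:
S0 = ('purple', 'round', 'sweet')
G0 = [('purple', '?', '?'), ('?', '?', 'bitter')]
x = ('purple', 'oval', 'sweet')

print(positive_update(S0, G0, x, s_first=True))
print(positive_update(S0, G0, x, s_first=False))  # same (S, G) either way
```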
@@MaheshHuddar VJTI Mumbai
Sir, your videos are awesome. But I have a doubt: in Candidate Elimination Series Example 3, for a negative instance, you write the general hypothesis by considering the whole dataset, but here, for a negative instance, you write the general hypothesis by considering only the seen examples. Why is that so?
@@MaheshHuddar ok, got it sir. I will check ✅. Thanks
How was that? Can you please explain, @@GekkoCode-naturephilic-techy?