In your theoretical explanation, "I" is counted as a unique word, but in the practical implementation it is not. Can you please explain why this happens?
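A likely cause, offered here as an assumption since the video's exact code isn't shown: scikit-learn's CountVectorizer uses the default token_pattern r"(?u)\b\w\w+\b", which only matches tokens of two or more characters, so a standalone "I" never enters the vocabulary. The pattern itself can be demonstrated with just the standard library:

```python
import re

# CountVectorizer's default token_pattern: \w\w+ requires 2+ word characters
pattern = re.compile(r"(?u)\b\w\w+\b")

tokens = pattern.findall("I like NLP and I like Python")
print(tokens)  # ['like', 'NLP', 'and', 'like', 'Python'] -- no 'I'
```

So "I" is dropped by the tokenizer before counting, not treated as a duplicate. Passing token_pattern=r"(?u)\b\w+\b" to CountVectorizer would keep single-character tokens.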
A very good explanation, thank you. I'd like to see you demonstrate a matrix where lemmatization is used, so I can visualize how the text pre-processing isn't just reduced to a numerical value, and also see how sentiment is assessed for cases where it is needed.
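A minimal sketch of what such a matrix looks like. The lemma map and sentiment lexicon below are toy stand-ins (a real pipeline would use something like NLTK's WordNetLemmatizer and a full polarity lexicon), but they show how lemmatized tokens become a count matrix and how a lexicon can then score sentiment:

```python
from collections import Counter

# Toy lemma map standing in for a real lemmatizer (e.g. NLTK's WordNetLemmatizer)
LEMMAS = {"dogs": "dog", "running": "run", "ran": "run", "better": "good"}

def lemmatize(token):
    return LEMMAS.get(token, token)

docs = ["the dogs were running", "the dog ran"]
tokenized = [[lemmatize(t) for t in d.split()] for d in docs]

# Document-term count matrix over the lemmatized vocabulary:
# "dogs"/"dog" and "running"/"ran" collapse into shared columns
vocab = sorted({t for doc in tokenized for t in doc})
matrix = [[Counter(doc)[term] for term in vocab] for doc in tokenized]
print(vocab)   # ['dog', 'run', 'the', 'were']
print(matrix)  # [[1, 1, 1, 1], [1, 1, 1, 0]]

# Toy polarity lexicon: lemmatization lets "better" score the same as "good"
SENTIMENT = {"good": 1, "bad": -1}
score = sum(SENTIMENT.get(lemmatize(t), 0) for t in "this was better".split())
print(score)   # 1
```

The point of lemmatizing before vectorizing is that inflected forms share one column, so the matrix stays smaller and sentiment lookups hit the lexicon's base forms.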
Cool explanation
Glad it was helpful!
Nice, Ranjan... Good explanation. You have good teaching skills. Your channel will grow nicely in the future.
Thank you so much 🙂 Glad you liked it.
@@RanjanSharma Why did you stop making videos? Just before the boom 😶
Excellent explanations
Congratulations on 1000 subscribers, good going 👌 keep it up
Thank you so much 😀 Many thanks.
I was a bit confused about the steps:
raw text to tokens, then text cleaning, then vectors, and then applying ML models or algorithms.
Thank you for clearing that up.
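Those steps (raw text → tokens → cleaning → vectors → ML model) can be sketched in plain Python. This is a minimal illustration of the flow, not the video's actual code:

```python
import re
from collections import Counter

def clean(text):
    # text cleaning: lowercase and drop everything except letters and spaces
    return re.sub(r"[^a-z\s]", "", text.lower())

def vectorize(docs):
    tokenized = [clean(d).split() for d in docs]  # raw -> cleaned -> tokens
    vocab = sorted({t for doc in tokenized for t in doc})
    # bag-of-words count matrix: these rows are what an ML model would consume
    rows = [[Counter(doc)[term] for term in vocab] for doc in tokenized]
    return vocab, rows

vocab, X = vectorize(["Raw text, cleaned!", "Then vectorized text."])
print(vocab)  # ['cleaned', 'raw', 'text', 'then', 'vectorized']
print(X)      # [[1, 1, 1, 0, 0], [0, 0, 1, 1, 1]]
```

The rows of X are the numeric vectors that get passed to a classifier or other ML algorithm as the final step.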
Great explanation, but I'm confused about why it tokenized only 11 words instead of the actual 12. It seems to be missing the token for the word "I".
I got an error at line 42 in your code:
sentenses = tokenize.sent_tokenize(corpus)
NameError: name 'corpus' is not defined
The variable should be paragraph, not corpus:
sentenses = tokenize.sent_tokenize(paragraph)