Lecture 9: N-Gram Language Models
- Published 1 Aug 2024
- To access the translated content:
1. The translated content of this course is available in regional languages. For details, please visit nptel.ac.in/translation
The video course content can be accessed as regional-language text transcripts and books (available under Downloads for each course), as subtitles in the video, and via the Video Text Track below the video.
Your feedback is highly appreciated. Kindly fill in this form: forms.gle/XFZhSnHsCLML2LXA6
2. Regional-language subtitles are available for this course.
To watch the subtitles in regional languages:
1. Click on the lecture under Course Details.
2. Play the video.
3. Click the Settings icon; a list of options will appear.
4. Select the Subtitles/CC option.
5. Choose a language from the available languages to read the subtitles in that regional language.
This is one of the best n gram videos I've seen. Straight up no cap
N-grams are contiguous sequences of words, symbols, or tokens in a document. In technical terms, they can be defined as neighboring sequences of items in a document. They come into play when we deal with text data in NLP (Natural Language Processing) tasks.
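As a small illustration of that definition, the following sketch extracts contiguous n-grams from a token sequence with a sliding window (the function name and example sentence are my own, not from the lecture):

```python
def ngrams(tokens, n):
    """Return all contiguous n-grams (as tuples) from a token sequence."""
    # slide a window of width n over the tokens
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

tokens = "the cat sat on the mat".split()
bigrams = ngrams(tokens, 2)   # neighboring pairs, e.g. ('the', 'cat')
trigrams = ngrams(tokens, 3)  # neighboring triples
```

Each bigram is a pair of neighboring tokens; raising `n` trades longer context for sparser counts.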
The term smoothing refers to adjusting the maximum likelihood estimate of a language model so that it is more accurate. ... When estimating a language model from a limited amount of text, such as a single document, smoothing the maximum likelihood model is extremely important.
Thanks a lot sir
JAZZ😉👌
completion prediction