Lecture 9: N-Gram Language Models

  • Published 1 Aug 2024
  • To access the translated content:
    1. The translated content of this course is available in regional languages. For details, please visit nptel.ac.in/translation
       The video course content can be accessed as regional-language text transcripts and books (available under Downloads for each course), as subtitles in the video, and via the Video Text Track below the video.
       Your feedback is highly appreciated. Kindly fill out this form: forms.gle/XFZhSnHsCLML2LXA6
    2. Regional-language subtitles are available for this course. To watch them:
       1. Click on the lecture under Course Details.
       2. Play the video.
       3. Click on the Settings icon; a list of features will be displayed.
       4. From that list, select the option Subtitles/CC.
       5. Select a language from the available languages to read the subtitles in that regional language.

COMMENTS • 6

  • @MohitSharma-lp4cp · 1 year ago +2

    This is one of the best n gram videos I've seen. Straight up no cap

  • @pawanchoure1289 · 2 years ago +1

    N-grams are contiguous sequences of words, symbols, or tokens in a document. In technical terms, they can be defined as the neighboring sequences of items in a document. They come into play when we deal with text data in NLP (Natural Language Processing) tasks.
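    The definition above can be sketched in a few lines of Python. This is a minimal illustration, assuming simple whitespace tokenization (the comment does not fix a tokenizer, so that choice is ours):

    ```python
    def ngrams(tokens, n):
        """Return all contiguous n-token sequences in `tokens`."""
        return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

    sentence = "this is a language model lecture".split()
    print(ngrams(sentence, 2))
    # bigrams: every neighboring pair of tokens, e.g. ('this', 'is'), ('is', 'a'), ...
    ```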

  • @pawanchoure1289 · 2 years ago +1

    The term smoothing refers to the adjustment of the maximum likelihood estimator of a language model so that it will be more accurate. ... When estimating a language model based on a limited amount of text, such as a single document, smoothing of the maximum likelihood model is extremely important.
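    As a concrete illustration of the point above, here is a sketch of add-one (Laplace) smoothing for bigram probabilities. The comment does not name a specific smoothing method, so Laplace is chosen here as the simplest example; the corpus and function names are invented for illustration:

    ```python
    from collections import Counter

    def laplace_bigram_prob(w1, w2, bigram_counts, unigram_counts, vocab_size):
        # Add one to every bigram count, so unseen bigrams get a small
        # nonzero probability instead of zero under maximum likelihood.
        return (bigram_counts[(w1, w2)] + 1) / (unigram_counts[w1] + vocab_size)

    corpus = "the cat sat on the mat".split()
    unigrams = Counter(corpus)
    bigrams = Counter(zip(corpus, corpus[1:]))
    V = len(unigrams)  # vocabulary size

    p_seen = laplace_bigram_prob("the", "cat", bigrams, unigrams, V)    # 2/7
    p_unseen = laplace_bigram_prob("cat", "the", bigrams, unigrams, V)  # 1/6, not 0
    ```

    The unsmoothed maximum likelihood estimate would assign probability 0 to the unseen bigram ("cat", "the"), which is exactly the problem smoothing addresses on small corpora.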

  • @louerleseigneur4532 · 4 years ago +4

    Thanks a lot sir

  • @vivekanand8912 · 3 years ago +2

    JAZZ😉👌

  • @pawanchoure1289 · 2 years ago +2

    completion prediction