MIT Sloan's Rama Ramakrishnan Shares Primer on ChatGPT

  • Published 24 Jan 2025
  • Rama Ramakrishnan, Professor of the Practice in Data Science and Applied Machine Learning at the MIT Sloan School of Management, guides us through an exploration of the AI model ChatGPT. The video traces the evolution of ChatGPT from its predecessors, GPT-3 and GPT-3.5. It demystifies the complex mathematical and neural network foundations that enable the model to predict and generate text, based on vast amounts of data sourced from the internet.
    Through this video, you’ll gain insights into:
    The foundational “predict the next word” mechanism that powers GPT models (a toy sketch follows this description).
    The vast training datasets and the role of deep neural networks.
    The emergence of unexpected capabilities as the model evolved.
    The challenges faced, from nonsensical to biased outputs, and the steps taken to mitigate them.
    The transition from GPT-3.5 to ChatGPT, emphasizing its conversational prowess.
    For more MIT Sloan resources on teaching with generative AI, visit our Resource Hub: mitsloanedtech....
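
As a rough illustration of the “predict the next word” mechanism mentioned above (not from the video; the context, candidate words, and scores are all invented), a GPT-style model assigns a score to every word in its vocabulary given the text so far, then turns those scores into a probability distribution with a softmax:

```python
import math

# Invented example: raw scores (logits) a model might assign to a few
# candidate next words, given the context. A real GPT model scores tens of
# thousands of tokens using a deep neural network.
context = "The cat sat on the"
logits = {"mat": 5.1, "floor": 4.3, "roof": 2.0, "banana": -1.5}

# Softmax: exponentiate and normalize, turning raw scores into probabilities.
total = sum(math.exp(v) for v in logits.values())
probs = {word: math.exp(v) / total for word, v in logits.items()}

for word, p in sorted(probs.items(), key=lambda kv: -kv[1]):
    print(f"P({word!r} | {context!r}) = {p:.3f}")
```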

COMMENTS • 7

  • @hemanshuvernenker6894 · 1 year ago · +1

    Excellent explanation of the evolution from GPT to ChatGPT... thank you

  • @Rohwit · 3 months ago

    What a lovely explanation! I am so proud to be a graduate of this magnificent college!

  • @user-bg9em7ch6k · 3 months ago

    Really, really interesting! I also find that if I ask ChatGPT for its sources for certain information, it sometimes understands that I am feeling doubtful about its answer, and it has often corrected itself, giving me (new) sources along with an apology.

  • @col.hemantaggrarwal2118 · 1 year ago

    Amazing explanation!

  • @ashutoshghavi1 · 1 year ago

    Fascinating!

  • @arihantjha7062 · 1 year ago

    As explained at around 3:52, ChatGPT picks the next word by sampling, which would mean that GPT should generate wrong sentences many times, which is not the case. Could you explain this?

    • @shashankchauhan · 1 year ago

      Sampling is done in a probabilistic manner, so you are more likely to get words that have a higher probability (but not always the one with the maximum probability). This also means there is some likelihood of picking an incorrect word, but since the probability associated with that choice is comparatively very low, its chances of being picked are also low.
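
To make the reply above concrete, here is a minimal sketch (the next-word distribution is invented, and real models also apply adjustments such as temperature and top-p truncation that the video and reply do not cover): greedy decoding always takes the most probable word, while sampling usually returns a high-probability word and picks a low-probability one only rarely.

```python
import random
from collections import Counter

# Invented next-word probability distribution for illustration.
words = ["mat", "floor", "roof", "banana"]
probs = [0.70, 0.20, 0.09, 0.01]

# Greedy decoding: always pick the single most probable word.
print("greedy pick:", words[probs.index(max(probs))])  # always "mat"

# Probabilistic sampling: mostly "mat", sometimes "floor" or "roof",
# and the implausible "banana" only about 1% of the time.
counts = Counter(random.choices(words, weights=probs, k=10_000))
for word in words:
    print(f"{word!r}: sampled {counts[word]} / 10000 times")
```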