How to Reduce Hallucinations in LLMs

  • Published 28 Dec 2024

COMMENTS •

  • @jim02377 • 1 year ago

    When you think about answers to queries, do you draw a distinction between wrong answers and hallucinations? Or between those and not providing an answer when you know it has the data to provide it?

    • @TeamUpWithAI • 1 year ago • +2

      Good question! And yes, they're different.
      "Wrong" is arbitrary anyway, so it really means "wrong relative to the training data provided": the correct answer exists and the model failed to deliver it.
      Hallucination is the AI generating information that is not supported by that training data at all. When I learn XYZ about certain things and come up with a new way to do them that didn't exist in my training, I'm being creative. The LLM is hallucinating :)
      As for not providing an answer when it has the data, that can happen when the model didn't understand the query and is uncertain about the answer. If you have trained the model to say "I don't know" when queries are ambiguous, it may default to that answer to avoid providing potentially incorrect information (one way this can work is sketched below).
      I hope this helped ^^
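
      For anyone curious what that "I don't know" fallback can look like, here is a minimal sketch of confidence-based abstention: score the generated answer by its average token log-probability and refuse below a threshold. The generate_with_logprobs helper, its stubbed output, and the 0.70 threshold are illustrative assumptions, not anything shown in the video.

      import math
      from typing import List, Tuple

      # Hypothetical helper: any LLM client that returns the generated answer
      # plus per-token log-probabilities. Swap in a real client here.
      def generate_with_logprobs(prompt: str) -> Tuple[str, List[float]]:
          # Stubbed response, purely for illustration.
          return "Paris is the capital of France.", [-0.05, -0.10, -0.02, -0.20, -0.01, -0.03]

      ABSTAIN_THRESHOLD = 0.70  # illustrative value; tune on held-out queries

      def answer_or_abstain(prompt: str) -> str:
          answer, token_logprobs = generate_with_logprobs(prompt)
          # Geometric mean of token probabilities as a crude confidence score.
          avg_logprob = sum(token_logprobs) / max(len(token_logprobs), 1)
          confidence = math.exp(avg_logprob)
          if confidence < ABSTAIN_THRESHOLD:
              # Prefer an explicit "I don't know" over a possibly hallucinated answer.
              return "I don't know."
          return answer

      print(answer_or_abstain("What is the capital of France?"))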

  • @jim02377 • 1 year ago

    Also...I don't see the links to the papers?

    • @TeamUpWithAI • 1 year ago • +1

      Jeez, I always forget those! Thanks for reminding me, I'm on it! :)