[Own work] On Measuring Faithfulness or Self-consistency of Natural Language Explanations

  • Published 18 Oct 2024

COMMENTS • 20

  • @serta5727 2 months ago +4

    Cool 😎 your explanation was very understandable

  • @MikeAirforce111 2 months ago +5

    Congrats, Doctor!! :-) Looking forward to your future work!

  • @theosalmon 2 months ago +6

    Thank you Dr. Letitia.

  • @alexkubiesa9073 2 months ago +3

    This sounds very useful! LLM users tend to assume that just because an LLM writes like a human, it can introspect and reason about its own thought processes, which is of course not a given. But it’s great to see progress on measuring this ability (or at least self-consistency) so that newer models can be more ergonomic.

  • @MaxShawabkeh 2 months ago +3

    Congrats on the PhD! This is really valuable work! I'm currently trying to squeeze as much reasoning capability as I can out of small LLMs (7-15B) for my company's product, and I'd love a longer video or recorded talk going into the details of your findings: any patterns you've found that improve or reduce self-consistency, or any insights into which existing models or training corpora result in better self-consistency and reasoning capabilities. If you have any pointers, I'd appreciate it!

    • @AICoffeeBreak 2 months ago +2

      As far as we can see from this paper's experiments, RLHF helps improve self-consistency, but we don't yet have hints about what else has this effect. Model size might matter, but at the scales we *could* test on our infrastructure we did not measure an effect; it might still exist beyond what we were able to test.

    • @MaxShawabkeh 2 months ago

      @@AICoffeeBreak Thanks!

  • @Thomas-gk42 2 months ago +6

    Congratulations on your doctorate 🖖

  • @beatrixcarroll8144 2 months ago +6

    Congrats Dr. Letitia!!!! Wow, YOU ROCK!!!!!!! :-D :-) P.S. We missed you!!

  • @DerPylz 2 months ago +5

    Thanks for sharing your work! Always great to see what you're up to!

  • @naromsky 2 months ago +4

    🎉

  • @fingerstyledojo 2 months ago +5

    Yay, new video!
    Thanks for letting me pass yesterday lol

    • @AICoffeeBreak 2 months ago +1

      Wow, you have a channel! It's amazing, just checked it out! 🤩

  • @nitinss3257 2 months ago +5

    1 minute ago for non-members ... good to see ya

  • @Ben_D. 2 months ago +4

    No ASMR? 😟

    • @AICoffeeBreak 2 months ago +2

      The whole take turned out to be a blooper. Next time for sure. 😅

  • @anluifb 2 months ago +1

    So you came up with a method, didn't have time to explain the method to us, and didn't show us that it works. Great.
    If you still have time before Bangkok I would suggest rerecording and focusing on the implementation and interpretation of results rather than the context and wordy descriptions.

    • @AICoffeeBreak 2 months ago +1

      Thanks for your feedback. The method is in the video, just not the finer details.
      1. Interpret both the prediction and the explanation with SHAP (mentioned in the video).
      2. Measure their alignment (mentioned), after:
      - normalisation, to bring the values into the same range (mentioned; we did not mention that SHAP's properties make the values very different between output tokens with different probabilities),
      - aggregation, to collect the many values from the many outputs (mentioned; we did not mention that we use the mean for this).
      For the results, I've summarised in words what we see, along with the main takeaways; for the lengthy tables, please check the paper and its appendix. I'm not sure what you mean by the video not showing that it works: I also walked through an individual example before the takeaways. The problem that there is no ground truth exists for us just as it does for previous work. But for the first time in the literature, we now *compare* existing tests to each other, and our method to them.
      This is why the context is important: our paper's contribution is to evaluate and clarify the state of the field and, as a follow-up contribution, to offer a new method that addresses the shortcomings of existing tests.
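
      To make steps 1-2 concrete, here is a minimal sketch of the
      normalise-aggregate-compare idea in Python. The array shapes, the
      unit-L1 normalisation, and the cosine-similarity alignment score are
      illustrative assumptions of this sketch, not necessarily the exact
      choices in the paper; real SHAP values would come from a library such
      as shap rather than the random stand-ins used here.

          import numpy as np

          def normalize(attr: np.ndarray) -> np.ndarray:
              # Scale each output token's attribution row to unit L1 norm,
              # so tokens with very different output probabilities become
              # comparable (the "normalisation" step above).
              norms = np.abs(attr).sum(axis=-1, keepdims=True)
              return attr / np.where(norms == 0.0, 1.0, norms)

          def alignment(pred_attr: np.ndarray, expl_attr: np.ndarray) -> float:
              # pred_attr: attributions for the predicted answer,
              # expl_attr: attributions for the generated explanation;
              # both shaped (num_output_tokens, num_input_tokens).
              p = normalize(pred_attr).mean(axis=0)  # aggregation via mean
              e = normalize(expl_attr).mean(axis=0)
              # Alignment score: cosine similarity (an illustrative choice).
              denom = np.linalg.norm(p) * np.linalg.norm(e) + 1e-12
              return float(np.dot(p, e) / denom)

          # Hypothetical usage with random stand-ins for real SHAP values:
          rng = np.random.default_rng(0)
          pred = rng.normal(size=(3, 10))   # 3 answer tokens, 10 input tokens
          expl = rng.normal(size=(12, 10))  # 12 explanation tokens, same inputs
          print(f"self-consistency score: {alignment(pred, expl):.3f}")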