Why Generative AI hallucinates and gives different answers

  • Published 25 Jun 2024
  • I explain how you can find your answers with LLMs, much as a detective would interrogate a witness about what really happened. I hope you find this analogy useful when you think about leveraging LLMs in your own business or context.
    I am deeply grateful to my friends of over 40 years, Srinivasan Sundhararajan and K. Govindarajan. This collaborative project got us back to playfulness as we collaborated across the world through Zoom every Sunday, and we still continue that touch point, often taking our ideas in tangential directions. They help bring back the curiosity and inquisitiveness of our young minds, if I may say so.
    We had a good time with RAG experiments, writing and critiquing code, trying to fix the traditional "it works on my computer" problems, and having a jolly good time. Our insightful conversations have been instrumental in shaping the ideas presented in this video.
    ------------------
    I help businesses tell effective stories for digital transformation, so they can drive results.
    www.drrajramesh.com
    www.linkedin.com/in/rajramesh

COMMENTS • 8

  • @SatishPatel • 1 day ago

    Great. Just great. So very pedagogically explained. Thanks.

  • @janzandberg7294 • 3 days ago +1

    Thank you for sharing your excellent insights. Your videos give me a much better perspective in using AI and deserve to be viewed by many more viewers.

    • @RajRamesh • 2 days ago

      Thank you! Appreciate that.

  • @Zappbrannigan83 • 3 days ago +1

    Great analogy. It highlights my concern. Gen AI as a dog makes a lot of sense. From my experience and bias, I am very fearful of it, because I haven't seen much evidence lately from large corporations that they would use this technology in a way that (to put it bluntly) wouldn't ruin everything. I have friends who work in AI and fully believe it has a beneficial capacity and can be a useful tool. That, however, is drowned out by those viral clips of executives displaying a zealous disregard for human suffering. I'm aware I am biased, though, and those are viral clips for a reason, so I'm willing to be talked off the anxiety ledge. Are there any guardrails in place to regulate this technology?

    • @RajRamesh • 2 days ago

      I agree with you, but I also hope you can get off the anxiety ledge, though I don't have any great suggestions to offer.
      Many leaders either have hidden agendas, are clueless, or truly believe what they say. I, too, am concerned that the technology will fall into the hands of a few who might then control the fate of humanity, at least for some populations, especially in non-democratic nations. As far as I know, no significant guardrails are in place right now, other than regulation through suggestions or, in some cases, laws. However, laws do not assure 100% adherence, and there does not seem to be a fail-proof way to regulate AI.

  • @michaelnurse9089 • 3 days ago +1

    On a more technical level, hallucination has a lot to do with temperature (how much variability is forced on the AI) and with RLHF, a process that teaches the model to respond in ways humans 'like'. Since large numbers of lower-paid annotators do this work, they tend to prefer answers that sound 'confident' over answers that say 'I don't really know, or there is too much vagary here to answer'.

    • @RajRamesh • 2 days ago

      Yes, that's certainly part of it. Unsupervised training tries to overcome that problem with proximity measures, but that only goes so far. Thanks for the additional info.
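The temperature point in the comment above can be sketched concretely: at each step, an LLM turns raw scores (logits) for candidate next tokens into probabilities, and temperature controls how flat or peaked that distribution is. The logit values below are made up for illustration, not taken from any real model.

```python
import math

def softmax_with_temperature(logits, temperature):
    """Convert raw model scores into sampling probabilities.
    Lower temperature sharpens the distribution toward the top token
    (near-deterministic output); higher temperature flattens it,
    making unlikely tokens more probable (more varied answers)."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical logits for four candidate next tokens
logits = [4.0, 3.0, 2.0, 1.0]

cold = softmax_with_temperature(logits, 0.5)  # near-greedy sampling
hot = softmax_with_temperature(logits, 2.0)   # much more random

print([round(p, 3) for p in cold])
print([round(p, 3) for p in hot])
```

At temperature 0.5 the top token dominates, so the model answers almost the same way every time; at 2.0 the probabilities spread out, which is one reason the same prompt can yield different answers on different runs.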

  • @mystudy1512 • 2 days ago

    How do you see GPT models advancing in the construction industry?