Exploring LangFuse

  • Published Sep 29, 2024

COMMENTS • 8

  • @samyio4256 • 2 months ago

    What's the difference between this and LangSmith?

  • @CarlosAmegos • 3 months ago

    The ideas on how to adjust Langfuse to work with a proxy are interesting. However, many developers are not using Kubernetes or even Docker, so how would Langfuse still be accessible to them?
    Also, although Langfuse has to maintain their wrappers, with this approach they would instead have to maintain the proxy to correctly process the HTTP requests... albeit that would most likely require a lot less maintenance.

    • @CarlosAmegos • 3 months ago

      Looked into it more; it seems GPT does provide a way to manually proxy requests using the baseUrl param, although I guess that would add more latency compared to library wrappers.

    • @learncloudnative • 3 months ago

      It's not necessarily just Kubernetes - you can run the "sidecar" model outside of k8s as well.
      Yep, the majority of libraries/frameworks allow setting a proxy URL and routing the traffic through an intermediary.
      Regarding latency: you're doing LLM inference, so the latency of an additional proxy hop shouldn't be your biggest issue here :)
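The baseUrl override discussed in this thread boils down to resolving API paths against a configurable base instead of the provider's default host. A minimal sketch of that idea, using only the standard library; the proxy address and path are hypothetical examples, not anything Langfuse- or OpenAI-specific:

```python
from urllib.parse import urljoin

# Hypothetical endpoints: the provider's default API vs. a local
# observability proxy that forwards requests upstream.
DEFAULT_BASE = "https://api.openai.com/v1/"
PROXY_BASE = "http://localhost:8080/v1/"  # assumed proxy address

def endpoint(base_url: str, path: str) -> str:
    """Resolve an API path against whatever base URL is configured."""
    return urljoin(base_url, path)

# With the default base, traffic goes straight to the provider;
# with the proxy base, the same path is routed through the intermediary.
print(endpoint(DEFAULT_BASE, "chat/completions"))
# → https://api.openai.com/v1/chat/completions
print(endpoint(PROXY_BASE, "chat/completions"))
# → http://localhost:8080/v1/chat/completions
```

SDKs that expose a baseUrl/base_url parameter do essentially this internally, which is why pointing it at a proxy is enough to capture the traffic without wrapping the library.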

  • @danraviv7393 • 5 months ago +1

    Thanks for the overview! I'm thinking of using Langfuse, and this helped me understand how it works; I'll try to integrate it now. It will be interesting to see how you compare it to OpenLLMetry.

    • @learncloudnative • 5 months ago +1

      Yes, I haven't had a chance to look at OpenLLMetry yet!

  • @mohsenghafari7652 • 7 months ago

    Hi, please help me: how can I create a custom model from many PDFs in the Persian language? Thank you.

    • @learncloudnative • 6 months ago +1

      Is there anything specific you're having issues with?