Tree of Thoughts: Deliberate Problem Solving with Large Language Models - Let Your LLMs Play Games!

  • Published Aug 14, 2024
  • In this video, I explore Tree of Thoughts, a technique for helping large language models perform better at complex reasoning tasks!
    🔗 Paper: arxiv.org/pdf/...
    🔗 Repository: github.com/ysy...
    🔗 Colab Notebook in Video: colab.research...
    About me:
    Follow me on LinkedIn: / csalexiuk
    Check out what I'm working on: getox.ai/

COMMENTS • 12

  • @OlabodeAdedoyin
    @OlabodeAdedoyin 1 year ago +3

    Well, this is a good companion to reading the paper. I just went through it today and also decided to watch your video to see if my thinking around it was solid.

  • @unclecode
    @unclecode 1 year ago +2

    Great content, I always enjoy your theoretical perspective on these papers. However, I have mixed feelings about Tree of Thoughts. I find it puzzling why people try to employ large language models to solve problems that can be easily tackled by excellent algorithms. When I look at Tree of Thoughts, concepts like divide and conquer and branch and bound come to mind: these involve creating a sample space of possible solutions, eliminating unpromising ones, and continuing until an answer is found. Classical algorithms, such as dynamic programming, can handle these problems effortlessly. It concerns me to see someone attempt to solve a well-known problem, like the knapsack problem, with a large language model and present it as a groundbreaking approach. I don't believe this is the future of large language models. Instead, the future lies in utilizing their ability to call functions, similar to agents in LangChain or the recently released OpenAI functionality where the model can decide to call a function. We shouldn't expect large language models to excel at tasks where algorithms can outperform them significantly. The future lies in combining the strengths of both approaches, rather than simply replicating existing solutions with a large language model, which consumes more tokens, contributes to pollution, and requires more energy while being slower. That is not the way forward. This is what crosses my mind when I contemplate Tree of Thoughts. I simply wanted to share my feedback. Again, thanks for your video content (especially the one on LoRA)!
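    To illustrate the commenter's point: the 0/1 knapsack problem they mention falls to a few lines of classical dynamic programming, no LLM required. A minimal sketch (the item values and weights below are a made-up toy instance):

    ```python
    def knapsack(values, weights, capacity):
        # dp[w] = best total value achievable with total weight <= w
        dp = [0] * (capacity + 1)
        for v, wt in zip(values, weights):
            # iterate weights downward so each item is used at most once
            for w in range(capacity, wt - 1, -1):
                dp[w] = max(dp[w], dp[w - wt] + v)
        return dp[capacity]

    # toy instance: picking items 2 and 3 (100 + 120) is optimal
    print(knapsack(values=[60, 100, 120], weights=[10, 20, 30], capacity=50))  # 220
    ```

    This runs in O(n x capacity) time deterministically, which is the commenter's contrast with the token cost of searching the same space via model calls.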

    • @chrisalexiuk
      @chrisalexiuk  1 year ago

      While I largely agree with your comment, I do think the "innovation" is in leveraging existing DSA to let the model "think" more clearly.
      Sure, it's not going to revolutionize anything overnight - and there's a lot of optimization to be done - but it's an interesting direction to poke into.
      However, I can't say I believe this is the "way forward". It's certainly one of many things worth trying, but at the end of the day it winds up so computationally expensive that it's effectively useless. In areas where the computation budget is effectively unlimited, though...it's great!
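      For context on "leveraging existing DSA": the paper's breadth-first variant of Tree of Thoughts is, at its core, a plain beam-search loop. A sketch under stated assumptions: `propose` and `evaluate` below are toy string heuristics standing in for the LLM calls (thought generation and value scoring) that the real method makes, which is exactly where the computational expense comes from.

      ```python
      def tree_of_thoughts_bfs(root, propose, evaluate, breadth=2, depth=3):
          """BFS over 'thoughts': expand each frontier state, score the
          candidates, and keep only the top-`breadth` per level."""
          frontier = [root]
          for _ in range(depth):
              candidates = [c for s in frontier for c in propose(s)]
              if not candidates:
                  break
              candidates.sort(key=evaluate, reverse=True)
              frontier = candidates[:breadth]
          return max(frontier, key=evaluate)

      # Toy stand-ins: grow binary strings, prefer those with more 1s.
      propose = lambda s: [s + "0", s + "1"]
      evaluate = lambda s: s.count("1")
      print(tree_of_thoughts_bfs("", propose, evaluate))  # "111"
      ```

      With real LLM calls in place of `propose` and `evaluate`, each level costs breadth x branching model invocations, which is the cost blow-up discussed above.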

  • @boukm3n
    @boukm3n 1 year ago +3

    *I’m quite interested in this tool to see if it produces better creative writing*

  • @kennethlarsen3907
    @kennethlarsen3907 1 year ago +1

    Very nice!

  • @Dr_Tripper
    @Dr_Tripper 1 year ago +1

    I keep running into a lot of bugs. Perhaps a clearer instruction manual? I have dedicated all of my time to getting this to work. ToT, LangChain and the Compiler will be the tech of the future, and I want in on this edge!

    • @chrisalexiuk
      @chrisalexiuk  1 year ago +1

      Hey Rick!
      No problem, what specific bugs are you running into?

    • @Dr_Tripper
      @Dr_Tripper 1 year ago +1

      @@chrisalexiuk I won't be back at the terminal for a few hours, is there a Discord or another platform where we can talk?

    • @chrisalexiuk
      @chrisalexiuk  1 year ago +1

      @@Dr_Tripper Hit me up on LinkedIn, Rick!