ADD LLM TO Knowledge-Graph: NEW GIVE Method (Berkeley)

  • Published Oct 16, 2024

COMMENTS • 17

  • @carlhealy
    @carlhealy 2 days ago +8

    Imagine querying a chatbot with "why is the sky blue", receiving the answer we all expect, and then from that point being able to traverse all the way down the epistemic tree, or perhaps up, or into a different conceptual neighborhood entirely. What a beautiful vision. I cannot wait to have a conversation with the first philosopher since antiquity. Thank you for producing these videos!

    •  2 days ago +2

      So Wikipedia before ideological capture.

    • @christopherd.winnan8701
      @christopherd.winnan8701 2 days ago

      Does wikidata still fit the bill?

  • @techpiller2558
    @techpiller2558 2 days ago +5

    I can't put my finger on it, but I sense there is something a bit naive or limiting with the knowledge graphs in the format as we think about them, as seen here. The sort of "things connected by a named relationship" model. However, there is something special about how semantic connections are stored in LLMs. I think the "graphness" of LLMs should be better studied and understood in order to build the sort of knowledge representations we need for reasoning. Also, the LLM hallucination issue might be solvable algorithmically. One key aspect to solve also is generalization/abstraction and analogization (which is abstraction in the sense of "seeing one thing as an instance of another").

    • @fkxfkx
      @fkxfkx 2 days ago +1

      You're on to something. Graphs are static until they're changed or updated, yet there are likely many potential changes waiting to be incorporated.
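
The "things connected by a named relationship" model discussed in this thread is the standard (subject, relation, object) triple representation. A minimal sketch in Python, with all entity and relation names invented for illustration:

```python
from collections import defaultdict

class KnowledgeGraph:
    """Toy triple store: entities connected by named relationships."""

    def __init__(self):
        # adjacency list: subject -> list of (relation, object) pairs
        self.edges = defaultdict(list)

    def add(self, subject, relation, obj):
        self.edges[subject].append((relation, obj))

    def neighbors(self, subject, relation=None):
        # follow outgoing edges, optionally filtered by relation name
        return [o for r, o in self.edges[subject]
                if relation is None or r == relation]

kg = KnowledgeGraph()
kg.add("sky", "has_color", "blue")
kg.add("sky", "explained_by", "Rayleigh scattering")
print(kg.neighbors("sky", "has_color"))  # ['blue']
```

Traversing "down the epistemic tree" from any node, as the first comment imagines, is then just repeated calls to `neighbors`.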

  • @En1Gm4A
    @En1Gm4A 2 days ago +4

    This is history being written here 🤯🤯😎👍

  • @toddbrous_untwist
    @toddbrous_untwist 2 days ago +3

    Talk about a CLIFFHANGER!!
    I can’t wait for Part 2!!!

  • @Gorto68
    @Gorto68 2 days ago +3

    I love that this is basically a codification of Luhmann’s Zettelkasten. A linked system in which you can enter from any direction to construct a new idea.

    • @En1Gm4A
      @En1Gm4A 2 days ago +1

      You're 100% right. I found out about this graph AI structure when I was building my own in Obsidian. It's amazing. 🎉

    • @Gorto68
      @Gorto68 2 days ago +1

      @@En1Gm4A Yes. I used Obsidian to construct a personal Zettelkasten that shows the emergence of new poles of thought in re-structured patterns of knowing. There is a timelapse feature that visually shows the local connections that emerge into larger global structures. And sure enough, this emergence follows a power law where there are a few large nodes/attractors that reflect deep symbolic meaning. One might then ask about the shifting patterns that you begin to detect in an organic fashion, which provides a foundation for seeing complexity and interdependencies instead of surface meaning. I could go on about how it has revolutionized my thinking about thinking, making leaps to cover the sparse links. But then I am making the psychological point of this video.

    • @En1Gm4A
      @En1Gm4A 1 day ago

      @@Gorto68 Yeah, it's mind-blowing. You're right about the power law; that's a real thing. I wish I knew locally how well my links and notes follow a power law; deviations might be a sign to add material or a sign of things to uncover.

  • @DigitalDesignET
    @DigitalDesignET 2 days ago +1

    Simply saying thanks, from Ethiopia.

    • @code4AI
      @code4AI  1 day ago

      You are welcome.

  • @ibongamtrang7247
    @ibongamtrang7247 2 days ago +2

    The important part of self-improving, autonomous agents is how they organize their memories. But currently, most systems separate the LLM (the brain) from its memory using RAG. It is interesting, though, that we now have a method to merge them together as a true brain.
    One part that concerns me is latency and cost. Can we implement something like a cache here? Because we have slow and fast thinking; can we somehow cache the slow into the fast, like humans do?
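
The "cache the slow into fast" idea above is essentially memoization of expensive reasoning calls. A hedged sketch, where `slow_llm_call` is a hypothetical placeholder, not a real API:

```python
import functools

def slow_llm_call(question: str) -> str:
    # Hypothetical stand-in for an expensive "slow thinking" step,
    # e.g. an LLM call combined with graph traversal.
    return f"answer({question})"

@functools.lru_cache(maxsize=1024)
def cached_answer(question: str) -> str:
    # The first call pays the slow cost; identical questions afterwards
    # are served from the cache ("fast thinking").
    return slow_llm_call(question)

cached_answer("why is the sky blue")  # slow path, result stored
cached_answer("why is the sky blue")  # served from cache
```

A real system would likely need semantic (embedding-based) cache keys rather than exact string matches, since users rarely repeat a question verbatim.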

  • @42svb58
    @42svb58 1 day ago

    In practice, knowledge graphs will be full of relationships that do not matter, since production data will simply take too long to map out. The real question is getting LLMs to understand the important relationships, optimize those explicit definitions/parameters, and drop nodes without a heavy implementation.
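
One lightweight way to drop unimportant relationships, as the comment above asks for, is to score each edge and prune below a threshold. The edges and weights here are invented for illustration; a real system might derive scores from query frequency or LLM-assessed relevance:

```python
# Each edge: (subject, relation, object, importance score in [0, 1]).
edges = [
    ("sky", "has_color", "blue", 0.9),
    ("sky", "mentioned_in", "some song", 0.1),
    ("blue", "explained_by", "Rayleigh scattering", 0.8),
]

def prune(edges, threshold=0.5):
    # Keep only edges whose importance clears the threshold.
    return [(s, r, o) for s, r, o, w in edges if w >= threshold]

print(len(prune(edges)))  # 2
```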

  • @fkxfkx
    @fkxfkx 2 days ago +1

    imagine when the graph is contained in a quantum system

  • @pruff3
    @pruff3 2 days ago

    You are so GIVEing