Imagine querying a chatbot with "why is the sky blue", receiving the answer we all expect, but then being able, from that point in the set, to traverse all the way down the epistemic tree, or perhaps up, or to a different conceptual neighborhood entirely. What a beautiful vision. I cannot wait to have a conversation with the first philosopher since antiquity. Thank you for producing these videos!
So Wikipedia before ideological capture.
Does wikidata still fit the bill?
I can't put my finger on it, but I sense there is something a bit naive or limiting about knowledge graphs in the format we usually think of them, as seen here: the "things connected by a named relationship" model. However, there is something special about how semantic connections are stored in LLMs. I think the "graphness" of LLMs should be better studied and understood in order to build the kind of knowledge representations we need for reasoning. Also, the LLM hallucination issue might be solvable algorithmically. One key aspect to solve is generalization/abstraction and analogization (which is abstraction in the sense of "seeing one thing as an instance of another").
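The "things connected by a named relationship" model the comment refers to is, concretely, a set of subject-predicate-object triples. A minimal sketch in Python (the node and relation names here are illustrative, not from any real dataset):

```python
# A knowledge graph in the "named relationship" sense is just a set of
# (subject, predicate, object) triples. All names below are made up.
triples = {
    ("sky", "has_color", "blue"),
    ("blue", "caused_by", "Rayleigh scattering"),
    ("Rayleigh scattering", "is_a", "light scattering"),
}

def neighbors(node):
    """All facts that mention a node, in either position."""
    return [(s, p, o) for (s, p, o) in triples if s == node or o == node]
```

Traversing "up, down, or to a different conceptual neighborhood" then amounts to repeatedly following `neighbors` from the current node.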
You're on to something. Graphs are static until they're changed or updated, yet there are likely many potential changes available to be included.
This is history being written here 🤯🤯😎👍
Talk about a CLIFFHANGER!!
I can’t wait for Part 2!!!
I love that this is basically a codification of Luhmann’s Zettlekasten. A linked system in which you can enter from any direction to construct a new idea.
You are 100% right. I found out about this graph AI structure when I was building my own in Obsidian. It's amazing. 🎉
@@En1Gm4A Yes. I used Obsidian to construct a personal Zettlekasten that shows the emergence of new poles of thought in re-structured patterns of knowing. There is a Timelapse feature that visually shows the local connections that emerge into larger global structures. And sure enough, this emergence follows a power law, where there are a few large nodes/attractors that reflect deep symbolic meaning. One might then ask about the shifting patterns that you begin to detect in an organic fashion, which provides a foundation for seeing complexity and interdependencies instead of surface meaning. I could go on about how it has revolutionized my thinking about thinking, making leaps to cover the sparse links. But then I am making the psychological point of this video.
@@Gorto68 Yeah, it's mind-blowing. You're right about the power law, that's a thing. I wish I knew locally how well my links and notes represent a power law; deviation from it might be a sign for adding stuff, or of things left to uncover.
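Whether a note graph roughly follows a power law can be checked from its degree distribution: a few highly connected hubs, many weakly connected notes. A toy sketch in Python (the link structure is invented; a real check would export the links from Obsidian):

```python
from collections import Counter

# Toy link structure: note -> notes it links to. Names are made up.
links = {
    "hub": ["a", "b", "c", "d", "e"],
    "a": ["hub"], "b": ["hub"], "c": ["hub"],
    "d": ["e"], "e": [],
}

# Undirected degree of each note (outgoing plus incoming links).
degree = Counter()
for src, dsts in links.items():
    degree[src] += len(dsts)
    for dst in dsts:
        degree[dst] += 1

# A power law shows up as a roughly straight line on a log-log plot of
# degree vs. count; here we only build the degree histogram.
hist = Counter(degree.values())
```

In this toy data, `hist` shows one hub of degree 8 and five notes of degree 2, i.e. the hub-dominated shape the commenters describe.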
Simply saying thanks, from Ethiopia.
You are welcome.
The important part of self-improving, autonomous agents is how they organize their memories. But currently, most systems separate the LLM (the brain) from its memory using RAG. It is interesting, though, that we might have a method to merge them together into a true brain.
One part that concerns me is latency and cost. Can we implement something like a cache here? We have thinking, slow and fast; can we somehow cache the slow into the fast, like humans do?
In practice, knowledge graphs will be full of relationships that do not matter, since production data will simply take too long to map out. The real question is getting LLMs to understand the important relationships, optimize those explicit definitions/parameters, and drop irrelevant nodes without a heavy implementation.
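One lightweight way to drop relationships that do not matter is to score each edge and prune below a threshold. A minimal sketch in Python (the edges and weights are hypothetical; in practice the scores might come from an LLM's relevance judgments):

```python
# Edges as (subject, predicate, object, importance); values are made up.
edges = [
    ("user", "purchased", "item_42", 0.9),
    ("item_42", "viewed_with", "item_99", 0.1),
    ("user", "lives_in", "Berlin", 0.8),
]

THRESHOLD = 0.5

def prune(edges, threshold=THRESHOLD):
    """Keep only edges whose importance score clears the threshold."""
    return [e for e in edges if e[3] >= threshold]

kept = prune(edges)
```

This keeps the graph small without a heavy implementation; the hard part the comment points at is producing trustworthy importance scores in the first place.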
imagine when the graph is contained in a quantum system
You are so GIVEing