Can LLMs Clarify the Concept of God?

  • Published 12 Dec 2024

COMMENTS • 73

  • @tolgonainadyrbekkyzy2159
    @tolgonainadyrbekkyzy2159 6 months ago +2

    99 names of God 🫶 loving your videos, hello from Kyrgyzstan!

  • @whatwilliswastalkingabout
    @whatwilliswastalkingabout 6 months ago +10

    “Truck and… guns?”
    Lol. Damn right, brother.

  • @IM2L84F8
    @IM2L84F8 6 months ago +13

    How about ineffable, ineffable, and ineffable? Now that would be something to ponder. But instead we're left with garbage in, garbage out.

    • @notloki3377
      @notloki3377 5 months ago

      You can keep pondering while other people actually find answers, lol.

  • @Decocoa
    @Decocoa 6 months ago +11

    God have mercy = Law, God bless you = abundance, God help me = will

    • @bellingdog
      @bellingdog 6 months ago +2

      I would say God have mercy is more akin to health and healing. Κύριε ελέησον comes from the idea of έλαιον "olive oil". It's a balm for those who have injuries. The Great Physician is another name we give Christ along with the title φιλάνθρωπος.

  • @Decocoa
    @Decocoa 6 months ago +5

    @13:46 Yeah, you nailed it. Just because certain words cluster together in the LLM's calcified (parametric) knowledge, which is a function of the totality of the text you feed it, doesn't mean it's a definition or even close. Adding or removing text will alter which words cluster/coalesce. This is paradigmatic of using LLMs and reading way too much into them. They are subordinate to our language, but they do not accurately represent what our knowledge conveys.

  • @Decocoa
    @Decocoa 6 months ago +7

    @2:31 The concept of "prince" isn't being 'understood'. The co-ordinates the model 'learns' for words, and where it 'places' them in this abstract co-ordinate field (which has far more than 3 dimensions), are a function of the corpus of word sequences the model is trained on. Within its training data, "prince" can also appear in other contexts next to other words, like Prince (the musician), Prince of Persia, etc. So whilst boy + heir + king should come out roughly equal to prince, nothing is being learnt; it's merely being memorised and calcified into the model. Only if you fed the model training data in which princes were explicitly referred to as boy, heir, and in relation to the king would the co-ordinates be pretty much as close as they can get. But you've essentially spotted how, under the hood, these things really represent the knowledge they're fed and how they respond to being queried. There isn't any reasoning being done. In fact there is no reasoning mechanism. Only interpolation is occurring.
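
To make the "boy + heir + king ≈ prince" point concrete, here is a minimal sketch of the vector arithmetic being discussed. It assumes the gensim library and its downloadable GloVe vectors ("glove-wiki-gigaword-50"); the exact neighbours it prints depend entirely on that training corpus, which is the commenter's point.

```python
# A minimal sketch of the word-vector arithmetic described above.
# Assumes the gensim library and its downloadable GloVe vectors; the
# results depend entirely on the corpus those vectors were trained on.
import gensim.downloader as api

# Load 50-dimensional GloVe vectors trained on Wikipedia + Gigaword.
vectors = api.load("glove-wiki-gigaword-50")

# The classic analogy: king - man + woman ~= queen.
print(vectors.most_similar(positive=["king", "woman"], negative=["man"], topn=3))

# The commenter's example: does boy + heir + king land near "prince"?
print(vectors.most_similar(positive=["boy", "heir", "king"], topn=5))

# Nothing here is "reasoning": the offsets only reflect which words
# co-occurred in the training text, so a different corpus (say, one where
# "Prince" is mostly the musician) would shift the neighbours.
```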

  • @Decocoa
    @Decocoa 6 months ago +4

    @11:36 These models are language models, after all. Not knowledge models with all the reasoning maps one needs to deal with novelty and create new knowledge to adapt to said novelty. They model language, which one can exaggerate to mean that they can model a human mind. Hence the polemic used there at the end. Just my thoughts :)

  • @SalvoColli
    @SalvoColli 6 months ago +2

    Guenon and Evola's critiques of science (in "The Reign of Quantity..." and "Ride the Tiger") provide an answer to this issue.
    LLMs can't be of much help in the search for God or Truth, because the analysis is biased by their quantitative approach and by the methodology of basing the results on some sort of statistical data coming out of a corpus of texts. It may shed some light on a bunch of other things which are human, all too human.

  • @balderbrok6438
    @balderbrok6438 6 months ago +2

    Peterson's formulation reveals his misstep:
    You simply can't "define" the sacred in human language

  • @Th3Fallen1
    @Th3Fallen1 6 months ago +2

    In The Brothers Karamazov, Ivan’s Grand Inquisitor claims humans can satisfy their faith in God when they are allowed to have miracles (fortune), mystery (awe), and authority (law). Seems Dostoevsky had the best linguistic understanding of what we think of when we talk about God and faith, no?

  • @NessieAndrew
    @NessieAndrew 6 months ago +1

    These are exactly the questions we should be asking.
    Worth looking into how vector spaces behind LLMs nail "understanding" beyond the limit of language.

  • @matthewgaulke8094
    @matthewgaulke8094 6 months ago +1

    I don't really get what Jordan Peterson is up to, but they say God meets you where you are, and I don't pretend to know where Jordan Peterson is at in his head space. In my experience I sometimes wonder how to even reach some people on this topic, because I'm reminded of the saying that you can't fill a full cup. A lot of people's conversion to God is first preceded by their cup being knocked off the table. It's written that God chastises those He loves, and so getting your cup knocked off the table is something we try to avoid but may be exactly what God wants for us before He can work with us.

  • @martenscs
    @martenscs 3 months ago

    There is no meaning from a machine without human observation.
    As Mr. Peterson says "God is the point that our sight meets our consciousness"

  • @NA-di3yy
    @NA-di3yy 5 months ago +1

    In machine learning, the word "bias" has a well-established meaning. Although perhaps, in the context of a not entirely scientific article, one of the everyday meanings of the word was meant - e.g. in the sense that both corpora are incomplete and therefore provide different, likewise incomplete, ontologies. Or even prejudice regarding race, gender and so on, which is a hot topic in language models, if you know what I mean)
    As for the generally accepted academic meaning - the presumption is that there is some function whose product is the data, and we select an objective function that is as close as possible to this unknown one. Since there is a limited amount of data, and data can be noisy (inaccurate measurements), we can never be absolutely sure that we have found the ideal function, but we can check how well it predicts points that were not used in training (in selecting the target function).
    Roughly speaking, we have several points, and we want an equation whose graph passes through these points. We can choose a complex formula that passes **exactly** through all of them, but if we then look at other points that we saved for the test and did not use during training, it may turn out that our function does not pass through them at all, not even close. This is a case of low (or even no) bias and high variance (a bad choice). We can instead find a very simple formula that works equally poorly on both the training and the test data (high bias, relatively low variance - also bad, low predictive ability).
    If we're lucky, we can find a formula of average complexity that passes **approximately** through the points of the training dataset and **approximately** through the points of the test dataset (low bias, low variance) - this is a good result.
    But I got the impression that the article is saying the training was carried out on a biased corpus, which, however, leaves open the question of how the authors imagine a non-biased corpus...
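
A toy version of the train/test polynomial example in the comment above, as a minimal sketch in plain NumPy. The data, noise level, and polynomial degrees are invented purely for illustration.

```python
# Illustrating the bias/variance trade-off described above: fit polynomials
# of different degree to noisy points and compare error on held-out points.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 30)
y = np.sin(2 * np.pi * x) + rng.normal(scale=0.2, size=x.size)  # noisy samples of a "true" function

# Hold out every third point for testing.
test = np.arange(x.size) % 3 == 0
x_tr, y_tr = x[~test], y[~test]
x_te, y_te = x[test], y[test]

for degree in (1, 4, 15):
    coeffs = np.polyfit(x_tr, y_tr, degree)   # fit on the training points only
    mse_tr = np.mean((np.polyval(coeffs, x_tr) - y_tr) ** 2)
    mse_te = np.mean((np.polyval(coeffs, x_te) - y_te) ** 2)
    print(f"degree {degree:2d}: train MSE {mse_tr:.3f}, test MSE {mse_te:.3f}")

# Typical pattern: degree 1 underfits (high bias, both errors large); a very
# high degree chases the noise (low bias, high variance: train error shrinks
# while test error tends to grow); a middling degree usually does best on the
# held-out points. (NumPy may warn that the high-degree fit is ill-conditioned.)
```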

  • @RoyalistKev
    @RoyalistKev 6 months ago +5

    I just realized that the LLMs use the same reasoning behind the Torah codes.

    • @regnbuetorsk
      @regnbuetorsk 6 months ago +1

      Can you elaborate more? This thing you said is piquing my curiosity.

    • @DensityMatrix1
      @DensityMatrix1 5 months ago +1

      @regnbuetorsk Hebrew doesn't use vowels. So let's take a made-up word: "THT". In Hebrew it might be read as TOHOT, or TAHAT, or TOHAT. Each word would have a different meaning. So if you have a sentence, that sentence has multiple meanings. It's not entirely different from assigning multiple meanings to each word, but it's structurally different.
      So each fully realized word, such as TOHOT and TOHAT, is going to have a base word, THT, that it is closest to mathematically. It's the average.
      It's more technical than that, but that's the gist for a layman.

    • @TheSeeking2know
      @TheSeeking2know 4 months ago

      @DensityMatrix1 Very interesting…
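
A small numerical sketch of the "the base word is roughly the average" intuition in @DensityMatrix1's reply. The vectors are synthetic stand-ins (random NumPy arrays), not real Hebrew embeddings, so this only illustrates the averaging idea, not the linguistics.

```python
# Synthetic illustration: each vowelized form is modelled as a shared "root"
# direction plus its own offset, and the centroid of the forms lands near the root.
import numpy as np

rng = np.random.default_rng(1)
dim = 50

root = rng.normal(size=dim)                        # pretend embedding of the consonantal skeleton "THT"
forms = {name: root + 0.3 * rng.normal(size=dim)   # each realized form = root + its own meaning offset
         for name in ("tohot", "tahat", "tohat")}

centroid = np.mean(list(forms.values()), axis=0)   # the average of the realized forms

def cos(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

print("centroid vs root:", round(cos(centroid, root), 3))
for name, vec in forms.items():
    print(f"{name:6s} vs root:", round(cos(vec, root), 3))

# The centroid typically sits closer to the shared root direction than any
# single form does, which is the sense in which the base form is "the average".
```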

  • @__napcakes__1911
    @__napcakes__1911 4 months ago

    Bias in this context refers to the training of the LLM. Those vectors in space and their proximity to each other are built and tuned during training. Depending on what text you train on, you could get very different vectors.
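
A minimal sketch of this point, assuming the gensim Word2Vec API: train the same algorithm on two invented toy corpora and the neighbours of "god" come out different. The corpora and hyperparameters are made up and far too small for stable vectors; the only point is that the geometry is a product of the text.

```python
# Same word, same algorithm, different text -> different neighbours.
from gensim.models import Word2Vec

corpus_a = [
    "god gives law and judgment".split(),
    "the king enforces law in the name of god".split(),
    "law and order come from god".split(),
] * 50

corpus_b = [
    "god is love and mercy".split(),
    "the mother shows mercy and love like god".split(),
    "love and healing come from god".split(),
] * 50

for name, corpus in [("corpus A", corpus_a), ("corpus B", corpus_b)]:
    model = Word2Vec(corpus, vector_size=20, window=3, min_count=1, epochs=20, seed=0)
    print(name, "->", model.wv.most_similar("god", topn=3))

# The vectors encode nothing but co-occurrence statistics of the corpus
# they were tuned on, which is why "what text you train on" decides
# what the "God vector" ends up near.
```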

  • @Smegead
    @Smegead 6 months ago +1

    The God Vector is a kickass name for a novel.

  • @y.v.8803
    @y.v.8803 6 months ago

    Although western civilisation appears to change, the inability to separate God and man remains:
    - Greeks: Gods behaving like humans
    - Christianity: God becoming a man
    - Neo-liberalism: Humans becoming Gods (determining the law)
    Whereas Islam succeeds in the clear separation between God and man.
    Can’t remember where I read this.

  • @urbrandnewstepdad
    @urbrandnewstepdad 6 months ago +2

    Imagine if Terry Davis was still around

    • @eddof13
      @eddof13 6 months ago

      TempleGPT

  • @danskiver5909
    @danskiver5909 6 months ago +1

    LLMs are showing that the collective mind is doing more than just expressing itself with language; it's also trying to solve the puzzle of the human condition. It's hard to recognize this because we only use linear language, and the puzzle of the human condition is multidimensional.

  • @liradorfeu
    @liradorfeu 6 months ago

    I might be going a bit off-topic, but I think there's a simple and very useful deduction based on Hermetic principles that we can all use to recognize the ontological nature of God.
    Assuming the Whole is God, and that the Whole cannot contain in itself less than any of Its parts, every part should therefore be contained in It.
    If that's the case, then we as individuals (parts of the Whole) and possessors of consciousness are forced to assume consciousness to be an attribute of the Whole.

  • @iron5wolf
    @iron5wolf 6 months ago +6

    The problem is that the vector space of LLMs has an incredibly high number of dimensions, and this sort of reductive analysis projects only the faintest and most distorted shadow of what that space contains into a few words.

    • @NessieAndrew
      @NessieAndrew 6 months ago

      Can a vector space only be explained through another vector space, rather than transposing it into two-dimensional words?
      In other words, can we experiment with understanding without using words?

    • @iron5wolf
      @iron5wolf 6 months ago +1

      @NessieAndrew Yes, of course you can experiment. And you might even learn something. But anyone who claims that they *know* what a number, vector, or position “means”, or what it “means” to tweak anything like that, should immediately be met with suspicion.

    • @NessieAndrew
      @NessieAndrew 6 months ago

      @iron5wolf Absolutely, it's a black box. But it's sort of like superposition: once you look at it, it's gone. Once you translate the vector space into language, you lose all the complexity of the vector space.
      It's a kind of understanding that is beyond language and does not intersect meaningfully with language.

    • @iron5wolf
      @iron5wolf 6 months ago +1

      @NessieAndrew It’s the nuance that’s lost when you “collapse” (project) a vector space into lower dimensions. I’m warning against doing that and then saying you “understand” it. Mostly, you don’t.

    • @NessieAndrew
      @NessieAndrew 6 months ago

      @iron5wolf That is what I'm saying. You can't collapse it.
      It is "understanding" in higher dimensions, and that is by definition inaccessible to us.

  • @IM2L84F8
    @IM2L84F8 6 months ago +2

    What about 'gratitude'?

  • @HiJackShepherd
    @HiJackShepherd 4 months ago +1

    "Tangible Concepts"? The first paragraph confesses that some terms lack a tangible referent. The next paragraph then boasts of finding "tangible concepts". Notice any contradiction?

  • @mr.coolmug3181
    @mr.coolmug3181 6 months ago +1

    Most people never get close to God because they can't accept the ambiguity. It's a failure of understanding, not reasoning.

  • @Shaarawim
    @Shaarawim 5 months ago +1

    I find it difficult to take this attempt at definition, or its possible outcome, seriously. It seems easier to redefine than to bring god closer to greed, or to law as a power of authority.

  • @grosbeak6130
    @grosbeak6130 6 months ago

    I saw the debate some years ago between Peterson and Zizek regarding Marxism. It was remarkably embarrassing for Peterson, who basically gave a sophomoric book-report rendition of Karl Marx and Marxism.
    And I saw his debate with Matt Dillahunty. Again, embarrassing for Peterson.
    I just never saw what a lot of his fanboys seem to see in him.

  • @Epiousios18
    @Epiousios18 5 months ago

    Ipsum Esse Subsistens - I fail to see what more needs to be clarified in regard to a base definition. This is a fascinating topic nowadays simply because it exists, but outside of the seemingly inevitable semantic games that people like to play, I fail to see why the definitions that were formulated hundreds of years ago don't suffice.
    The fact that "being" isn't one of the main words that cluster is interesting to me, though.

  • @lewreed1871
    @lewreed1871 6 months ago

    Maybe a bucket of cold water for Jordan Peterson...?

  • @mazyar_
    @mazyar_ 3 months ago

    Putting aside the vexed question of whether Semantic Core Clarification (SCC) comes anywhere close to modeling the inner workings of Dasein, these LLMs also have a very limited definition of god. Both models produced 3-word combinations that have a 'law' vector in common. A law-centric god is central to, and limited to, the Abrahamic religions, in particular Judaism and Islam; Christianity, too, defines itself as a break from the old law. Indo-European, East Asian and animistic religions, be it Hinduism, Zoroastrianism, Buddhism, the Norse religions or Shinto... are not law-centric. I think these interesting models say nothing about god but reflect the cultural background and biases of the people who made them.

  • @notloki3377
    @notloki3377 6 months ago +1

    Semitic core clarification

  • @mattbennett277
    @mattbennett277 6 months ago

    How could concepts of God not be biased!? Are they expecting to get some “objective” perspective on God? Seems like hubris. I think the more realistic question is whose biases are on display: the selected corpus's, or those of the engineers doing the fine-tuning?
    Greed = mammon. This word seems to stand out the most as possibly reflecting a “bias”, but there is no way of knowing what that bias is without knowing the contexts in which the word is embedded. If an LLM could genuinely point out a blind spot, instead of reinforcing a particular ideological norm, then there could be value in realizing our implicit biases. However, I haven’t seen any indication that LLMs can do that yet.
    “Hugely over-promising and under-delivering.” Agreed! Why did they go through the exercise of defining god with three words only to reduce those words to their banal interpretation? Ideally those words were selected from the corpus because their meaning extended out in many directions. To Peterson’s comment to Musk, the three words are meaningless unless elucidated by someone who has had experiences with God.
    If their goal was to provide insight into biases, I think they failed. They also failed to contribute to AI ethics and to explain how AI models “see” the world. This article doesn’t ease concerns that our computer programmers are making a Faustian bargain.

  • @verisimlitudesque
    @verisimlitudesque 4 months ago

    It seems like all those concepts can also be applied to religion, in which case greed would be somewhat logical.

  • @mrkristiangutt
    @mrkristiangutt 3 months ago

    I do think this type of AI-based analysis can be useful, as it may provide narrow arguments against materialist objections to god and religion, which seems to be the main project of JBP.
    The fact that potentially engaging understandings of god can be logically derived from language per se may produce a persuasive, mystifying experience for materialist atheists.
    Language is kind of like mathematics, which is recognised as objectively true by some materialists. So, using LLMs, one can sort of suggest a materialist basis that sort of hints at the validity of the idea of god.
    Although I appreciate JBP’s style of symbolic intellectual reasoning, as well as that of his Christian Orthodox associates J. & M. Pageau (e.g., it seems to me highly consistent with Heidegger’s notion of Dasein), I don’t believe it can provide solid counterarguments to the materialist negation of the spiritual or supernatural. I disagree with Peterson and don’t see how this, and his other philosophical methods, are relevant to the ontological material status of abstract notions of god.
    I would very much like an argument that overturns ontological realism or materialism, but I don’t have one.
    That’s how I see it, in any case.

  • @depiction3435
    @depiction3435 6 months ago +2

    This is too language- and culture-dependent to produce anything tangibly definite.

  • @Laotzu.Goldbug
    @Laotzu.Goldbug 1 month ago

    This reminds me of the classic short story _The Nine Billion Names of God_ by Arthur C. Clarke.
    More generally, there's this incredibly strange mix of hubris but also infantilism in this idea, and in most of the latest Peterson shticks.
    He definitely seems like the sort of guy who _should_ be at least smart enough to know that LLMs in particular, and AI™ in general, are almost entirely hollow and fake, but not quite of high enough character that, if he did know, he wouldn't still pretend to be impressed by it in order to get more engagement online.

  • @pplprsn
    @pplprsn 5 months ago

    I don't normally comment, but your conclusion reminds me of how hard I facepalm when conservatives naively and clumsily reference DNA/science in order to define "man" and "woman". People knew what a "man" and a "woman" were for as long as they existed. It's not as if they were confused about the matter up until less than a century ago when DNA was discovered.
    It's using a derivative of tacit knowledge to reify that same knowledge, in order to sexy up the obvious, albeit not fully clear or articulable.
    The atheist mind demands the same satisfaction as the religious in filling the gaps with answers. However, with a complete lack of self-awareness, they fill the gaps with something equally "unprovable" yet, unlike the mystic, something stale, mechanistic, and uninspiring.

  • @VM-hl8ms
    @VM-hl8ms 6 months ago

    Treading through language so carefully that even bringing up language itself is taboo, because god forbid, let's not admit that we are dependent on/shaped by language just like those others, the barbarians or pagans (or whatevers), are. Looks like an issue only possible within Abrahamic religions.

  • @phillipvillani9061
    @phillipvillani9061 6 months ago

    Funny that LLMs treat language the way Derrida said it functions.

  • @pantsonfire2216
    @pantsonfire2216 6 months ago +1

    “Hey guys I inserted a couple of terms into my useless AI and it made word soup with no context”
    😮😮😮😮😮😮

  • @saimbhat6243
    @saimbhat6243 5 months ago +1

    Now train the model in Mandarin on Mandarin texts, or in Sanskrit on Sanskrit texts, and you will get to know what Chinese or Indian people say about stuff. This is just a frequency analysis of words used together in sentences and paragraphs. Isn't it just statistics of the words people write and what they write about those words? As far as I can see, it is just descriptive statistics of vocabulary and its usage. It does NOT show any hidden causations or hidden meanings; it shows what most people already talk about. Jeez, this AI fever is getting out of control - are we gonna have AI lords soon in the future?
    It is literally just a description of culture/texts. I have no idea why you would think that you found something new in LLMs.
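
A bare-bones version of the "frequency analysis" the comment describes, in plain Python: count which words co-occur near "god" in a toy corpus. The corpus is invented; the point is simply that distributional methods start from tables like this.

```python
# Count which words appear within a small window of "god" in a toy corpus.
# Distributional models are, at bottom, compressed versions of tables like this.
from collections import Counter

corpus = [
    "god have mercy on us",
    "god gives the law to the people",
    "the law of god and the mercy of god",
    "greed is the love of money",
]

window = 2
cooc = Counter()
for sentence in corpus:
    words = sentence.split()
    for i, w in enumerate(words):
        if w != "god":
            continue
        for j in range(max(0, i - window), min(len(words), i + window + 1)):
            if j != i:
                cooc[words[j]] += 1   # neighbours within +/- 2 positions of "god"

print(cooc.most_common(5))
# Nothing "hidden" is uncovered: the counts simply describe how this
# particular corpus happens to talk about the word.
```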

  • @areyoutheregoditsmedave
    @areyoutheregoditsmedave 6 months ago +1

    Peterson taking the ultimate Protestant take on scripture. Haha, gross. He really needs to stop.

  • @carstenmanz302
    @carstenmanz302 6 months ago

    There are people who have had concrete experiences(!) with God and have therefore become believers - and then there are populist philosophical chatterboxes like Jordan Peterson who offer their expertise almost daily on EVERY topic in the world without any personal EXPERIENCES, let alone spiritual insights. Philosophers and psychologists have never really understood religion, and the more widely educated they were, the less so.

  • @kittenlang333
    @kittenlang333 5 months ago

    I never think of God/higher power/creator and law together. Law is not of the one vibration from the beginning; it came along later to maintain order.
    Frankly, I go with the super old Sumerian tablets. Literacy is DIFFICULT, much less mediums that last.
    But I subscribe to Marcionite Christianity. The Torah/Old Testament makes no sense. A jealous One Almighty? Jealous of whom?
    So much more like that. As if circumcisions win wars 🙄

  • @morganp7238
    @morganp7238 5 months ago +1

    Yes, LLMs are "concordancers" on mega-steroids.
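
For readers unfamiliar with the term, a concordancer just lists every occurrence of a word with a little context on either side. A toy keyword-in-context (KWIC) sketch in plain Python (the sample text is invented):

```python
# A minimal keyword-in-context (KWIC) concordancer, the kind of tool the
# comment compares LLMs to: it lists each occurrence of a word with a few
# words of surrounding context.
def concordance(text, keyword, width=3):
    """Return each occurrence of `keyword` with `width` words of context on each side."""
    words = text.lower().split()
    hits = []
    for i, w in enumerate(words):
        if w == keyword:
            left = " ".join(words[max(0, i - width):i])
            right = " ".join(words[i + 1:i + 1 + width])
            hits.append(f"{left} [{w}] {right}")
    return hits

sample = ("God have mercy on us and God bless this house "
          "for the law of God is a lamp unto my feet")

for line in concordance(sample, "god"):
    print(line)
```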

  • @chralexNET
    @chralexNET 6 months ago +2

    This video made me realize that Dr. Jordan is delusional. I paid attention to his works sometime back in 2017-19, and back then he seemed very insightful and to have some good points, but that he thinks that this thing here is anything useful or worthwhile is beyond me. To me it actually seems like BS.

    • @n0vitski
      @n0vitski 5 months ago

      Peterson has been rapidly spiralling in the last few years; anything of value he had to say is long past him. By refusing to actually face and address the ideas that challenge his preconceived liberal notions, he has completely joined the controlled neocon opposition of the regime.

  • @NA-di3yy
    @NA-di3yy 5 months ago +1

    Peterson is a funny guy, but in my opinion he is a life coach, not a political philosopher or scholar. Sometimes witty, sometimes cringy, but that jacket of his with the icons - in my opinion, that's something beyond taste 🤦‍♂

  • @Smegead
    @Smegead 6 months ago

    Greed associated with Hebrew.... in the training data.

    • @opposingshore9322
      @opposingshore9322 6 months ago +1

      No disrespect meant to actual autistic people, but this just feels so… autistic. Needing an overly literal 'language equation' as an attempt to dumb down and capture the depth, mystery, and complex meanings of the sacred is something a computer does… but it does not feel human or helpful to make us look at language and the sacred as reducible by an unfeeling machine. This over-enthusiasm about AI, technology, and the need to include Elon Musk in everything does make me nauseous. If there is a Way, a Truth, and a Life, this ain't it.

    • @Anhedonxia
      @Anhedonxia 5 months ago +1

      @opposingshore9322 This 👏

  • @clancynielsen6800
    @clancynielsen6800 6 months ago +1

    Man, this is some sketchy shrubbery
