“Truck and… guns?”
Lol. Damn right, brother.
How about ineffable, ineffable, and ineffable? Now that would be something to ponder. But instead we're left with garbage in, garbage out.
You can keep pondering while other people actually find answers, lol.
God have mercy = Law, God bless you = abundance, God help me = will
I would say "God have mercy" is more akin to health and healing. Κύριε ελέησον comes from the idea of έλαιον, "olive oil": it's a balm for those who have injuries. The Great Physician is another name we give Christ, along with the title φιλάνθρωπος.
@13:46 Yeah, you nailed it. Just because certain words cluster together in the LLM's calcified (parametric) knowledge, which is a function of the totality of the text you feed it, doesn't mean it's a definition or even close. Adding or removing text will alter which words cluster/coalesce. This is paradigmatic of using LLMs and reading way too much into them. They are subordinate to our language, but they do not accurately represent what our knowledge conveys.
@2:31 The concept of Prince isn't being 'understood'. The coordinates the model 'learns' for words, placing them in an abstract coordinate field (of far more than 3 dimensions), are a function of the corpus of word sequences the model is trained on. Within its training data, prince can also appear in other contexts, next to other words: Prince (the musician), Prince of Persia, etc. So whilst boy + heir + king should come out equal to prince, nothing is being learnt; it's merely being memorised and calcified into the model. Only if you fed the model training data in which princes were explicitly referred to as boy, heir, and king would the coordinates get about as close as they can get. But you've essentially spotted how, under the hood, these things really represent the knowledge they're fed and how they respond when queried. There isn't any reasoning being done. In fact there is no reasoning mechanism. Only interpolation is occurring.
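For anyone who wants to poke at this themselves, here is a minimal sketch of the arithmetic in question, assuming the pretrained GloVe model named below is available through gensim's downloader. The neighbours it returns depend entirely on the training corpus, which is exactly the point above.

```python
# Minimal sketch of embedding arithmetic: which vocabulary word lies
# closest to boy + heir + king? (The model name is an assumption; the
# neighbours depend entirely on the corpus the vectors were trained on.)
import gensim.downloader as api

wv = api.load("glove-wiki-gigaword-50")  # 50-dimensional GloVe vectors

# Sum the three word vectors and list the nearest vocabulary entries.
print(wv.most_similar(positive=["boy", "heir", "king"], topn=5))

# The classic analogy form, for comparison: king - man + woman ≈ queen.
print(wv.most_similar(positive=["king", "woman"], negative=["man"], topn=3))
```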
Machiavelli ungloved.
@11:36 These models are language models, after all, not knowledge models with all the reasoning maps one needs to deal with novelty and to create new knowledge to adapt to said novelty. They model language, which one can exaggerate into the claim that they model a human mind. Hence the polemic used there at the end. Just my thoughts :)
Guénon's and Evola's critiques of science (in "The Reign of Quantity..." and "Ride the Tiger") provide an answer to this issue.
An LLM can't be of much help in the search for God or Truth, because its analysis is biased by its quantitative approach and by a methodology that bases results on statistics drawn from a corpus of texts. It may shed some light on a bunch of other things which are human, all too human.
Peterson's formulation reveals his misstep:
You simply can't "define" the sacred in human language.
In The Brothers Karamazov, Ivan’s Grand Inquisitor claims humans can satisfy their faith in God when they are allowed to have miracles (fortune), mystery (awe), and authority (law). Seems Dostoevsky had the best linguistic understanding of what we think of when we talk about God and faith, no?
These are exactly the questions we should be asking.
Worth looking into how vector spaces behind LLMs nail "understanding" beyond the limit of language.
I don't really get what Jordan Peterson is up to, but they say God meets you where you are, and I don't pretend to know where Jordan Peterson is at in his head space. In my experience I sometimes wonder how to even reach some people on this topic, because I'm reminded of the saying that you can't fill a full cup. A lot of people's conversion to God is first preceded by their cup being knocked off the table. It's written that God chastises those He loves, and so getting your cup knocked off the table is something we try to avoid, but it may be exactly what God wants for us before He can work with us.
There is no meaning from a machine without human observation.
As Mr. Peterson says "God is the point that our sight meets our consciousness"
In machine learning, the word "bias" has a well-established meaning. Although perhaps, in the context of a not entirely scientific article, one of the everyday meanings of the word was meant - e.g. in the sense that both corpora are incomplete and therefore provide different, likewise incomplete, ontologies. Or even prejudice regarding race, gender and so on, which is a hot topic in language models, if you know what I mean)
As for the generally accepted academic meaning: the presumption is that there is some function of which the data are the product, and we select an objective function that is as close as possible to this unknown. Since the amount of data is limited, and the data can be noisy (inaccurate measurements), we can never be absolutely sure that we have found the ideal function itself, but we can check how well it predicts points that were not used in training.
Roughly speaking: we have several points, and we want an equation whose graph passes through them. We can choose a complex formula that passes **exactly** through all of these points, but if we then look at other points that we saved for testing and did not use during training, it may turn out that our function does not pass through them at all, not even close. This is a case of low (or even no) bias and high variance (a bad choice). Or we can find a very simple formula that works equally poorly on both the training and the test data (high bias, relatively low variance; also bad, low predictive ability).
If we’re lucky, we can find a formula of intermediate complexity that passes **approximately** through the points of the training set and **approximately** through the points of the test set (low bias, low variance) - a good result.
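A minimal numpy sketch of that trade-off, with an invented function and invented noise, purely for illustration:

```python
# Fit polynomials of increasing degree to noisy samples of an "unknown"
# function, then compare training error with error on held-out test points.
import numpy as np

rng = np.random.default_rng(0)
true_fn = np.sin                                     # the "unknown" function
x_train = rng.uniform(0, 6, 10)
y_train = true_fn(x_train) + rng.normal(0, 0.2, 10)  # noisy measurements
x_test = rng.uniform(0, 6, 50)                       # points saved for testing
y_test = true_fn(x_test)

for degree in (1, 4, 9):
    coeffs = np.polyfit(x_train, y_train, degree)
    train_mse = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_mse = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    print(f"degree {degree}: train MSE {train_mse:.3f}, test MSE {test_mse:.3f}")

# degree 1: high bias (poor everywhere); degree 9: passes exactly through
# the 10 training points but fails on the test points (high variance);
# degree 4: approximately right on both (low bias, low variance).
```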
But I got the impression that the article is talking about the training having been carried out on a biased corpus, which, however, leaves open the question of how the authors imagine an unbiased corpus...
I just realized that the LLMs use the same reasoning as the Torah codes.
can you elaborate more? what you said is piquing my curiosity
@@regnbuetorsk Hebrew script doesn't write vowels. So let's take a made-up word: "THT". In Hebrew it might be read as TOHOT, or TAHAT, or TOHAT, and each reading would have a different meaning. So if you have a sentence, that sentence has multiple meanings. It's not entirely different from assigning multiple meanings to each word, but it's structurally different.
So fully realized words such as TOHOT and TOHAT are each going to be closest, mathematically, to a base word THT. It's the average.
It's more technical than that, but that's the gist for a layman.
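Purely as an illustration of that "average" idea, with invented words and invented numbers - real subword and root modelling is far more involved:

```python
# Make the "base word as average" intuition concrete with made-up 3-d
# vectors for the hypothetical vocalisations of "THT". Everything here
# is invented for illustration.
import numpy as np

vocalisations = {
    "TOHOT": np.array([0.9, 0.1, 0.3]),
    "TAHAT": np.array([0.7, 0.3, 0.5]),
    "TOHAT": np.array([0.8, 0.2, 0.4]),
}

# The consonantal root sits at the centroid of its realised forms, so it
# is the vector every vocalisation is, on average, closest to.
root_THT = np.mean(list(vocalisations.values()), axis=0)
print(root_THT)  # -> [0.8 0.2 0.4]
```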
@@DensityMatrix1Very interesting….
Bias, in this context, refers to the training of the LLM. Those vectors in space, and their proximity to each other, are built and tuned during training; depending on what text you train on, you can get very different vectors.
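A hedged sketch of that corpus dependence: train the same Word2Vec architecture on two tiny invented corpora and compare what ends up near "god". The texts are toy data, so treat the output as illustrative only.

```python
# Same algorithm, different training text -> different neighbourhoods.
from gensim.models import Word2Vec

corpus_a = [["god", "is", "love", "and", "mercy"],
            ["love", "thy", "neighbour"],
            ["god", "heals", "with", "mercy"]] * 50
corpus_b = [["god", "gave", "the", "law"],
            ["obey", "the", "law", "of", "god"],
            ["the", "law", "is", "holy"]] * 50

for corpus in (corpus_a, corpus_b):
    model = Word2Vec(corpus, vector_size=16, window=3, min_count=1, seed=1)
    print(model.wv.most_similar("god", topn=3))
```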
The God Vector is a kickass name for a novel.
Although Western civilisation appears to change, the inability to separate God and man remains:
- Greeks: Gods behaving like humans
- Christianity: God becoming a man
- Neo-liberalism: Humans becoming Gods (determining the law)
Whereas Islam succeeds in the clear separation between God and man.
Can’t remember where I read this.
Imagine if Terry Davis was still around
TempleGPT
LLMs are showing that the collective mind is doing more than just expressing itself with language; it's also trying to solve the puzzle of the human condition. This is hard to recognize because we only use linear language, and the puzzle of the human condition is multidimensional.
I might be going a bit off-topic but I think there's a simple and very useful deduction based on Hermetic principles that we can all use to recognize the ontological nature of God.
Assuming the Whole is God, and that the Whole cannot contain in itself less than any of Its parts, every part should therefore be contained in It.
If that's the case, then we as individuals (parts of the Whole) and possessors of consciousness are forced to assume consciousness to be an attribute of the Whole.
The problem is that the vector space of LLMs has an incredibly high number of dimensions, and this sort of reductive analysis projects only the faintest and most distorted shadow of what that space contains into a few words.
Can a vector space only be explained through another vector space, rather than by transposing it into two-dimensional words?
In other words, can we experiment with understanding without using words?
@@NessieAndrew Yes of course you can experiment. And you might even learn something. But anyone who claims that they *know* what a number, vector, or position “means” or what it “means” to tweak anything like that should immediately be met with suspicion.
@@iron5wolf Absolutely, it's a black box. But it's sort of like superposition: once you look at it, it's gone. Once you translate the vector space into language, you lose all the complexity of the vector space.
It's a kind of understanding that is beyond language and does not intersect meaningfully with language.
@@NessieAndrew It's the nuance that's lost when you "collapse" (project) a vector space into lower dimensions. I'm warning against doing that and then saying you "understand" it. Mostly, you don't.
@@iron5wolf That is what I'm saying. You can't collapse it.
It is "understanding" in higher dimensions and that is by definition inaccessible to us.
What about 'gratitude'?
"Tangible Concepts"? The first paragraph confesses that some terms lack.a tangible referent. The next paragraph then boasts of finding "tangible concepts". Notice any contradiction?
Most people never get close to God because they can't accept the ambiguity. It's a failure of understanding, not of reasoning.
I find it difficult to take this attempt at definition, or its possible outcome, seriously. It seems easier to redefine god than to bring god closer to greed, or to law as the power of authority.
I saw the debate some years ago between Peterson and Zizek regarding Marxism. It was remarkably embarrassing for Peterson, who basically gave a sophomoric book-report rendition of Karl Marx and Marxism.
And I saw his debate with Matt Dillahunty. Again, embarrassing for Peterson.
I just never saw what a lot of his fanboys seem to see in him.
Ipsum Esse Subsistens - I fail to see what more needs to be clarified in regard to a base definition. This is a fascinating topic nowadays simply because it exists, but outside of the seemingly inevitable semantic games that people like to play, I fail to see why the definitions formulated hundreds of years ago don't suffice.
The fact that "being" isn't one of the main words that clusters is interesting to me though.
Maybe a bucket of cold water for Jordan Peterson...?
Putting aside the vexed question of whether Semantic Core Clarification (SCC) comes anywhere close to modeling the inner workings of Dasein, these LLMs also have a very limited definition of god. Both models produced three-word combinations that have a 'law' vector in common. A law-centric god is central to, and limited to, the Abrahamic religions, in particular Judaism and Islam; Christianity, too, defines itself as a break from the old law. Indo-European, East Asian and animistic religions - be it Hinduism, Zoroastrianism, Buddhism, the Norse religions or Shinto - are not law-centric. I think these interesting models say nothing about god but reflect the cultural background and biases of the people who made them.
Semitic core clarification
How could concepts of God not be biased!? Are they expecting to get some "objective" perspective on God? Seems like hubris. I think the more realistic question is whose biases are on display: those of the selected corpus, or of the engineers doing the fine-tuning?
Greed = mammon. This word seems to stand out the most as possibly reflecting a “bias”, but there is no way of knowing what that bias is without knowing the contexts in which the word is embedded. If an LLM could genuinely point out a blind spot, instead of reinforcing a particular ideological norm, then there could be value in realizing our implicit biases. However, I haven’t seen any indication that LLMs can do that yet.
“Hugely over promising and under delivering.” Agreed! Why did they go through the exercise of defining god with three words only to reduce those words to their banal interpretation? Ideally those words were selected from the corpus because their meaning extended out in many directions. As for Peterson's comment to Musk: the three words are meaningless unless elucidated by someone who has had experiences with God.
If their goal was to provide insight into biases, I think they failed. Also, they failed to contribute to AI ethics and explain how AI models “see” the world. This article doesn’t ease concerns that our computer programmers are making a Faustian bargain.
It seems like all those concepts can also be applied to religion, in which case greed would be somewhat logical.
I do think this type of AI-based analysis can be useful as it may provide narrow arguments against materialist objections to god and religion, which seems to be the main project of JBP.
The fact that potentially engaging understandings of god can be logically derived from language per se, may cause a persuasive mystifying experience for materialist atheists.
Language is kind of like mathematics, which is recognised as objectively true by some materialists. So, using LLMs, one can sort of suggest a materialist basis that sort of hints at the validity of the idea of god.
Although I appreciate JBP’s style of symbolic intellectual reasoning, as well as his Christian Orthodox associates J. & M. Pageau, (e.g., it seems to me highly consistent with Heidegger’s notion of Dasein), I don’t believe it can provide solid counter arguments to the materialist negation of the spiritual or supernatural. I disagree with Peterson and don’t see how this, and his other philosophical methods, is relevant to the ontological material status of abstract notions of god.
I would very much like any argument that overturns ontological realism or materialism, but I don’t have one.
That’s how I see it in any case.
This is too language and culturally dependent to produce anything tangibly definite.
This reminds me of the classic short story _The Nine Billion Names of God_ by Arthur C. Clarke.
More generally, there's this incredibly strange mix of hubris but also infantilism in this idea, and most of the latest Peterson shticks.
He definitely seems like the sort of guy who _should_ be at least smart enough to know that LLMs specifically, and AI™ in general, are almost entirely hollow and fake, but not quite of high enough character that, if he did know, he wouldn't still pretend to be impressed by it in order to get more engagement online.
I don't normally comment, but your conclusion reminds me of how hard I facepalm when conservatives naively and clumsily reference DNA/science in order to define "man" and "woman". People knew what a "man" and a "woman" were for as long as they have existed. It's not as if they were confused about the matter until less than a century ago, when DNA was discovered.
It's using a derivative of tacit knowledge to reify that same knowledge, in order to sexy up the obvious, albeit not fully clear or articulable.
The atheist mind demands the same satisfaction as the religious in filling the gaps with answers. However, with a complete lack of self-awareness, they fill the gaps with something equally "unprovable" yet, unlike the mystic, something stale, mechanistic, and uninspiring.
treading through language so carefully that even bringing up language itself is taboo, because god forbid, let's not admit that we are dependent on/shaped by language just like those others, barbarians, or pagans (or whatevers) are. looks like an issue only possible within abrahamic religions.
Funny that LLMs treat language the way Derrida said it functions.
“Hey guys I inserted a couple of terms into my useless AI and it made word soup with no context”
😮😮😮😮😮😮
Now train the model in Mandarin on Mandarin texts, or in Sanskrit on Sanskrit texts, and you will get to know what Chinese or Indian writers say about stuff. This is just a frequency analysis of words used together in sentences and paragraphs - isn't it just statistics of the words people write and what they write about those words? As far as I can see, it is just descriptive statistics of vocabulary and its usage. It does NOT show any hidden causations or hidden meanings; it shows what most people already talk about. Jeez, this AI fever is getting out of control - are we gonna have AI lords in the future?
It is literally just a description of culture/texts. I have no idea why you would think you had found something new in LLMs.
Peterson taking the ultimate Protestant take on scripture. haha, gross. he really needs to stop.
There are people who have had concrete experiences(!) with God and have therefore become believers - and then there are populist philosophical chatterboxes like Jordan Peterson who dispense their expertise almost daily on EVERY topic in the world without any personal EXPERIENCES, let alone spiritual insights. Philosophers and psychologists have never really understood religion, and the more widely educated they were, the less so.
I never think of God/higher power/creator and law together. Law is not of the one vibration from the beginning; it came along later to maintain order.
Frankly, I go with the super-old Sumerian tablets. Literacy is DIFFICULT, much less media that last.
But I subscribe to Marcionite Christianity. The Torah/Old Testament makes no sense. A jealous One Almighty? Jealous of whom?
So much more like that. As if circumcisions win wars 🙄
Yes, LLMs are "concordancers" on mega-steroids.
This video made me realize that Dr. Jordan is delusional. I paid attention to his work back in 2017-19, and back then he seemed very insightful, with some good points, but that he thinks this thing here is anything useful or worthwhile is beyond me. To me it actually seems like BS.
Peterson has been rapidly spiralling in the last few years; anything of value he had to say is long past him. By refusing to actually face and address the ideas that challenge his preconceived liberal notions, he has completely joined the controlled neocon opposition of the regime.
Peterson is a funny guy, but in my opinion he is a life coach, not a political philosopher or scholar. Sometimes witty, sometimes cringy, but that jacket with icons of his - in my opinion, that's something beyond taste 🤦♂
Greed associated with Hebrew... in the training data.
no disrespect meant to actual autistic people, but this just feels so…autistic. needing an overly literal ‘language equation’ as an attempt to dumb down and capture the depth, mystery, and complex meanings of the sacred is something a computer does…but it does not feel human or helpful to make us look at language and the sacred as reducible by an unfeeling machine. this over-enthusiasm about AI, technology, and the need to include elon musk in everything does make me nauseous. if there is a Way, a Truth, and a Life, this ain’t it.
@@opposingshore9322 this 👏
Man, this is some sketchy shrubbery
99 names of God 🫶 loving your videos, hello from Kyrgyzstan!