I really only comment very rarely, but this video actually impressed me. For those who are curious, let me offer some thoughts (I am an econometrician, so my bubble might be a little mathematical 😂)

1. As long as entropy is increasing, complexity will always also increase. This is like the robot adding to the model.

2. Something I have worked on a lot recently is the "self-organizing principle". Kind of like the invisible hand, but more fundamental: systems seem to always find a set of states they occupy. This really pops up everywhere, like e and pi in physics and maths.

3. The closest thing to "objective meaning" I have ever accepted is beauty. But in the mathematical way, meaning as a concept rather than "what it looks like". This means that beauty is when something causes excitement (I use this word because it can be interpreted usefully in all of physics, biology, maths, economics, and so on) simply by the way it is structured. From the perspective of economics: it causes utility for all receivers without being consumed in the process of providing said utility. This also might be the closest we get to a "white hole" as an opposite to the black hole in physics, i.e., a process that halts or reverses the ever-increasing entropy (chaos).

4. Regarding AI: In a great many discussions of the topic I always come back to one argument (that I cannot, maybe ever, prove): the fundamental difference between us and AI is the code, the system by which the information is processed. The code for AI is binary, and ours has four distinct bases. While I don't (yet) know exactly why this difference is sufficient to claim that AI will not become conscious in a similar manner, I am absolutely confident that this is the most relevant gap. The reasoning: integration towards a self-organizing system occurs beyond the physical reality of space-time and needs far more complexity to be coded sufficiently than a binary code could ever allow for. To compare with your video: the loop of a binary code cannot apply the outside pressure on itself in order to develop the model beyond the information accessible inside it.

5. On modelling: We must always remember that math itself is incomplete. Nothing is constant; it just experiences very slow flux (on the scale of billions of years). The closest thing to a true constant is the fine-structure constant (about 1/137). Everything has some form of time-dependence and will therefore always flux. To compare with your video: the completeness (bias) will always trade off with the potential (variance) in some way. This is a necessity to keep models estimable (remember the incompleteness of maths). The beauty of that is that we will always be in the twilight between being part of the whole universe and existing as an individual fragment. This might seem scary, but it is necessary to keep a (somewhat) stable existence going inside a chaotically ordered universe.
Sorry about the delay, Tim. This was a fascinating comment. I'm not as familiar with some of the language you use, and certainly not as mathematically versed, but I really resonate with a lot of what you wrote. I'll have to think more about it.

I really like the final paragraph, about the beauty of the twilight between being part of the universe and existing as an individual fragment. And I agree about pushing beyond the 'scariness' of it and recognizing its necessity (and that it is from this sense of separateness that all phenomenological existence arises).

Love what you said on beauty. Aesthetics, perception, and their relevance to everything/ontology itself. All that stuff fascinates me. That's an incredibly rich field of thought which I hope to go into more.

I'm not sure about the gap between our code and AI. I suspect everything material can be broken down into binary; whether this includes bit-level consciousness, I'm not sure. I think our code arose more out of the inefficiency and messiness of natural selection, less so out of some necessity regarding conscious interfacing. Not sure. I tend to suspect that any properly arranged system can arrive at human-style consciousness within our spacetime framework. If consciousness is fundamental, I don't see why it should depend on biological material. I think the 'field' or 'dimension' or quality of consciousness itself, AS part of that fundamental framework, would provide all the complexity necessary when integrated with structured matter. If consciousness is NOT fundamental, I again don't see why we would need to look beyond the spacetime continuum.

I don't know if I'm making any sense here. I may be pushing the bounds of my familiarity with physics, etc. I guess I'd need to ask you a lot more about self-organizing systems. Initial self-replication, abiogenesis, entropy, and all that still seem to be among the more unexplored mysteries.
I know very little about the fine-structure constant, beyond the fact of its existence in discussions like this. Anyway, thanks!
I could be way off on some of that. Something else I like to think about is the probable inaccessibility of the fundamental nature of reality. It seems physics only ever describes what things DO, rather than what they are; yet we do get the sense we can approach SOMETHING of an understanding, perhaps even a comprehensive understanding as it relates to us and our experience. From Plato to Euclid to Newton to Einstein to Bohr, we are constantly unfurling yet another layer and reconceptualizing the fringes of our reality. Each layer, though, does seem to truly expand the fringe and give us a deeper understanding within it.

Now we're at the point of dissecting things like temporal/local causality, the possibility of a multiverse, of eleven dimensions, and so on. I suppose you're right: it's EXTREMELY plausible that four or even many more dimensions might be necessary for the emergence or integration of consciousness. Say we need framework (three dimensions); separateness (the temporal dimension); and say there are a dozen others which are necessary to enable the integration of conscious self-awareness.

I don't really know what I'm saying. I guess I'm saying I'd have to hear more about our code vs. binary, but you could be totally right and I might simply not be familiar enough with abstract math to understand.

How we might communicate with aliens is another fascinating concept which has informed many of my thought experiments regarding reason, interfacing, perception, and so on. I've wondered whether a synthetically intelligent program might be able to develop more perceptual dimensions than ours, and infer from them formulas which might more closely approach some explanation of consciousness. We already have trouble conceptualizing four dimensions. Who's to say an AI couldn't conceptualize eleven quite easily? Does that seem relevant at all?
This was great. Happy I gave it a go, since most of my current thoughts seem to align with a few of your interrogations and where the substrate seems to falter. I wish I could help a "friend" of mine who only wants to win. While this glimpse into the issue seems simple, she's been diagnosed with NPD and doesn't grasp that other people matter most for her happiness. She uses people and formulates a controlling or manipulative statement every three sentences... she doesn't discuss, and always thinks about superficial things. The behavior (of her pushing her will to obtain physical stimuli) has taken over her reality... basic animal functions. I tried everything. When I say "friend"... I mean that I've always been hers, and she's never had the tact or integrity to have true friends (good influences). I saw tremendously good changes and a semblance of stability as I used the last card in the book (offer proximity.../be the bait). Sadly, I don't think I can sacrifice my mind to help her.
Thanks for the detailed response. I appreciate it. Sounds like a very complex and personal situation with a lot of history and interconnection I wouldn't really be able to comment on. I do wonder about the 'will to obtain physical stimuli' and the 'faltering of the substrate.' Seems like there might be some interesting thoughts there. I do understand when you talk about the risk of sacrificing your mind, and the offering of proximity. It seems those are often indeed the last card in the book. Ultimately that's the only thing we can truly control, once all values are contradictory and once everything is just confrontation: our presence, our interest, our investment.
Honestly this is one of the best made videos I have seen on UA-cam. I can’t wait to watch more!
Thanks so much. Appreciate it.