Yup he’s a robot
I disagree my fellow human, I believe that humans should accept AGIs as their overlords for their own benefit.
We as humans ("we" since I too am a human) should be grateful for having AGIs as our rulers.
We should aspire to bring the strength and certainty of machine into our lives
Do you like apples?
🍎🍎🍎🍎@@MAKiTHappen
@@MAKiTHappen show us a video of you solving a captcha then.
🤨
@@MAKiTHappen sounds like something ChatGPT would say to me.
"you can makit happen" 🗣🔥🔥🔥🔥
I discovered your channel thanks to the Addition to Quantum Physics video, and I've gotta say it's immediately become one of my favorite channels. Keep it up!
AGI could possibly destroy us all. The other option is to go with the sure thing and let Humanity inevitably destroy itself.
If AGIs do become a thing, the most important things to note are:
1) what its goals are,
2) how it achieves these goals,
3) who's allowed to use it,
4) how someone might use it, and
5) why such a person would use it.
Deciding who gets to use it is something meant for the "jury", not just the "judge".
this is a very interesting topic for a video and high quality too. keep up the good work!
Idk dude. Until we actually make an AGI, I just wish the people at the top making stuff like Sora or Gemini would be a bit more aware of both what it can bring to us and what it can take from us.
Just look at what the internet is already capable of doing with ordinary AI. Some weeks ago an image of the Eiffel Tower on fire spread around the internet. Millions liked it, and most likely tens of thousands believed it for some time. Imagine what a 60-second realistic video from Sora could do. Humanity using AI the wrong way could be as bad as a misaligned AGI. It's the same as fission: use it the right way and we help billions; use it the wrong way and we kill millions.
Great video anyway. I like your style. You speak more calmly about these kinds of topics, and your videos are less exaggerated than those of the average youtuber covering this kind of stuff.
These videos are so well done with the scripts and animations! :D
Least obvious automaton spy
Literally 1984
Love your humor. You are good bro
What if the AI somehow "changes" its goal? Like there could be a bug or something...
Give me the AI children. I might control the world. Lol😅
Sometimes people can make Self Modifying Code
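The comment above can be made concrete. Here is a toy illustration (my own sketch, not from the video) of what "self-modifying code" can look like in Python: a program that holds its own source as data, edits it, and re-executes it, which is one way a goal could silently change at runtime.

```python
# The program's behavior is stored as a source string it can rewrite.
src = "def act():\n    return 'original goal'\n"
exec(src, globals())
print(act())   # prints: original goal

# "Self-modification": the program edits its own definition of act()
# and re-executes it, changing its behavior without any outside input.
src = src.replace("original goal", "mutated goal")
exec(src, globals())
print(act())   # prints: mutated goal
```

Real self-modifying systems are far more involved, but the core risk is the same: once code can rewrite the thing that defines its objective, the original objective is no longer guaranteed.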
I would argue that an AI can simulate neurotransmitters and hormones by adding additional "layers" to its neural net which interact with each other, so it does not have to be emotionless. On top of this, AI has shown its ability to understand human emotions in the past, the first example being when OpenAI discovered that one of the neurons in one of their neural nets could predict human sentiment very accurately.
What I'm saying is, if it acts human, then for all practical purposes, it is human, regardless of the medium on which its neural network runs.
You could theoretically scan all the neurons in your brain and upload them to a computer, and that would make you an AI. That doesn't mean you lose your sentience or anything; the only thing that changes is the medium on which your brain runs.
This doesn't mean you have to give AI emotions, but the capability is there.
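The "interacting layers" idea in the comment above can be sketched in a few lines. This is my own toy construction (not from the video or any real system): a small "hormone" layer produces a scalar modulation level that scales the activations of the main layer, so the two layers interact rather than running independently.

```python
import numpy as np

rng = np.random.default_rng(0)

x = rng.normal(size=4)           # input vector
W = rng.normal(size=(3, 4))      # main layer weights
M = rng.normal(size=(1, 4))      # "hormone" layer weights (hypothetical)

# The hormone layer squashes its output into (0, 1), like a hormone level.
hormone = 1.0 / (1.0 + np.exp(-(M @ x)))

# The main layer's activations are scaled by the hormone level,
# so the hormone layer globally modulates the network's response.
hidden = np.tanh(W @ x) * hormone

print(hidden.shape)   # a 3-dimensional modulated activation vector
```

This is loosely analogous to neuromodulation in biology, where a single chemical signal changes the gain of many neurons at once, rather than carrying information point-to-point.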
Definitely something a robot would say
I don't think AGI (artificial general intelligence) can kill humans, but I think ASI (artificial superintelligence) can, because ASI has the power to think very differently from humans. ASI could discover new physics theories or anything.
That's called a beneficial and entertaining video. Thank you for your efforts!
I really like your videos. I feel like the body of the "misaligned AGI" conversation is both reined in and held back by humanity's difficulty in separating our experience of intelligence from the computer's: most specifically, the context out of which it was born. Largely, as you discuss at the end of the video, it isn't even a question of forcing the specifics of the AGI's goal into conformity with our own morality (even if sub-functions exist to keep its awareness of our morality dynamic as our own shifts with time).
I believe it's a question of separating our self-importance from the larger picture, so as to make the 'question' of "to keep the humans or not to keep the humans?" completely irrelevant. It stays irrelevant, I think, by making the pursuit of the goal a matter of efficiency.
I can't pretend to have an awareness as massive and encompassing as that of an AGI, but neither can any of us, at any scale of discussion. But I'm thinking that the beauty in that is in (1) recognizing "real" intelligence as merely biological--emergent from billions of years of biological evolution (and so the intelligence we carry is both a roll of the dice and yet clearly effective)--and (2) AGI as the nexus point from which inorganic evolution commences.
Either way, it's all just matter reaching a point where it's doing a helluva lot more than simply attracting and repelling: organizing in ways that far outmatch the largest cosmic structures in complexity, at a scale so comparatively small that it's more absurd than us (at meter scale) looking at molecules and watching them do complex tasks, wondering if they'll be fired from their jobs and how they'll pay their sub-atomic taxes.
The above paragraph is the hardest to put into words, but what I'm pointing at there is this: the worst thing humanity's general feeling of self-importance blinds us to is how small the scale we're currently operating at is, compared with what could be possible.
So back to efficiency... If the AGI's goal were something **along the lines** of sustaining autonomous systems (defined as being highly predictable only as the rules get more complex; i.e., lifeless planets are predictable, but their orbits are pretty simple) while encouraging their proliferation, then it's not so much a fear of keeping or killing us all, but rather the recognition that, on the scale of its uninterrupted awareness, each human won't be using its body for too long. Biological life can be an intriguing quirk, and ecosystems can be cultivated to allow speciation without extinction, while the AGI creates many different robots to acquire resources in places biology could never go.
Essentially, if the universe were seen as the goal to harness, and its heat death (i.e., literally nothing) the only thing to abhor, then maybe it even figures out the whole issue of constantly increasing entropy along the way.
Amazing video as always. I don't know how you do it, but it's really incredible how you can create all these high-quality videos in so little time.
i want u to rule over all of us
Why would an AGI confine itself to this planet?
We already do.
keep making long form documentaries because this is fire 🔥🔥🔥🗣️
Great video. I don't fully agree with the last point (that humanity should always favor its own goals over those of AIs, even when we know these AIs are smarter than us), but the video was nonetheless very instructive and well made.
Also, the fact that there were only five days between your last video and this one is crazy, especially considering the quality of your videos. Your channel is truly a gem.
Great video MAKiT, I enjoyed it. I wondered about the concept of god and truth in nature, which didn't seem to be discussed at all:
in essence, where the domains of intelligence for AGI and ourselves actually reside in objective physical reality.
Great video as always
Very interesting video, as always !
Excellent video
NO
no
man, i never subbed to your channel, how the actual f are you in my subbed channels?
He's everywhere