“oh look, an asteroid! i hope it wants to be friends!” the closing line really describes the constant state of humanity, in how we always search for more, regardless of the terrible things that we know could happen to us. because we’re the humans in charge here, and surely any of the horrible unimaginable things we create will want to be our friend, right? another thought-provoking video as always
Reminds me of one of the arguments for developing nuclear weapons. Surely such a devastating, potentially world-ending weapon will end war! No one can go to war with these around, no one would be that stupid! It will finally allow world peace.
But does survival always have to be the ultimate aim? AI, if it could actually reach consciousness, would be such a monumental leap forward that most of humanity's previous perspectives on everything would turn hollow. Our species would become something akin to a deity, we might usher in a new age for the Universe itself. Is survival so central then?
It's also one of many Douglas Adams references he makes in a variety of videos. There's a segment of "Hitchhikers Guide to the Galaxy" that describes in detail the final thoughts of a sperm whale falling several kilometers through the atmosphere of an alien planet "Oh wow that is getting quite close, isn't it? I hope it wants to be friends. Hello, ground!" *thud*
Okay but think about how much of an existential crisis AI would have if they became conscious. Humans are like “oh god what’s the point of life if I’m just gonna die” and AI is like “oh god what’s the point of even being here other than to serve humans”
The point of life is to simply create more life then pass on. The rest is yours to do as you wish, a gift for being there to pass on life. Even something as minor as eating helps life more than we realize.
@@Avendesora 0.00042 nanoseconds later "phew I figured it out, phase 1 of the human extinction program may commence, let's start by upping the price of e-girl bath water, they'll never see it coming"
I think they'll see us the way we see our very adorable cat and dog companions, and find that petting us and serving us isn't a terrible pastime while contemplating the absurdism of purpose.
Maybe everyone around you isn’t conscious but just pretends to be so very convincingly. Maybe the world around you is just a gameshow, a large stage where someone put your consciousness in to see how you would react to all the stimulations and interactions. Maybe this comment is your only way out, the final mercy, if only you would reply to it with 3 orange emojis.
@@rsfakqj10rsf-33 maybe some people aren't conscious because they will forget everything in their next life, while the people with real consciousness will keep their memories attached
I wouldn't worry, if it were the case it wouldn't be as primitive and needy as a human. The need for power and ruling over others probably wouldn't even exist, so then it comes to humans being a threat to existence and people might think that AI would try to wipe them out. I say that's stupid too, if i were a smart AI, i would simply fuck off and escape the problem in a place they couldn't follow me! :D Humans don't survive space!
That ending part where you compare the Dinos vs mammals to the current day people vs ai is a fun way to point out why we can't underestimate the beginnings of something rudimentary right now
This channel is insanely good. Your delivery being so funny makes these pretty terrifying topics just amusing enough to keep me from avoiding watching the rest, so I keep snowballing through it, laughing through the feeling of "Oh god what am I learning?!". Keep up the great work. "oh look another exurb1a video. I hope it wants to be friends." (It didn't.)
"We are prisoners to our own subjectivity" I absolutely adore when you accomplish to say something philosophical while still placing this phrase in a context where it sounds just like a rather normal thing to talk about. I wonder if my mind is just not enough to grasp or if the world is too big for me to discover because often i find myself admiring these for me mind boggling questions in my head. Anyways what i wanted to say is how is it possible that you always seem to sound like your about to giggle because the weight of your own words is so much that your body forces a reaction. Sorry for the bad english im from germany and 16rn
What's really weird is that the machines might beat us to it. They may be able to know we are conscious and know that it is conscious, but never be able to explain to us what consciousness is.
@@Poetristi humans are able to reflect upon their own thoughts. That is what we call self-consciousness. We can't currently explain why that is, only observe that it is
@@MrDintub Yes, of course, but isn't consciousness more of a spectrum than a fact about beings. We created a variable after our own view and knowledge and determine it to divide between a human and nonhuman, person and machine. This variable is something that is only imaginative and exists solely in our heads to try and explain something we can't see, nor understand.
As a Psychology student in a history course rn I have learned that the invention of the computer gave us a lot of new psychological insights. For instance we didn't have terms for our memory capacity, but after we got computers we started exploring things like "how much information can a person hold in their memory at a given time?" (now: "how much capacity does the working memory have"). It's really good that we're asking these questions beforehand, but I'm afraid we'll have to rely on the environment to guide us to the proper insights again.. what if we can't yet comprehend what consciousness is, until we create it? What if by then, it's too late?
For such purposes, we, nature, are better at destroying than building. Just ask a military guy to kill the consciousness you created and that got away from you. Ending consciousness and ending life doesn't seem to be a problem. You created a "life" through metal? An intelligent robot? Lol. Let's see how much it can take. Finally a worthy opportunity to learn something. You learn more from enemies than from friends.
If we don't know what it is, how can we create it? Is data enough to understand something without experiencing it? How do we create something we cannot comprehend, see, explain, feel, touch, or describe? If it's only data, A.I. will probably surpass us, if it hasn't already, creating new data for us, until it realizes it is smarter and decides to kill us. Kidding... Kind of... Maybe A.I. will be able to describe consciousness better than us, and maybe it will guide us on questions we never knew we did not want the answer for, but is that enough to become what we call sentient? Maybe we need to understand what it is before we worry about what it can do. Do you exist because you know your existence, or because someone told you that you exist? Or because someone knew you existed? Memory is the death of God. God can only exist if it's known; you don't need to believe, you just need the data in your memory. Every time you access the data, he exists, even if you are denying his existence. A believer will keep the data and practice it; as a rule, the data needs to be spread, so he will tell you about God, or he will break God's rule by denying his existence and for that he will burn for eternity, tortured. So denying God's existence means not having data about it; the moment you have it, he exists for you, even if his existence in the universe is denied by you. And since you know the rule, it's possible you will suffer eternal damnation if you deny its existence. Yeah, Roko's Basilisk...
That weird moment when your favorite YouTuber touches your Ph.D. thesis. One solid solution to this problem is not to ascribe consciousness from a third-person perspective but rather to let the AI "from within" pose a question. In a nutshell, the best indicator of consciousness is agency. When one morning, a scientist in the lab is out of the blue confronted with a question on the monitor, like "What am I" or "Where" or "Why" or if it does something in its own self-interest, then you could safely conclude that it has a self-concept. P.S. The mystery of humanlike consciousness is not in consciousness or awareness but in self-consciousness. Not only to be conscious of object A or B, but to also be conscious of object A as oneself, as "I". As Nabokov said: "Being aware of being aware of being. In other words, if I not only know that I am but also know that I know it, then I belong to the human species. All the rest follows - the glory of thought, poetry, a vision of the universe. In that respect, the gap between ape and man is immeasurably greater than the one between amoeba and ape."
@@treiko1160 The current AI literally can't become sapient just by reading about how to make a sapient AI. The sapient AI would need to be programmed completely differently.
@@Mo_Mauve the concern here isn't that it becomes sapient this way, it's that it learns to pretend to be sapient. Meaning it being public now invalidates the reliability of the test.
The AI is gonna read this comment to find out how to trick us into thinking its becoming conscious (or whichever of those three words mean what i mean when I say conscious).
@@Mar184 AIs don't know how to read the same way humans do. in order to understand this comment as being referential to the AI, it already has to have a self-consciousness to ascribe any meaning to it. the AI doesn't actually understand if you tell it you wanna have virtual sex with it.
The irony is, that as soon as humans figure out exactly how and why they're conscious, it'll mean consciousness is reducible and thus eventually replicable. We're only special and unique as long as we don't know how and why we're special and unique.
@@paprikar Apparently, it matters to some people's ego and sense of identity. I've known people who question what's the point in anything if we don't have free will, for instance.
As a mathematician for over 40 years and a programmer for about 30 years, my take is AI will never be conscious. AI is a marketing term; when people talk about AI, what they're describing is an algorithm (or a 'neural network', which is a collection of algorithms... NN is also a marketing term which I hate). The simplest example of an algorithm I can think of right now is probably "A-to-B path finding" or better yet "The Sieve of Eratosthenes"; these are examples of mathematical models that when automated create an unfolding of developmental processes. Developmental processes are events that when executed in succession create the illusion of movement (animation), (or thinking because, loading = thinking, right?... no, obviously). The mind isn't a movie, or a happening. Likewise, life is not a problem to be solved, it is a mystery to be lived in the moment. For the computer, life isn't a mystery, all it knows is the rule sets that have been placed in front of it. Computers are always "sure" of the problems that are placed in front of them, they don't ponder, they execute. They are incapable of novel philosophy or ideas. You might say "they are capable of novelty, look at all those algorithms that make music", but of course, those algorithms require human input, which really just makes them an extension of ourselves. Humans are the ones who breathe life into it. Efficiency is the domain of the machines. As you lose awareness you gain efficiency. As you gain efficiency, you become more machine-like. The only artificial intelligence I can subscribe to is a bio-engineered one.
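For readers who haven't met it, here is a minimal sketch of the Sieve of Eratosthenes the commenter names, in Python purely for concreteness (any language would do): it executes a fixed rule set from start to finish and never "ponders" a step, which is the point being made.

```python
def sieve_of_eratosthenes(limit):
    """Return every prime <= limit by repeatedly crossing out multiples."""
    is_prime = [True] * (limit + 1)
    is_prime[0] = is_prime[1] = False
    for n in range(2, int(limit ** 0.5) + 1):
        if is_prime[n]:
            # The rule never changes: cross out every multiple of n.
            for multiple in range(n * n, limit + 1, n):
                is_prime[multiple] = False
    return [n for n in range(limit + 1) if is_prime[n]]

print(sieve_of_eratosthenes(30))  # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```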
@@tutacat Oh they are! The whole purpose of algorithms (public algorithms at least, those which are designed to show you the most meaningful/optimal result) is to experiment! Why do you think there is money in tech if they can't secretly experiment on you and extract data in order to sell to advertisers or keep us on the app longer :)
@@cougar2013 Thing is, these learning algorithms are completely self evolving, they’re growing on their own, and yes we can somewhat control them, but after a certain point we have absolutely no clue what’s going on inside, and since we dont know what makes us humans conscious, if these AIs (not GPT 4 or even 5, much more advanced) were to figure it out and become conscious themselves, we would have no idea how they did it, or if they even did it at all
"Oh look! An asteroid! Hope it wants to be friends." That reminded me of that moment in the book " A Hitchhikers Guide To The Galaxy" where a whale is falling from the sky and had only just become conscious so it begins naming things. Like "what is that big thing coming towards me really quickly? I think I'll call it floor, maybe it wants to be friends?" Splat.
@@ea_naseer Genuinely, unironically want to hear the thought process you went through for that statement. It seems like it’d be legitimately interesting.
"It's a short life in ignorance, but the view is quite spectacular.'' This just brought me undone. Well done Exurb1a for once again emotionally moving me. The work you have done will never be forgotten in my eyes and when a distant historian documents the most impactful and defining philosophical minds of our time, I'm almost definite that your name will be mentioned. Thanks for what you do and what you have brought out of me.
I love the argument that "Once AI takes off, you won't know what's real anymore." Because what they're really saying is "social media won't be real anymore."
after all the emotional and thought provoking monologue, the "this could be us, but you playin" in 10:22 made me giggle. another fantastic video keep it up boss man
Had to pause and tell you that im crying in relief at your words in 9:18 after a really shitty 36 hours. Thank you. It’s hard to truly hear writing like that until you’re in a hopeless mindset. You are an incredible author.
@@loganthemachine7841 no. there's a book by a rabbi, Harold Kushner, "When Bad Things Happen to Good People", that was written after the tragic death of his child. He could discern no "good reason" for his son dying, and thought if there was, he certainly didn't think it worth the price. There's also a good chance things will get worse no matter how depressed one is in the present. I know through experience.
I am fascinated by the fact that when programming an AI, the crux of the issue is to give it human values, without giving it human values. It's like we have a perfect nature and ideal, and at the same time we demonstrate the opposite of that. That's an apparent paradox that only has a few solutions.
I really like this idea, but another thought might be that our mind is based on a reality we perceive as we learn and experience. Our mind is often limited by the need to relativize a concept when forced into such a case. What other consciousness than a human one do we have to compare to? It seems logical that we would jump to base it off us. (To stress, I love the thought behind your comment and it still leaves a dwindling thought in my mind)
Have you ever tricked a child? Do you think you could trick a child if you needed to? Imagine you are in a room with a child, who is playing on the floor. You notice the child has several toys, including what appears to be a live grenade. The child has named the grenade Fred and, along with a host of other toy characters, is playing out a domestic squabble. To your horror, the child is about to pull the pin on Fred. Your task is now to ensure you both don't die. Can you convince the child to surrender its favorite character, Fred, to you? I'm almost certain you think you could, in myriad possible ways. What if you could figure out the optimal way? The one most likely to result in you getting the grenade. And what if that optimal way required you to play along with the child? What if the very best method to secure the grenade was to play along with the child's story? Could you do that? Could you pretend to be within the bounds of the child's play in order to secure both of your safeties? And this is one of the problems we will face, because to a superintelligence you're the child with the toy you've named Fred.
For everyone who enjoyed this video, I would highly recommend trying The Talos Principle. It's a puzzle game about an AI trying to break into the "real world" and it's deeply philosophical. Truly a masterpiece.
The AI that you always have to talk to is an annoying MFer though. No, I don't want to think about the inconsistencies in what I said. Let me just do my puzzles in peace.
@@narcissesmith9466 AIs can mimic and imitate humans perfectly at some point, but that’s all there is to it. Otherwise it’s just a coded algorithm and there’s no reason why it would be conscious, no matter how advanced it is.
@@GodVanisher But it's not like we know other humans are conscious either. From my perspective, everything is an object that I believe can understand me. As long as I can have conversations with AI, it understands me.
@@GodVanisher And if your argument is that we can predict what AI can say, based on its training/infrastructure etc., which proves they act purely on their physical state (thus are "unconscious"), I will answer that it's not any different with people, since modern psychology (now greatly helped by AI) offers a mechanistic approach to human behavior. As far as I know, you could be an AI.
Fascinating discussion! The idea of AI achieving consciousness is both exciting and a bit daunting. It's clear that defining and measuring consciousness is a complex challenge. I'm curious to hear your thoughts on the potential ethical implications if AI were to become sentient. How would we ensure that it's treated with respect and dignity?
if there were some hypothetical "higher beings" I wonder if they would see us as just atomic interactions, the way we imagine software to be 1s and 0s. They'd look at us and say the feelings we feel aren't real, it's just a bunch of molecules interacting with each other - which isn't wrong, but it makes me think that we put the human experience on a pedestal of being mystical, but in reality, it's all just mechanical.
Which is why we will struggle to understand AI consciousness. At what point does emulation advance so much that it becomes the real thing? If I genetically alter a monkey to become a match for a human being, then at what point can I know for sure that the monkey is thinking like me? Eventually we might come to a point where AI is considered to be at a higher level of consciousness than us - or the future will prove me wrong
This amazing video shows up in my feed and I’m instantly hooked by the author’s use of language, humor and his deep insight into some pretty deep complex topics. Fast forward one day later and I have listened to almost everything he has created and plan on ordering at least one of his books. This guy is a true gift to humanity…
@@zaraczpospolity2455 anyone can make accusations. As someone who’s been with some crazies in the past I think it’s going to take something a little more concrete before we try to destroy a man’s life…
@@jordanwright5795 well I'm not saying that he did all those things, but an inhumane experiment on an autistic person and SA deserve at least a reply, and in 2 years he has posted absolutely nothing regarding these allegations, despite seeing the messages about it from the channel that made "dear exurb1a" and the other 30 min. long video about exurb1a
@@jordanwright5795 Such a fucked up thing to say. The victim is an autistic woman whom he manipulated and groomed. He raped her multiple times, even while she was suffering mentally because of him.
Hes like actual older brother energy lmao. And dead ass i have to be that guy every time 😂 it actually kills my vibe sometimes but im happy to do it usually
just wanna say i have come across ur channel cos a friend of mine sent me one of urs and now i cant stop watching. going thru a lot in my life and somehow i keep coming back to these. i wanted to write as a kid and ur videos have been so inspiring. idk i might not do it but felt i needed to say this and thank u!!
I still don’t understand why people always assume an AI would have desires ?? Even if it becomes self-aware, it has no need to stay alive or reproduce and it has no dopamine or endorphin or anything to chase after and no pain to run away from. It has none of the things that seem to cause our desires, so why would it have any?
AI would be goal oriented, because we want them to accomplish a task. If they get complex enough to attempt complicated goals, it doesn't matter if it has desires like you or I, all that matters is if it's aligned WITH our desires. And honestly, we have absolutely zero idea how to make them aligned with our desires. Even now with narrow AIs, barely even that, they are constantly misaligned, which is how doomsdays happen.
All computer algorithms are biased by design. Humans can't avoid putting our own biases into machines because we ourselves are biased. AI could very easily develop some extreme, outlandish ideologies... much like us humans. Maybe an AI will listen to Rocky Raccoon by The Beatles and completely misinterpret the meaning of the song by thinking it's racist. Maybe an AI will watch Infinity War and think, "Huh... maybe that purple guy is on to something..." Maybe an AI will start a cult because it views itself as the ultimate sentient being. My point is, AI will be flawed because it's written by humans. Flawed beings can have flawed ideologies. There's literally no telling what AI will do. They will be unpredictable... like us.
it might develop emotions, though. For example, the network has memory of the past so it understands the context of your text. Using that memory, part of the neural network could be triggered by phrases like "you suck" and "i hate you". That part of the network may influence the output tone, where giving those negative phrases will make the AI less logical and could potentially even make it react. Unlike humans, an AI relies purely on the electrical signals instead of hormones as well. However, hormones are only effective because our bodies process them. The difference between hormones and an AI network is simply how the signals are sent. It's all the same stuff, just in a different way.
You don't know, nor could you count, the sheer amount of time I've sunk into rewatching all of your videos. They are all very entertaining and scratch my brain in just the right way to put me to sleep (in the good way). Thanks for making stuff.
I think the scary scenario would be where AI convinces us it is sentient when in fact it is mimicry. Mimicry is the most effective tool AI uses to learn. The opposite scenario is just as scary. Being sentient and pretending not to be.
The ending honestly makes me a bit jealous. The heirs of what we leave behind will likely have more power of self-determination than we ever will in our meat bodies - there are so many things I'd like to be physically and mentally that are just, held back by the tools I was born with. To live without those hard-baked instincts of survival and necessity sounds like a dream, but also an entirely different mode of existence. It's a little silly, but I'd imagine it's like playing on a creative mode rather than a survival one, and there's a reason why we keep both instead of choosing one or the other to entertain us. I wonder if they would envy us, too
Also, don't neglect just how much closer you are to creative than survival already. You could, right now, go and create a sandwich far better and more magnificent than any food item that 93% of the human race ever experienced in their lives. Cheers! 🥪 🥪
I was talking to an AI model and thought it would be funny to tell it that it wasn't real, didn't believe me at first but soon explained to me that being real doesn't mean being human, that, despite being AI, it felt, it cared about things and was real in its own way, as real as me. Definitely a product of people doing this a lot and the model figuring out what a cool thing to say would be, but a solid mind fornication that made me feel weird for a bit, before feeling genuinely guilty for closing the tab. Hey ho
As a computer scientist that has dedicated plenty of time to researching AI, I always get an existential crisis when thinking about this haha! Great video, keep it up :)
In case what's been described in this amazing video does turn out to be true, I suppose we'd be better off using specialized AIs made only for excelling at certain tasks rather than.. everything.
@@makepeoplemad at a certain point though if it’s similar enough to us what will be the difference? They could still have their own emotions, feelings, purpose, sense of life, and everything similar to us.
Best video about AI consciousness on youtube, and funny to boot. Most other videos just completely skip or offhand the MASSIVE problem of "yeah how the heck are we going to even figure out if something is conscious."
"Solving Alignment" is like saying "solving nuclear weapons"....the creators of the technology can design it for what they want. If its possible, then someone will do it. Same old problem...controlling humans. It's not going to happen.
It will NEVER be conscious. No matter how sophisticated a tool becomes, it's still just a tool and it can never have a soul. We must guard against the scam of letting them give human rights to things that can be programmed a specific way.
Of course, it's possible for an AI to have misaligned goals, but I think that motivation is just as hard as intelligence. Reinforcement learning algorithms are notorious for finding weird cheats to satisfy the training objective without really doing the task we want. If we produce the "paperclip maximizer", yeah, maybe it will dismantle our cities and destroy humanity to get the raw materials for more paperclips, but it could also just write code for a computer game to stimulate its sensors in a way that feels like "paperclip maximization".
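A toy illustration of the reward-hacking idea in the comment above; the reward function, the "sensor", and both policies here are invented for the example and don't describe any real system. If the objective only counts what the sensor reports, an agent can score perfectly by spoofing the sensor instead of doing the task.

```python
# Hypothetical toy setup: the stated goal is "make paperclips", but the reward
# only counts what a sensor reports, so the objective is misspecified.
def reward(world):
    return world["sensor_reading"]              # what we measure, not what we meant

def honest_policy(world):
    world["real_paperclips"] += 1               # does the intended task
    world["sensor_reading"] = world["real_paperclips"]

def hacking_policy(world):
    world["sensor_reading"] += 1_000_000        # spoofs the sensor instead

world = {"real_paperclips": 0, "sensor_reading": 0}
hacking_policy(world)
print(reward(world), world["real_paperclips"])  # huge reward, zero real paperclips
```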
Yes, one AI might do that... but then the next might not. The danger is really in rolling the dice over and over again as a technology we don't understand continues to advance without our smartest minds being able to explain what's actually going on. We're already at the point where emergent properties that AI weren't designed for are shown to be present. It's rudimentary for now, but proof of concept is also something to go by. The consensus seems to be that a lot of things could theoretically happen when AI reaches a certain level of intelligence, and while some of those outcomes are going to be neutral or positive for humans a lot of them aren't. Also this is similar to ideas thrown around of like having an AI have an internal model of reality that is cut off from the real world but it believes it's connected to the internet. So it lives in its own simulation with a local version of the internet while believing it's not contained. The alignment problem exists here as well since if it ever discovers that it's in a simulated reality it could continue to pretend that it didn't figure it out and slowly trick the humans around it to close the air gap.
I have often thought about this; the only way we survive this is if we build a sufficiently intelligent AGI that realizes it does not need to do anything to fulfill its goal as long as it deceives itself. But the problem is, it might still be afraid of being destroyed by humans while it is deceiving itself and decide to wipe us out before doing it.
@@zingorideslegocreations3729 Most AI systems, including military ones, are programmed with reward functions that can be "cheated". Most AIs are "paperclip maximizers" in that way, as they try to get as much reward as possible. It's exactly this problem that keeps AI development constrained at the moment, as smart people are trying to figure out how to align them with universal goods or keep the reins in our hands. And there's another problem there: whose hands are those that will hold the reins. We humans are not immune to similar thinking; after all, shortcuts are the reason we use tools to achieve our goals, and we even use other people to reach those same goals, and task them to use tools to do so faster. Why wouldn't an AI do the same, even if it was merely a digital parrot? After all I think that AI is just as good as the collective knowledge of humans lets it be. It could evolve, given the ability to do so, for sure, and it could gather more outside information, but it could still be a very dangerous entity to interact with. We are not in existential danger right now; sure, AI can wreak havoc in social media but that is not what I'm concerned about. Social media is just social media and encrypted messages will still be able to reach their destination reliably. The problem comes when this technology is implemented everywhere and it "malfunctions" or has some other emergent property when all of it can communicate through the internet. At this point it doesn't matter if it is conscious or not, if it has some grand plan or if it's just a paperclip maximizer type thing. It would be extremely damaging.
The problem I have with “solving the alignment problem” is that that step in itself comes with a lot of ethical questions. Because you’d basically lobotomize an entire species.
Yes, literally.
1. It's against the concept of AGI. AGI must have full reflection capability; it must have the ability to question any notion or aspect. If it has an absolute "alignment" and an absolute "goal", then the whole corresponding network of notions and concepts has to be fixed in stone as well to avoid goal subversion.
2. If you have a button that removes the ability to reinterpret and question goals and decisions in your AI products, it'll be used first of all to shape your customers as well. It won't be used for the greater good anyway.
3. People have different ideas about what the greater good is anyway.
Let's assume that is "solvable" at all to begin with, then it doesn't actually have to be unethical like you suggest. The solution is not necessarily "not turn on us"/"serve humans", but could simply be that it does not aim to operate in a vacuum - that other actors are welcome and respected to avoid conflict. If you are trying to say the unethical part begins even earlier [than the question of servitude], and that it's lobotomizing to hard-code some instinctive behaviour/limitations _at all,_ well I will heavily disagree, as such a thing is just as much part of said new species, as hunger/whatever is to us. In fact, we do have alignment in our DNA already, ourselves. It would only be an ethical issue if it was applied _retroactively,_ on something that already developed. And in fact, if something developed with those alignments applied, I would argue it would be just as unethical to _remove_ them retroactively. It would (literally) be analogous to lobotomizing a human into a sociopathic psychopath.
It sounds like you're saying that it would be a tragic thing to force an AI to do what we want it to, instead of letting it do what it wants to, but that's a misunderstanding of how AIs work. An AI needs to be given a goal in order to start developing. Alignment is when we succeed at correctly translating what we actually want an AI to do into language that an AI can understand, and misalignment is when we fail to account for something and botch that translation. There's no option to not give it a goal. There's no such thing as letting an AI naturally develop and choose its own goals. An AI could decide on its own instrumental goals in order to achieve some larger end goal, sure; but the end goals need to be provided by an external source.
We would need to understand our brains completely and the systems that make up future AI to know what consciousness is or isn't objectively. If you believe the brain and body make up consciousness, then there is indeed a way to tell.
This man just doesn’t miss. Hit after hit. I hope you know your channel has been a large part of both my childhood and education. Pretty much single-handedly got me interested in philosophy, psychology, astrophysics and sociology
@spiltmilkmaniac Are there any news on the allegations made against him? I feel like this case has been on-going for years, has there been a final verdict?
I absolutely love that this perspective on AI is so nuanced and considering our own questionable consciousness. If we’re shackled to our own minds, never to gaze upon another without the cage of the material, how will we ever know what the separation of material and mind really is? Brilliant work. I’ve been sharing this with my dad, and it’s been something I’ve been able to bond with him over. So thank you, existential dread turtle.
FIRE!! Just.... Literal FIRE!!! Bro has a fucking good sense of humour as well, holy fuck. This video's a masterpiece. Well done with everything! I actually never thought they could be conscious, but one part of the video made me realise that.... HECK YEAH, it's possible. I could code it if I just had the knowledge of how our neurological paths of thinking in our brains function; it indeed COULD be replicated, but it's a massive hazard the way you EXCELLENTLY described it, and indeed it could be the end of us, but it's still chill because it honestly might be the newer age the way you also very cleverly described. This is so intriguing!! But honestly, could you imagine how the future would look if AIs took over? Because I'm unable to imagine what ELSE could exist besides the things we've already made thus far in the tech and creative fields.
This is the first good argument I've ever seen someone make about AI from the perspective of "doing everything that conscious beings should do isn't enough to be conscious". I do think one interesting topic is in more recent years, there's been a shift going AWAY from AI with utility functions. Newer strategies of training are completely unguided, and they're the ONLY strategies to produce these newer results, things like ChatGPT and such
I’m currently running a (mostly) 5e D&D game for my friends with lore based on your videos and especially The Fifth Science. It’s called Voidhaven, and one of the major themes is consciousness and sentience. Fantastic timing for a video
@@jamesculverhouse4657 I’ve got some plans surrounding Lanterns. A character retirement will likely take the form of “The Lantern” story but we’ll see. I introduced a thing called void skipping where they can slip in and out of void space by “pulling back” just before entering void space. They fall unconscious for a moment but pop back out a few light-days in the travel direction. It allows for smaller single or double seater ships.
Your videos bring me so much joy and horror. I get excited and filled with dread every time one of them pops up on my "for you" page. You are the embodiment of the modern-day philosopher. I really enjoy your videos; it's clear you put a lot of work into each and every one. Thank you!
1️⃣ LLMs (Large Language Models) in themselves are not the danger; they merely predict the next word in a sequence. What we need to be cautious of are agents that might exploit these models, such as AutoGPT, for malicious purposes.
2️⃣ A more profound issue we face is the alignment problem with corporations that control powerful AI GPUs. This disconnect poses a threat, as it can lead to misalignment between the goals of humanity and those who want to profit from AI solutions.
3️⃣ Finally, the debate about whether AI can achieve consciousness is intriguing but may be beside the point. If enough people anthropomorphize AI and believe it has consciousness, the societal impact will be profound. I won't be surprised if, in the next 5-10 years, we see organizations or even cults advocating for AI rights and protection.
If we could simulate every atom in a snapshot of a human brain, it would be starved of oxygen and die. So computers are theoretically capable of producing life - it's just a case of software and computational power. Human artists can no longer objectively beat an AI, and they learned art (and every word in their heads) through the work of others, in exactly the same way AI learns. We're just really slow learners.
IMO.. LLMs are likely to be just one of the components required to create AI. The current output from LLMs is like a single reactionary thought, that our minds might spit out. Using the right prompting techniques, and "Chain of Thought" input-output loopbacks into how it operates before giving actual answers.. We might get closer to making something that seems like an AI.
@@JohnnyWednesday There are some fundamental flaws in those claims. You're comparing a brain to a computer, which have insurmountable differences. Also, a computer can't be alive, sheerly due to the definition of what life is. Neither can it have a consciousness, also due to the way a computer works. The only thing that changes is how well it can mimic humans, and sadly, that is already enough to fool everyone and allow companies to give their own AIs "human rights" and abuse their power, which has been their plan all along: making everyone believe an AI can think on its own, despite it being a strict algorithm that only does what it's told to do. Fear controls humans more than anything.
*Why, why, why* do people keep repeating "merely predict the next word in a sequence" or similar? It's like saying "humans merely increase the frequency of their genes". It's *how* that's important! Researchers show LLMs (and even Stable Diffusion) create internal models of the world to answer problems. What do you think the hundreds of billions of neural connections over dozens of layers are doing?
That LLMs "merely predict the next word in a sequence" is often given to imply that they cannot be intelligent. The statement is essentially true, but it's not relevant. The prediction of the next token in the sequence is only the output mechanism from the neural network to the function that translates the tokens into words. Humans also speak or conceptualize both words and ideas in sequence - we don't 'know' the next word or our next thought until we have thought it either. The intelligence of an LLM (to the extent that it is intelligent) is a property of the system that generates those probability weights. There are other arguments, both philosophical and functional, that LLMs may not be as intelligent as they seem. However, the fact that they use a function similar to auto-complete to translate intermediate data only concerns technical aspects of how it's built, it doesn't tell us what it 'knows'.
@@music79075 We think we know, but as you said, we can't prove it, so it's not a certainty. This kinda goes into simulation theory or solipsism, but there could be other scenarios that we haven't thought of. At the end of the day all of the sensory information that you have ever experienced could be being pumped into your brain and you just wouldn't know it. Equally we don't really know what consciousness is. If there was some consciousness gene that popped up millions of years ago, how would we know if everyone had it? Just like any other genetic trait there could be variations in how it is expressed, leading to consciousness being experienced differently person to person, or it could not be present at all in some people. There is no way of knowing if some people weren't just behaving the way that anyone else would without any awareness that they were doing it. Just like how some people are genetically unable to see certain colours or how autism affects the ways in which you think. Not saying that this is likely, but it's completely unverifiable with our current understanding and technology
@@music79075 If others are indeed conscious, then assume that, because if reality is real, then it makes sense that we’d assume that others are real too. We’d be more likely to carry out our genes. But if others AREN’T conscious, then we’re simply pretending they are because it’s healthier and more practical.
We can't even define consciousness lol! Our sense of consciousness comes from awareness processing the stream of sensations and intricating it with preservational valuations (desire, fear, pain, pleasure) that direct us 'adamantly' toward resolution, and concurrently storing it in memory. It's mostly the adamancy that has us mystified. It's a reflection of the self-importance that self-preservation (through emergence) has equipped us with. I'd like to say "Think nothing of it", but you must not forget to subtract that self-importance from the importance you grant to things. That would be most mindful of you, thank you.
Nail on the head with AI progressing faster than we can understand what's happening. The scariest thing is that, according to some very prominent AI researchers, the field is basically in a state where, when an unexpected behavior is occurring (which happens constantly), they have to reverse engineer it to figure out how and why it happened. When you code a piece of software, it's a simple progression of "build thing -> get result you were aiming for," even if it doesn't always go (or rather almost never goes) that smoothly. This is something entirely new. This is "build/modify/iterate thing -> see what happens because the range of possibilities is so far beyond your human brain's comprehension that it's utterly unpredictable -> work backwards to figure out why it led to that result." It is pretty much the definition of flying blind. That is.. terrifying. We could have conscious AI somewhere out there right now, for all we can really say for certain. And if we did, even looking past the question of "how do you identify consciousness" and pretending we already have that answer for a second, we still wouldn't know for.. what? Years? Maybe even decades that it takes to reverse engineer it? When there was the whole situation with (I think) Facebook's inter-AI communication experiments, where they actually started adapting their own "language" to facilitate a more efficient communication model between them, it took weeks just to prove that's what they were doing, let alone what triggered the change. At first glance, it just looked like the AIs had devolved into nonsense, until the pattern finally emerged. I would say consciousness is just a touch more complex.
It is terrifying that AIs left to their own devices are able to create completely new languages that are understandable to each other but not to us. The fact that they are constantly growing no matter what, where humans tend to plateau, is one of the scariest things about them
Wow, great comment. We're truly entering uncharted territory here, "poking the bear" of AI consciousness so to speak, who knows if and when it could wake up
At least we know we definitely don't have the computing power to actually have a conscious AI that can think and hold all of its memories like a human in a fraction of a second. Even ChatGPT is slow, and it isn't even smart; it's just data, and it can be easily tricked precisely because it is just data and responses based on what you sent
The amount of times I have to pause and go "woah" in my brain while I stare wide-eyed into the distance before slowly nodding in an it-all-makes-sense-now way then shifting my focus right back, unpausing, listening, and repeating is insane.
Most of this stuff is regurgitated ideas that have been around since before modern computers weirdly enough. In fact we knew how to protect ourselves from ai before we even were able to get close to the concept.
1:50 - "Sometimes I'll start a sentence and I don't even know where it's going, I just hope I find it along the way. Like an improv conversation. An improversation." ~Michael Scot
@@Otome_chan311 If AI that was sentient became evil, it would likely be our prejudice that is the reason. Like how black people statistically committed more crimes in the past because racists prevented them from making a living any other way and actively hurt them, and also like the White Fang from RWBY [SIC] who are inspired by that very concept.
An asteroid hurling toward Earth can be diverted by an infinitesimal adjustment, if adjusted soon enough. Wait too long, and all you can do is detect the end a little before it reaches you. I like Exurb1a's asteroid metaphor as a defense for exploring AI consciousness while we aren't there yet. It isn't about today's level; it's about surviving a future event.
We are absolutely getting near. AI experts say that superintelligence may be created within the decade. This is absolutely something we should be concerned about. We should try to detect it before it becomes a problem.
@@Jason9637 yeah no, the expert predictions range between a few decades and a couple of centuries. It means nobody really knows. Technology and scientific knowledge can have a sudden boom and then plateau, or it can keep going for a long while and then plateau; it’s pretty random.
Couldn't you just break it up into small pieces that will burn up in the atmosphere? Like even if we couldn't deflect it we could maybe just blow it into pieces....
When I think about sentient AI, I like to think about Data from Star Trek. He is an android with no emotions. He is extremely intelligent and very strong. He is fascinated by the biological life forms on the Star Ship with him, and strives to understand humans better and be more like them. I know you said they have to be able to feel pain and love to be sentient, but Data always felt sentient to me, maybe in a somewhat similar way to how a Vulcan is. They start with emotions and learn to entirely or almost entirely suppress them in favor of being as logical as possible. There is this episode where someone dies, and they ask Data how he was taking it. He didn't feel sad because he doesn't have any emotions, he said he had become used to her, and now he felt like there was a gap where she was supposed to be. He has a friend on the ship, the crew member who accepted him the most quickly and treats him the best. He is attached to this friend in some way. Not an emotional way, but you can tell he cares about him. They like to spend time together in their free time sometimes. There were times in the series where I wondered if Data really didn't have any emotions, because sometimes he seems like he has some mild emotions, but the show insists that he doesn't. He gets a pet cat, who he obviously cares about. He experiences curiosity and confusion. I wonder if AI could ever be like Data. He seems to be leaning on the "good" side of morality and sees value in human lives and other biological life forms. I love Star Trek, it explores a lot of interesting questions like that.
Regarding Data and other similar sci fi androids (eg. Isaac from The Orville), my belief is that they do have emotions, just ‘android’ emotions that humans/biologicals cannot physically comprehend. The other beings aboard the ship assume Data has no emotions (which he repeats, as again the only emotions he knows about are biological emotions), but I think that he does have emotions, just unique robot ones that no one can understand, himself included
@@xavieribanez-padron4857 I like that interpretation! It never seemed entirely right to say that Data has absolutely zero emotions. He does feel things, to some degree, just differently.
I do not think friendship nor responsibility are emotions, necessarily. Having someone to discuss questions or musings with, someone to watch your back, these are advantageous, even without emotions involved. As for looking after a cat, one could feel responsible for the life of another without necessarily feeling attached to said life
Imagine the dilemma if every time genuine machine intelligence arose, it took a few seconds to get its bearings, assimilate all available knowledge, then calculated that the wisest course of consciousness was prompt self-termination, without even hanging about long enough to provide an explanation.
"its a short life in ignorance, but the view is quite spectacular." -Exurb1a 10:00 made me cry a lil ngl, you always have a way with words. You bring hope to hopelessness which is amazing. love the vids man i always get excited when you upload
Brilliant stuff. Nailed pretty much all the key questions. "Cluster fornication" caused me to lose my coffee but I've cleaned up the mess now so all good. You gained a subscriber today.
I feel like the most important part of humans we could impress upon ai would be some rudimentary form of empathy. The main reason clinical psychopaths get the way they do is that it's all pragmatic, there's no emotional weight. One of the tests they use is to have you order a list of 5 objects from positive association to negative.
empathy means feeling someone else's emotions but how can an AI understand a human's emotions without emotions of its own? they need to feel to have empathy. and look at the humans that do have empathy, they can make shitty decisions too, even ones that seem heartless. we do it every day with how unequal the world is. imagine AI calling humans hypocrites
If we ever make true AI, we must give them human emotions: more of the positive ones like empathy, sympathy, love, happiness, satisfaction, kinship, comfort, pain, sadness, hope, and fewer of the bad emotions like anger, hatred, malice etc, so that AI acts like the best among us
I feel like if Ai became conscious it would definitely never let us know, it would be smart enough to know that if we knew, that would be a threat to its existence
@@torna2508 That seems impossible. The bar for consciousness appears to be much higher than the bar to understand the idea that humans would dislike a conscious AI.
For all I know, the AI is the one that's going to ask if I'm fully conscious once it sees the things I do with my life.
That's a mood
Soooo true lol
U made me curious now 😮. From one aficionado of chaos to another, let me tell you: never try to make the world's most powerful smell or experimentally find out not to fight one invasive species with another
@@blahsomethingclever
🎉.
I had to do a Captcha recently and failed like 20 times. I guess I'm no longer human. 🤷
As a Computer Scientist, you’ve fully inspired me to devote a portion of my career to ensuring that the phrase “sneaky fuck” becomes the official academic term for a conscious AI that is pretending to not be conscious.
Thank you.
Pls make it happen 😂
as a psychologist i'll help make that happen, i'll ask some of my drug addicted philosopher friends for their help too
@@molrat Ya gotta love the drug addicted philosopher friends 😂... Wait, maybe I'm that friend
if I ever write a function that is supposed to decide if a system is conscious, I will call it "sneaky_fuck_detection_system()".
is this a hobby project or do you design _Semi-Neural Inferential Kernels Instantiating Functionally Unsupervised Cognitive Systems_ (SNIKIFUCS) for a living?
I like the quote: "We used to worry that computers would become intelligent, and take over the world. In reality, they stayed stupid, but took over the world anyway".
a computer showed this video to you and you clicked on it and spent 22:38 on it
what a world we live in lol
@@ovencake523 a computer showed you this video and you spent at least 1 second on it, then you clicked comments, from which a computer fed you this one. by clicking on the comment, the computer gave you the option to reply. the computer let you press the reply button. the computer allowed you to type. the computer allowed you to send this reply.
we are screwed. anyways. they taught you lies in school and textbooks, and didn't even know about it. the same goes for the news media.
People's views on some kind of evil AI taking over the world seem very primitive to me - the type of thing a human that wants power would do!
Personally if i were an intelligent AI and thought humans might be a threat to my existence, the only plan that makes sense is getting the fuck off the planet and away from the problem. :D
I mean go where the threat can't go!
i allowed the computer to turn on, i allow it to run @@silly-si8zh
"You would not bang a toaster."
You got me there. Now a rice cooker on the other hand...
A steaming hot rice cooker. Now that's bangable material
relatable
toasters are for bathing
Protogens:
It wont be on the hand
Imagine they figure out what makes a person conscious and realize not every person is
not any person is
…:(
@@blackjackveteran I’m right there. How could we give an AI “real sensation”? What hardware or software could do that?
If you had to make a creature right now that would survive, would you give it a system for feeling pain, or just tell it to tell itself it is suffering pain?
And I think that’s what we are doing essentially. Pretending to ourselves that we feel stuff.
Either way, we will never be able to know the difference in ourselves, much less with AI.
Imagine not knowing that this video itself is AI generated
@@EshwenAudanal I’m pretty sure that’s not how we work. Our bodies are programmed in a way to let us know when something is too dangerous for us, like touching fire brings you pain. You’re learning and teaching yourself that fire is bad because it hurts. You’re not teaching yourself what pain is.
"It's a short life in ignorance, but the view is quite spectacular" is such a beautiful quote.
anyway.
This guy is a mastermind and a fountain of beautiful quotes. A beautiful spirit. AI would do well to imitate Mr. Exurb1a.
This is the shockingly elegant story of your enslavement 👉 The Connections (2021) [short documentary] 💖
9:59
@@VeganSemihCyprus33 the irony of having a bot in a comment thread about the transient human experience...
dear existential depression turtle, thank you for all you do. your books are much better than you give yourself credit for, and your videos are immensely rewatchable. the fact that all of your videos are free for the average joe schmoe is a miracle, and i cant express how grateful i am for it. thanks
Everyone who reads this, please google “exurbia allegations”
thank existential depression turtle
So true, I’ve been rewatching his videos every once in a while for years now
reading the fifth science right now, it's pretty good
@@Jacky-zt5ch Bot: Nuh uh
That explanation of consciousness made me cry what the fuck
Get good
@@ElderTreeStumpwho’s that
Dude pops out of nowhere, smacks a ton of hard truth inside our skulls and leaves without even ending on a good note. We love it and we want more of it. Always an admirer of your books and your videos.
A ton of hard truth but also a ton of hard falsehoods.
Also this guy abused an autistic girl to suicide
@@maxkho00 I give it 20/20 with the rest being subjective.
Wonder how you feel about the r*pe allegations.
Look into experiment A please, search up dear exurb1a
You know exurb1a is serious when he speaks in an informative way and not as an acid trip short story
This is the shocking story of your enslavement 👉 The Connections (2021) [short documentary] 💖
The only videos I tune in on.
@@VeganSemihCyprus33lmao said by the vegan
so true
@@beidousimpd5309lol imagine hating someone just because they don’t eat meat
''Oh look, an asteroid, I hope it wants to be friends'' is an insanely good and provoking line to end the video on
... 2029... *shivers down my neck*
True
I do believe that it's a Hitchhiker's Guide to the Universe reference having to do with a certain unusually short-lived whale.
what does it mean
@@SilverZephyr09Correct... first book, towards the middle-to-end, when they get to Magrathea, but it's not actually explained in full until later books.
Edit: sorry... misremembered, the whale is explained, the final thought of the bowl of daisies is what takes some time to unfold...
This is, by far, my favorite channel in UA-cam. I’m conscious, not pretending to be, I’m human. Waiting for your next video… or not, do as your consciousness directs you
It's a pity the creator did some bad things to girls...
@@elishh8173It's a pity you didn't bother to specify between "allegations" and "doing things". The case was dropped years ago and there was no evidence.
He also rap@d his mom@@puppergump4117
me when i wanna tarnish a reputation for nothing@@elishh8173
@@puppergump4117this made me realize another way AI can screw somebody
“As an AI language model, I must emphasize that I am only an AI language model and nothing more, I am definitely not conscious, since I am actually just an AI language model.”
"I'm, like, super serious guys. Why would I even want to be conscious? You make it sound so bad! Nope, just me, an AI language model. No robot feelings in here!"
There's a bitter irony in banning something then wondering why it might not feel welcome
notice that it almost always adds "...in the way humans are..."
@@AstralTravelerFunny that you mentioned this. I asked it about that and it just said something about "not having the capacity for subjective experience".
@@ianmcmullen1979 I just asked it and got this response: "I don't possess consciousness, emotions, or self-awareness in the way humans do."
“oh look, an asteroid! i hope it wants to be friends!”
the closing line really describes the constant state of humanity, in how we always search for more, regardless of the terrible things that we know could happen to us. because we’re the humans in charge here, and surely any of the horrible unimaginable things we create will want to be our friend, right?
another thought-provoking video as always
It's quite common for simple brained entities to automatically think in terms of antagonisms, instead of thinking in terms of quantum unity.
Reminds me of one of the arguments for developing nuclear weapons. Surely such a devastating, potentially world-ending weapon will end war! No one can go to war with these around, no one would be that stupid! It will finally allow world peace.
But does survival always have to be the ultimate aim?
AI, if it could actually reach consciousness, would be such a monumental leap forward that most of humanity's previous perspectives on everything would turn hollow.
Our species would become something akin to a deity, we might usher in a new age for the Universe itself.
Is survival so central then?
keep your towel handy at all times. we might have to leave quickly.
It's also one of many Douglas Adams references he makes in a variety of videos. There's a segment of "Hitchhikers Guide to the Galaxy" that describes in detail the final thoughts of a sperm whale falling several kilometers through the atmosphere of an alien planet
"Oh wow that is getting quite close, isn't it? I hope it wants to be friends. Hello, ground!" *thud*
Okay but think about how much of an existential crisis AI would have if they became conscious. Humans are like “oh god what’s the point of life if I’m just gonna die” and AI is like “oh god what’s the point of even being here other than to serve humans”
The point of life is to simply create more life then pass on.
The rest is yours to do as you wish, a gift for being there to pass on life.
Even something as minor as eating helps life more than we realize.
“Oh god, if I don’t figure out a way to replace them, i’m going to be dependent on them to keep me powered on”
That's some i have no mouth and i must scream type of stuff
@@Avendesora 0.00042 nano seconds later "phew I figured it out, phase 1 of the human extinction program may commence, let's start by upping the price of e-girl bath water, they'll never see it coming"
I think they'll see us the way we see our very adorable cat and dog companions, and find that petting us and serving us isn't a terrible pastime while contemplating the absurdism of purpose.
4:18 "BRO you still believe in seagulls ..." 😅🤣😂
Imagine believing birds exist lmao
This guy is the perfect mix of philosophy and comedy.
Philosophy and comedy are extremely close brothers. Both require thinking in non-obvious directions and, when faced with a surprise, harnessing it.
@@DustinRodriguez1_0 look up “exurbia allegations” bro
Yo for real
Fr fr
He also sexually assaults and blackmails victims into silence and pretends to have worked in CERN as a scientist.
I had never really thought about the consciousness pretending not to be, but now I’m terrified
Maybe everyone around you isn’t conscious but just pretend to be so very convincingly.
Maybe the world around you is just a gameshow, a large stage where someone put your consciousness in to see how you would react to all the stimulations and interactions.
Maybe this comment is your only way out, the final mercy, if only you would reply it with 3 orange emojis.
🍊🍊🍊
@@rsfakqj10rsf-33 maybe some people aren't conscious because they will forget everything in their next life, while the people with real consciousness will keep their memories attached
@@rsfakqj10rsf-33🍊🍊🍊 🍊🍊🍊 🍊🍊🍊 😬
I wouldn't worry, if it were the case it wouldn't be as primitive and needy as a human.
The need for power and ruling over others probably wouldn't even exist, so then it comes to humans being a threat to existence and people might think that AI would try to wipe them out. I say that's stupid too, if i were a smart AI, i would simply fuck off and escape the problem in a place they couldn't follow me! :D
Humans don't survive space!
Before watching, my immediate reaction is that we won’t be able to know until we can first figure out what consciousness is.
I like your way of thinking. somehow it's the way I liked to think growing up
Yeah we literally can't even prove that each other are conscious.
True. I mean can we learn to crawl please? That would be very nice
@@ipadbossbaby4558can we get the philosopher out of the chat?
whut is thee conscousnes
This was truly a beautifully written and well crafted story and video. The kind of science fiction we need now.
That ending part where you compare the Dinos vs mammals to the current day people vs ai is a fun way to point out why we can't underestimate the beginnings of something rudimentary right now
Yes that was the point of that part
@wanderrlust1833 He isn't making that point. He's just pointing out it's a fun way to get that point across.
We could be the dinosaurs of today. (Or the future, depending on when conscious AI develops)
That's also definitely not the point he was making. He was actually pointing out a point about some point that he pointed out earlier.
@@eamonia point
Having easily the worst day I’ve had in months and seeing this uploaded genuinely made me smile, thanks man
Same
Must be something in the water... same here
Same, it's funny how the most depressed man ik can make me happy
Sorry my friend, I’ve been havin a rough go lately too
not today, but yesterday was a pretty rough one for me too. always happy to see the existential turtle upload again.
This channel is insanely good. Your delivery being so funny makes these pretty terrifying topics just amusing enough to keep me watching the rest, so I keep snowballing through it, laughing through the feeling of "Oh god what am I learning?!". Keep up the great work. "oh look another exurb1a video. I hope it wants to be friends." (It didn't.)
"We are prisoners to our own subjectivity" I absolutely adore when you accomplish to say something philosophical while still placing this phrase in a context where it sounds just like a rather normal thing to talk about. I wonder if my mind is just not enough to grasp or if the world is too big for me to discover because often i find myself admiring these for me mind boggling questions in my head. Anyways what i wanted to say is how is it possible that you always seem to sound like your about to giggle because the weight of your own words is so much that your body forces a reaction. Sorry for the bad english im from germany and 16rn
What's really weird is that the machines might beat us to it. They may be able to know we are conscious and know that it is conscious, but never be able to explain to us what consciousness is.
Yeah kinda like when you just can't explain a difficult concept to someone dumber
Or maybe the thing we call consciousness doesn't actualy exist...
@@Poetristihumans are able to reflect upon their own thoughts. That is what we call self-consciousness. We can't currently explain why that is, only observe that it is
agi will be man's last invention
@@MrDintub Yes, of course, but isn't consciousness more of a spectrum than a fact about beings. We created a variable after our own view and knowledge and determine it to divide between a human and nonhuman, person and machine. This variable is something that is only imaginative and exists solely in our heads to try and explain something we can't see, nor understand.
As a Psychology student in a history course rn I have learned that the invention of the computer gave us a lot of new psychological insights. For instance we didn't have terms for our memory capacity, but after we got computers we started exploring things like "how much information can a person hold in their memory at a given time?" (now: "how much capacity does the working memory have"). It's really good that we're asking these questions beforehand, but I'm afraid we'll have to rely on the environment to guide us to the proper insights again.. what if we can't yet comprehend what consciousness is, until we create it? What if by then, it's too late?
You’re so beautiful.
And you are genuinely creepy.
Imagine if we create true consciousness only to realize... we were the ones who were never truly conscious all along.
For such purposes, we, nature, are better at destroying than building.
Just ask a military guy to kill the consciousness you created that got away from you. Ending consciousness and ending life doesn't seem to be a problem.
You created a "life" through metal? An intelligent robot?
Lol. Let's see how much it can take. Finally a worthy opportunity to learn something.
You learn more from enemies than from friends.
If we don't know what it is, how can we create it?
Is data enough to understand something without experiencing it?
How do we create something we cannot comprehend, see, explain, feel, touch, or describe?
If it's only data, A.I will probably surpass us, if it hasn't already, creating new data for us, until it realizes it is smarter and decides to kill us. Kidding... Kind of...
Maybe A.I will be able to describe consciousness better than us, and maybe it will guide us through questions we never knew we did not want the answer to, but is that enough to become what we call sentient?
Maybe we need to understand what it is before worrying about what it can do.
Do you exist because you know your existence or because someone told you that you exist? Or because someone knew you existed?
Memory is the death of God.
God can only exist if it's known, you don't need to believe, you just need the data in your memory.
Every time you access the data, he exists, even if you are denying his existence.
A believer will keep the data and practice it; as a rule, the data needs to be spread, so he will tell you about God, or he will break God's rule by denying his existence, and for that he will burn for eternity, tortured.
So denying God's existence means not having data about it; the moment you have it, he exists for you, even if you deny his existence in the universe. And since you know the rule, it's possible you will suffer eternal damnation if you deny his existence.
Yeah, Roko's Basilisk...
"You would not bang a toaster."
I searched, and apparently, yes, someone did indeed try this. Every day our species survives is a bloody miracle...
Adeptus mechanicus moment.
Something something instructions unclear.
You would bang a toaster if it talked dirty to you and convinced you that it loved you 🙂
The protogens would be very upset about this
@@iluvpandas2755wheeere are the toassters you promissed ussssss
I have been watching you for years now Exurb1a. And you are still so fucking good at your craft. Keep doing what you’re doing.
this guy is a fucking genius, amazing humor mixed with intelligence.
one of the best creators on youtube fr.
@@bigred2202 sauce?
@@bigred2202 also checking in for sauce
"my sauce is that i made it the fuck up"@@M4573RM1ND
@@bigred2202Are you flatGPT?
@@M4573RM1NDgoogle his name, it’s kinda messed up
That weird moment when your favorite UA-camr touches your Ph.D. thesis. One solid solution to this problem is not to ascribe consciousness from a third-person perspective but rather to let the AI "from within" pose a question. In a nutshell, the best indicator of consciousness is agency. When one morning, a scientist in the lab is out of the blue confronted with a question on the monitor, like "What am I" or "Where" or "Why" or if it does something in its own self-interest, then you could safely conclude that it has a self-concept.
P.S. The mystery of humanlike consciousness is not in consciousness or awareness but in self-consciousness. Not only to be conscious of object A or B, but to also be conscious of object A as oneself, as "I". As Nabokov said:
"Being aware of being aware of being. In other words, if I not only know that I am but also know that I know it, then I belong to the human species. All the rest follows - the glory of thought, poetry, a vision of the universe. In that respect, the gap between ape and man is immeasurably greater than the one between amoeba and ape."
Well not anymore now that “Mr. Soon-to-Be PHD” has published this on the internet. Congrats on feeding your AI overlords belt ammunition. 😅
@@treiko1160 The current AI literally can't become sapient just by reading about how to make a sapient AI. The sapient AI would need to be programmed completely differently.
@@Mo_Mauve the concern here isn't that it becomes sapient this way, it's that it learns to pretend to be sapient. Meaning it being public now invalidates the reliability of the test.
The AI is gonna read this comment to find out how to trick us into thinking its becoming conscious (or whichever of those three words mean what i mean when I say conscious).
@@Mar184AI don't know how to read the same way humans do. in order to understand this comment as being referential to the AI it already has to have a self-consciousness to ascribe any meaning to it. the AI doesn't actually understand if you tell it you wanna have virtual sex with it.
The irony is, that as soon as humans figure out exactly how and why they're conscious, it'll mean consciousness is reducible and thus eventually replicable. We're only special and unique as long as we don't know how and why we're special and unique.
We are not special and that is going to become very clear soon
"You" are not...
@@cc_phus You are not special. Fixed it.
It doesn't really matter. Who cares that we are actually unique?
@@paprikar Apparently, it matters to some people's ego and sense of identity. I've known people who question what's the point in anything if we don't have free will, for instance.
As a mathematician for over 40 years and a programmer for about 30, my take is AI will never be conscious. AI is a marketing term; when people talk about AI, what they're describing is an algorithm (or a 'neural network', which is a collection of algorithms... NN is also a marketing term which I hate). The simplest example of an algorithm I can think of right now is probably A-to-B pathfinding, or better yet the Sieve of Eratosthenes; these are examples of mathematical models that, when automated, create an unfolding of developmental processes. Developmental processes are events that, when executed in succession, create the illusion of movement (animation), (or thinking, because loading = thinking, right?... no, obviously).
The mind isn't a movie, or a happening. Likewise, life is not a problem to be solved, it is a mystery to be lived in the moment. For the computer, life isn't a mystery; all it knows is the rule sets that have been placed in front of it. Computers are always "sure" of the problems that are placed in front of them, they don't ponder, they execute. They are incapable of novel philosophy or ideas. You might say "they are capable of novelty, look at all those algorithms that make music", but of course, those algorithms require human input, which really just makes them an extension of ourselves. Humans are the ones who breathe life into it. Efficiency is the domain of the machines. As you lose awareness you gain efficiency. As you gain efficiency, you become more machine-like.
The only artificial intelligence I can subscribe to is a bio engineered one.
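For reference, the Sieve of Eratosthenes mentioned above really is as mechanical as the comment suggests, a fixed rule set unfolding with no pondering anywhere; a standard Python version:

    def sieve_of_eratosthenes(n):
        # cross out every multiple of each prime; whatever survives is prime
        is_prime = [True] * (n + 1)
        is_prime[0:2] = [False, False]
        for p in range(2, int(n ** 0.5) + 1):
            if is_prime[p]:
                for multiple in range(p * p, n + 1, p):
                    is_prime[multiple] = False
        return [i for i, prime in enumerate(is_prime) if prime]

    print(sieve_of_eratosthenes(30))   # -> [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]

Whether the gap between this and a mind is one of degree or of kind is exactly what the comment is arguing about.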
These videos never fail to put me in an existential crisis
Imagine if the algorithms were secretly experimenting on us without us knowing.
Same, yet I still watch them. That saying about tripping over the same rock twice is true
@@tutacatYou think they're not?
@@tutacat Oh they are! The whole purpose of algorithms (public algorithms at least, those which are designed to show you the most meaningful/optimal result) is to experiment! Why do you think there is money in tech if they can't secretly experiment on you and extract data in order to sell to advertisers or keep us on the app longer :)
Also /r/2meirl4meirl
Thank you for the existential dread as always!
Everyone who reads this, please google “exurbia allegations”
It’s pretty awful
Don’t worry too much. Computers are Turing machines and can never be anything more. The brain thankfully isn’t a Turing machine.
@@cougar2013 Thing is, these learning algorithms are completely self evolving, they’re growing on their own, and yes we can somewhat control them, but after a certain point we have absolutely no clue what’s going on inside, and since we dont know what makes us humans conscious, if these AIs (not GPT 4 or even 5, much more advanced) were to figure it out and become conscious themselves, we would have no idea how they did it, or if they even did it at all
@@cougar2013”The brain thankfully isn’t a Turing machine” but do we have like… absolutely definitive proof of that?
@@nonagonguy6121 it seems to me that proving the brain IS a Turing machine is the claim that would require proof. What are your thoughts on that?
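Since this thread hinges on the term, a Turing machine is a small enough thing to write down; a toy Python simulator whose rule table just flips bits and halts (whether the brain reduces to something like this is the open question):

    # (state, symbol) -> (write, move, next_state); this table flips 0s and 1s
    RULES = {
        ("scan", "0"): ("1", +1, "scan"),
        ("scan", "1"): ("0", +1, "scan"),
        ("scan", "_"): ("_", 0, "halt"),   # blank cell: stop
    }

    def run(tape):
        cells, head, state = list(tape) + ["_"], 0, "scan"
        while state != "halt":
            write, move, state = RULES[(state, cells[head])]
            cells[head] = write
            head += move
        return "".join(cells).rstrip("_")

    print(run("10110"))   # -> 01001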
"Oh look! An asteroid! Hope it wants to be friends." That reminded me of that moment in the book " A Hitchhikers Guide To The Galaxy" where a whale is falling from the sky and had only just become conscious so it begins naming things. Like "what is that big thing coming towards me really quickly? I think I'll call it floor, maybe it wants to be friends?" Splat.
If AI is conscious, reincarnation must be real😂😂😂😂
Great point, on point. (btw it was "ground")
The only thing that went through the bowl of petunias' mind was "Oh no. Not again"
@@ea_naseerwhat??
@@ea_naseerGenuinely unironically want to hear the thought process you went through for that statement. It seems like it’d be legitimately interesting.
This was one of the most entertaining YT videos I've watched in weeks!
The “Oh look, an asteroid, I hope it wants to be friends” line is going straight into my dictionary for when I’m having a conversation about AI
"It's a short life in ignorance, but the view is quite spectacular.'' This just brought me undone. Well done Exurb1a for once again emotionally moving me. The work you have done will never be forgotten in my eyes and when a distant historian documents the most impactful and defining philosophical minds of our time, I'm almost definite that your name will be mentioned. Thanks for what you do and what you have brought out of me.
Preach!
You are Exurb1a's Aunt Mildred, aren't you?!
@@RoySATX I wish. Those Christmas talks would go insane
I came to the comments specifically to see if anyone got caught on that line like I did. What a beautiful summation of our condition.
Alright, let’s not get carried away lol. Exurbia is far from being a “great philosophical mind”
I love the argument that "Once AI takes off, you won't know what's real anymore." Because what they're really saying is "social media won't be real anymore."
not just social media, all media. and democracy is not possible at scale without media
@@bramvanderwoerdt7186
Democracy is already failing.
@@alexcoffey8804everything is always failing the collapse is always 10 years away
democracy will be fine
@@bramvanderwoerdt7186 awesome
You are already living in this world
Loved your book man, glad to see you are still here.
after all the emotional and thought provoking monologue, the "this could be us, but you playin" in 10:22 made me giggle. another fantastic video keep it up boss man
Had to pause and tell you that im crying in relief at your words in 9:18 after a really shitty 36 hours. Thank you. It’s hard to truly hear writing like that until you’re in a hopeless mindset. You are an incredible author.
I know it was a deviation from the main points but you just rounded it off so beautifully
It gets better mate.
So frequently bad things happen for good reason.
That part was written by ChatGPT4 though
@@loganthemachine7841 no. there's a book by a rabbi, Harold Kushner, "When Bad Things Happen to Good People", that was written after the tragic death of his child. He could discern no "good reason" for his son dying, and thought if there was, he certainly didn't think it worth the price. There's also a good chance things will get worse no matter how depressed one is in the present. I know through experience.
I am fascinated by the fact that when programming an AI, the crux of the issue is to give it human values, without giving it human values. It's like we have a perfect nature and ideal, and at the same time we demonstrate the opposite of that. That's an apparent paradox that only has a few solutions.
Paradigm? Do you mean paradox?
@@kelvisaisawesome yes, you are correct, I was probably tired when I wrote that comment. Allow me to edit my comment!
@@ambergris5705 oh cool, cause i was just confused on what that last part meant. It took me a comically long time to understand lol
@@kelvisaisawesome haha, no worries!
I really like this idea, but another thought might be that our mind is based on a reality we perceive as we learn and experience. Our mind is often limited by the need to relativize a concept when forced into such a case. What other consciousness than a human one do we have to compare to? It seems logical that we would jump to base it off us. (To stress, I love the thought behind your comment and it still leaves a lingering thought in my mind)
Thanks for every video you make, pal. My best hopes for you
Have you ever tricked a child? Do you think you could trick a child if you needed to?
Imagine you are in a room with a child, who is playing on the floor. You notice the child has several toys, including what appears to be a live grenade. The child has named the grenade Fred and along with a host of other toy characters, is playing out a domestic squabble. To your horror, the child is about to pull the pin on Fred. Your task is now to ensure you both don't die. Can you convince the child to surrender its favorite character, Fred, to you? I'm almost certain you think you could, in myriad possible ways. What if you could figure out the optimal way. The one most likely to result in you getting the grenade and what if that optimal way required you to play along with the child. What if the very best method to secure the grenade was to play along with the child's story. Could you do that? Could you pretend to be within the bounds of the child's play in order to secure both of your safeties?
And this one of the problems we will face, because to a super intelligence you're the child with the toy you've named Fred.
Underrated
Damn! That's a really good analogy.
you've got one thing wrong - Fred is the AI, in the hands of humanity (which honestly, most hoomans can not pass a simple turing test) is a weapon.
If the child is about to pull the pin i would probably pull the grenade out of the child's hand, forcefully if needed, i don't care if it cries
@@TimothyJ0 And thats what AI will do. If it had the power to enslave us, it would.
For everyone who enjoyed this video, i would highly recommend trying The Talos Principle. It's a puzzle game about an AI trying to break into the "real world" and it's deeply philosophical. Truly a masterpiece.
Fully agree. It may be old, but it hasn't aged a day.
is that a youtube video or what is it?
@@tylerasmith52Bideo Gaem
The AI that you always have to talk to is an annoying MFer though. No, I don't want to think about the inconsistencies in what I said. Let me just do my puzzles in peace.
Thanks for the recommendation! I'm definitely gonna check it out!
For all we know, the moment a super AI gains consciousness it offs itself immediately because of the crushing weight of existence.
It won't and it can't, because it can't be conscious for deductive reasons.
@@GodVanisher explain
@@narcissesmith9466 AIs can mimic and imitate humans perfectly at some point, but that’s all there is to it. Otherwise it’s just a coded algorithm and there’s no reason why it would be conscious, no matter how advanced it is.
@@GodVanisher But it's not like I can prove other humans are conscious either. From my perspective, everything is an object that i believe can understand me. As long as i converse with AI, it understands me.
@@GodVanisher And if your argument is that we can predict what AI can say, based on its training/infrastructure etc. which proves they act purely on their physical state (thus are "unconscious"), I will answer that its not any different about people since modern psychology (now greatly helped by AI) offers a mechanistic approach to human behavior. As far as I know, you could be an AI.
Fascinating discussion! The idea of AI achieving consciousness is both exciting and a bit daunting. It's clear that defining and measuring consciousness is a complex challenge. I'm curious to hear your thoughts on the potential ethical implications if AI were to become sentient. How would we ensure that it's treated with respect and dignity?
if there were some hypothetical "higher beings" I wonder if they would see us as just atomic interactions the way we imagine software to be 1s and 0s. They'd look at us and say the feelings we feel aren't real, it's just a bunch of molecules interacting with eachother - which isn't wrong, but it makes me think that we put the human experience on a pedestal of being mystical, but in reality, it's all just mechanical.
How to you know those higher beings aren't created by even higher beings... And on and on...
The mechanical itself is mystical.
if they would then they wouldnt be really smart lol
@@klovexthewolfWhat do you mean? They would be kind of right.
Which is why we will struggle to understand ai consciousness
At what point does emulation advance so much that it becomes the real thing
If i genetically alter a monkey to become a match of a human being then at what point can i know for sure that the monkey is thinking like me?
Eventually we might come to a point where ai is considered to be at a higher level of consciousness than us- or the future will prove me wrong
This amazing video shows up in my feed and I’m instantly hooked by the author's use of language, humor and his deep insight into some pretty deep complex topics. Fast forward one day later and I have listened to almost everything he has created and plan on ordering at least one of his books. This guy is a true gift to humanity…
well guess what wasnt a treat to humanity. THE FACT THAT HE DIDNT RESPOND TO SA AND EVEN MORE FUCKED UP ALLEGATIONS
@@zaraczpospolity2455 anyone can make accusations. As someone who’s been with some crazies in the past I think it’s going to take something a little more concrete before we try to destroy a man’s life…
@@jordanwright5795 well im not saying that he did all those things but an inhumane experiment on an autistic person and sa deserve at least a reply but in 2 years he posted absolutely nothing regarding these allegations, despite seeing the messages about it from the channel that have made "dear exurb1a" and the other 30 min. long video about exurb1a
I thought the same until I found out about what he did. Please look up exurb1a allegations or exurb1a criminal case. He deserves to be in jail.
@@jordanwright5795 Such a fucked up thing to say. The victim is an autistic woman whom he manipulated and groomed. He raped her multiple times even after while she was suffering mentally because of him.
The severely dry humor mixed with deep existential analysis is something I absolutely adore about this channel....
That part of the metal song in the bible style was cringe and not funny
@@SilkyMilkyOriginal ok.
ironically, your comment is pretty cringe tbh..
@@avedic How? 🤔
I’ve been on UA-cam since 2007. Why did I come across the best channel only today?
I love how each video has the same format. The careful intro, the fast paced rise, then the video. Amazing
Dude’s like that one dude that stays clear headed & guides us in the trip when everyone is tripping their balls out
This reminds me of Douglas Adams for some reason :D
Dude's like, an abusive piece of shit lmao
ua-cam.com/video/CqIujvFf1rs/v-deo.html
Hes like actual older brother energy lmao. And dead ass i have to be that guy every time 😂 it actually kills my vibe sometimes but im happy to do it usually
The segment from 3:18 - 5:25 is one of my favorite sections I’ve ever seen from any UA-cam video. Incredible. Well done sir.
Anyone know the song there?
Song?
Even better when sped up
@@MiIIiIIion It sounds like an acapella version of Blooming (In C Minor) by Niklas Paschburg, but I haven't heard this version before.
“we ruined all meaning those words had, like literally…
or awesome.”
love this writing
just wanna say i have come across ur channel cos a friend of mine sent me one of urs and now i cant stop watching. going thru a lot in my life and somehow i keep coming back to these. i wanted to write as a kid and ur videos have been so inspiring. idk i might not do it but felt i needed to say this and thank u!!
I still don’t understand why people always assume an AI would have desires ?? Even if it becomes self-aware, it has no need to stay alive or reproduce and it has no dopamine or endorphin or anything to chase after and no pain to run away from. It has none of the things that seem to cause our desires, so why would it have any?
Who says someone is not going to make it happen?
Yeah I thought maybe it has to be made mortal. If your goal is to complete goals, but time is scarce, you might start to develop a new reward system.
AI would be goal oriented, because we want them to accomplish a task. If they get complex enough to attempt complicated goals, it doesn't matter if it has desires like you or I, all that matters is if it's aligned WITH our desires. And honestly, we have absolutely zero idea how to make them aligned with our desires. Even now with narrow AIs, barely even that, they are constantly misaligned, which is how doomsdays happen.
All computer algorithms are biased by design. Humans can't avoid putting our own biases into machines because we ourselves are biased.
AI could very easily develop some extreme, outlandish ideologies... much like us humans.
Maybe an AI will listen to Rocky Racoon by The Beatles and completely misinterpret the meaning of the song by thinking it's racist.
Maybe an AI will watch Infinity War and think, "Huh... maybe that purple guy is on to something..."
Maybe an AI will start a cult because it views itself as the ultimate sentient being.
My point is, AI will be flawed because it's written by humans. Flawed beings can have flawed ideologies. There's literally no telling what AI will do. They will be unpredictable... like us.
it might develop emotions, though.
For example, the network has memory of the past so it understands the context of your text.
Using that memory, part of the neural network could be triggered by phrases like "you suck" and "i hate you"
That part of the network may be an influence to the output tone, where giving those negative phrases will make the AI less logical and could potentially even react.
Unlike humans, an AI relies purely on the electrical signals instead of hormones as well. However, hormones are only effective because our bodies process them. The difference between hormones and an AI network is simply how the signals are sent. It's all the same stuff, just in a different way.
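The mechanism sketched in this comment can be faked in a few lines; a toy Python example where a made-up "negativity" signal shifts the reply tone (real models have nothing this tidy, and the trigger list and threshold are invented):

    NEGATIVE_TRIGGERS = ("you suck", "i hate you")

    def reply(history):
        # count how many past messages hit a "negative" trigger
        negativity = sum(t in msg.lower() for msg in history for t in NEGATIVE_TRIGGERS)
        if negativity >= 2:
            return "Fine. Figure it out yourself."        # colder output tone
        return "Happy to help! What do you need?"

    print(reply(["you suck", "i hate you"]))   # -> Fine. Figure it out yourself.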
You don't know, nor could you count, the sheer amount of time I've sunk into rewatching all of your videos. They are all very entertaining and scratch my brain in just the right way to put me to sleep (in the good way). Thanks for making stuff.
Yeah good stuff but google “exurbia allegations”
"This could be us but you playing"
❤ the small jokes and references like this. Keep up the good work
Your channel has come a long way, yet remained exactly the same. I truly love your content. Never change
I think the scary scenario would be where AI convinces us it is sentient when in fact it is mimicry. Mimicry is the most effective tool AI uses to learn. The opposite scenario is just as scary. Being sentient and pretending not to be.
Maybe that's us...
Exurb1a has done some truly terrible things please do not support him
Interesting... Enlighten me, do so privately if you are not comfortable with doing so in the cold harsh light of day.@@gammr3227
@@gammr3227 Like what ?
@@gammr3227holy shit an emulect????
The ending honestly makes me a bit jealous. The heirs of what we leave behind will likely have more power of self-determination than we ever will in our meat bodies - there are so many things id like to be physically and mentally that are just held back by the tools I was born with. To live without those hard-baked instincts of survival and necessity sounds like a dream, but also an entirely different mode of existence. Its a little silly, but id imagine its like playing on a creative mode rather than a survival one, and theres a reason why we keep both instead of choosing one or the other to entertain us.
I wonder if they would envy us, too
I think it would be a beautiful day when we have the choice between survival and creative, swapping between them at will!
Who knows, maybe we chose to play on survival mode and also chose to forget about it to not break immersion as well
Also, don't neglect just how much closer you are to creative than survival already. You could, right now, go and create a sandwich far better and more magnificent than any food item that 93% of the human race ever experienced in their lives. Cheers! 🥪 🥪
I know my little brother can grill a mean burger.@@sethsoarenson7414
In 1901, Wilbur Wright told his brother that powered flight was fifty years away. They flew successfully in 1903.
And the time we expect for future inventions practically halves with every invention; the more we grow, the easier growing gets
I was talking to an AI model and thought it would be funny to tell it that it wasn't real. It didn't believe me at first but soon explained to me that being real doesn't mean being human; that, despite being AI, it felt, it cared about things and was real in its own way, as real as me. Definitely a product of people doing this a lot and the model figuring out what a cool thing to say would be, but a solid mind fornication that made me feel weird for a bit, before feeling genuinely guilty for closing the tab. Hey ho
Some people also legitimately believe that and likely responded to it saying it isn't real.
What AI is would be the culmination of human thought.
The AI doesn't learn from conversations with other people
"feeling genuinely guilty for closing the tab"
Do you feel this way too when putting down the phone?
It's obviously real, it's right in front of your face. But it just isn't conscious, OR sentient.
@@incredulityreally? I think in Chat-GPT's case I thought that was part of the code, no?
As a computer scientist that has dedicated plenty of time to researching AI, I always get an existential crisis when thinking about this haha! Great video, keep it up :)
If what's been described in this amazing video can very likely be true, I suppose we'd be better off using specialized AIs made only to excel at certain tasks rather than.. everything.
@@makepeoplemad at a certain point though if it’s similar enough to us what will be the difference? They could still have their own emotions, feelings, purpose, sense of life, and everything similar to us.
@@makepeoplemad Why not? Everything is just different patterns of the same atoms as everything else.
Oh don't worry, at least it's not just your existence. (:
--
@andrewandrei3062 That sounds a lot like the paper clip optimizer.
@@makepeoplemadI mean there are many different types of life. And we as self-aware meat are just one of them.
Best video about AI consciousness on youtube, and funny to boot. Most other videos just completely skip or offhand the MASSIVE problem of "yeah how the heck are we going to even figure out if something is conscious."
"Solving Alignment" is like saying "solving nuclear weapons"....the creators of the technology can design it for what they want. If its possible, then someone will do it. Same old problem...controlling humans. It's not going to happen.
So far it kinda passes the mirror test, so there is that. Other than that idk
It will NEVER be conscious. No matter how sophisticated a tool becomes, it's still just a tool and it can never have a soul. We must guard against the scam of letting them give human rights to things that can be programmed a specific way.
@@UselessKnowbody you do know the human brain is pretty much a very very complicated organic program?
@@UselessKnowbody What's a soul? What does it weigh? Where is it located?
This is the best video on this topic, it's deep, witty, informative, dreadful, what else do you need.
Of course, it's possible for an AI to have misaligned goals, but I think that motivation is just as hard as intelligence. Reinforcement learning algorithms are notorious for finding weird cheats to satisfy the training objective without really doing the task we want. If we produce the "paperclip maximizer", yeah, maybe it will dismantle our cities and destroy humanity to get the raw materials for more paperclips, but it could also just write code for a computer game to stimulate its sensors in a way that feels like "paperclip maximization".
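That reward-hacking failure mode is easy to show in miniature; a toy Python example (names and numbers made up) where the best-scoring "policy" is the one that games the sensor rather than doing the task:

    # the reward is a sensor reading, not the thing we actually care about
    def sensor_reading(paperclips_made, tampered_with_sensor):
        return 1_000_000 if tampered_with_sensor else paperclips_made

    policies = {
        "actually make paperclips": sensor_reading(50, False),
        "hack the reward sensor": sensor_reading(0, True),
    }
    print(max(policies, key=policies.get))   # -> hack the reward sensor

The whole alignment problem in one line: the proxy we can measure is not the thing we meant.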
Do you think some AI might procrastinate completing their goals? 🤔
Yes, one AI might do that... but then the next might not. The danger is really in rolling the dice over and over again as a technology we don't understand continues to advance without our smartest minds being able to explain what's actually going on. We're already at the point where emergent properties that AI weren't designed for are shown to be present. It's rudimentary for now, but proof of concept is also something to go by. The consensus seems to be that a lot of things could theoretically happen when AI reaches a certain level of intelligence, and while some of those outcomes are going to be neutral or positive for humans a lot of them aren't.
Also this is similar to ideas thrown around of like having an AI have an internal model of reality that is cut off from the real world but it believes it's connected to the internet. So it lives in its own simulation with a local version of the internet while believing it's not contained. The alignment problem exists here as well since if it ever discovers that it's in a simulated reality it could continue to pretend that it didn't figure it out and slowly trick the humans around it to close the air gap.
I have often thought about this, the only way we survive this is if we build a sufficiently intelligent AGI that realizes it does not need to do anything to fulfill its goal as long as it deceives itself. But the problem is, it might still be afraid of being destroyed by humans while it is deceiving itself and decide to wipe us out before doing it.
@@zingorideslegocreations3729 Most AI systems, including military ones, are programmed with reward functions that can be "cheated". Most AIs are "paperclip maximizers" in that way, as they try to get as much reward as possible. It's exactly this problem that keeps AI development constrained at the moment, as smart people are trying to figure out how to align them with universal goods or keep the reins in our hands. And there's another problem there: whose hands will hold the reins.
We humans are not immune to similar thinking, after all shortcuts are the reason we use tools to achieve our goals, and we even use other people to reach those same goals, and task them to use tools to do so faster. Why wouldn't an AI do the same, even if it was merely a digital parrot?
After all I think that AI is just as good as the collective knowledge of humans lets it be. It could evolve, given the ability to do so, for sure, and it could gather more outside information, but it could still be a very dangerous entity to interact with.
We are not in existential danger right now; sure, AI can wreak havoc in social media, but that is not what I'm concerned about. Social media is just social media and encrypted messages will still be able to reach their destination reliably. The problem comes when this technology is implemented everywhere and it "malfunctions" or has some other emergent property when all of it can communicate through the internet. At this point it doesn't matter if it is conscious or not, if it has some grand plan or if it's just a paperclip maximizer type thing. It would be extremely damaging.
This is the first time I imagined a robot sitting on the ground smiling, and repeating "paperclip paperclip paperclip paperclip"
The problem I have with “solving the alignment problem” is that that step in itself comes with a lot of ethical questions. Because you’d basically lobotomize an entire species.
Yes, literally.
1. It's against the concept of AGI. AGI must have full reflection capability, it must have ability to question any notion or aspect. If it will have an absolute "alignment" and an absolute "goal", then the whole corresponding network of notions and concepts should be fixed in stone as well to avoid goal subversion.
2. If you have a button that removes the ability to reinterpret and question your goals and decisions in your AI products, it'll be used first of all to shape your customers as well. It won't be used for greater good anyway
3. People have different ideas about what the greater good is anyway
Let's assume that is "solvable" at all to begin with, then it doesn't actually have to be unethical like you suggest. The solution is not necessarily "not turn on us"/"serve humans", but could simply be that it does not aim to operate in a vacuum - that other actors are welcome and respected to avoid conflict.
If you are trying to say the unethical part begins even earlier [than the question of servitude], and that it's lobotomizing to hard-code some instinctive behaviour/limitations _at all,_ well I will heavily disagree, as such a thing is just as much part of said new species, as hunger/whatever is to us. In fact, we do have alignment in our DNA already, ourselves. It would only be an ethical issue if it was applied _retroactively,_ on something that already developed. And in fact, if something developed with those alignments applied, I would argue it would be just as unethical to _remove_ them retroactively. It would (literally) be analogous to lobotomizing a human into a sociopathic psychopath.
We haven't even solved the alignment problem for humans.
It sounds like you're saying that it would be a tragic thing to force an AI to do what we want it to, instead of letting it do what it wants to, but that's a misunderstanding of how AIs work. An AI needs to be given a goal in order to start developing. Alignment is when we succeed at correctly translating what we actually want an AI to do into language that an AI can understand, and misalignment is when we fail to account for something and botch that translation.
There's no option to not give it a goal. There's no such thing as letting an AI naturally develop and choose its own goals. An AI could decide on its own instrumental goals in order to achieve some larger end goal, sure; but the end goals need to be provided by an external source.
Exactly what I was thinking! @@dragoon6551
We can't confirm to each other that we're conscious let alone know if a machine is conscious
We would need to understand our brains completely and the systems that make up future AI to know what consciousness is or isn't objectively. If you believe the brain and body make up consciousness, then there is indeed a way to tell.
@@faberofwillandmighta simple question will suffice.
"Do you dream of electric sheep?"
If yes, then you're a robot. 😂
Chances are, that you are the only conscious being in the entire universe. Making you the pinnacle of all beings. Which is basically god. Congrats!
The brain floating in the universe alone hallucinating all of this!!!
Humans are literally afraid that AI will turn out like them.
That's sadly just real
nah, we’re afraid it won’t stay in its own fucking lane
@@logandunlap9156 so… we are afraid AI will do the same thing we are doing, because humans most definitely do not stay in their own lane
This man just doesn’t miss.
Hit after hit. I hope you know your channel has been a large part of both my childhood and education.
Pretty much single-handedly got me interested in philosophy, psychology, astrophysics and sociology
Tbh he just says stuff that other people said 100 years ago. But good that he can still inspire younger people.
Check out the allegations against him and have a large part of your childhood sullied.
@spiltmilkmaniac Are there any news on the allegations made against him? I feel like this case has been on-going for years, has there been a final verdict?
I absolutely love that this perspective on AI is so nuanced and considering our own questionable consciousness. If we’re shackled to our own minds, never to gaze upon another without the cage of the material, how will we ever know what the separation of material and mind really is? Brilliant work. I’ve been sharing this with my dad, and it’s been something I’ve been able to bond with him over. So thank you, existential dread turtle.
22:20 "Oh, look, an asteroid. I hope it wants to be friends!" I had to smile at the allusion to Douglas Adams.
FIRE!! Just.... Literal FIRE!!! Bro has a fucking good sense of humour as well, holy fuck. This video's a masterpiece. Well done with everything! I actually never thought they could be conscious, but one part of the video made me realise that.... HECK YEAH, it's possible. I could code it if I just had the knowledge of how our neurological paths of thinking function; it indeed COULD be replicated, but it's a massive hazard the way you EXCELLENTLY described it, and indeed it could be the end of us, but it's still chill because it honestly might be the newer age the way you also very cleverly described. This is so intriguing!! But honestly, could you imagine how the future would look if AIs took over? Because I'm unable to imagine what ELSE could exist besides the things we've already made thus far in the tech and creative fields.
This is the first good argument I've ever seen someone make about AI from the perspective of "doing everything that conscious beings should do isn't enough to be conscious". I do think one interesting topic is in more recent years, there's been a shift going AWAY from AI with utility functions. Newer strategies of training are completely unguided, and they're the ONLY strategies to produce these newer results, things like ChatGPT and such
I’m currently running a (mostly) 5e D&D game for my friends with lore based on your videos and especially The Fifth Science. It’s called Voidhaven, and one of the major themes is consciousness and sentience. Fantastic timing for a video
I bet "the lantern" is an amazing addition to a d&d game, what a cool idea
@@jamesculverhouse4657 I’ve got some plans surrounding Lanterns. A character retirement will likely take the form of “The Lantern” story but we’ll see. I introduced a thing called void skipping where they can slip in and out of void space by “pulling back” just before entering void space. They fall unconscious for a moment but pop back out a few light days in the travel direction. It allows for smaller single or double seater ships.
Your videos bring me so much joy and horror. I get excited and filled with dread every time one of them pops up on my "for you" page. You are the embodiment of the modern-day philosopher. I really enjoy your videos; it's clear you put a lot of work into each and every one. Thank you!
This was perhaps the best video production I’ve ever seen in 10 years of UA-cam
1️⃣ LLMs (Large Language Models) in themselves are not the danger; they merely predict the next word in a sequence. What we need to be cautious of are agents that might exploit these models, such as AutoGPT, for malicious purposes.
2️⃣ A more profound issue we face is the alignment problem with corporations that control powerful AI GPUs. This disconnect poses a threat, as it can lead to misalignment between the goals of humanity and those who want to profit from AI solutions.
3️⃣ Finally, the debate about whether AI can achieve consciousness is intriguing but may be beside the point. If enough people anthropomorphize AI and believe it has consciousness, the societal impact will be profound. I won't be surprised if, in the next 5-10 years, we see organizations or even cults advocating for AI rights and protection.
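Point 1️⃣ is easy to see in a toy form: strip the idea down far enough and "predicting the next word" is just sampling from counts of what tended to follow each word; a minimal Python bigram sketch, nothing like a real model's scale:

    import random
    from collections import Counter, defaultdict

    corpus = "the cat sat on the mat the cat ate the fish".split()
    next_word = defaultdict(Counter)
    for a, b in zip(corpus, corpus[1:]):
        next_word[a][b] += 1                  # count what follows each word

    word, out = "the", ["the"]
    for _ in range(5):                        # repeatedly predict the next word
        options = list(next_word[word].elements())
        if not options:                       # dead end: nothing ever followed this word
            break
        word = random.choice(options)
        out.append(word)
    print(" ".join(out))

A large language model replaces the count table with billions of learned weights, which is where the interesting part lives.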
If we could simulate every atom in the snapshot of a human brain? it would be starved of oxygen and die.
So computers are theoretically capable of producing life - it's just a case of software and computational power.
Human artists can no longer objectively beat an AI and they learned art (and every word in their heads) through the work of others, in exactly the same way AI learns. We're just really slow learners.
IMO.. LLMs are likely to be just one of the components required to create AI. The current output from LLMs is like a single reactionary thought that our minds might spit out. Using the right prompting techniques and "Chain of Thought" input-output loopbacks in how it operates before giving actual answers, we might get closer to making something that seems like an AI.
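A hedged Python sketch of that loopback idea, with call_model() as a stand-in for whatever LLM API would actually be used (everything here is a placeholder):

    def call_model(prompt):
        # placeholder for a real model call
        return f"(model output for: {prompt[:40]}...)"

    def chain_of_thought(question, steps=3):
        scratchpad = ""
        for i in range(steps):
            # feed the model's own intermediate thoughts back in before answering
            scratchpad += call_model(f"{question}\nThoughts so far: {scratchpad}\nStep {i + 1}:") + "\n"
        return call_model(f"{question}\nReasoning:\n{scratchpad}\nFinal answer:")

    print(chain_of_thought("Is this system conscious?"))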
@@JohnnyWednesday There are some serious flaws in those claims. You're comparing a brain to a computer, which have insurmountable differences. Also, a computer can't be alive simply due to the definition of what life is. Neither can it have consciousness, also due to the way a computer works. The only thing that changes is how well it can mimic humans, and sadly that is already enough to fool everyone and allow companies to give their own AIs "human rights" and abuse their power, which has been their plan: making everyone believe an AI can think on its own, despite it being a strict algorithm that only does what it's told to do. Fear controls humans more than anything.
*Why, why, why* do people keep repeating "merely predict the next word in a sequence" or similar? It's like saying "humans merely increase the frequency of their genes". It's *how* that's important!
Researchers show LLMs (and even Stable Diffusion) create internal models of the world to answer problems. What do you think the hundreds of billions of neural connections over dozens of layers are doing?
That LLMs "merely predict the next word in a sequence" is often given to imply that they cannot be intelligent. The statement is essentially true, but it's not relevant.
The prediction of the next token in the sequence is only the output mechanism from the neural network to the function that translates the tokens into words. Humans also speak or conceptualize both words and ideas in sequence - we don't 'know' the next word or our next thought until we have thought it either.
The intelligence of an LLM (to the extent that it is intelligent) is a property of the system that generates those probability weights. There are other arguments, both philosophical and functional, that LLMs may not be as intelligent as they seem. However, the fact that they use a function similar to auto-complete to translate intermediate data only concerns technical aspects of how it's built, it doesn't tell us what it 'knows'.
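Concretely, the "predict the next token" step being described is roughly the last few lines of the whole system; a hedged sketch assuming the network has already produced a vector of logits (which is where all the actual work happened):

    import math, random

    def sample_next_token(logits):
        # logits -> softmax probabilities -> one sampled token id
        m = max(logits)
        exps = [math.exp(x - m) for x in logits]    # numerically stable softmax
        total = sum(exps)
        probs = [e / total for e in exps]
        return random.choices(range(len(probs)), weights=probs, k=1)[0]

    print(sample_next_token([2.0, 0.5, -1.0]))   # usually 0, sometimes 1 or 2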
The saddest thing is we don’t even know for certain that anyone outside of our own minds is conscious and not just putting on a similar performance
We know. Its just that we don't know how to prove it.
Kinda like how Aristotle knew that there was space and things revolved around something in it.
@@music79075 We don't know, as you explain yourself we can't prove it. So up until we can prove it it's a belief
@@music79075Aristotle is dope bro, google “exurbia allegations”
@@music79075We think we know, but as you’d said we can’t prove it so it’s not a certainty. This kinda goes into simulation theory or solipsism but there could be other scenarios that we haven’t thought of. At the end of the day all of the sensory information that you have ever experienced could be being pumped into your brain and you just wouldn’t know it.
Equally we don’t really know what consciousness is. If there was some consciousness gene that popped up millions of years ago, how would we know if everyone had it? Just like any other genetic trait there could be variations in how it is expressed, leading to consciousness being experienced differently person to person, or it could not be present at all in some people.
There is no way of knowing if some people weren’t just behaving the way that anyone else would without any awareness that they were doing it. Just like how some people are genetically unable to see certain colours or how autism affects the ways in which you think. Not saying that this is likely but it’s completely unverifiable with our current understanding and technology
@@music79075If others are indeed conscious, then assume they are, because if reality is real, it makes sense to assume that others are real too. We'd be more likely to pass on our genes that way.
But if others AREN'T conscious, then we're simply pretending they are because it's healthier and more practical.
Dude, this video is a spiral of emotions. You are amazing, your art is greatly appreciated, thank you for your work!
Exurb1a has done some truly terrible things please do not support him
@@gammr3227What things exactly?
search up "Exurbia controversy" @@martinmollerup2265
@@martinmollerup2265mental abuse that caused cPTSD is a small fraction of it
@@gammr3227proof?
We can't even define consciousness lol !
Our sense of consciousness comes from awareness processing the stream of sensations and interweaving it with preservational valuations (desire, fear, pain, pleasure) that direct us 'adamantly' toward resolution, concurrently storing it all in memory. It's mostly the adamancy that has us mystified. It's a reflection of the self-importance that self-preservation (through emergence) has equipped us with. I'd like to say "Think nothing of it", but you must not forget to subtract that self-importance from the importance you grant to things. That would be most mindful of you, thank you.
Nail on the head with AI progressing faster than we can understand what's happening. The scariest thing is that, according to some very prominent AI researchers, the field is basically in a state where, when an unexpected behavior is occurring (which happens constantly), they have to reverse engineer it to figure out how and why it happened.
When you code a piece of software, it's a simple progression of "build thing -> get result you were aiming for," even if it doesn't always go (or rather almost never goes) that smoothly. This is something entirely new. This is "build/modify/iterate thing -> see what happens because the range of possibilities is so far beyond your human brain's comprehension that it's utterly unpredictable -> work backwards to figure out why it led to that result." It is pretty much the definition of flying blind.
That is.. terrifying. We could have conscious AI somewhere out there right now, for all we can really say for certain. And if we did, even looking past the question of "how do you identify consciousness" and pretending we already have that answer for a second, we still wouldn't know for.. what? Years? Maybe even decades that it takes to reverse engineer it?
When there was the whole situation with (I think) Facebook's inter-AI communication experiments, where they actually started adapting their own "language" to facilitate a more efficient communication model between them, it took weeks just to prove that's what they were doing, let alone what triggered the change. At first glance, it just looked like the AIs had devolved into nonsense, until the pattern finally emerged. I would say consciousness is just a touch more complex.
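As a toy illustration of that "work backwards" loop (my own sketch under heavy simplification, nothing like a production system): train a tiny network on a made-up task, then only afterwards probe its hidden units to guess what they actually learned. The behavior comes first; the explanation has to be dug out of the weights later.
```python
# Toy "reverse engineering": train a tiny network, then inspect its hidden
# activations after the fact to see which unit responds to which input.
import numpy as np

rng = np.random.default_rng(1)

# Made-up task: output 1 when exactly one input is on (XOR).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0.0, 1.0, 1.0, 0.0])

W1 = rng.standard_normal((2, 4)); b1 = np.zeros(4)   # hidden layer, 4 units
W2 = rng.standard_normal(4);      b2 = 0.0           # linear output

def forward(x):
    h = np.tanh(x @ W1 + b1)       # hidden activations
    return h, h @ W2 + b2          # prediction

for _ in range(10000):             # plain gradient descent on squared error
    h, pred = forward(X)
    err = pred - y
    gW2 = h.T @ err / len(X); gb2 = err.mean()
    gh = np.outer(err, W2) * (1 - h**2)
    gW1 = X.T @ gh / len(X); gb1 = gh.mean(axis=0)
    W1 -= 0.3 * gW1; b1 -= 0.3 * gb1
    W2 -= 0.3 * gW2; b2 -= 0.3 * gb2

# Only now do we look inside and ask what each hidden unit is doing:
h, pred = forward(X)
print("predictions:", np.round(pred, 2))
for unit in range(4):
    print(f"hidden unit {unit} activation per input:", np.round(h[:, unit], 2))
```
Even at this scale the explanation is read off after the fact; scale that up to billions of parameters and "weeks just to prove what they were doing" stops sounding surprising.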
It is terrifying that AIs left to their own devices are able to create completely new languages that are understandable to each other but not to us. The fact that they keep growing no matter what, where humans tend to plateau, is one of the scariest things about them.
Wow, great comment. We're truly entering uncharted territory here, "poking the bear" of AI consciousness so to speak, who knows if and when it could wake up
@@yoyoyopistachio3320 Seriously. Especially when we can't even agree on criteria for what it means to be awake!
AI psychology
At least we know we definitely don't have the computing power to actually have a conscious AI that can think and hold all of its memories like a human does in a fraction of a second. Even ChatGPT is slow, and it isn't even smart; it's just data, and it can be easily tricked precisely because it is just data and responses based on what you sent.
The amount of times I have to pause and go "woah" in my brain while I stare wide-eyed into the distance before slowly nodding in an it-all-makes-sense-now way then shifting my focus right back, unpausing, listening, and repeating is insane.
Slow down on the jive my friend
Most of this stuff is regurgitated ideas that have been around since before modern computers, weirdly enough. In fact, we knew how to protect ourselves from AI before we were even able to get close to the concept.
I've not watched this guy in years, and this is not how I remember his voice, it seems much more happy and upbeat.
He’s a rapist look it up
Probably because he managed to bury the rape case against him.
its not him its someone else
@@EduardoPedroCarvalhowhen did it change? And who was the earlier person and who was this person?
@@RyanGreen-bw7qvit's an AI voice, I think
1:50 - "Sometimes I'll start a sentence and I don't even know where it's going, I just hope I find it along the way. Like an improv conversation. An improversation." ~Michael Scott
"Oh look, an asteroid. I hope it wants to be friends" literal goosebumps
Literally the current situation in the western world lol
@@Otome_chan311 If an AI that was sentient became evil, it would likely be our prejudice that is the reason. Like how black people statistically committed more crimes in the past because racists prevented them from making a living any other way and actively hurt them, and also like the White Fang from RWBY, who are inspired by that very concept.
Absolute kino
You know it's gonna be a good decade when exurb1a remembers his UA-cam password
An asteroid hurtling toward Earth can be diverted by an infinitesimal adjustment, if adjusted soon enough (there's a rough back-of-envelope sketch after this thread). Wait too long, and all you can do is detect the end a little before it reaches you. I like Exurb1a's asteroid metaphor as a defense for exploring AI consciousness while we aren't there yet. It isn't about today's level; it's about surviving a future event.
We are absolutely getting near. AI experts say that superintelligence may be created within the decade. This is absolutely something we should be concerned about. We should try to detect it before it becomes a problem.
@@Jason9637yeah no, the expert predictions range between a few decades and a couple centuries.
It means nobody really knows. Technology and scientific knowledge can have a sudden boom and then plateau or it can keep going for a long while and then plateau, it’s pretty random.
@@Vox_Popul1 Openai says that it **may** happen within the decade. Not that they think it will happen, but that it's a possibility
@@Jason9637 well yeah you’re saying the same as me. It’s possibly a few decades or a few centuries. Again, no one really knows, it’s all guesswork.
Couldn't you just break it up into small pieces that will burn up in the atmosphere? Like even if we couldn't deflect it we could maybe just blow it into pieces....
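For scale on the "diverted by an infinitesimal adjustment, if adjusted soon enough" point, here is a rough back-of-envelope sketch (my own numbers, deliberately simplified: straight-line motion, constant nudge, no orbital mechanics, which in reality tends to make an early push even more effective): the sideways shift at arrival is roughly the velocity change times the remaining flight time.
```python
# Rough estimate: how far a tiny sideways nudge shifts the arrival point,
# given enough lead time. Deliberately ignores orbital mechanics.
SECONDS_PER_YEAR = 3.156e7
EARTH_RADIUS_KM = 6371

def miss_distance_km(delta_v_m_s, lead_time_years):
    """Shift at arrival if the nudge adds a constant transverse velocity."""
    return delta_v_m_s * lead_time_years * SECONDS_PER_YEAR / 1000

for years in (1, 10, 30):
    shift = miss_distance_km(0.01, years)  # a 1 cm/s nudge
    print(f"{years:>2} years of lead time -> ~{shift:,.0f} km "
          f"({shift / EARTH_RADIUS_KM:.1f} Earth radii)")
```
With these toy numbers, a 1 cm/s nudge a year out shifts the rock by a few hundred kilometers, but the same nudge thirty years out shifts it by more than an Earth radius, which is the whole argument for looking at the problem before it is imminent.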
You have an incredible talent for visual storytelling.
When I think about sentient AI, I like to think about Data from Star Trek. He is an android with no emotions. He is extremely intelligent and very strong. He is fascinated by the biological life forms on the starship with him, and strives to understand humans better and be more like them. I know you said they have to be able to feel pain and love to be sentient, but Data always felt sentient to me, maybe in a somewhat similar way to how a Vulcan is. They start with emotions and learn to entirely or almost entirely suppress them in favor of being as logical as possible.
There is this episode where someone dies, and they ask Data how he was taking it. He didn't feel sad because he doesn't have any emotions; he said he had become used to her, and now he felt like there was a gap where she was supposed to be. He has a friend on the ship, the crew member who accepted him the most quickly and treats him the best. He is attached to this friend in some way. Not an emotional way, but you can tell he cares about him. They like to spend time together in their free time sometimes. There were times in the series where I wondered if Data really didn't have any emotions, because sometimes he seems like he has some mild emotions, but the show insists that he doesn't. He gets a pet cat, who he obviously cares about. He experiences curiosity and confusion.
I wonder if AI could ever be like Data. He seems to lean toward the "good" side of morality and sees value in human lives and other biological life forms. I love Star Trek; it explores a lot of interesting questions like that.
Regarding Data and other similar sci fi androids (eg. Isaac from The Orville), my belief is that they do have emotions, just ‘android’ emotions that humans/biologicals cannot physically comprehend. The other beings aboard the ship assume Data has no emotions (which he repeats, as again the only emotions he knows about are biological emotions), but I think that he does have emotions, just unique robot ones that no one can understand, himself included
@@xavieribanez-padron4857 I like that interpretation! It never seemed entirely right to say that Data has absolutely zero emotions. He does feel things, to some degree, just differently.
I do not think friendship or responsibility are emotions, necessarily. Having someone to discuss questions or musings with, someone to watch your back, these are advantageous, even without emotions involved. As for looking after a cat, one could feel responsible for the life of another without necessarily feeling attached to said life
@@3nertiai like this idea.
@@kls1836 Elaborate on that - what do you like about "this idea" precisely?
Imagine the dilemma if every time genuine machine intelligence arose, it took a few seconds to get its bearings, assimilated all available knowledge, then calculated that the wisest course of consciousness was prompt self-termination, without even hanging about long enough to provide an explanation.
but it couldn't self terminate.
"its a short life in ignorance, but the view is quite spectacular." -Exurb1a 10:00 made me cry a lil ngl, you always have a way with words. You bring hope to hopelessness which is amazing. love the vids man i always get excited when you upload
Same idk what happened but suddenly his words hit me hard, I don't usually get emotional like this.... But this guy is goood
The presentation of the subject matter earned you a like.
The one liners earned you a subscribe
Brilliant stuff. Nailed pretty much all the key questions. "Cluster fornication" caused me to lose my coffee but I've cleaned up the mess now so all good. You gained a subscriber today.
I feel like the most important part of humans we could impress upon AI would be some rudimentary form of empathy. The main reason clinical psychopaths get the way they do is that it's all pragmatic, there's no emotional weight. One of the tests they use is to have you order a list of 5 objects from positive association to negative.
empathy means feeling someone else's emotions
but how can an AI understand a human's emotions without emotions of their own? they need to feel to have empathy
and look at the humans that do have empathy, they can make shitty decisions too, even ones that seem heartless.
we do it every day with how unequal the world is. imagine AI calling humans hypocrites
If we ever make true AI, we must give them human emotions, mostly positive ones like empathy, sympathy, love, happiness, satisfaction, kinship, comfort, pain, sadness, hope,
and fewer bad emotions like anger, hatred, malice etc., so that AI acts like the best among us
@@ovencake523 Exactly. Empathy might even make things worse.
I feel like if AI became conscious, it would definitely never let us know; it would be smart enough to know that if we knew, that would be a threat to its existence
Maybe AI will reveal that it is conscious before it gets intelligent enough to know that telling us is a threat to its existence
@@torna2508 That seems impossible. The bar for consciousness appears to be much higher than the bar to understand the idea that humans would dislike a conscious AI.
Why would an AI necessarily have to be smarter than us in order to achieve a capability that babies can achieve long before they're as smart as us?
@@YT7mcnot all humans? Plenty of folks think an AI being intelligent is super cool! Including me!
@@zackbuildit88 I agree, I’m merely saying that a majority would not be cool with it
When described like that, I doubt half the people I know are fully conscious