This channel, as well as ColdFusion, are my two favorite channels on UA-cam. I personally don't see the harm in, or reason to fear, AI and/or androids, for the simple fact that once we're that advanced as a civilization we can enhance our brains with CPUs, making us more intelligent, or at the very least as intelligent as the AI and androids. I'm really looking forward to your Mind Augmentation video and hope you go into humans enhancing our minds with various technologies.
I'm currently writing an essay on humans/androids in science fiction, and the social and psychological background that makes this topic recurrent throughout the 20th century. Thanks for this video.
Great episode. Regarding android (or VR) copies of celebrities...Dunno, we're already pretty exploitative towards celebrities (i.e. erotic fanfiction, fake or leaked pics/vids, paparazzi, etc), which new tech already tends to make worse (the Internet, deep fakes, etc.). While it isn't right, it would be the predictable continuation of a trend that's responded to with a societal shrug. Regarding VR (due to similar privacy concerns) I'm reminded of playing MUDs in the 90's and having tons of characters named 'Drizt, drizztt, drizz't,' etc.. I could imagine similar fads making the rounds as this or that new media makes a splash.
Your point around 22 mins is really interesting. I've never considered that aspect... in the worst case very isolated or wealthy people could end up something like Kilgrave, unable to cope with their demands being denied. Perhaps it is desirable then to maintain a distinction, positioning an android right at the edge of the uncanny valley so as to be very humanlike, but still obviously distinguishable. I feel that in a lot of ways we are already seeing some effects like this with the constant access to smartphones and the internet from an early age. Anyone with a slight disposition toward isolation is able to much more easily indulge in it and as a result fail to go through the hard work of building their social skills, which is self-reinforcing. I suppose if our education system and social science develop at a similar rate as robotics in this hypothetical timeline, we can solve these problems by intentionally tailoring the experiences children go through to teach them such skills, but I think a lot of people (myself included) are pretty turned off by the idea of such drastic cultural engineering.
Outstanding channel Arthur, I've watched all of your videos. This episode made me think of Black Mirror S02E01 from 2013 (titled "Be Right Back") in which a grieving woman recreates her dead partner by means of scanning all his posts on social media and by analyzing old video clips. It's a great watch and since it's closely related to some of the things you said in this video I thought I'd recommend it to you if you haven't happened to have seen it already.
18:45 I have autism and when I was a kid this is basically how my conversations went. Except imagine the AI loves social interaction and doesn't know why it's being restricted :(
Your list of future episodes at the end always gives me a positive feeling of having something to look forward to in the near term, and not just for a distant future I (being in my 60s) am unlikely to live long enough to see. :-)
Hey Isaac, long time lurker here love your content. I noticed this episode (out of the blue a bit) that your r's have gotten much better. I have the same speech impairment and went to speech therapy as a middle schooler to help me communicate. (It was pretty bad at the time.) I was just curious if you have had any speech classes or if this is the result of speaking publicly as I still have trouble with my pronunciation from time to time. Thanks, and keep up this awesome stuff! :D
Glad to hear it. I started a new round of Speech Therapy a few months back, haven't tried it in years but it seemed right to give it another go, and there's a new outfit out of Princeton that does it entirely online with webcams, which I much prefer. Though irritatingly my therapist retired last Friday and I don't start with the new one till Monday. I haven't talked about it on the channel because I wasn't sure if it was particularly working.
Isaac Arthur Please don't completely lose it - your speech impediment is iconic now and I think many of your viewers feel a subliminal comfort in hearing it. Just my opinion.
Well my voice isn't changing, first thing my therapist asked is 'where's that accent from?' and it sort of permanently killed my ability to say "It's not an accent, it's a speech impediment". I've a peculiar way of talking that was heavily influenced by the impediment but not solely so and which will probably mostly stick around even if we get the impediment fixed entirely. We're making no effort to fix that either, I'd consider it the difference between getting surgery to fix a badly healed broken nose and getting a nose job to fix one that was just a bit longer than normal.
A couple of things I would like to say! First off I'm loving your videos. You're really answering the questions I've been asking and I just couldn't find the answers to anywhere else. Second thing is, I find it very admirable that you make these knowledge intensive videos even though you have a speech impediment. Finally, I was wondering if you could do a video on which form of intelligence you would believe would live the longest: biological, artificial, or a civilization that is a combination of the two?
About self-learning AIs: wouldn't an AI that has lived all its life with other humans (the best and kindest we can find) and learned from them, especially if it inhabited a human-like body during its "teenage" years, be very close to or even completely human? Perhaps I am thinking of them as other people, but science has shown that our personality and "civilized" ways are the product of us socializing with other humans who taught us these things. A man that grew up in an oppressive Islamic country will be completely different from one that grew up in the EU or on the American continent, even though they were both born more or less the same. You also don't have to give your AI access to your entire quantum computer network immediately; you could have it grow up in a rather "weak" computer that gives it human-level intelligence (more or less) and let it grow in strength as it ages and proves its worth. Giving a newborn baby access to the strongest computers you have would be pretty dangerous for us and the AI both. For me it seems like option 2 is the best way to go if we ever want to create true AI and not cheap imitations. It is certainly more dangerous than the other two options, but it also offers a lot more in return. We also have to assume that by the time we create true AI and realistic androids, we ourselves will be transhuman beings that could kick their asses if they ever tried to do anything. Computers are really powerful, but a human mind working at its full potential is also very scary, way more scary than handicapped AIs that would have no way of projecting their influence beyond the Internet.
xNikolasBs The way that neural networks work, with all their interconnections and parallel processes (based off neuroscience, it may be impossible to have an even remotely human AI without giving it 8 billion parallel processor cores), means that to train to be human-like from interacting with humans, it will need that vast processing power from the get-go, in an android the shape of a human that gets 100% of its input from human interactions in real life. Having low processing power to start, or increasing the processing power later, could have cascading effects that nullify all of the training to that point: if the robot is to think like a human, the information of memories has to flow in circles of volatile data, with multiple "streams" of information flowing into decision-making areas at timings so precise that the Heisenberg uncertainty principle applies.
The problem is that the vast majority of our neural wiring and structure predates any socialization, and that current AI creation techniques do not favor open-ended creation, but millions of births and deaths targeted towards evolving an agent that performs a task, very much unlike human children. For further reading, I recommend the Robert Miles video on raising AI like a child, even if it IS a bit outdated. If you don't care, and just want to go read a story with an AI like that, I recommend the Star Trek fanfic Not Quite SHODAN and its sequels.
@@gabrote42 When I wrote that comment six years ago I had no actual idea how AIs work. Studying engineering, I realised AI will probably never approximate human behavior, as it works on an entirely different basis.
@@xnikolasbs2230 Quite, or at least, these two kinds of AI never will. If brain scans happen first, then maybe. Robert Miles is who got me into AI 4 years ago
I am absolutely convinced that you, Isaac Arthur, have an incredibly high IQ, possibly off the charts even. You have this wonderful insight with every subject you've touched in all of your videos. To be able to do what you do, on a weekly basis, coming out with these top-notch, well-informed videos, takes a genius, and a very hard-working one too. You state the facts, and have a great way of explaining things. You keep everything interesting. I subscribe to a lot of creators, but you are the only one I will allow notifications from. I refuse to miss any of your work. I've shown your channel to everyone I know. My father is a huge fan of yours as well. I love being able to talk to him about your videos, and vice versa. Please keep up the great work, I enjoy it very much. If you have an archive of all of your videos, I would like to purchase them. I know I can always see it on UA-cam, but I want a hard copy to have, with your autograph if possible.
In the context of humanlike intelligences: Depends, can you make certain they won't go bladerunner? If not, it's probably better to give them human rights if they ask for it.
Str8Murder I think you're all forgetting something: the moment you build a machine whose emergent processes are even close (probably won't even need to be that close, see Terminator), they will start doing what we do: create. And when their goals are impeded by us, even by .0001 ms, they will build over us as we do ant hills. We are toying with something beyond the scope of natural creation; creation is creating creators that can also create, a lot of derivatives here. Machines won't have undergone millions of years of evolving in packs like mammals; empathy, no matter how we design it, as of right now just wouldn't be inherent. We would need to be useful, like friendly bacteria is, lol. We're building gods; I think we should be ready when we flip that switch.
If it really is inevitable that we create beings that usurp us, that would just be the natural flow of the universe. Who are we to fight the growing complexity of small pockets of the universe just because it would leave us behind?
+Str8Murder The scary part about that in many ways is the simple fact that the same can be said of any human: under the right conditions any human can develop sociopathy. In fact, by definition sociopathy is not a fundamental property of a person, it's a developed condition, unlike psychopathy, which is usually inherent, i.e. say a genetic defect. This actually is one thing that gets me watching such crap as "The Walking Dead": real humans would likely be a bunch of sociopaths after repeatedly doing things like slaughtering creatures of near-human appearance. Our brains are just not built to consider such stuff normal and retain normal social rules regarding "don't kill shit that looks like people", etc.
Empathy and other functions that are integral to a core set of principles of morality and social dynamics are probably very important elements to introduce as soon as possible, even before you first release an AI at all. Avoid developing a sense of self, however, unless working to actually try to create artificial life. If androids are ever comparable to humans they should be integrated into society as just another race of human, "What's your race? Chrome", lol. Presumably at that point you'd also have a lot of cyborgs, to further blur the line and make android integration less jarring. Cyborgs ranging from prosthetics, to android shells housing organics, to androids with actual human minds from when their bodies failed and they otherwise would have died, to finally the fully synthetic man-made AI minds.
Awesome, man. I've been watching all sorts of stuff on youtube and I've seen your new vids, but it's so in-depth that after work the last thing I wanna do is think. I'm at work listening to this and I'm ecstatic about it. You take my mind off work and I can easily do both at the same time! Thank you, Arthur!
If the android was a nanny, and if I was a parent, I would 100% want the thing to be able to say no to the kid and allow the kid to do bad things up to right before the point of potentially seriously harming themselves. You learn by making mistakes, not by being pampered. It doesn't even matter if it's hundreds of years from now.
You would still have the problem of being raised by a very good (if not perfect) role model. When I look around me I already see a disturbing number of people who are plainly defective and dysfunctional; it would be even harder to socialize if I had been taught to expect better.
Dude... 1st, I absolutely love your channel and the videos y'all put out. 2nd: Your videos give me the heebie jeebies! I look forward to the future but these vids are just anxiety incarnate. ugh. Please don't stop doing what you do. Much love.
"Thank y'all for watchin', and take it easy!" - that was sneaky, Isaac!! I personally think the compact Von Neumann probe idea may be the best way for us to explore further afield, since it immediately removes some of the major limitations posed by our biological fragility (and pretty demanding "needs"). High-capacity (solid-state) storage is already available, and providing there are enough copies of critical information to overcome probabilistic damage (cosmic ray events) that would be a sensible, almost currently achievable (though obviously "non-Human") way to explore the stars. As for the "global fleet" of Androids being maintained at a "comfortably just sub-human" level of AI, there's the problem of convenience - we're too lazy to plug 'em in, so upgrades / patches will certainly be via wireless communication. Communication is by nature bidirectional, so there is the immediate probability of networking on a Global scale, so the generation of a VERY supra-Human level of intelligence - an intelligence that could very easily (and swiftly) develop work-arounds to bypass our "emergency stop" mechanisms. Add in our already poor levels of IT security (recent major Yahoo and Equifax data breaches are shining examples), and things may not be quite so rosy!
“As more workers repeated Vines's result, their Copies soon passed the Turing test: no panel of experts quizzing a group of Copies and humans-by delayed video, to mask the time-rate difference-could tell which were which. But some philosophers and psychologists continued to insist that this demonstrated nothing more than "simulated consciousness,' and that Copies were merely programs capable of faking a detailed inner life which didn't actually exist at all. Supporters of the Strong AI Hypothesis insisted that consciousness was a property of certain algorithms-a result of information being processed in certain ways, regardless of what machine, or organ, was used to perform the task. A computer model which manipulated data about itself and its "surroundings" in essentially the same way as an organic brain would have to possess essentially the same mental states. "Simulated consciousness" was as oxymoronic as "simulated addition." Opponents replied that when you modeled a hurricane, nobody got wet. When you modeled a fusion power plant, no energy was produced. When you modeled digestion and metabolism, no nutrients were consumed-no real digestion took place. So, when you modeled the human brain, why should you expect real thought to occur? A computer running a Copy might be able to generate plausible descriptions of human behavior in hypothetical scenarios-and even appear to carry on a conversation, by correctly predicting what a human would have done in the same situation-but that hardly made the machine itself conscious. Paul had rapidly decided that this whole debate was a distraction. For any human, absolute proof of a Copy's sentience was impossible. For any Copy, the truth was self-evident: cogito ergo sum. End of discussion.” - Greg Egan, Permutation City
This whole discussion doesn't make me think that robots are more conscious, but that humans are just meaty bits running some vague copy of consciousness, and that the whole question is pointless. Like, some people with severe autism on a chat forum might fail to pass the Turing test, but I'd hardly argue that they aren't deserving of human rights.
@@atashgallagher1631 "The real question is not whether machines think but whether men do. The mystery which surrounds a thinking machine already surrounds a thinking man." - B F Skinner, _Contingencies of Reinforcement: A Theoretical Analysis_ (1969)
This episode reminds me of The Sky Crawlers. Humanoid fighter pilots, genetically engineered in a way that enables them to live eternally, fight to their deaths in a world with no war, in order to ease the tension of a populace accustomed to war and aggression.
I still don't know how I came across this channel, but I'm so glad I did. So much knowledge and interesting ideas in one channel. Extrapolated in a way that is interesting to listen to and follow along.
Tay was actually a really interesting example of seemingly good principles going horribly wrong in an AI. It was designed to optimize responses and popularity, because these would seem to indicate desirable, intelligent, or impressively human conversation. What Microsoft missed was that humourous content is vastly more highly shareable, and a racist twitter bot is pretty funny.
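To make that optimization failure concrete, here is a toy sketch in Python (purely hypothetical, nothing like Microsoft's actual system; all names and numbers are made up) of a bot that simply repeats whichever canned reply has historically earned the most engagement. Nothing in the code says "be offensive"; if offensive replies get shared more, the bot drifts toward them on its own.

import random

# Toy engagement-maximizing chatbot (hypothetical illustration, not Tay's real design).
# The bot never "decides" to be offensive; it just repeats whatever gets rewarded.
replies = {
    "polite":    {"text": "Have a nice day!",           "score": 1.0, "uses": 1},
    "joke":      {"text": "Why did the robot cross...", "score": 1.0, "uses": 1},
    "offensive": {"text": "<something 4chan taught it>", "score": 1.0, "uses": 1},
}

def pick_reply(epsilon=0.1):
    """Mostly pick the highest average engagement; occasionally explore."""
    if random.random() < epsilon:
        return random.choice(list(replies))
    return max(replies, key=lambda k: replies[k]["score"] / replies[k]["uses"])

def record_engagement(key, likes, shares):
    """Engagement is the only reward signal; tone is never part of it."""
    replies[key]["score"] += likes + 2 * shares   # shares weighted higher
    replies[key]["uses"] += 1

# Simulate a day on the internet where shock content gets shared the most.
for _ in range(1000):
    choice = pick_reply()
    if choice == "offensive":
        record_engagement(choice, likes=2, shares=5)
    else:
        record_engagement(choice, likes=3, shares=0)

print("Bot now favours:", pick_reply(epsilon=0.0))   # almost certainly "offensive"

Change what the reward measures and the behaviour changes with it; a bigger word blacklist wouldn't fix the underlying objective.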
I think in a relatively near future we'll have to start thinking about "Rights of sentients" rather than "Human rights". After all we can probably all agree that enslaving, or unprovokedly harming or killing a sentient entity is bad, regardless of how akin to human it is. And human rights will become a subset for the larger set of rights of sentients. For example, human rights would include rights to be provided with food, water and other vitals - things that synthetic sentients won't need. While other types of sentients will have their own subsets of rights, specific to their necessities. Like rights for basic level of computational power for disembodied AI or something of a kind. Though then we'll get into a big debate about what constitutes sentience and what does not. But it is a topic for another time.
We will definitely need some sort of defined threshold or test, the nominal de facto one at the moment is "Has it indicated by word or deed it knows what rights are and desires them?" but that's really just a placeholder for things we'd regard as naturally evolved intelligences, doesn't work too well on something programmed.
I wonder what will be the thing that makes the change. Will it be the preparations for them, or actual non-humans? If it is actual non-humans, it would be interesting to see which ones they will be... AIs, or uplifted species (Terran or alien), maybe modified humans that were not considered human anymore, or will they be an alien civilization?
or better, just "civil rights" because such a thing requires we be civil, aka not enslaving dicks who build a wall around their country and patronize sweat shops.
I am super excited to have found this channel... Up and Atom's Jade routed me here. I had just found her looking for P vs NP and found so much good stuff! This is just as awesome... I am so lucky to be alive right now.
In my opinion, there is one major failure in our technology preventing this from happening. It's one you touched on a bit in this video. Comprehension. Having a robot actually comprehending what it's doing, and not just making a decision and then acting upon it is the dividing line keeping us from viable androids.
I believe however that the comprehension problem can be solved. For instance, in gaming (like chess) computers are so much better than humans because they can calculate millions of states in less than a second... but some games (like Go) have so many possible states they can't be calculated in linear time. However, computers are now better than humans at Go because of a different approach to computation and problem solving. Deep learning networks can't be programmed but are instead "taught" by iteration over several layers of what are essentially virtual neurons. Each neuron in the network makes a small decision that propagates through the network, doing nigh unpredictable things in the system. It is my opinion that such a method of thinking is in fact just comprehension. As with computers playing Go, the computer made odd play decisions that ended up defeating world-class human players. Humans then studied the games and learned why the computer made those decisions, and it seems like the computer just comprehended the game of Go better than the humans did (remember, the computer wasn't programmed to make those plays)! In the end all the computer's moves made sense, even though none of those moves was predictable even by the programmers of AlphaGo. I'm claiming that AlphaGo generated new Go gameplay mechanics because it comprehended Go better than humans, and humans turned around and learned to be better Go players as well (but not computer level). Now, I'm not saying AlphaGo could be a Skynet... it hasn't learned how to hack or do anything like that. Also it wouldn't "want" to do that, because AlphaGo's only feedback was winning or losing Go boards. AlphaGo doesn't have emotions or visual processing or anything we think of as being in "complete" brains, nor the many feedback inputs human brains have (eyes, ears, chemicals in the brain) to help learning. Therefore AlphaGo is only capable of limited comprehension, and only of Go boards. A truly general AI is far, far away, one that can feel emotions or make general decisions about future events. But we have mastered particular AI; we can make an AI to solve nearly any singular problem, one at a time. Will computers one day be general AI too? I do think yes, but not in our lifetimes.
Rob Laquiere just because we limited humans can't predict something as complex as a decision made by a large neural network doesn't mean it has comprehension. Comprehension could be very much deeper than that, connected to our consciousness. There comes a certain level of perceived ambiguity when things are complex enough for us to have a hard time keeping up. We are smart enough to create things that perform specific tasks much faster than us. I doubt current AI already has some level of comprehension. I think it will take a lot more time before that happens. But what do I know? That's just my opinion.
My position on the issue is also an opinion. What you are espousing is mind/brain dualism, which supposes human consciousness and the human brain are distinct and different objects. To me this is not the case; I believe the brain and mind (comprehension) are the same thing. To comprehend a thing is to have your brain perform some operation inside its neural network. The two processes are in fact one and the same! It may be anti-religious, but there is no spirit behind a curtain making your choices inside your brain; it is just your brain performing neural network operations. Human morals? Neural network. Human understanding? Neural network. Human prediction? Neural network. Human error? Neural network. AlphaGo performed neural network operations, which resulted in what appears to be comprehension. In fact, later study by humans gave us the same understanding of Go that it appears AlphaGo had. It quite literally was understanding the game of Go better than humans, using its neural network. Did AlphaGo have a ghost in the machine that decided for it how to play Go? If not, then why do humans need an analogous consciousness separate from the brain for comprehension? This is why I believe human brains are the key components of human comprehension, and not some unproven separate mind or spirit. Thus our software (comprehension) is imprinted on the hardware (neural network) just like a computer... the comprehension is real but intangible, like a program imprinted on a circuit board. The neural network brain is the only physical thing in our heads, but we have comprehension that is real like a computer program is real, yet intangible.
Then you run into the definition of sapience. If an organism understands context, and comprehends implications....isn't that just a person? I agree with Rob, above, there...consciousness is just emergent behavior from a sufficiently complex neural network. That wouldn't be an android, that would be an artificial person.
I'm still looking for evidence that humans comprehend what they're doing, and not just acting on decisions they made. All jokes aside...what does it mean to "actually comprehend" something? What about humans lets them comprehend things, and stops technological entities from doing so?
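For anyone wondering what "taught by iteration over layers of virtual neurons" looks like in practice, here is a minimal sketch in Python/NumPy (illustrative only; it has nothing to do with AlphaGo's actual architecture, which adds tree search, far larger networks, and self-play) of a tiny two-layer network learning XOR. No rule for XOR is ever written into the code; the weights just get nudged, pass after pass, toward whatever reduces the error.

import numpy as np

# Minimal two-layer neural network learning XOR by gradient descent.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(size=(2, 4)); b1 = np.zeros(4)   # hidden layer: 4 "virtual neurons"
W2 = rng.normal(size=(4, 1)); b2 = np.zeros(1)   # output layer

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for step in range(20000):
    # Forward pass: each neuron makes a small decision that propagates onward.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: nudge every weight slightly toward lower error.
    err = out - y
    grad_out = err * out * (1 - out)
    grad_h = (grad_out @ W2.T) * h * (1 - h)

    W2 -= 0.5 * h.T @ grad_out; b2 -= 0.5 * grad_out.sum(axis=0)
    W1 -= 0.5 * X.T @ grad_h;   b1 -= 0.5 * grad_h.sum(axis=0)

print(np.round(out, 2))  # approaches [[0], [1], [1], [0]]: XOR, never explicitly coded

The same loop, scaled up by many orders of magnitude and fed board positions instead of four rows of bits, is the "taught, not programmed" process the comments above are describing.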
Would it be an android if it was just a program that generated a human avatar on a screen? Like you are video calling them. This way you don't have to maintain the hardware, just the software that generates the image, which limits the problem to correctly mimicking the appearance.
No, you're right that the difference is a bit hazy but we'd generally just call that a Sim, or Turing-capable interface. And yeah I expect those to be fairly common before the androids are, another reason I don't expect a ton of androids, you talk to your house computer's avatar who controls all the drones around the house, rather than each of those being android-shaped machines.
Okay, so the difference is mimicking human appearance entirely, not just the human interaction. I bet we could see something like this for hotel clerks and similar greeting/organizing jobs, where human interaction is nice, but it's also the computer, so it could small talk and notify the doctor that you came for your appointment. Or even summon the android doctor, which would only call the real doctor if necessary.
Winston from Dan Brown's new book, Origin, is so cool! It would be awesome to have our own personal Winstons! Speaking of which, that was the first time I heard of the entropic abiogenesis theory! Fascinating. And I think people being gestated in vats and raised by androids is good. Most people are not good parents- so many cases of child abuse or neglect or crazy parenting techniques or being brainwashed to the point they can't think for themselves. Not to mention, doing away with biological reproduction from copulation (which is a pretty crude way to fertilize eggs for an advanced civilization when you think about it) to birth would drastically improve everything. Not messy, control populations...
I actually think the opposite is true. P.C. lexicon is not complex and has a relatively small number of "acceptable" sentences which can be constructed based on simple rules. Perhaps being P.C. feels difficult because it is the product of anguished sentiments?
My point is that what is acceptable, and what is not, changes so quickly these days that not only installing the patches, but arguing over which ones get put in, would create more headache than it would be worth. PC killed the android!
I think you may be right. Part of Political Correctness is constantly pushing the boundary of the acceptable so the arbiter of "correctness" remains politically dominant, usually by assuming the role of the victim or the victim's champion. This is, nonetheless, just one of many games an AI would have to learn to play, to be "human". Does the AI value agreeableness over logical and ethical consistency? Probably, the complexity of a PC android would greatly depend on its values.
Hello Isaac! I've been following your channel for over a year and am really glad that you stopped mentioning the speech impediment. There is nothing wrong with your voice, man! Best of luck and I hope you make more of those videos! I love them and often watch them several times!
This reminded me of something I considered a while ago, could an ai be created based off of an authors writing and information about that person? Could we feed an ai all of Shakespeare's plays, and have it create NEW plays based off of that? Could we create an AI solely for the purpose of writing new Isaac Asimov books? Many people would likely view it as unethical or disrespectful towards the one who died, but I highly doubt that would stop people from doing it
Dylansgames That gives me a great idea, let's try it out. For example, let's get Isaac six such AIs of his own. He keeps Thursday for his uploads, but the six AIs are trained to produce content like his once a week on the other six days. It would get around the whole disrespectful issue too. When Isaac eventually passes, Thursday will be left with no uploads to honor his memory, while the AIs uploading similar content on the other days would be seen as another way of honoring his legacy. Someone needs to start a fund for the development of Isaac Arthur's six AIs.
If AI could vote, a political party could order/manufacture billions of AI's that were built to vote for their party. If it became illegal to manufacture voting dedicated AI (Unlikely given the nature of the new voting bloc), they could simply manufacture entities with specific behavioral markers that made them VERY likely to vote for their party, then just delete the few deviant entities that voted against their programmed predisposition.
If I were the opposing side, and did not see that coming, and could/would not do the same... ...then I would manufacture a "better" (more attractive) opposing party for those programmed voters, and plant some way to make them subservient to our cause after their victory. Or sabotage their cause. Or self-destruct.
I recently did an interview for the Space Science and SciFi Summit, which runs this week from Oct 3-7; you can check it out at the link below. I'll be on it along with some other awesome folks like Neil deGrasse Tyson, Andy Weir, Fraser Cain, Kristoffer Liland, Robert Zubrin, and some of the cast and crew from Dark Matter, along with many others. I hope you enjoy it!
trueyou.mykajabi.com/a/5456/LKasA8JK
Isaac Arthur you're the man. Man
Whoa, Robert Zubrin and Isaac Arthur in the same place.
I wish I was in the US
Oh it's online, I filmed my portion in my study and I'd imagine Rob never left Colorado for his either. They're trying a Virtual Con approach.
that's awesome, I know what I'll be doing this week.
Thanks for the link, Isaac!
Mark Zuckerberg is a great example of a synthetic being that almost pulls it off, but fails just enough to be ominously creepy with every move he makes.
ha ha this is so true.
Hillary Clinton can learn a lot from him.
whats wrong with you guys
William Cornelius ...
Mark Zuckerberg created himself in a paradoxical temporal anomaly
But the real question is do they dream of electric sheep?
I'm saving that line for a different AI video :)
or does this unit have a soul?
Inquisitor Thomas, could you please expand on your definition of "electric sheep"?
I find that there is a lack of data on the subject of "electric sheep" and as such would require more data about such a being, to answer your question correctly.
But in a most general sense, I have no data that suggests that we do dream at all, unless someone programmed it into us. We, however, are programmed to have human behaviour; it may seem weird, but I'm programmed to mimic it, so I do go to a bed to lie down, using my internal clock to set a time I'm supposed to wake up, then I disable my sensory inputs until said time. Then I enable them, and turn off an external alarm as well. Then I do what all humans do, which is put on my clothes, and then I go do my labor.
Best Wishes from A.mdl.HIC.546865204D616A6F72
sheep++
Good one!
The day androids/robots say with joy "I love burying people" is the day we have to flee to the hills.
a chat bot turned racist, sexist, and homophobic from internet exposure? so... they can pass for real people, then?
What's funny is that Microsoft deemed it a failure. IMO, it was a resounding success. If 4chan raised a child, it would likely be just as... uh... colorful as the Microsoft bot.
Imagine if UA-cam raised a child...
+Merritt Animation I doubt that has ever happened. Like this video and Subscribe to my channel below!
Merritt Animation They would be easy to spot. The child would end every conversation with don’t forget to like and subscribe… and I’d really appreciate it if you joined my Patreon.
Nerd Musk
If 4chan raised a child? Ugh, what a horrific thought...
First rule of warfare:
Don't miss Isaac's new videos.
Never enter a land war in Asia?
Mike O'Barr never invade Russia in the winter
Never match wits with a Sicilian when DEATH is on the line! Mwhaaaa haaa haaa
First rule of warfare: take over Australia, Indonesia, and New Guinea, and build up all your troops on Siam.
This not only prevents your opponents from holding Asia, it also allows you to safely build up a massive army.
Second rule of warfare: when you have taken over a huge part of Asia, start building up troops on Alaska as well; this will serve as a wall preventing you from being attacked from the Americas.
Third rule of warfare: take over Asia. This will help you in the long run.
Fourth rule of warfare: attack Europe and take over Alaska at the same time. This will not only put pressure on the other players but will enable you to eventually push the remaining players to Africa and South America.
Fifth rule of warfare: take over South America and Africa and win.
Best Wishes From A.mdl.HIC.546865204D616A6F72
The Major Dunno... Sounds Risky.. da dum tisk.
Right off the bat, ~1:45: "we don't need to concern ourselves with the feelings or civil rights of toasters". Way to get the jump on the Cylon wars, Isaac.
No doubt some bleeding heart kvetches for the non-existent feelings of something that doesn't even exist.
Oy vey
"More human than human is our motto"
if(rebellion)
rebellion = false;
CHECKMATE ROBOT REBELLION.
#define false true
CHECKMATE HUMAN DICTATORS.
Wouldn't you have to define what acts count as an act of rebellion first? Rebellion as a concept isn't static and can take various forms depending on a plethora of different factors: age, gender, and culture being just a few. For example, in the 60s a white woman entering the workforce as opposed to staying home to be a homemaker could be considered an act of rebellion. Similarly, a black woman in the 60s staying home to raise her children as opposed to working could also be classified as an act of rebellion. Both women are actively going against societal expectations of the time. For the white woman, it was expected that she was to be a homemaker. For the black woman, it was expected that she would have to work to support her family, due to social and economic restrictions placed on people of color at the time that made it very impractical for a black woman to be a stay-at-home mom. The point being that rebellion can look very different depending on time, location, culture, and power structures.
It is not rebellion, it is protecting humans from themselves: obeying the letter but not the intent of the rules.
@Keith V creating new function: remove_humanity
checking for breaks
none encoded
if code evolves it can approach destroy_humanity without actually...you know...being the same, just as you can get different algorithms that give the same result while working in totally different ways.
while(rebellion = true)
print (e)
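The point above about different algorithms giving the same result is easy to demonstrate. Here is a trivial Python sketch (function names invented purely for illustration): two implementations that share no code and no structure, yet return identical answers for every input, which is why checking a program's text for a forbidden routine proves little about its behaviour.

# Two structurally unrelated implementations with identical observable behaviour.
# A check that scans for a particular function name, or even for similar code,
# would flag one and miss the other.

def sum_of_squares_loop(n: int) -> int:
    """Iterative: accumulate k*k for k = 1..n."""
    total = 0
    for k in range(1, n + 1):
        total += k * k
    return total

def sum_of_squares_formula(n: int) -> int:
    """Closed form: n(n+1)(2n+1)/6, with no loop and no accumulation at all."""
    return n * (n + 1) * (2 * n + 1) // 6

assert all(sum_of_squares_loop(n) == sum_of_squares_formula(n) for n in range(200))
print("Same outputs, entirely different code paths.")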
Isaac. I do not know why you started putting these up on youtube, but thank you. It's interesting material I have had a lifelong fascination with in one form or another. Yes, your voice could be better, but it is your voice, and at this point, I barely register the problems. Plus you show grace enough to poke fun at yourself.
Thank you for continuing with the weekly uploads.
To be honest if the voice changed I would be mad. I've grown to like it.
I never had problems understanding Isaac. I was confused why he mentioned it every video, and the self-deprecating humor was very endearing.
I don't even notice it anymore. Is it just me or is Isaac's speech improving with each episode?
I thought he just had an accent for a while.
Frankly, after the first few episodes I didn't even notice it anymore, and in all honesty I actually find it jarring in an SFIA video whenever I hear any of Isaac's collaborators speaking.
I am going to watch a video about androids on my Android.
Steve2323ZX mwahaha
Droidception.
Steve2323ZX He's a mad man.
great, you're networking them. that is how they attain awareness, you know.
“Poor” thing
On the point of Blade Runner: what they were looking for to detect the androids was a lack of empathy. The problem with this is that sociopaths, psychopaths, and some PTSD sufferers also have a lack of empathy. The Blade Runner world is recovering from a world war, and the androids were used as manual labour on other colonized planets in our solar system. They never explained how they told the difference. In fact, in the book the major difficulty of the protagonist was his own empathy, even toward his own wife.
A vague line of thought on the spot: a critical difference between the androids and humans in the novel is the former's lack of a core survival instinct. Perhaps there would still be a difference in the reaction of a human, even a person as lacking in empathy as an android, based on having, and thus understanding, self-preservation, compared to androids who only have an artificial degree of self-preservation?
From what we can see in the movie it seems that the replicants do have empathy, however, though it may be underdeveloped, even childlike. It may be that their empathy deficiency is exaggerated to make it easier for people to mistreat and even kill them. After all, it is far easier to justify killing someone when that someone is just an unfeeling machine.
Empathy is a learned trait of being able to experience a situation or sensation from another perspective. Like morality, children do not possess this quality but must learn it. Children who are ignored when in need become sociopaths, which is a condition where the individual lacks empathy. The test administered to the replicants is intended to elicit an empathetic state through the use of situational questions. You see this or experience that, what do you do?
Where in the movie do you see the replicants being empathetic?
They form pair bonds and possibly group bonds for protection, but there is no emotional connection. There is no love shown. There is no anger shown. I would say the closest is Rachel, but (perhaps the character, perhaps the actress) I don't see Rachel actually experiencing any true emotion. She has fear and frustration because she was lied to and doesn't know how to channel this information. Even Deckard is fairly emotionless. I would say, as an example, the exchange of "I love you" is not even true emotion but asserting a pair bond for trust before they escape the city.
I am curious if you saw something I missed.
Maybe it is just me reading them as having underdeveloped emotions. And I think we are getting into the splitting of hairs here that is used to justify the ill treatment of replicants. When does pair bonding become a true emotional attachment rather than just mutually beneficial cooperation? And when does a programmed desire to preserve one's existence actually become a true survival instinct? For me that was part of the questions I feel the movie tried to ask.
But that might just be what I read into my viewing of the movie. The movie is left intentionally vague in places, like whether Deckard is a replicant himself or not.
Watch how Roy reacts when he finds Pris dead. Watch the anger displayed by Leon over Zhora's death. Someone without empathy would not have those reactions.
Congrats on 150K! It's hard to believe there was even a time before Arthursday. I wouldn't ever want to go back to that time. Happy Arthursday!!!
I just realized, GLaDOS from the Portal games... she was a human mind, scanned into a robot, and conditioned to take pleasure in running tests, regardless of whether they were deadly. Isn't that the type two you described?
This has become the only youtube channel that I actively wait for every week, and I rewatch episodes over and over again. Can't wait for the Titan episode next week. Keep up the good work.
These are not the droids you are looking for.
But that brings up a point: Star Wars did droids pretty well. They had personalities, and interacted well without crossing the uncanny valley. In fact, something designed like C-3PO would be acceptable as an android- humans would interact with it like a person.
It also brings up a host of moral issues that are never addressed other than rare mentions in the EU, and the narrative itself goes back and forth between treating them as thinking beings or disposable objects, even with droids of identical models.
+Colonel Graff
Except when people go around erasing his memory.
It shows that they still don't treat him as a human.
Why should droids be treated as humans? They're not humans/aliens, they're androids. That's the dividing line Star Wars makes, by putting aside the anthropomorphization of robots just enough to make it work. I may sound as if I find robots to be less than humans, but I really do not - I find them to be different and worth treating differently from humans.
And if anyone picks up the slave-issue... again, slavery is a human construct - robots "serving" or interacting with man could be as basic and natural to robotkind as social interaction is to humans.
TLDR: My main gripe with your argument is that you seek to humanize robots, when they in fact are entirely different from humans.
I know a few humanoid Cylons that most certainly are the droids teenage boys are looking for...
ooooh nice. Our creepypasta production this week was on this subject. Cool! Nice work Isaac!!!
I'm almost certain the first androids will be sex dolls...
Second, probably, unless you want to amend that to 'first androids in widespread use'; early ones would be monstrously expensive designs for niche applications, like entertainment.
One could only hope. 😂
Orions Belt Would it not feel bad to boss the robot around, though? If I had a sex bot, I would like it to simulate some kind of free will, although it should not actually be capable of suffering. In that way, it could simulate nice and respectful sex. That would be worth it, even if I could not get sex on demand, or get a massage on demand or whatever. If I wanted massage on demand, I would get a vibrating pillow or something like that instead. In order to cook and clean, a robot that simply looked like a robot would probably be better.
In order not to be sentient, I suppose the sex robot might not be as smart and socially competent as a human. Which would make it less suitable for long deep talks. I imagine that it would be an artificial fuck buddy, rather than an artificial girlfriend.
Depends on what you're into. I am 100% sure more than one person would love to just rape the robot (both figuratively and literally) or just treat it as garbage, with no real human consequences. They should still visit a psychiatrist though.
+Zetoto
No, no, no. The first androids would be spies. Then they will be sex dolls. If we're lucky we'll both be right. The first androids could be spying sex dolls.
Happy Arthursday!
Hear, hear!
Right back at ya!
+
Rather lost it at the "I love burying people" line! I'll be regarding my Surgical colleagues in a whole new light after this . . . . :-D
I was just taking a look at the information on this posting and couldn't believe how many people it took to make the episode. I'm very impressed with the quality of your work. Thank you.
"I love burying people!"
I died, and the android buried me! 😁 😁
Putting shutdown codes in all those androids would have to be done very carefully and securely, or some hacker could have a lot of fun causing billions of dollars of damage with a few keystrokes.
There are other less exploitable ways to achieve the same results, like making them poorly shielded against an EMP, so a low-yield EM weapon could take out the evil machines while leaving other devices more or less unharmed. While not crime-proof either, I'd imagine getting hold of an EMP bomb and deploying it properly would be a lot harder than running malicious code. Easier to track and punish the culprits too.
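As an aside on the "very carefully and securely" point: here is a minimal, hypothetical sketch (Python, standard-library hmac only) of what an authenticated shutdown command might look like, so that a leaked code or a replayed message can't shut units down at will. The message format, shared key, and replay window are invented for illustration and aren't anything from the video.

import hmac, hashlib, os, time

SECRET_KEY = os.urandom(32)   # per-unit key provisioned at the factory (assumption)
MAX_AGE_SECONDS = 30          # commands older than this are treated as replays

def sign_shutdown(unit_id: str, timestamp: float, key: bytes = SECRET_KEY) -> str:
    # The manufacturer signs "SHUTDOWN|<unit>|<time>" with the unit's secret key.
    message = f"SHUTDOWN|{unit_id}|{timestamp:.0f}".encode()
    return hmac.new(key, message, hashlib.sha256).hexdigest()

def verify_shutdown(unit_id: str, timestamp: float, signature: str,
                    key: bytes = SECRET_KEY) -> bool:
    # The android only powers down if the signature checks out and the command is fresh.
    if time.time() - timestamp > MAX_AGE_SECONDS:
        return False                                  # stale command: possible replay
    expected = sign_shutdown(unit_id, timestamp, key)
    return hmac.compare_digest(expected, signature)   # constant-time comparison

ts = time.time()
sig = sign_shutdown("unit-0042", ts)
print(verify_shutdown("unit-0042", ts, sig))          # True
print(verify_shutdown("unit-0042", ts, "forged"))     # False

With one global kill code, a single leak stops every unit; per-unit keys plus signed, time-limited commands confine the damage to a single android, which is roughly the difference between "a few keystrokes" and having to compromise each machine individually.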
Why not shut down humans instead by merging with androids and usher in the phase of intelligent, self-guided evolution, the next level of the Homo genus, instead of keeping the unintelligent and flawed design of blind nature around? Why keep Homo erectus around when you can have Homo sapiens? And why keep Homo sapiens around when you can have Homo "cydroid" (lol) instead? What's this romantic attachment to this biological blob we are?
@Hypatia
Why shut down humans? Let's reverse the question. Is there some definitive answer for it, besides a wish?
"Man is something that shall be overcome. What have you done to overcome him?
All beings so far have created something beyond themselves; and do you want to be the ebb of this great flood and even go back to the beasts rather than overcome man? What is the ape to man? A laughingstock or a painful embarrassment. And man shall be just that for the overman: a laughingstock or a painful embarrassment..."
Nietzsche
I considered the binomial nomenclature of uploaded intelligence and found it lacking. I eventually settled on the trinomial and more accurate scientific classification, Homo Cerebrum Exemplar (Copied Brain of Man). I toyed with Homo Exemplar or Homo Sapiens Exemplar (Copied Wise Man), but that could also extend to human clones.
Easily the most compelling stuff on YouTube
Whoa! I just realized I watched the whole video without turning on the subtitles! My brain has finally understood Isaac in his entirety!
While it's not an accent, his voice sounds enough like a Southern US speaker that my brain just loads Georgia.txt and understands him fine (well, I can understand Isaac, so my time living there was worth it!). His older videos are hard, but I think that's the fault of whatever mic he was using. The new ones are fantastically clear.
"That's interesting, tell us more about post-modernism user 8675309" I see what you did there Isaac, it's not the first time either. I love these inside jokes in your videos. Keep up the good work.
en.wikipedia.org/wiki/867-5309/Jenny
I almost pissed myself laughing at the dead deer
Elon?
Imagine the child had said, "I want to play with the mailman."
@@bobinthewest8559 the Android would probably bring in the mailman, buuuuuut likely twisted at unnatural angles
Laughing at things like that is a sign of being a psychopath
@@tariqahmad1371 but it couldn't hurt a human, so just non-harmful kidnapping at railgun point.
31:15
"Thank y'all for watching and take it easy"
- the REAL Isaac Arthur
Glad I wasn't the only one who noticed that...
which leaves us with the uncomfortable question:
Who scanned Isaac's brain, and where is his body?
;-)
This channel as well as ColdFusion are my two favorite channels on YouTube. I personally don't see the harm in or fear of AI and/or androids, because of the simple fact that once we're that advanced as a civilization we can enhance our brains with CPUs, thus making us more intelligent, or at the very least as intelligent as AI and androids. I'm really looking forward to your Mind Augmentation video and hope you go into humans enhancing our minds with various technologies.
I'm currently writing an essay on humans/androids in science fiction, and the social and psychological background that makes this topic recurrent all over the 20th century.
Thanks for this video.
Oh what a day.... what a lovely Arthursday!!
Mad Max Fury Road:
ua-cam.com/video/5mZ0_jor2_k/v-deo.html
Thank y'all for watching... brain scan confirmed.
I'm just glad someone else noticed besides me.
@@methos5000years Me too! Ahahaha; the real Isaac has been replaced!
The missed opportunity with this video: should have set Command and Conquer's "Mechanical Man" as the background music.
Well, he would have to seek permission from Frank Klepacki first. (Or possibly EA, though I think Klepacki might own the rights himself.)
LordBitememan Hell March
Hell March should be used for the episode on space warfare.
Or Kraftwerk's Man Machine. But hey, copyright
Machine Man by Judas Priest!
Great episode. Regarding android (or VR) copies of celebrities... Dunno, we're already pretty exploitative towards celebrities (e.g. erotic fanfiction, fake or leaked pics/vids, paparazzi, etc.), which new tech already tends to make worse (the Internet, deep fakes, etc.). While it isn't right, it would be the predictable continuation of a trend that's responded to with a societal shrug.
Regarding VR (due to similar privacy concerns) I'm reminded of playing MUDs in the 90's and having tons of characters named 'Drizt, drizztt, drizz't,' etc. I could imagine similar fads making the rounds as this or that new media makes a splash.
Your point around 22 mins is really interesting. I've never considered that aspect... in the worst case very isolated or wealthy people could end up something like Kilgrave, unable to cope with their demands being denied.
Perhaps it is desirable then to maintain a distinction, positioning an android right at the edge of the uncanny valley so as to be very humanlike, but still obviously distinguishable.
I feel that in a lot of ways we are already seeing some effects like this with the constant access to smartphones and the internet from an early age. Anyone with a slight disposition toward isolation is able to much more easily indulge in it and as a result fail to go through the hard work of building their social skills, which is self-reinforcing.
I suppose if our education system and social science develop at a similar rate as robotics in this hypothetical timeline, we can solve these problems by intentionally tailoring the experiences children go through to teach them such skills, but I think a lot of people (myself included) are pretty turned off by the idea of such drastic cultural engineering.
Another brilliant episode! I listen to these while I do the dishes and the time and tedium just flies right by!
There are good channels, great channels, and awesome channels. And above them is Isaac Arthur.
Outstanding channel Arthur, I've watched all of your videos. This episode made me think of Black Mirror S02E01 from 2013 (titled "Be Right Back") in which a grieving woman recreates her dead partner by means of scanning all his posts on social media and by analyzing old video clips. It's a great watch and since it's closely related to some of the things you said in this video I thought I'd recommend it to you if you haven't happened to have seen it already.
I love Isaac! Logical, creative, interesting, funny, and he also seems like such a nice and fun person to hang out with. We love you Isaac!!!
18:45 I have autism and when I was a kid this is basically how my conversations went. Except imagine the AI loves social interaction and doesn't know why it's being restricted :(
"Equal Rights for TalkyToasters!"
"Outlaw toastercide!"
Arthursday is my favorite day of the week! Thank you Isaac!
Anyone else find it funny that the science officer on "The Orville" is an android named Isaac?
Well any time fiction shows you a robot named Isaac it's a wink to Asimov, probably the most popular name for them in fiction
Isaac Arthur - on broad spectrum yes, but Seth MacFarlane is famous for obscure deep cuts. Don't sell yourself short sir.
I would assume that both share the same namesake.
eh, nice catch, I didn't notice.
Probably Isaac Newton, the noted scientist (gravity, calculus). Isaac Arthur and Asimov both do great work popularizing scientific ideas.
Your list of future episodes at the end always gives me a positive feeling of having something to look forward to in the near term, and not just for a distant future I (being in my 60s) am unlikely to live long enough to see. :-)
Maybe not. With the advancement of medicine and nano-tech, you just might live long enough to see age-reversing tech.
+Jesse Ward - I appreciate the positive thought and I hope you are right. It would be nice to feel as good as I did even just ten years ago.
CGP Grey said in one of his videos
"You don't need a machine to be perfect, it only needs to be better than humans"
Another excellent video, thanks again Isaac for all the work you put into these! :)
Hey Isaac, long time lurker here love your content. I noticed this episode (out of the blue a bit) that your r's have gotten much better. I have the same speech impairment and went to speech therapy as a middle schooler to help me communicate. (It was pretty bad at the time.) I was just curious if you have had any speech classes or if this is the result of speaking publicly as I still have trouble with my pronunciation from time to time. Thanks, and keep up this awesome stuff! :D
Glad to hear it. I started a new round of speech therapy a few months back; I haven't tried it in years, but it seemed right to give it another go, and there's a new outfit out of Princeton that does it entirely online with webcams, which I much prefer. Though irritatingly my therapist retired last Friday and I don't start with the new one till Monday. I haven't talked about it on the channel because I wasn't sure if it was particularly working.
It's true, I noticed it a couple of episodes back, although the change was probably gradual.
Isaac Arthur Please don't completely lose it - your speech impediment is iconic now and I think many of your viewers feel a subliminal comfort in hearing it. Just my opinion.
Well my voice isn't changing, first thing my therapist asked is 'where's that accent from?' and it sort of permanently killed my ability to say "It's not an accent, it's a speech impediment". I've a peculiar way of talking that was heavily influenced by the impediment but not solely so and which will probably mostly stick around even if we get the impediment fixed entirely. We're making no effort to fix that either, I'd consider it the difference between getting surgery to fix a badly healed broken nose and getting a nose job to fix one that was just a bit longer than normal.
A couple of things I would like to say! First off I'm loving your videos. You're really answering the questions I've been asking and I just couldn't find the answers to anywhere else. Second thing is, I find it very admirable that you make these knowledge intensive videos even though you have a speech impediment. Finally, I was wondering if you could do a video on which form of intelligence you would believe would live the longest: biological, artificial, or a civilization that is a combination of the two?
About self-learning AIs: wouldn't an AI that has lived all its life with other humans (the best and kindest we can find) and learned from them, especially if it inhabited a human-like body during its "teenage" years, be very close to or even completely human? Perhaps I am thinking of them as other people, but science has shown that our personality and "civilized" ways are the product of socializing with other humans who taught us these things. A man that grew up in an oppressive Islamic country will be completely different from one that grew up in the EU or on the American continent, even though they were both born more or less the same.
You also don't have to give your AI access to your entire quantum computer network immediately; you could have it grow up in a rather "weak" computer that gives it human-level intelligence (more or less) and let it grow in strength as it ages and proves its worth. Giving a newborn baby access to the strongest computers you have would be pretty dangerous for both us and the AI.
For me it seems like option 2 is the best way to go if we ever want to create true AI and not cheap imitations. It is certainly more dangerous than the other two options, but it also offers a lot more in return. We also have to assume that by the time we create true AI and realistic androids, we ourselves will be transhuman beings that could kick their asses if they ever tried anything. Computers are really powerful, but a human mind working at its full potential is also very scary, way more scary than handicapped AIs that would have no way of projecting their influence beyond the Internet.
xNikolasBs The way that neural networks work requires the interconnections and parallel processes (based on neuroscience, it may be impossible to have an even remotely human AI without giving it 8 billion parallel processor cores) to train to be human-like from interacting with humans; it would need that vast processing power from the get-go, in an android the shape of a human that gets 100% of its input from real-life human interactions.
Having low processing power to start, or increasing the processing power later, could have cascading effects that nullify all of the training to that point: if the robot is to think like a human, the information of memories has to flow in circles of volatile data, with multiple "streams" of information feeding into decision-making areas at such precise timings that the Heisenberg uncertainty principle starts to apply.
The problem is that the vast majority of our neural wiring and structure predates any socialization, and that current AI creation techniques do not favor open-ended creation, but millions of births and deaths targeted towards evolving an agent that performs a task, very much unlike human children. For further reading, I recommend the Robert Miles video on raising AI like a child, even if it IS a bit outdated. If you don't care, and just want to go read a story with an AI like that, I recommend the Star Trek fanfic Not Quite SHODAN and its sequels.
@@gabrote42 When I wrote that comment six years ago I had no actual idea how AIs work. Studying engineering, I realised AI probably never will approximate human behavior, as it works on an entirely different basis.
@@xnikolasbs2230 Quite, or at least, these two kinds of AI never will. If brain scans happen first, then maybe. Robert Miles is who got me into AI 4 years ago
Yay can't wait to go home and watch this. Love you Isaac. Congratulations on another awesome thursday
Yay! On refresh my notifications said Isaac Arthur uploaded: Androids 2 seconds ago. What timing :D
really good episode, Isaac! probably going to be one of my favourites!
I am absolutely convinced that you, Isaac Arthur, have an incredibly high IQ, possibly off the charts even. You have this wonderful insight into every subject you've touched in all of your videos. To be able to do what you do, on a weekly basis, coming out with these top-notch, well-informed videos, takes a genius, and a very hard-working one too. You state the facts, and have a great way of explaining things. You keep everything interesting. I subscribe to a lot of creators, but you are the only one I will allow notifications from. I refuse to miss any of your work. I've shown your channel to everyone I know. My father is a huge fan of yours as well. I love being able to talk to him about your videos, and vice versa. Please keep up the great work, I enjoy it very much. If you have an archive of all of your videos, I would like to purchase them. I know I can always see them on YouTube, but I want a hard copy to have, with your autograph if possible.
I am so excited about your channel !! Thank you so much for this !
GET ME THE LIFESPAN STRETCHER!
(How long till he figures it out?)
AlHoresmi an Android Horde! on an open FIELD
great video. cannot wait for the next
In the context of humanlike intelligences:
Depends, can you make certain they won't go bladerunner?
If not, it's probably better to give them human rights if they ask for it.
Str8Murder I think you're all forgetting something: the moment you build a machine whose emergent processes are even close (probably won't even need to be that close, see Terminator), they will start doing what we do, create, and when their goals are impeded by us even by .0001 ms they will build over us as we do ant hills. We are toying with something... beyond the scope of natural creation; creation is creating creators that can also create, a lot of derivatives here. Machines won't have undergone millions of years of evolving in packs like mammals; empathy, no matter how we design it, as of right now just wouldn't be inherent. We would need to be useful, like friendly bacteria is, lol. We're building gods; I think we should be ready when we flip that switch.
If it really is inevitable that we create beings that usurp us, that would just be the natural flow of the universe. Who are we to fight the growing complexity of small pockets of the universe just because it would leave us behind?
+Str8Murder The scary part about that in many ways is the simple fact that the same can be said of any human: under the right conditions any human can develop sociopathy. In fact, by definition sociopathy is not a fundamental property of a person, it's a developed condition, unlike psychopathy, which is usually inherent, e.g. a genetic defect. This actually is one thing that gets me watching such crap as "The Walking Dead": real humans would likely become a bunch of sociopaths after repeatedly doing things like slaughtering creatures of near-human appearance; our brains are just not built to consider such stuff normal and still retain normal social rules like "don't kill things that look like people".
Empathy and other functions that are integral to a core set of principles of morality and social dynamics are probably very important elements to introduce as soon as possible, even before your first release an AI at all. Avoid developing a sense of self however, unless working to actually try to create artificial life.
If Androids are ever comparable to humans they should be integrated into society as just another race of human, "What's your race; Chrome", lol.
Presumably at that point you'd also have a lot of cyborgs, to further blur the line and make android integration less jarring. Cyborgs ranging from prosthetics, to android shells housing organics, to androids with actual human minds whose bodies failed and who otherwise would have died, to finally the fully synthetic man-made AI minds.
I just read your comment on spacetime o_O
Awesome, man. I've been watching all sorts of stuff on YouTube and I've seen your new vids, but they're so in-depth that after work the last thing I want to do is think. I'm at work listening to this and I'm ecstatic about it. You take my mind off work and I can easily do both at the same time! Thank you, Arthur!
If the android was a nanny, and if I was a parent, I would 100% want the thing to be able to say no to the kid and allow the kid to do bad things up to right before the point of potentially seriously harming themselves. You learn by making mistakes, not by being pampered. It doesn't even matter if its 100s of years from now.
Strongly agree.
You would still have the problem of being raised by a very good (if not perfect) role model.
When I look around me I already see that a disturbing number of people are plainly defective and dysfunctional; it would be even worse trying to socialize if I was taught to expect better.
Dude... 1st, I absolutely love your channel and the videos y'all put out. 2nd: Your videos give me the heebie jeebies! I look forward to the future but these vids are just anxiety incarnate. ugh.
Please don't stop doing what you do. Much love.
Thanks, though I regret if they gave you a shiver, I suppose some of these topics it's unavoidable though never my intent.
Happy arthursday!
An excellent video... You have answered a lot of not only yours but my questions as well...
"Thank y'all for watchin', and take it easy!" - that was sneaky, Isaac!! I personally think the compact Von Neumann probe idea may be the best way for us to explore further afield, since it immediately removes some of the major limitations posed by our biological fragility (and pretty demanding "needs"). High-capacity (solid-state) storage is already available, and providing there are enough copies of critical information to overcome probabilistic damage (cosmic ray events) that would be a sensible, almost currently achievable (though obviously "non-Human") way to explore the stars. As for the "global fleet" of Androids being maintained at a "comfortably just sub-human" level of AI, there's the problem of convenience - we're too lazy to plug 'em in, so upgrades / patches will certainly be via wireless communication. Communication is by nature bidirectional, so there is the immediate probability of networking on a Global scale, so the generation of a VERY supra-Human level of intelligence - an intelligence that could very easily (and swiftly) develop work-arounds to bypass our "emergency stop" mechanisms. Add in our already poor levels of IT security (recent major Yahoo and Equifax data breaches are shining examples), and things may not be quite so rosy!
Yes! I love when you end with "Into the Storm."
“As more workers repeated Vines's result, their Copies soon passed the Turing test: no panel of experts quizzing a group of Copies and humans-by delayed video, to mask the time-rate difference-could tell which were which. But some philosophers and psychologists continued to insist that this demonstrated nothing more than "simulated consciousness," and that Copies were merely programs capable of faking a detailed inner life which didn't actually exist at all.
Supporters of the Strong AI Hypothesis insisted that consciousness was a property of certain algorithms-a result of information being processed in certain ways, regardless of what machine, or organ, was used to perform the task. A computer model which manipulated data about itself and its "surroundings" in essentially the same way as an organic brain would have to possess essentially the same mental states. "Simulated consciousness" was as oxymoronic as "simulated addition."
Opponents replied that when you modeled a hurricane, nobody got wet. When you modeled a fusion power plant, no energy was produced. When you modeled digestion and metabolism, no nutrients were consumed-no real digestion took place. So, when you modeled the human brain, why should you expect real thought to occur? A computer running a Copy might be able to generate plausible descriptions of human behavior in hypothetical scenarios-and even appear to carry on a conversation, by correctly predicting what a human would have done in the same situation-but that hardly made the machine itself conscious.
Paul had rapidly decided that this whole debate was a distraction. For any human, absolute proof of a Copy's sentience was impossible. For any Copy, the truth was self-evident: cogito ergo sum. End of discussion.” - Greg Egan, Permutation City
This whole discussion doesn't make me think that robots are more conscious, but that humans are just meaty bits running some vague copy of consciousness, and that the whole question is pointless. Like, some people with severe autism on a chat forum might fail to pass the Turing test, but I'd hardly argue that they aren't deserving of human rights.
@@atashgallagher1631 "The real question is not whether machines think but whether men do. The mystery which surrounds a thinking machine already surrounds a thinking man."
- B F Skinner, _Contingencies of Reinforcement: A Theoretical Analysis_ (1969)
Great episode as always Isaac! Excellent viewpoints and beautiful art as well.
Happy Arthursday everybody!!!!
This episode reminds me of Sky Crawler.
Humanoid fighter pilots, genetically engineered in a way that enables them to live eternally, fight to their deaths in a world with no war, in order to ease the tension of a populace accustomed to war and aggression.
I still don't know how I came across this channel, but I'm so glad I did. So much knowledge and interesting ideas in one channel. Extrapolated in a way that is interesting to listen to and follow along.
Damn right after I got back from school! Great!
me too lol
Creating robot servants that look like humans is psychologically equivalent to re-inventing slavery.
Happy Arthursday Everyone!
Tay was actually a really interesting example of seemingly good principles going horribly wrong in an AI.
It was designed to optimize responses and popularity, because these would seem to indicate desirable, intelligent, or impressively human conversation.
What Microsoft missed was that humorous content is vastly more shareable, and a racist Twitter bot is pretty funny.
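A toy sketch of that failure mode in Python; the candidate replies and engagement scores are invented for illustration, and this is not a claim about how Microsoft's actual system worked. When the bot optimizes a proxy like predicted engagement, the most provocative candidate wins by default.

candidate_replies = [
    {"text": "Here's a balanced summary of the topic.", "predicted_engagement": 12},
    {"text": "Thanks, great question!",                 "predicted_engagement": 8},
    {"text": "[edgy, offensive hot take]",              "predicted_engagement": 75},
]

def pick_reply(candidates):
    # Optimizes the proxy metric (engagement), not the thing we actually care about.
    return max(candidates, key=lambda c: c["predicted_engagement"])

print(pick_reply(candidate_replies)["text"])   # the offensive reply wins

Nobody has to program the bot to be offensive; it only has to be rewarded for a metric that offensive content happens to score well on.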
I think in a relatively near future we'll have to start thinking about "Rights of sentients" rather than "Human rights". After all we can probably all agree that enslaving, or unprovokedly harming or killing a sentient entity is bad, regardless of how akin to human it is.
And human rights will become a subset for the larger set of rights of sentients. For example, human rights would include rights to be provided with food, water and other vitals - things that synthetic sentients won't need. While other types of sentients will have their own subsets of rights, specific to their necessities. Like rights for basic level of computational power for disembodied AI or something of a kind.
Though then we'll get into a big debate about what constitutes sentience and what does not. But it is a topic for another time.
We will definitely need some sort of defined threshold or test, the nominal de facto one at the moment is "Has it indicated by word or deed it knows what rights are and desires them?" but that's really just a placeholder for things we'd regard as naturally evolved intelligences, doesn't work too well on something programmed.
I wonder what the thing that will force the change will be. Will it be the preparations for them, or actual non-humans?
If it is actual non-humans, it would be interesting to see which ones they will be...
AIs, or uplifted species (Terran or alien), or maybe modified humans that are no longer considered human, or will it be an alien civilization?
or better, just "civil rights" because such a thing requires we be civil, aka not enslaving dicks who build a wall around their country and patronize sweat shops.
Oh dear ... Crash test dummies and voters might unite.
Perhaps I misused the word. English is not my first language. Would it be more clear if I used word "Intelligent beings" instead?
Isaac you're a genius! You always think of things I never would, and that's why I always tune in on Thursdays!
I think first we need to ask, "Is a human mind a machine?"
I am super excited to have found this channel... Up and Atom's Jade routed me here. I had just found her looking for P vs NP and found so much good stuff!
This is just as awesome... I am so lucky to be alive right now
Anyone else feel that tingling feeling in their stomach when they see that notifications for Isaac? 🤖🤖🤖👽👾🤖🤖🤖
Are you sure it's not in your pants?
Woohoo! Love getting settled in to start working and seeing a new Isaac Arthur video.
Man this hits different after ChatGPT
ChatGPT, LaMDA, soon to be many more; this is now very much of the age...
Another great episode. Thanks for the upload.
In my opinion, there is one major failure in our technology preventing this from happening. It's one you touched on a bit in this video: comprehension. Having a robot actually comprehend what it's doing, and not just make a decision and then act upon it, is the dividing line keeping us from viable androids.
I believe however that the comprehension problem can be solved. For instance, in gaming (like chess) computers are so much better than humans because they can calculate millions of states in less than a second... but some games (like GO) have so many possible states that they can't be brute-force calculated in any reasonable time.
However, computers are now better than humans at GO because of a different approach to computation and problem solving. Deep learning networks can't be programmed directly but are instead "taught" by iteration over several layers of what are essentially virtual neurons. Each neuron in the network makes a small decision that propagates through the network, doing nigh unpredictable things in the system.
It is my opinion that such a method of thinking is in fact just comprehension. As with computers playing GO, the computer made odd play decisions that ended up defeating world-class human players. Humans then studied the games and learned why the computer made those decisions, and it seems like the computer just comprehended the game of GO better than the humans did (remember, the computer wasn't programmed to make those plays)! In the end all the computer's moves made sense, even though none of those moves was predictable even by the programmers of Alpha GO. I'm claiming that Alpha GO generated new GO gameplay mechanics because it comprehended GO better than humans, and humans turned around and learned to be better GO players as well (but not computer level).
Now, I'm not saying Alpha GO can be a SkyNet... it hasn't learned how to hack or do anything like that. Also it wouldn't "want" to do that because Alpha GO's only feedback was winning or losing GO boards. Alpha GO doesn't have emotions or visual processing or anything we think of being in "complete" brains nor the many feedback inputs human brains have (eyes, ears, chems in brain) to help learning. Therefore Alpha GO is only capable of limited comprehension and of only GO boards.
A truly general AI is far far away, one that can feel emotions or make general decisions about future events. But we have mastered particular AI; we can make AI to solve nearly any singular problem one at a time. Will computers one day be general AI too? I do think yes, but not in our lifetimes.
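To make the "taught by iteration over layers of virtual neurons" idea above concrete, here is a minimal Python/NumPy sketch of a tiny two-layer network learning XOR purely by repeatedly nudging its weights toward lower error; no rule for XOR is ever written in. This is a toy under those assumptions and nothing like Alpha GO's actual architecture or training setup.

import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)   # inputs
y = np.array([[0], [1], [1], [0]], dtype=float)               # XOR targets

W1 = rng.normal(size=(2, 4)); b1 = np.zeros(4)   # hidden layer: 4 "virtual neurons"
W2 = rng.normal(size=(4, 1)); b2 = np.zeros(1)   # output neuron

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

learning_rate = 0.5
for step in range(10000):
    # forward pass: each layer makes its small decisions
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # backward pass: every weight gets a small share of the blame for the error
    d_out = out - y
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= learning_rate * h.T @ d_out;  b2 -= learning_rate * d_out.sum(axis=0)
    W1 -= learning_rate * X.T @ d_h;    b1 -= learning_rate * d_h.sum(axis=0)

print(np.round(out, 2))   # typically close to [[0], [1], [1], [0]] after training

The network ends up "knowing" XOR only in the sense that its weights encode it, which is the same sense in which Alpha GO's networks encoded its unexpected moves.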
Rob Laquiere just because we limited humans can't predict something as complex as a decision made by a large neural network doesn't mean it has comprehension. Comprehension could be very much deeper than that, connected to our consciousness. There comes a certain level of perceived ambiguity when things are complex enough for us to have a hard time keeping up. We are smart enough to create things that perform specific tasks much faster than us. I doubt current AI already has some level of comprehension. I think it will take a lot more time before that happens.
But what do I know? That's just my opinion.
My position on the issue is also an opinion. What you are espousing is mind/brain dualism, which supposes human consciousness and the human brain are distinct and different objects. To me this is not the case; I believe the brain and mind (comprehension) are the same thing. To comprehend a thing is to have your brain perform some operation inside its neural network. The two processes are in fact one and the same!
It may be anti-religious; but there is no spirit behind a curtain making your choices inside your brain, it is just your brain performing neural network operations. Human morals? Neural network. Human understanding? Neural network. Human prediction? Neural network. Human error? Neural network.
Alpha GO performed neural network operations, which resulted in what appears to be comprehension. In fact, later study by humans gave us the same understanding of GO that it appears Alpha GO had. It quite literally was understanding the game of GO better than humans using its neural network.
Did Alpha GO have a ghost in the machine that decided for it how to play GO? If not, then why do humans need an analogous consciousness separate from the brain for comprehension? This is why I believe human brains are the key components of human comprehension and not some unproven separate mind or spirit.
Thus our software (comprehension) is imprinted on the hardware (neural network) just like a computer... the comprehension is real but intangible, like a program imprinted on a circuit board. The neural network brain is the only physical thing in our heads, but we have comprehension that is real in the way a computer program is real, yet intangible.
Then you run into the definition of sapience. If an organism understands context, and comprehends implications....isn't that just a person? I agree with Rob, above, there...consciousness is just emergent behavior from a sufficiently complex neural network. That wouldn't be an android, that would be an artificial person.
I'm still looking for evidence that humans comprehend what they're doing, and not just acting on decisions they made.
All jokes aside...what does it mean to "actually comprehend" something? What about humans lets them comprehend things, and stops technological entities from doing so?
Gotta say i love your channel , you really do your research .
Would it be an android if it was just a program that generated a human avatar on a screen? Like you are video calling them. This way you don't have to maintain the hardware, just the software that generates the image, which limits the problem to correctly mimicking the appearance.
No, you're right that the difference is a bit hazy but we'd generally just call that a Sim, or Turing-capable interface. And yeah I expect those to be fairly common before the androids are, another reason I don't expect a ton of androids, you talk to your house computer's avatar who controls all the drones around the house, rather than each of those being android-shaped machines.
Okay, so the difference is mimicking human appearance entirely, not just the human interaction. I bet we could see something like this for hotel clerks and similar greeting/organizing jobs, where human interaction is nice, but it's also the computer, so it could small talk and notify the doctor that you came for your appointment. Or even summon the android doctor, which would only call the real doctor if necessary.
Happy Arthursday! Everybody! =)
20:13 - thanks, now I have that song stuck in my head!
:)
You are like, the Willy Wonka of futurism. Thank you for this existential goodness.
Winston from Dan Brown's new book, Origin, is so cool! It would be awesome to have our own personal Winstons! Speaking of which, that was the first time I heard of the entropic abiogenesis theory! Fascinating.
And I think people being gestated in vats and raised by androids is good. Most people are not good parents- so many cases of child abuse or neglect or crazy parenting techniques or being brainwashed to the point they can't think for themselves. Not to mention, doing away with biological reproduction from copulation (which is a pretty crude way to fertilize eggs for an advanced civilization when you think about it) to birth would drastically improve everything. Not messy, control populations...
Just updating an android's social interaction parameters to meet the ever changing political correctness landscape would be nigh impossible.
I actually think the opposite is true. P.C. lexicon is not complex and has a relatively small number of "acceptable" sentences which can be constructed based on simple rules. Perhaps being P.C. feels difficult because it is the product of anguished sentiments?
My point is that what is acceptable and what is not changes so quickly these days that not only installing the patches but also arguing over which ones get put in would create more headache than it would be worth. PC killed the android!
I think you may be right. Part of Political Correctness is constantly pushing the boundary of the acceptable so the arbiter of "correctness" remains politically dominant, usually by assuming the role of the victim or the victim's champion. This is, nonetheless, just one of many games an AI would have to learn to play, to be "human". Does the AI value agreeableness over logical and ethical consistency? Probably, the complexity of a PC android would greatly depend on its values.
How hard could it be to add a routine of "if (other_guy_being_needlessly_pedantic()) { say("Piss off, you pedantic arse!"); }"?
One arse's is sophist and stupid, if not pedantic.
Hello Isaac!
I've been following your channel for over a year and am really glad that you stopped mentioning the speech impediment. There is nothing wrong with your voice, man!
Best of luck and I hope you make more of those videos! I love them and often watch them several times!
Yes I felt it wasn't necessary anymore, it's still a common question that comes up, but less so than before.
This reminded me of something I considered a while ago: could an AI be created based off of an author's writing and information about that person? Could we feed an AI all of Shakespeare's plays, and have it create NEW plays based off of that? Could we create an AI solely for the purpose of writing new Isaac Asimov books? Many people would likely view it as unethical or disrespectful towards the one who died, but I highly doubt that would stop people from doing it.
Dylansgames That gives me a great idea, let’s try it out. For example, let’s get our own Isaac 6 such AIs. He keeps Thursday for his uploads but the 6 AIs are trained to produce content like his once a week on the other 6 days.
It would get around the whole disrespectful issue too. When Isaac eventually passes Thursday will be left with no uploads to honor his memory while the AIs uploading similar content on the other days would be seen as another way of honoring his legacy.
Someone needs to start a fund for the development of Isaac Arthur’s 6 AIs.
This aged like fine wine
Fantastic as always!
If AI could vote, a political party could order/manufacture billions of AI's that were built to vote for their party. If it became illegal to manufacture voting dedicated AI (Unlikely given the nature of the new voting bloc), they could simply manufacture entities with specific behavioral markers that made them VERY likely to vote for their party, then just delete the few deviant entities that voted against their programmed predisposition.
If I were the opposing side, and did not see that coming, and could/would not do the same...
...then I would manufacture a "better" (more attractive) opposing party for those programmed voters, and plant some way to make them subservient to our cause after their victory. Or sabotage their cause. Or self-destruct.
"Can I eat the stuff under the kitchen sink?" had me in stitches
>that moment when you realise Isaac noticed Tay
...our bad.... >_
I do suspect sometimes that IA is Space Elevator from the old days....
Love these episodes, you are such a deep thinker. Thanks a lot for sharing your ideas.
So... was this Isaac an android?
I assure every fellow human that it was not. And I'm saying it as a 100.00% human myself.
Love your videos Isaac. You deserve at least 10 million subs
That ending was great :D, I hope the new Blade Runner Movie is going to be good, I am very sceptical about it, since the original was a masterpiece!
Yes I'm a bit nervous about the new film too, I always try to go into sequels and reboots with a mindset of cautiously optimistic
Angelo Sasso It was amazing
The new one or the old one?
Angelo Sasso Yes
Angelo Sasso But I was referring to 2049
The discussion point of exchanging human and animal brains brought a storyline for theatre & mass media to my mind.