Sam Harris & Steve Jurvetson - The Future of Artificial Intelligence
- Published Feb 4, 2025
- Sam Harris and Steve Jurvetson discuss the future of artificial intelligence at Tim Draper’s CEO Summit.
Sam Benjamin Harris is an author, philosopher, neuroscientist, blogger, and podcast host.
Stephen T. Jurvetson is an American businessman and venture capitalist. He is a former partner of Draper Fisher Jurvetson.
November 17th, 2017
Oh, I see you improved my video post. Thanks. Here is the description, index, and some juicy quotes: flic.kr/p/21DGQCw, such as: “This is the most important game that we’re playing in technology. Intelligence is the most valuable resource we have. It is the source of everything we value, or it is the thing we use to protect everything we value. It seems patently obvious that if we are able to improve our intelligent machines, we will.”
“Many of you probably harbor a doubt that minds can be platform independent. There is an assumption working in the background that there may be something magical about computers made of meat.”
“Many people are common sense dualists. They think there is a ghost in the machine. There is something magical that’s giving us, if not intelligence per se, at the very least consciousness. I think those two break apart. I think it is conceivable that we could build superintelligent machines that are not conscious and that is the worst case scenario ethically.”
Wonderful discussion, probably one of my favorites on the topic.
It's great that Jurvetson can think at roughly the same level as Sam Harris and follow his train of thought. Not many interviewers can do that.
He's Steve Jurvetson lol
At around the 52 minute mark, Sam Harris talks about being a multi-disciplinary omnivore. Fair enough, but his shallow understanding of (and frequent slagging of) the discipline of economics illustrates the problem facing those who would give AI machines constraining/guiding values. Brilliant though he is, I would not want Sam Harris to be the one deciding which orienting values should be built into AI machines when it comes to economic understanding... and given his views, I am equally sure that the only economists he would want to be allowed near the AI code would be those who share his left-leaning viewpoint... and therein lies the problem. The question, as always, comes down to “Who decides?”. See “The Vision of the Anointed” by Thomas Sowell.
Both the speaker and the host were marvelously eloquent, coherent, and insightful. It's a shame they didn't have another couple of hours to go even deeper.
Can you show me the "Blue Fairy"?
I drink the green fairy and smoke the green dragon. And my cat says hi.
This is some of the deepest thinking I have ever heard. Just beautiful.
bmusic10
Consider the additionally intriguing idea that the purpose of human life is reasonably to create AGI.
(Purpose may be objective, see Wikipedia/Teleonomy. Not to be confused with theism/nonsense/the teleological argument!)
Paper:
www.researchgate.net/publication/319235750_Why_is_the_purpose_of_human_life_to_create_Artificial_General_Intelligence
Not as deep as AGI can think.
Lol, Sam had it out with Moby? I like him even more now.
Very interesting, and I like his take on free will.
It always trips me up that so few people seem to see his point. Even just the interaction of time, information, and processing means you have a state, a fixed set of information, and inertia at any given moment.
All in how you look at it. Consciousness = free will. This may seem a trivial equality, but it is a good rule of thumb in thinking about AI. Prove me wrong! (first you'll have to read this, then decide to type)(decide!)
CandidDate
Are you free to be conscious of the decisions you make?
If *variablename* < 5 then *statements* else *statements*
First the program has to read input, then decide its course of action. (decide!)
Apparently it has as much free will as I do :)
*variablename* := 5.00000000000000000000001
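For what it's worth, that pseudocode runs essentially unchanged. Here is a minimal Python rendering of it; the name variablename, the threshold 5, and the two "branch" strings are just the commenter's placeholders, not anything from the talk:

```python
# A minimal, runnable rendering of the pseudocode above (Python is an
# arbitrary choice; the names and values are the commenter's placeholders).

def decide(variablename: float) -> str:
    # The program "reads input, then decides its course of action":
    # a fixed state plus a fixed rule yields a fixed outcome.
    if variablename < 5:
        return "branch A"  # *statements*
    return "branch B"      # *statements*

# The commenter's follow-up value. Note that in double precision this
# literal rounds to exactly 5.0, so the comparison is False either way.
print(decide(5.00000000000000000000001))  # prints "branch B"
```

Either way, the output is fully determined by the input and the rule, which seems to be the commenter's point about "deciding".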
Excuse me if I sound stupid, but aren't devices like Google Home, Alexa, and smart assistants in general proof that intelligence does not require consciousness? Yes, intelligence is a scale, and perhaps once a certain level is reached it requires consciousness, but aren't these home assistants intelligent?
SlicedBananas I would argue that they are not intelligent, or at least only intelligent in an extremely narrow way. However, I would agree that intelligence doesn't require consciousness. They seem to be quite different. We don't know how consciousness arises, though, but as Harris argued here, it is most likely a result of information processing, just as intelligence is.
I like Sam Harris, but some of his thinking seems incomplete.
1:00 We are on the way to a machine-intelligent future, and it doesn't matter if the change is incremental or exponential. We have had intelligent machines for a long time; there's a degree of intelligence in an abacus or an astrolabe. So maybe the future is a continuation of the past: smart machines. Also, the pace does matter, because the world isn't sitting still. Human IQ is supposedly increasing at about 10% per generation, so if we reach some technological barrier that slows machine intelligence to less than that of human intelligence, or if we invent some technology to increase human intelligence faster than machine intelligence, then the future might look very different from what many suppose.
4:30 Mind is platform independent. It surprises me that Sam Harris accepts that so uncritically. There's nothing magic about what happens in a human head and there is nothing magic about what happens in a CPU, but brains and computers are fundamentally different. Humans are bad at computation; our brains are not computational. Brains are a complicated mess of chemicals and structures that somehow give rise (in a non-magical way) to our intelligence. The idea that a thing made out of semiconductors and wires can have an emotion misunderstands what emotions are: they are physical states of a biological system. Take fear: heart racing, adrenaline pumping, muscles tensing, pupils dilating, and all the other things that fear is aren't going to happen in a computer unless you give the computer a heart and hormones. Take all the biological components out of fear and what are you left with? The realization that there is some sort of danger? I'm sure computers can do that; a car's lane-departure system can do that, but that hardly makes the car afraid.
Besides fear, there is caring in general and motivation in particular. I'm not at all convinced, and I see no evidence, that solely computational systems can care or be motivated. We could build the smartest computer ever, load it with all our knowledge, and even make it capable of changing itself, but once it's plugged in all it's going to do is stare at us blankly until we give it some set of instructions, an algorithm to follow.
13:00 Consciousness does require something other than computation: it requires a good definition. Consciousness is this vague term humans made up, and now everyone is chagrined trying to figure out what it is. Before we can decide whether computers can be conscious or not, we need an operationalizable definition of consciousness. Even explicitly excluding anything supernatural, consciousness may require more than an algorithm.
Computers are smart, they will get smarter, and I believe machine intelligence will change the world; there is a lot of danger there and a lot of potential for good. But machine intelligence isn't human intelligence; they are different things. We shouldn't anthropomorphize computers. The real danger isn't the machines, it's what stupid, emotional, hormonal people will do with the machines.
O Soul
A large portion of your points are reasonably off:
1.) Imagine the cosmos as a sequence of quantum bits.
2.) Each human may be seen as a clump of bits with general intelligence.
3.) Sam's platform independence comment underscores that we are not special clumps (at least not special in how theism may illustrate humans to be), so there is no reason why there shan't be more advanced, inorganic clumps of information with steeper degrees of general intelligence!
Jordan Bennett
1) The cosmos is not a sequence of quantum bits. Bits, quantum or otherwise, are like all other numbers: they are great at representing what exists, but they are not what exists.
2) General intelligence in humans is usually defined as the correlation between various measures of intelligence. Whether or not the correlation points to some underlying thing is hotly debated among cognitive scientists. There are people who are good (or bad) at many things, so that might be attributed to some underlying general intelligence. But in science we look for things that disprove the theory. Some people are geniuses at some things (say, math) and poor at other things (say, language or social skills); if there were some underlying general intelligence, they should be good (or bad) at everything. Expand general intelligence to other animals and even machines, and the correlation that is evidence of general intelligence breaks down even more.
3) At about 5:50 Sam talks about "common sense dualists"; maybe he should look in the mirror when he talks about platform-independent minds, as if minds were separate from the physical entities (brains or computing machines) that create them. Minds, consciousness, and intelligence are not separate from matter; they are the activities or configurations of matter. Different types of matter (proteins and fats, or silicon and gallium) in very different structures are going to have different behaviours; that's not dualism, that's just sensible. Neural nets are to networks of neurons as a stick figure is to human anatomy.
Computers have intelligence, real intelligence; there's nothing artificial about it. But it's a very different sort of intelligence than human intelligence. Just as people tend toward dualistic explanations, they tend toward anthropomorphism. Both should be avoided.
O Soul
1.) It is not yet known whether the cosmos is actually math. (So it would be hasty to jump to the conclusion that the cosmos is not numbers!)
2.) You misunderstood Sam's remark.
By "platform independence", he is simply saying that general intelligence is replicable (rather than existent) outside of human flesh.
Jordan Bennett
1. You can assume the universe is however you want. I'm a strong proponent of naturalism, the philosophy behind science. Naturalism, and thus science, assumes that the universe is composed of only one thing: matter. Yes, it's just an assumption, but using that assumption science has given us a more useful understanding of the world around us than any other approach.
2. General intelligence is replicable outside of human flesh. Yes, there are intelligent things that are not human. Anything that is not human doesn't have human intelligence, by definition of human intelligence. The common sense dualism of mind and body is no different from mind and platform. The mind isn't separate from the brain; the mind is the brain, and the rest of the body, doing its thing. No dualism: the mind and the platform are not different things, they are different ways of describing the same thing. Machine intelligences are very different from human intelligence in many ways, including that they don't want and they don't care.
Intelligence comes down to being good at a task or set of tasks. General intelligence is being good at lots of things and is usually measured as some correlation coefficient, a g factor. There is no consensus among cognitive scientists about how meaningful the g factor is. Include other creatures and the idea that there is some common core, some general intelligence, between them becomes even more dicey. Extending it to computers and humans is a great stretch beyond that. Given that we come from completely different platforms, there may be little we share with the machines.
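To make the "measured as some correlation coefficient" point concrete, here is a small illustrative sketch in Python. The test names and scores are made up, and reading the largest eigenvalue of the correlation matrix as a rough "general factor" is a simplification of how psychometricians actually estimate g:

```python
# Illustrative only: made-up scores for five people on three hypothetical tests.
import numpy as np

# rows = people, columns = (math, verbal, spatial) scores
scores = np.array([
    [120, 115, 118],
    [ 95, 100,  92],
    [130, 105, 125],
    [ 88,  90,  85],
    [110, 112, 108],
], dtype=float)

# Pairwise correlations between the tests; positive values everywhere
# are the pattern the g factor is meant to summarize.
corr = np.corrcoef(scores, rowvar=False)
print(corr.round(2))

# Crude one-factor summary: share of variance carried by the largest
# eigenvalue of the correlation matrix (a stand-in for "g").
eigvals = np.linalg.eigvalsh(corr)
print(f"top factor explains {eigvals[-1] / eigvals.sum():.0%} of the variance")
```

Whether that single summary number "means" anything is exactly the debate referenced above.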
O Soul
1.) That one may subscribe to naturalism, reasonably, does not warrant that the cosmos may not be numbers. (See "Our mathematical universe").
2.) That the theorems regarding general intelligence are not yet grasped:
i.) Does not change the reality that AGI research seeks to replicate general intelligence beyond the scope of humans.
ii.) Does not change that you misrepresented Sam's "platform independent" remark, as I underlined in my prior response.
Maybe we can set up a symrobotic relationship like our gut biome: superintelligent machines need us somehow, and we have some effect on their emotional landscape.
Like The Matrix?
...wonders if AI will make stupid mistakes like publishing radio articles to a video channel if it replaces humans. It will be really great to have such a high intelligence behind video channel contributors.
11:25 - “Winner takes all” - in a free market?
Assaf Wodeslavsky I wholly agree. For a fellow as smart as Sam Harris, it amazes me that he is so historically myopic. The giants of British capitalism in the 18th and 19th centuries gave way to the American giants of the 19th and 20th centuries, and where were today's Chinese giants in 1949 when Mao took power? What of the Koreans, and the Japanese? And how about the giants of tomorrow who will unseat today's behemoths? Left-leaning thinkers see concentrated power but are somehow incapable of seeing that in a dynamic system the power shifts, and shifts, and will keep shifting.
The only bases for value that you both seem to recognize here are pleasure/pain sensations, intelligence, and productivity. What about BEING? Does being not have value, even if just for the sake of being? What is it about a person that gives him/her energy or life? What makes Elon Musk different for walking across town to go to a birthday party when he was a child? Is a flower less valuable because it cannot use human tools? Should all the chickens be killed after we stop eating animals?
Can someone please explain to me what credentials Sam Harris possesses to intelligently discuss issues surrounding hardware/software/mathematical/open and closed AI models/systems?
@HKashaf He's a neuroscientist and philosopher who studies consciousness. He may not have the credentials to build AI, but he definitely has the credentials to talk about its moral implications.
Anybody can understand the argument for the inevitability and danger of AI that Sam is a proponent of. You don't really need any credentials.
HKashaf hater
He is Sam Harris. It doesn't get any better than that.
Consistent use of reason.
The main concern I have is how the seed of moral understanding is defined. Relative morality has no meaning, as it is ever-changing. Morality requires some form of an absolute center to gravitate around.
Valerie Kneen-Teed you'll love reading Sam Harris's book "The Moral Landscape".
Sam Harris is an exceptionally bright guy, but in this interview he and his interviewer both allowed themselves to fall - yet again in Harris' case - into the "winner takes all" trap. There are many organizations - some state-sponsored and some profit-oriented - that are racing down this path. Some will get there sooner. Others will get there a bit later. But no contender is going to stop, and the reason is that winning a battle is not the same as winning the war. More important still, all the competitors will be modelling intelligence as they understand it, and they will all be attempting to constrain and direct that intelligence in accordance with the values held dear by their developers. State-sponsored Russian AI will be guided by Russian state values. State-sponsored Chinese AI will be guided by, and will try to maximize, what is valued by the Chinese state. Ditto any state-sponsored AI, and in a democracy, where parties change and the values governing those parties change, so will the values governing their AI machines... and spare a thought for the values that will be built into the AI machines built by jihadist regimes. There won't be one machine. There will be many. The intelligence of those machines will escalate exponentially, but these super-human intelligences will not be perfect; they will each start off with the differing and flawed value systems of their creators, and the inherent flaws of those value systems will unleash upon the world immensely powerful and deeply flawed gods that will battle among themselves, putting all life on earth in peril.
This is pure sci-fi.
CandidDate Tomorrow's technology is today's Science Fiction.
+michaelgorby Or it stays science fiction, like 99% of the space-travel, teleporting, cloning, and shape-shifting ideas in sci-fi. Your analogy applies to a tiny subset of the past's sci-fi. The more common trend is that the real advances are the ones no one sees coming. The fact that everyone/the crowd mentality/Y2K-type people think AGI is going to happen should, historically, mean that it's a delusion. Everyone being so certain and making such confident statements about AGI is a bad sign. Look up Hinton or Ng, who are actually at the forefront of the field; they don't make nearly as confident statements as your average tinfoil-hat wearer does.
Tab Let Yeah, I tend to agree with you on virtually everything here. I just think we can never say what will remain sci-fi, because we obviously have more future ahead of us. Except, of course, technologies that we have relative certainty won't be realized due to limitations set by our understanding of the laws of physics.
The technology we have now would have seemed like sci-fi to people living hundreds/thousands of years ago. There was literally a science fiction book from a couple hundred years ago about people sailing to the moon in giant ships. Well... we put people on the moon. Several decades ago, in fact.
"The fact that everyone/the crowd mentality/Y2K-type people think AGI is going to happen historically should mean that it's delusion." I wouldn't say that's correct. Those people could accidentally be right. Also, there are people at the forefront of AI as well who say that AGI will probably happen eventually (e.g. Hassabis), but maybe a little later than some of the fanatacists are saying.
Google DotA OpenAI, and watch the videos.