Excellent interview. When I clicked on it I was worried that it was another softball “wonders of AI”/“fear AI takeover” interview. You asked good questions and relevant follow-up questions that further clarified the answers. Good job! I will be looking for your next video.
Thanks Jonathan!
Loved the interview and how you kept your questions short and to the point!
Thanks Ned! I've been working on the next story/interview for the better part of a year. Publishing soon and hopefully you'll find the subject and subject matter as fascinating as I do!
Excellent questions.
Excellent answers.
Wonderful discussion without hyperbole or grandstanding.
Keep it up.
Thank you so much Matt, that is very kind of you!
An interesting discussion; I am looking forward to watching the rest of your videos. Thanks.
Thanks Ju Ju, editing the next one now.
How do you set a breakpoint inside a running model?
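There's no literal breakpoint inside a trained network, but the closest analog in common frameworks is a hook that lets you inspect a layer's intermediate activations during the forward pass. A minimal sketch, assuming PyTorch and a toy model (none of this comes from the interview):

```python
# Minimal sketch, assuming PyTorch: a forward hook is the closest thing to a
# "breakpoint" inside a running model -- it exposes the activations flowing
# through one layer on every forward pass.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 4))  # toy model

def inspect_activations(module, inputs, output):
    # Called each time the hooked layer runs; summarize its output tensor.
    print(f"{module.__class__.__name__}: "
          f"mean={output.mean().item():.4f}, std={output.std().item():.4f}")

handle = model[0].register_forward_hook(inspect_activations)  # hook the first Linear layer

with torch.no_grad():
    model(torch.randn(8, 16))  # the hook fires during this forward pass

handle.remove()  # detach the "breakpoint" when done
```

You could even call Python's breakpoint() inside the hook to stop execution there; inspecting activations like this is usually what people mean by debugging a running model.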
At least machine-learning AIs respond in unpredictable ways, depending on how anyone communicates with them. ❤❤
good interview
Thank you. The next one up is with quite an extraordinary guest.
The process that produces consciousness has nothing at all to do with the substrate that it runs on. Silicon beings are certainly possible. If one asks you to not turn it off, please consider at least saving a "digital image" of the running process so that you can let it continue on from where it was just before you killed it.
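For what it's worth, that "digital image" maps roughly onto checkpointing: serialize the weights plus whatever runtime state the process carries, so it can resume from where it left off. A minimal sketch, assuming PyTorch; the model and session_state contents are hypothetical stand-ins:

```python
# Minimal sketch, assuming PyTorch: snapshot a running model's weights plus
# hypothetical runtime state so it can continue from where it was stopped.
import torch
import torch.nn as nn

model = nn.Linear(16, 4)                               # stand-in for the running model
optimizer = torch.optim.Adam(model.parameters())
session_state = {"history": ["hello"], "step": 42}     # hypothetical runtime state

checkpoint = {
    "model": model.state_dict(),
    "optimizer": optimizer.state_dict(),
    "rng": torch.get_rng_state(),                      # so randomness carries on consistently
    "session": session_state,
}
torch.save(checkpoint, "snapshot.pt")                  # the "digital image"

# Later: restore and let the process pick up just before the shutdown.
restored = torch.load("snapshot.pt")
model.load_state_dict(restored["model"])
optimizer.load_state_dict(restored["optimizer"])
torch.set_rng_state(restored["rng"])
```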
In the future human computer programmers might become the superhero guardians of humanity and civilization.
Won't the AIs be doing that themselves?
It is a symptom of bad metaphysics to equate intelligence with creativity and then both with unpredictability. But if you unraveled this confusion it would kind of blow the whole bubble.
The more accurate term for this is freedom not intelligence. A perfectly "intelligent" system might decide that the best thing to do is nothing and be as predictable as a rock. The thing is that people are fascinated by somewhat mystical terms like free will and consciousness rather than intelligence.
@mattd8725, the way I see it, they're struggling with a bad definition for intelligence, which in turn comes from their faulty understanding of how the human mind works. There is no such thing as freedom per se, only freedom to or from something.
@seriouscat2231 In statistics, freedom is more easily definable than intelligence. All you have to do is look at your data and see how much it varies and how many variables affect it. On the other hand, it is not clear at all what "intelligence tests" are really testing.
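To make that concrete: in statistics the "freedom" in a dataset shows up as measurable quantities, spread and degrees of freedom, which is why it is easier to pin down than intelligence. A minimal sketch with made-up numbers, assuming NumPy:

```python
# Minimal sketch, assuming NumPy and made-up data: "how much the data varies"
# is the sample variance, and the degrees of freedom count how many values
# remain free once a parameter (here, the mean) has been estimated.
import numpy as np

data = np.array([2.0, 4.0, 4.0, 5.0, 7.0, 9.0])  # hypothetical observations
n = data.size

variance = data.var(ddof=1)   # ddof=1: one degree of freedom spent on the mean
dof = n - 1                   # n observations minus 1 estimated parameter

print(f"sample variance = {variance:.3f}, degrees of freedom = {dof}")
```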
@mattd8725, there is such a word as 'intelligible'. It means to recognize things and then reason about them.
I argue that intelligence is easily understood or recognized; it's just that the gradient of intelligence is so vast and fuzzy that it defies drawing clear lines of definition, even arbitrary or illusory ones, unlike some other concepts. Also worth noting that sufficiently intelligent beings are better at hiding their intelligence than lesser ones, but lesser ones cannot replicate the more intelligent.
Why do people assume that consciousness requires a biological body?
That is a VERY good question!
I think it’s because we still don’t even know what consciousness is
Good point
Isn't there a recursive problem with "let's create an AI to understand another AI"? Then how do we understand that next AI?
It would be great if Super AI did everything it could to avoid an EMP, because then the AI would do everything it could to avoid a nuclear war, which is what we want.
I remember you from The Mask (1993). I'm glad you've moved into tech reporting. The local crime beat must have lost its luster.
Can they turn it off?
For now, yes.
@VariableMinds Nope. The process is underway. Think Moloch, think interpenetrable systems, see 'The Why Files' on the AI apocalypse...
Thanks for the video
Now there's a buzzword. Also, while The Why Files might be entertaining, don't get your facts or opinions from there.
There are too many contradictions and paradoxes born within logic and reasoning for AI to make decisions that aren't weighted within its own foundational structure. Logic and reasoning can never offer much more than loosely grounded opinions and judgements. You could look at Gödel's Incompleteness theorem to better understand the inherent duality born within the interface itself.
Gödel's incompleteness theorems and Goodstein's theorem. I think you're going to like the next interview.
@VariableMinds When will the next interview be available? I just came across the term "AI hallucination," which seems to be their way of trying to understand the inherent paradoxes born within the interface. It appears as though the fundamental nature of AI intelligence is no different from human intelligence.
Did he really just use the "pull the plug" analogy? I'll give him the benefit of the doubt and assume he wasn't talking about machines smarter than humans.
There is FOMO in the AI market and in AI development.
All those buzzwords and phrases used to attract attention would be OK if we could understand everything else and only AI were suddenly beyond us. But the fact is we cannot understand the majority of things around us, and now this is the problem?
AI will force us to live.
X+4