Even as someone who is bearish on AGI and AI world domination, I think these guys are too skeptical:
-The closed box problem: multimodal AIs are a thing - they can generalize their inputs into various different types of outputs. I wouldn't say this is human-level general intelligence, but it's a meaningful step closer.
-I'm only 15 minutes in, but one of the commenters mentioned that they aren't down with AI art - AI art is legit. Even dalle-mini can do some impressive stuff.
No mention of the scaling hypothesis? My lord Gwern would be very disappointed.
Very interesting, though I'm not sure how to evaluate their arguments. They lean heavily on chaos theory/complexity and the inherent limitations of mathematics. As a non-mathematician/physicist, it is difficult for me to judge their claims there. I do know a bit about genetic engineering, though, and there it seems likely to me that their arguments are wrong.
The complexity of the genome is essentially irrelevant for embryo selection: all you have to do is identify the variants that correlate with relevant phenotypes, give each embryo a polygenic score for every relevant trait, then pick the embryo that scores highest on the desired trait - you don't need to know what each particular gene does. Knowing what each particular gene does would be necessary for CRISPR/editing, but not for embryo selection.
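The selection step described above can be sketched in a few lines. This is a toy illustration with made-up effect sizes (not real GWAS data): a polygenic score is just a weighted sum of allele dosages, and selection is an argmax over embryos.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical effect sizes for 1,000 trait-associated variants
# (illustrative random numbers, not real genetics data).
n_variants = 1000
effect_sizes = rng.normal(0.0, 0.01, n_variants)

# Genotypes for 10 embryos: allele dosage 0, 1, or 2 at each variant.
n_embryos = 10
genotypes = rng.integers(0, 3, size=(n_embryos, n_variants))

# Polygenic score = weighted sum of dosages. Note that nothing here
# requires knowing what any individual gene *does* - only its
# statistical association with the phenotype.
scores = genotypes @ effect_sizes

best = int(np.argmax(scores))
print(f"embryo {best} has the highest polygenic score: {scores[best]:.3f}")
```

The point of the sketch is that the procedure never opens the "black box" of gene function; it only uses correlations, which is why genomic complexity arguments don't obviously bite here.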
Also, the "can't linearly augment a trait, can only isolate a trait" point seems like semantics to me. The real question is whether or not we can increase human intelligence or any other relevant trait through these methods. Could we augment wolves to get more agreeable dogs? Could we augment fruits to get juicier and sweeter fruits? Whether you call it linear augmentation or isolation of an already existing trait, it really doesn't matter. Also, humans with a 140+ IQ obviously already exist, so if you want to say that we're isolating the existing trait of these already existing people, that's fine. The point is that these techniques should increase average IQ and whatever else people want to increase.
Their (imo) poor arguments on genetic engineering make me wary about their other arguments that are harder for me to interpret. Cheers.
Really really interesting video with good arguments. Would love to see more experts with an actual mathematical/physical background speak about the limitations of AI. Too many people are predicting AGI without even being able to understand linear regression.
I quit the video after the discussion on art. A lot of what they had been saying up to then sounded dubious, but the part about art was just completely asinine.
The first guy offers a passable definition of art, then gives examples of it in human art, then concludes that, since it has so far always come from human personality, it will never come from AI. There isn't even an argument there.
Then the second guy makes an acceptable point, in saying that, while DALL-E's greatest hits can be impressive, the misses outnumber the hits, and getting consistently good output requires human tweaking and curation.
So, why can't we improve the hit rate, e.g. through RLHF?
We are still talking about the book Why Machines Will Never Rule The World, right? We didn't switch to talking about their less publicized book, Why The Machines Currently On The Market Will Not Rule The World For At Least Two More Version Numbers?
Yeah, RLHF on _information_ creates a sycophantic Bay Area NPC, probably because that's who they train it on. But on _art..._ It'd probably just improve the hit rate, right?
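The "improve the hit rate" point can be made concrete even without full RLHF: simple best-of-n selection with a preference model already lifts the quality of the *selected* output. Below is a minimal sketch where the generator and the reward model are hypothetical stand-ins (random qualities and an identity scorer), just to show the mechanism.

```python
import random

random.seed(42)

def generate_candidates(prompt, n):
    """Stand-in for an image model: returns n candidate outputs.
    Here each candidate is just a random 'quality' in [0, 1]."""
    return [random.random() for _ in range(n)]

def reward_model(candidate):
    """Stand-in for a model trained on human preference ratings.
    A real reward model would score the actual image."""
    return candidate

def best_of_n(prompt, n):
    """Generate n candidates and keep the one the reward model prefers."""
    candidates = generate_candidates(prompt, n)
    return max(candidates, key=reward_model)

# The hit rate of the *selected* output rises with n, even though the
# underlying generator is unchanged - curation done by a model, not a human.
for n in (1, 4, 16, 64):
    hits = sum(best_of_n("a corgi playing chess", n) > 0.9 for _ in range(1000))
    print(f"n={n:3d}: {hits / 1000:.0%} of selected outputs are 'hits'")
```

This is the weakest version of the argument: even before fine-tuning the generator itself, preference-model reranking automates the "human tweaking and curation" step the second speaker leaned on.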
I lost interest at the art part as well - no strong arguments.
These guys in 1900: Birds are far too complicated to be described with a bunch of mathematical formulas, therefore humans will never achieve heavier-than-air flight.
Not quite, the correct analogy would be “birds are too complicated so we will never replicate birds.” I think they certainly don’t dispute narrow AI and the potential of AI to enable new technology, it is general AI they dispute.
@@kellyhyland821 I appreciate your response - I was hoping someone would engage with me on this!
So Landgrebe and Smith are arguing against the possibility of AGI by asserting that creating it would require an impeccable understanding of the human brain. Here are a couple of relevant quotes, all from between ~4 and 6 minutes:
Landgrebe: "The reality of intelligence" - presumably referring to human intelligence - "is so incredibly complex that our mathematical models cannot capture it."
Smith adds, "...you would need vast amounts of imaging data at resolutions much higher than what we can currently achieve. Due to these factors, the complexity of intelligent behavior is beyond the scope of mathematics [and, therefore, beyond the capabilities of technology]."
So according to them, the only path to AGI is through replicating the human brain. To me, this notion seems as flawed as the idea that the sole way to achieve flight is by imitating birds or insects. The underlying reasoning appears to be: "We have only observed a few instances of this phenomenon in nature; therefore, these are the only possible ways it can be achieved." I find this completely unconvincing, whether we're discussing specific abilities like flying or playing chess, or a more comprehensive concept like general intelligence.
@@kellyhyland821 No, they're talking about "ruling the world". This is something humans do, but something that could be done in some other way than the way human biology does it. The two analogies would match up like this:
bird biology -> flight
airplane engineering -> flight
human biology -> ruling the world
AI -> ruling the world
The two guys in this video are saying human biology can't be modeled with current mathematics, therefore AIs won't rule the world. This is dumb, because, like with the bird analogy, their argument doesn't actually rule out AI -> ruling the world.
Someone please give a TL;DR
One of the main arguments is that we overestimate the power of mathematics and underestimate the complexity and dynamism of natural processes, intelligence being one example.
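The chaos-theory side of that argument is easy to demonstrate in miniature. The logistic map at r = 4 is a standard chaotic system: two trajectories starting a hair apart diverge to order-one differences within a few dozen steps, which is the kind of sensitivity that limits long-horizon prediction by any finite-precision model.

```python
# Logistic map x_{t+1} = r * x_t * (1 - x_t); r = 4 is fully chaotic.
def logistic_trajectory(x0, r=4.0, steps=60):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

# Two starting points differing by 1e-10.
a = logistic_trajectory(0.3)
b = logistic_trajectory(0.3 + 1e-10)

# The tiny initial gap roughly doubles each step until it saturates
# at the size of the attractor itself.
for t in (0, 20, 40, 60):
    print(f"t={t:2d}  diff={abs(a[t] - b[t]):.3e}")
```

Whether this sensitivity actually blocks AGI is the contested step, of course: weather is chaotic too, yet forecasting still works over useful horizons.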
Singularity...
We'll be lucky if we survive 2023
This felt like a bit of a waste of time.