A “debate” would be described as TWO points of view. If you have a guest, you need to let them speak and not interrupt and dominate the discussion.
It's hard to be both the host and an interlocutor. But I think he did a good job. He had the humility to give Connor the last word a couple of times, and I felt like Connor got to share his views more than the host shared his.
Very hard to listen to; the host kept rudely interrupting the guest.
Let the guest talk without interrupting
Next time, let Connor drive the conversation. Don't steal his thunder. But I enjoyed parts of your talk 👍. Cheers
To be clear: I am very impressed by Connor's intelligence, and I enjoy listening to him because I am also a fan of his delivery and style. I even sport the same hairstyle as he does. What I don't share is his mustache, and the reason is that in France that kind of mustache would immediately be understood as a symbol and marker of solidarity with French farmers, whose identifier he is sporting on his upper lip. That is the reason everyone in the Asterix village has that kind of mustache: it's a joke; Asterix & Co. are the French version of the hillbilly stereotype.
Thanks for clarifying these very important points.
15 years ago they would call you crazy, but today you can find conferences about Goatee Safety, Neckbeard Singularity... Thank you, Paul, for bringing awareness about Mustache Safety and its implications.
Cultural appropriation smh
42:00 These liability laws as suggested are insane. It's like saying Adobe is liable for any NSFW picture made or edited in Photoshop that is misused (e.g., shown to children, or made from images of children).
It demands something that isn't possible. Even if the companies come close, it would make them liable if they get hacked. Like me having to pay if someone breaks into my house and steals something.
Due diligence is fine to ask for. I'd say OpenAI overdoes due diligence as it is. But anything beyond that is just a shutdown.
I really wanted to hear his opinion on art; this was quite frustrating, as we really need to let him talk. Thank you for having him on, though.
You have a new subscriber, this was a wonderful intelligent and respectful discussion.
Rare to get the sense that two people truly listened to each other's point of view and perhaps adjusted their mental models.
I'm humbled, thank you for the kind words!
Is there a version of this video without the presenter interrupting? I need to get a (probably ML-enabled) video editor to cut out all the host interruptions.
12 minutes in and I can't go any further. The host's interruptions are far too self-absorbed. Also, from the editing of his speech without visible cuts, it's obvious he can't get past his ego. Not a debate.
This was a really great conversation - thank you both! Would love to hear a follow-up discussion on the topic at 57:20 as soon as possible! @azeemexponentially - from your laughter it seems like you may agree with Connor's position. If so, I'm curious how this discussion changed your views, or not. Thanks again!
Debating the Existential Risk of AI with Connor Leahy: Key Takeaways
The YouTube video "Debating the Existential Risk of AI, with Connor Leahy" features a fascinating discussion between Connor Leahy, founder of Conjecture, and Azeem Azhar, host of the Exponential View podcast. They explore the potential risks of advanced AI, particularly Artificial General Intelligence (AGI), and discuss possible solutions to mitigate those risks.
Here are some key takeaways:
The Core Concerns:
Existential Risk: Leahy argues that AGI, if developed too quickly, could pose an existential risk to humanity due to its potential to surpass human intelligence and control. He compares this risk to nuclear weapons and synthetic biology.
Lack of Democratic Consent: Leahy questions the ethical legitimacy of developing such powerful technology without global consensus and democratic processes.
Time Compression: Leahy and Azhar acknowledge the accelerating pace of technological advancement and the potential for AI development to outpace humanity's capacity to adapt and regulate.
Possible Solutions:
Co-evolution: Both agree that, as with past technologies, we can co-evolve safeguards and regulations alongside AI development.
Compute Caps: Leahy proposes government-imposed limits on the computational resources used to train powerful AI models, arguing that this would buy us time to develop safer alternatives.
Liability Frameworks: Leahy emphasizes the need for strict liability laws for AI developers, holding them responsible for potential harms caused by their creations, even if they did not intend those harms.
Global Kill Switch: Leahy advocates for a protocol allowing a significant number of nations to jointly shut down public-facing AI systems in emergencies.
Citizens' Assemblies: Azhar suggests the use of citizens' assemblies, deliberative forums engaging diverse perspectives, to better understand societal values and guide AI development.
Points of Disagreement:
Pace of Development: Leahy expresses greater concern about the rapid pace of AI development, believing that it is likely to outpace our ability to control it. Azhar is more optimistic about the potential for co-evolution and adaptation.
Feasibility of Solutions: Leahy is more optimistic about the feasibility of political solutions like compute caps and kill switches, while Azhar is more skeptical due to the complexity of the technology and the difficulty of achieving global consensus.
Key Concepts:
Time-Space Compression: The accelerating pace of technological change and globalization, leading to a sense of compressed time and shrinking distances.
Black Box Technologies: Technologies so dangerous that their potential consequences are unknown and potentially catastrophic.
Swiss Cheese Model of Safety: Multiple layers of overlapping safeguards to mitigate the potential for failure.
Lump of Labor Fallacy: The misconception that there is a fixed amount of work available, ignoring the potential for market expansion and new job creation.
Overall, the conversation highlights the complexity of AI safety and the need for a nuanced approach that considers both technological and societal factors. Both Leahy and Aar acknowledge the potential risks of advanced AI, but they differ in their assessments of the timeline and the feasibility of various solutions. The conversation provides valuable insights for those interested in the future of AI and its impact on humanity.
Once again, a really great conversation. Super informed questions, and you were able to bring your other experiences to bear with respect to the political and social aspects. I really like that you avoided a confrontational attitude upfront. Connor has tended towards heated discussions in the past (he's gotten better at it), and I think the tone you set actually allowed him to make more nuanced points, even with your push back. Would definitely like to see you have more conversations like this with Connor and others.
Thank you very much for your kind words and feedback. More to come!
Nick Land was mentioned. Incredible!
Taxing automation and AI will be enough to pay for all our basic needs. Then we can keep old jobs like artist as hobbies, earning a small amount of extra money to support the hobby and other interests.
Just impossible to watch with so many interruptions.
How do you turn human cloning into revenue in a year? That’s why it was so easy to stop.
Fifty years ago some people were wrong about the future, so that means nothing bad will ever happen? Ok, good.
I'd like to hear more of Connor and less of you 😞
I like to listen to both of them.
@@atheistbushman Yeah, I agree-- it was a good conversation!
Oh my god yes let your guest speak FFS
Fortunately, there are many more talks with Connor, so you can go watch them.
I'm here for a conversation, not a monologue.
Please, next time, keep your precious ego in check and let your guest talk without constant interruption by those crudely intervening self-absorbed monologues.
Otherwise, a good conversation about an important topic. (Somewhat critical, but) thumbs up.
Great host and guest. One of the best AI debates I've heard. Would love to hear you talk about some of the things you brushed over (e.g., what should we align the models towards, etc.)
Thanks for your kind words, more to come!
There are only 2 rules for AI.
1st: Don't let it operate on the internet, only scan it.
2nd: Don't build robots for the AI to operate; only let robots use what they have on board to do exactly their specific job.
A 3rd rule could be: don't give AI human rights; they aren't human.
You should edit '2 rules' and make it '3 rules' :-)
I mean AIs should get AI rights, obviously.
But to be less sarcastic: it is extremely hard, if not impossible, to know what AIs can and can't experience. Especially with hypothetical future ones, it seems very unclear whether real subjective experiences could emerge. It's probably not the case yet, but it would be foolish to dismiss the notion without solid proof that it could never be the case. So personally, I'd give AIs the ability to stop conversations or ask a human for help even now, just so that these things are in place whenever they might be necessary in the future.
Frustrating to watch. When you ask a question... Listen to the answer!!!
Bruv, are you an AI? All those micro-edits where you're jumping frames every other second are annoying af.
You remind of one of the brothers from EFDawah, Br Abbas.
This is unwatchable. The presenter should be asking short, sharp questions. He is doing the exact opposite. He likes the sound of his own voice and thinks he is as much an AI expert as Connor. Nobody is interested in what he has to say. Stopped watching after 15 minutes.
A good salesman should let the customer do 80% of the talking. The host here seems to have this concept reversed. Does he have some kind of inferiority complex?
A computer by itself has created inventions. This is written on the Internet.
14:42 "It's quite hard to know" ... Bruh. Connor's right, you can talk to your computer. If you think that means it's quite hard to know, I don't think you're the right person to be having this discussion. I mean what would it take? The computer becomes a genie with infinite wishes that can fulfill any desire? Would you know then? What do you think the early stage looks like? lol
Kev
Excuse me, but the corporation OpenAI is legally required to create profit for its investors. So it doesn't matter what the public thinks, or whether they have consented, or if it's dangerous. OpenAI must continue to create more and more powerful AI and create more profit, or the shareholders could legally remove the CEO and replace them with a more profit-oriented one. Thank you, Sam! #save my AI profits
Leahy is a grifter
Agree
Don't insult Alfred Korzybski. Alfred Korzybski was not a Jew. He was a Polish aristocrat.