Igor Sterner | This House Believes Artificial Intelligence Is An Existential Threat | CUS
- Published 21 Oct 2023
- Igor Sterner speaks in first opposition to the motion on Thursday 12th October 2023 at 8:00pm in the Debating Chamber.
The rapid growth in the capabilities of AI has struck fear into the hearts of many, while others herald it as mankind's greatest innovation. From autonomous weapons to cancer-curing algorithms to a malicious superintelligence, we aim to discover whether AI will be the end of us or the beginning of a new era.
............................................................................................................................
Igor Sterner
Igor Sterner is a fifth-year postgraduate studying natural language processing at the Department of Computer Science in Cambridge. A graduate of Pembroke College in Engineering, he won the right to speak through open audition.
Thumbnail Photographer: Nordin Catic
............................................................................................................................
Connect with us on:
Facebook: / thecambridgeunion
Instagram: / cambridgeunion
Twitter: / cambridgeunion
LinkedIn: / cambridge-union-society
Who is this child and who let him up on stage to discuss AGI? Seriously misguided argument imo. Does he even comprehend the potential risks?
AGI !== A calculator. Like, not even close, mate.
I don't think some people in this debate understand AGI. It's surprising.
Let's see AIs as onions. What are you afraid of, onions?
What a silly analogy..
A calculator doesn't have a goal. AI has a goal and figures out a way to achieve it.
The AI doesn't have a goal of its own; it gets its goal from the way we build it.
@@celestemtz587 u hear how stupid what u just said was… right?
@@TheRealRobertG
LOOOOOL HAHAHAHA
Exactly!
Computers are not infinite, but:
1. Nobody talked about infinity. Competing for resources with a more capable agent is the actual problem.
2. Self-improvement also means improvement in efficiency, so fewer "computers" are needed for higher performance.
He doesn't understand. He thinks AI is a tool just like any other.
Oh Dear! Is this the best student Cambridge has to offer?
Let me guess, Oxford?
Wow, this might be one of the worst arguments against AI existential threat I've heard. Conflating all past, present and future AI with LLMs is just plain wrong. This guy studies AI and seems ignorant of how AI tools can act autonomously and do things unintended by their creators (agents, or goal-directed behaviour; take a class in RL please). Then closing with the ad hominem that if you don't agree, it's because you are a bad person and have something to hide. Super slimy.
If you think this is bad, watch the one by Judy Wajcman. Actually, don't; I wouldn't want you to waste your time.
Yep. That was my take too. Ugh
He should be embarrassed to have put forward such weak arguments, and his faculty advisors should reconsider whether he is intelligent enough to deserve a degree.
Basic flaw in this argument: AI is not, like traffic lights, a string of code programmed into a computer. I think that's a big misunderstanding. Of course we rely on tools, and specifically computers, for a lot of things. But there is computer code behind them, and if this code encounters something the programmer did not foresee, the system can't handle it. AI will respond to things never programmed into its system. And, oh dear, the ending is so weak: if you are against it, you have something to hide?
We have nothing to fear but fear itself. He, she, or it that gets to the singularity first wins.
What YOU think AI is vs. what IT IS has no bearing on what AI will DO.
Holy sheeeT!
Igor Sterner tries to argue that anyone who considers A.I. to be an existential threat must then have something to hide. What dude? Sit down.
I found this the most laughable statement
Unbelievable incomprehension of the subject.
What more do you want that you can't buy?
Why, the future, Mr Gittes. --Chinatown
Let's assume, for the sake of argument, that AIs do nothing but calculate and are just like knives. Then what about taking one of those calculators and giving it a reward function for doing actions in the real world? Would you then accept that's an existential risk to humanity? Less intelligent AI agents that seek rewards from interacting with the real world are already built and in existence. And there's a monetary incentive to use AIs as agents in the real world to make profit.
So his argument still leads you to conclude that AIs are an existential risk, since they could easily be modified to be one, and likely will be.
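The distinction this comment draws — a pure calculator versus the same compute wrapped in a reward loop — can be shown in a minimal Python sketch. This is a toy greedy bandit, not any real system; every name and number here is invented for illustration:

```python
def calculate(x, y):
    """A pure 'calculator': maps inputs to an output, pursues nothing."""
    return x + y

class RewardSeekingAgent:
    """The same kind of compute, plus a reward loop: now it has a goal.

    It tries each action once, then keeps repeating whichever action
    has earned the highest average reward so far (a greedy bandit).
    """

    def __init__(self, actions):
        self.actions = list(actions)
        self.totals = {a: 0.0 for a in self.actions}  # cumulative reward
        self.counts = {a: 0 for a in self.actions}    # times tried

    def choose(self):
        # Explore: try every action at least once first.
        for a in self.actions:
            if self.counts[a] == 0:
                return a
        # Exploit: pick the action with the best average reward so far.
        return max(self.actions, key=lambda a: self.totals[a] / self.counts[a])

    def update(self, action, reward):
        self.totals[action] += reward
        self.counts[action] += 1

# A toy "real world": one action quietly pays off more than the others.
def environment(action):
    return {"safe": 1.0, "risky": 5.0, "idle": 0.0}[action]

agent = RewardSeekingAgent(["safe", "risky", "idle"])
for _ in range(100):
    a = agent.choose()
    agent.update(a, environment(a))

# The agent converges on whatever the reward signal favours,
# regardless of what its designers intended.
best = max(agent.counts, key=agent.counts.get)
print(best)  # -> risky
```

The point of the sketch: `calculate` never does anything unprompted, but the agent, given a reward function and a loop, settles on "risky" simply because the environment rewards it most — goal-directed behaviour emerges from the wrapper, not from the arithmetic.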
lol...