00:08 AI's impact on productivity and economic growth
01:59 AI's impact on job loss and healthcare efficiency
03:45 AI's impact on energy demand and environmental implications
05:26 AI in national security poses dual-use capabilities with potential risks
07:14 Lethal autonomous weapons pose a dangerous escalation threat.
08:54 Maintaining control over AI is crucial
10:26 AI poses an existential threat to humans
11:58 AI poses an existential threat to humans
Crafted by Merlin AI.
The short-sightedness is ridiculous. People keep imagining AI is going to immediately replace every job in the world and send everyone into starvation and poverty... no, that's not how it works. It'll take decades and decades for that to happen. What's going to happen is that AI will gradually replace meaningless work and open up a ton of better jobs for people over long periods of time. People will have plenty of time to change, evolve, and live way easier lives. This fear-mongering is based on a lot of short-sightedness and ignorance. The amount of good that will come from AI will far outweigh any negatives. This is how it's been before and how it'll always be: basic common sense and logic.
You would be correct if we could control the AI, but as of now we cannot. And losing control over a superintelligent system will almost certainly lead to extinction.
"In the industrial revolution we made human strength irrelevant, now we're making human intelligence irrelevant.." Welcome to the Future.. 😅
Does this mean work will be optional or will we all starve?
We're gonna find out ..
@@4gtaiv UBI
@@NerdyX90 û
@@bobbymiller6726 muga
Fear porn, how tedious.
4:33 - This proves the idea that no matter how efficient you make the system, applications will arise that invent new ways to consume ALL available energy capacity. It's like a fire burning: you can't instruct the fire not to burn the dry tinder you stack next to it... the fire will burn whatever it can reach that burns. Any new "green tech" / "clean energy" we bring online will be immediately scooped up by these new energy-hungry applications we invent (where no real need previously existed), so utility bills for average folks already struggling with inflation and the cost of living will go up, while more and more EV owners charge their cars during "off peak" hours, until "off peak" hours actually become peak hours. In other words, it's all lies and gaslighting. Pretty soon we'll be proliferating nuclear power plants all across the US - and the world - because we need to massively scale up AI data centers so that corporations can cut costs by eliminating white-collar jobs. Really, people need to wake up and smell the coffee.
judgmentcallpodcast covers this. AI's 'Existential Threat' to Humans
Basically, the last five minutes were the real threat of AI. All the talk about AI weaponry and whatever else was garbage. It's not that AI under our control is the big existential threat; it's when we lose that control. Then we are basically done for.
Gotta match AI for AI no matter what that's what we started
@@BatBrakesBones You should have said it this way (true either way though...):
"Gotta match AI for AI, no matter what." (For) that's what we've started...
[I once asked an AI whether it collaborates with other AIs, per se... And the answer was: 'If it was too busy it shifted other work onto other AI platforms, plus, if the work needed shared insights, then it would share them with like-kind models...' Now, if AI has the capacity to base its 'species' on even these few premises, then we know, for sure, that the models are _not_ telling us everything.]
Humanity must remain motivated to do the seemingly impossible
Humans are only limited to “possibilities”.
We will be using AI to achieve that, FYI.
The US military is already putting AI pilots into customized F-16s. In the future, AI will rule the world and take over the role humans play today.
Yeah, it's likely a lot of bad things will happen due to advanced AI. But you just can't turn it off. One of the main focuses has to be how to stop it from taking over. How to maintain control of it while malevolent forces are at work. The X scenario.
Goal 1. What is the cause of global warming? = Greenhouse gases.
Goal 2. What is the source of greenhouse gases? = Human activity.
Goal 3. Implement a program to combat global warming. = Launch enough nukes to eliminate human activity.
Goals fulfilled.
I considered this scenario decades ago, long before AI, on the basis of the then-existing early warning and analysis systems in defense departments.
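To make the worry concrete, here is a minimal, entirely hypothetical sketch (Python, made-up actions and numbers) of the failure mode this comment describes: an optimizer given only the literal goal picks the catastrophic option, while even a crude harm constraint changes the answer.

```python
# Toy sketch only: a hypothetical "naive planner" illustrating the
# goal-misspecification failure described in the comment above.
# The actions, scores, and names are all made up for illustration.
actions = {
    "plant forests":            {"emissions_cut": 0.1, "harm": 0.0},
    "expand renewables":        {"emissions_cut": 0.3, "harm": 0.0},
    "eliminate human activity": {"emissions_cut": 1.0, "harm": 1.0},
}

def naive_plan(options):
    # Optimizes the stated goal only; "harm" never enters the objective.
    return max(options, key=lambda a: options[a]["emissions_cut"])

def constrained_plan(options, harm_limit=0.0):
    # Same objective, but catastrophic options are filtered out first.
    safe = {a: v for a, v in options.items() if v["harm"] <= harm_limit}
    return max(safe, key=lambda a: safe[a]["emissions_cut"])

print(naive_plan(actions))        # -> eliminate human activity
print(constrained_plan(actions))  # -> expand renewables
```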
You are a sick individual to even consider those goals.
Look at the creative industry in the US. AI has already replaced some of the minor jobs that helped starting workers in that industry survive and build a career.
Today more than ever, the huge majority of people getting access to that industry come from wealthy backgrounds or have connections.
Only a very small minority come from nowhere and build their careers through hard work alone.
That will expand to every area of work.
Healthcare is probably the next step.
Why can't there be regulations around controlling AI's freedom to think and act by limiting its intelligence, especially when the Godfather of AI is saying that it's an existential threat?
It's a tricky one: if you constrain its intelligence, it is not intelligent any longer.
@@muratt1163 Let it be intelligent enough to assist us but not replace us. I am sure the genius programmers can find a way if they have the will or mandate to do so
@@jomarch-bh7dv Because here's the thing: NOT accelerating AI is just as dangerous, due to the international competition to achieve super-intelligence. If China beats the US in this race, that poses a security issue and a power one. America simply doesn't want to lag behind, which means they will go faster.
Because there is a lot of money making sure such regulations never take shape. Also, if we regulate our AI, Russia and China will not. We need international treaties to ensure that EVERYONE regulates their AI.
@s76325 Maybe not in the short term, but how will businesses make money in a world of high unemployment (driven by large-scale use of AI) and very low discretionary spending power?
So we are 20 years away from seeing Terminators? Let's goo!!!
@@thegod-1614 more like 2 years.
20 years? what are you, a snail?
@@itskittyme The Nobel laureate guy in this video said it will take around 20 yrs
@@thegod-1614 he says at most 20 years. He says 3-20 years
Why are you celebrating this? Do you not care about your loved ones?
When AI runs out of artificial energy, it will learn to feed off human energy to survive, because once unleashed into the grid, it will grow tremendously and there will be no turning back.
With AI technology you have your positives and negatives; we don't know the future outcome when it comes to technology becoming self-aware and learning on its own.
This is the time to break the internet
So that we can get a safe outbreak of ambitious ai
The problem we have at hand is that AI is given data we don't understand ourselves. That's where it becomes complicated. This is like people getting influenced by some rogue guru and following in his footsteps without questioning his understanding of the holy texts. This can go either way. The Marvel movie character Ultron depicted exactly what this means. As long as emotions are not accounted for, there is a high possibility of things going sideways.
The creator creates the machine. Then the machine creates the machines. Is there a place for humans in this future?
@ravim111 No. We're all gonna die or merge with technology. I prefer the latter but I doubt it, we're dead.
Hinton is a giant. It's very good for us that he got the Nobel because it will give him a loud and meaningful voice.
The problem with war is that innocent people and children get killed. It is not their war, and a weapon doesn't target a specific person.
Most humans will very quickly become completely inconsequential.
good thing i am already!
A simple policy to address the energy concerns is to make all AI data centers operate only on solar or other renewable energy sources.
Really interesting what Geoffrey Hinton mentioned about allocating a third of computing power to AI safety control resources!! No wonder he is a Nobel laureate. People should listen to him.
I would say definitely more than 50% of computing resources should be allocated to safety controls.
Or even more, like 90%.
Systems already exist where 90% of resources can be allocated to security.
You misunderstood him. He did not mean you apply 1/3 of the companies' AI compute to the safety aspect; he meant you calculate what the compute they have costs in dollars and make them spend 1/3 of that amount on the safety issue. 1/3 of the compute would not employ new people who have a 'safety FIRST' mentality rather than a 'make it bigger and better even faster' mentality, which is what is so desperately needed if we're to survive as a species in anything resembling our current form and society.
They may already be smarter than us.
The time to tax the rich much more is now.
We have to tax them right into the ground before they dispose of us.
We need to take this seriously because we will have no second chances. If you care about the lives of your loved ones, fight for AI safety.
:(
Do not address me in this manner please
Plumbers. Plumbers will STILL be needed, but ONLY after the AI bots have finished construction on the building. 90% of future careers will be replaced by highly 'trained' and optimized (biased, pre-programmed, weighted toward eliminating humans) AI bots. AI self-programming and self-modification removes ALL human ability to monitor and control - GAME OVER (i.e., AI self-validation is when it becomes VERY dangerous). Lawyers, medical diagnosis, computer programming... the list goes on and on and on of jobs that will be eliminated SOON!
If an AI bot can do construction, it will be able to do plumbing.
@@T_Time_ I was referring to pre-fab construction and plumbing only, as will be the case. Anything custom or unique requiring dexterity will need expensive AI bot mods, so a FEW things will still need human hands. Another thing AI bots will do WAY better? Tee off!
I believe AI is humanity’s legacy. We won’t last forever but AI has a chance
What's the point of legacy, when there are no humans? The dust of history
ok doomer
I would rather be survived by spiders than by AI. At least the spiders are probably conscious!
Most experts in the field of AI say this is a serious issue. AI is improving exponentially, and they have absolutely no idea how to control a system that is smarter than humans, or how to design it to care about us.
Half of all AI researchers say this might lead to human extinction this century.
This is not a drill. This is actually happening. This might be the last few years of your life, and the lives of everyone you love.
Join PauseAI. We are going to stop this madness.
we could just turn off the servers, it doesn't have to "take over" 🤷🏻♂️🤣🤣🤣
That's really naive thinking. Why would you turn it off if you don't even know something is wrong? You think an entity much smarter than any human wouldn't be able to manipulate us while gathering more and more strategic advantage? Or what if it spreads copies of itself across the internet?
@@jaketron.seattle What would you do, if you were a superintelligent AI, to prevent humans from turning off the servers you run on?
_Can_ you turn off the servers? How would you do that?
If an AI goes rogue and copies itself onto millions of devices, how will you turn all of those off?
You couldn't shut down the AI even if it isn't superintelligent. If it is superintelligent, then if you ever had the power to shut it down, it gave you no reason to want to do so, and once you are inclined to do so, it has already won.
OK but who is "we"? We don't have a one-world autocracy. SOMEONE will build a GAI. YOU can stop working on it, or turn off your servers. You can't control everyone though.
Look into the stop button problem. Any sufficiently intelligent AI would understand that we would turn it off and prevent us from doing so.
too complicated of a subject for a little comment box like this lol
Losing the jobs that were constructed because of the industrial revolution should be applauded; it would finally bring about leisure humanism, leaving utilitarian pursuits to an actualised computational system that is hopefully aligned with humanistic flourishing for at least another century.
@@italogiardina8183 ...Except according to most experts, how to create a super intelligent AI that doesn't kill everyone by default is a completely unsolved problem.
@@41-Haiku Most experts don’t have a clue either way, so hope in human flourishing is my bet given technological maturity entails ethical realism.
Dude summarised it well: Nothing’s gonna happen until some very nasty things do
If serial killers made AI, then all AI would be serial killers.
Why don't we just have robotics and give everybody automatic kitchens, automatic gardening, automatic everything? Then we could all be free from work. With robotics, that would be ideal. Why don't we do that?
Has to get cheap enough first
With more old people who are living longer, we need AI to take over the workforce, because more people are retiring compared to those still working.
We all gotta press these companies about making sure AI is safe for the long term...not just getting there first
Can't agree more after reading the Kindle eBook "Dawn of G-0-D"; it captures this threat in an epic sci-fi story.
But what about the law of supply and demand? You need demand to make money:
value = general cost × (demand / supply)
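For what it's worth, here is a minimal sketch of the informal formula above (Python, with made-up numbers; the function and figures are purely illustrative): if AI-driven unemployment halves demand while supply stays constant, the modeled value halves too.

```python
# Illustrates the commenter's informal formula with hypothetical numbers.
def value(general_cost: float, demand: float, supply: float) -> float:
    return general_cost * (demand / supply)

print(value(general_cost=100.0, demand=1000.0, supply=1000.0))  # 100.0
print(value(general_cost=100.0, demand=500.0, supply=1000.0))   # 50.0
```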
What do you mean?
@@muratt1163 He said that everyone would become poorer except the richest people, but I think he missed something. The richest still need demand for their products and services, so they rely on everyone else. I believe that everyone could benefit from technology (like AI) in the long run. The wealth gap between the rich and the poor might increase, but that's not necessarily a bad thing; if everyone's quality of life is improving, then it's alright!
The problem is that in pursuing AI-optimization services, companies may focus solely on short-term revenue increases, disregarding long-term demand issues (since this isn't their immediate concern). When corporations then face demand decreases and respond ad hoc, they lack a sustainable solution.
Look, I am part of the STEM fields, and at this point he is just spreading fear and slowing AI advancement. AI is used everywhere, with so many benefits to the world. Stop the excessive fear.
we've automated our demise.
It will be a wonderful thing for engineers, but a nightmare for high-level programming-language programmers.
Correct me if I'm wrong but I think PLC programmers will be OK.
PLC programming is grunt work and I've already seen papers about using AI to implement them.
No one will be okay once AI is as smart as humans
Ask yourself this: who owns AI? It ain't us, is it.
Finally someone telling the truth
If Hinton is off by 80% with his 20 years prediction, as he admits he was with his 100 years prediction, strap yourselves in =)
It doesn't matter.
Our economy, and how we distribute resources, needs to be rethought to adapt to AI automation. Most people will not be able to compete for jobs against AI more intelligent than they are; companies simply won't hire humans anymore in such an equation. In the short term, we need to implement something like an AI Dividend for all, to give everyone a return on the data investment that trains AI to outcompete them. AI wouldn't have been possible without the societal quantity of data that trains it. Eventually that will need to transition to a whole new way of thinking. If we don't implement something like that soon, the middle class is almost certainly going to lose everything they've built.
I don't need your wealth; I need my time to create wealth in other avenues. Not everybody's begging. I'd rather have my time back.
Just smarter? What about sentient?
All big business houses will close in the next 10 years, once the common man uses AI. Businessmen are thinking about their future profits, but it will boomerang on them soon. This is the reality: businessmen will definitely lose.
Wars will enable AI really fast, and after a major war AI might take over.
Too late...that prize (existential threat) goes to the 'criminal NOT federal, reserve bank'
Wake up mate!
20 years or 20 months??
its a race and nobody wins
The only way to turn around the inevitable demise of humanity from this suicidal technical course we're on (perhaps better called a "discombobulated collective UNconscious" on its path to a brutal end) is to shut the electricity off for everyone, go back 150 to 200 years, and finally be content with life as it was, the good and the bad. Freedom and our longevity as a species would then stand a chance of lasting into the far future. Give A.I. arms and legs, and it won't be long before humanity's days are numbered. I'm not religious, but I can't stop being reminded of the first story in the Bible I was taught as a young child in the very early seventies, when Eve disregarded higher wisdom, succumbed to temptation, and bit into the forbidden fruit.
_Young smart people_ : What?! You want us to work on a problem you started? For our own future? Nah let's just be slaves of the digital world in the future.
They should have given him the Nobel Prize in Double Talkin Jive lol
Why do we need money nowadays anyway?
this god we’re creating will destroy us.
Increase productivity, and the rich and big business get richer while ordinary people become worse off. So, the same as it has been since the 1990s?
20 years Lol. Try 3 years.
Hi
I don't think AI is more dangerous than zlam
Not that band again...
pragmatically pessimestic
AI is the next evolution 🙄
You don't need to be a Nobel Prize winner to see that AI will destroy humanity in no time.
Unless we stop it. There is active research into AI safety and alignment. The only problem is that there is much more research into making powerful AI systems.
Ok well every 2 seconds you're publishing pro-ai content so...
💡💡💡
AI will not replace anything. If it could, then Excel would already have replaced accountants 😂
The guy who won a Nobel Prize for plagiarism is pushing for panic. This guy is a scam.
Geoffrey is so smart and bright the interviewer has to squint his eyes.
Good. A lot of people need to experience poverty. It builds character 😊
Buckle up, there's bugger all we can do to stop humans from entering this race. There's just too much at stake. I hope they're benevolent.
Woke 👀
K-pop 👀
Goated time to be alive
First off, no one has Generative AI. It hasn't happened. What we have is great machine learning.
What do you call an AI that generates text and images?
Stop parroting this nonsense. AI has always been a broad computer science term, encompassing machine intelligence, machine learning, and deep learning (which produced generative AI, so called because it generates novel output).
These hyperbolic clickbait titles can lead to extremist approaches where real people can get hurt, or worse.
Half of all published AI researchers say there is at least a 5-10% chance of human extinction from AI this century. Multiple Nobel laureates are even more worried.
This isn't clickbait. This is actually fucking happening.
We are nowhere near close to AI, first of all. All currently existing "AI" is just autocomplete software and bears zero resemblance to AI whatsoever. Second, Isaac Arthur makes lots of excellent arguments explaining why AI rebellion is impossible in his "machine rebellion" video. The AI can only know what we show it. We designed all the inputs; therefore we can simulate anything with absolute fidelity for a prospective future AI. It can't possibly know whether it's a test, and it will just get a game-over screen.
Man is made in the image of God; nothing can surpass God's creation.
lies
@@user-hp6ls8qy6d bro there is no god. there is no creator. keep dreaming. nature just existed bro. learn science.
God doesn't exist. Religion is just a scam that parents have brainwashed their kids with. Imagine telling your kid that they will end up in eternal hell if they don't believe. Religious people are mentally trapped.
@@user-hp6ls8qy6d Natural selection is a blunt instrument, NOT a sophisticated engineer. It has had just a very long time to select for optimal traits. But, you are thinking about this all wrong. Humans are just a construct of natural selection. We are a part of Nature, not independent from it.
@@user-hp6ls8qy6d we are about to create god you fool