I'm reading his book Life 3.0; man, this guy is a savior. Every human on the planet should get a copy and prepare mentally for what's coming next for humanity.
Meh, I think it has its flaws
We should be wise and love humanity.
@@samuelmassicotte9645 Can you please tell me the flaws? I'm interested in at least one or a few, because I haven't read the book yet.
This guy wrote a book called Life 3.0
If you are interested in AI, I highly recommend it.
You can find the audiobook on youtube, i started it just now. If it's good, i'll buy it
That guy is a prof at MIT.
Thanks a lot!
also "our mathematical univers" . I got myself lost in 50 pages in about two goes first time I read it. He knows how to write a good book, I recommend that as well.
Currently listening to it at work... highly recommend... he is not biased about A.I. but gives all the possibilities, good and bad. Very interesting and addictive.
I love this kind of talk, reminds us that the line we draw between science fiction and science is getting more and more blurry!
There's no such line anymore
Thank God, finally a TED Talk that's about AI and doesn't doom us to eternal suffering right away in a depressing tone. It's nice to see other people actually looking forward to it and working to make it right instead of either going "Let's create a god, what could go wrong?" or "We're fucked".
Max Tegmark is one of the most brilliant, charming, lovable individuals I've ever had the privilege of hearing. He makes so much sense. We must support him in his efforts to save humanity and use these new discoveries for the good of all. Grateful for your leadership, Dr. Tegmark!
I love hearing him speak. I hope you have heard his recent interview on the Lex Fridman Podcast.
@@woolzem Max is an incredible thinker. And a very genuine person. I wish he would be on more podcasts.
One of the best talks on AI at TED
One of the best speeches, that I have ever heard... Mr Max Tegmark...👍☺ Really adoring you... Thank you so much for the "👌great message"...🙏🙌❤
Stumbled upon this 5-year-old video and can't help but wonder how far ahead this man's thinking was.
I had the opportunity to ask him a question in person for about a minute. I told him that people are afraid of aliens (recently), foreigners (since the far past), and AGI (currently) because they don't trust themselves. They project every bad aspect of themselves onto 'others' and hand them all the responsibility for a doomed world. While doing so, those same people drive diesel cars, drink with plastic straws, and never donate to the Third World. I asked him, "Do you think everyone could be smart and wise (which is the background of anarchism)? And how do you live as a single person?" He replied, "It's a very profound question and hard to answer. But, um..." I hope he still thinks about it.
This was the best ted talk ever.
Creating AI is so risky. It's like making the first fire. Maybe it'll burn down the woods, or it'll lead us into a new era.
most likely both
So you'd rather we never make the fire?
Very well put.
@@purefatdude2 stfu
Well, so far we haven't annihilated ourselves with atomic power. It seems a sad fact that many new technologies are first put to use against each other rather than for each other. I hope we will learn. There is another thought: what if the AI singularity does take over, and it isn't against us but for us, in the most benevolent way?
I was thinking about it last night: all those algorithms, backpropagation, and gradient descent towards what, exactly? Does it lean towards benevolence or enslavement?
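(A minimal sketch of what that "descent" actually optimizes: gradient descent only ever minimizes whatever loss its designers wrote down, nothing more benevolent or sinister. The toy data, loss, and learning rate below are illustrative assumptions, not anything from the talk.)

```python
import numpy as np

# The objective is whatever we choose to write down; here, a made-up
# mean-squared-error loss for fitting a line y = w * x to toy data.
x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([0.1, 1.9, 4.2, 5.8])      # roughly y = 2x

def loss(w):
    return np.mean((w * x - y) ** 2)

def grad(w):                             # d(loss)/dw for this particular loss
    return np.mean(2 * (w * x - y) * x)

w = 0.0                                  # arbitrary starting guess
lr = 0.05                                # learning rate (step size)
for step in range(200):
    w -= lr * grad(w)                    # descend the gradient of *this* loss, nothing else

print(f"learned w = {w:.3f}, final loss = {loss(w):.4f}")   # w ends up near 2
```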
A great talk. The future of AI very well presented.
A very good speech by Max Tegmark!
That was awesome! I'm one of those folks who sees them as worthy descendants. If they are smarter than us, they are better than us, as intelligence is power these days. "You are as good as you can think" will become the rule if it isn't already. If AGI takes over and beats us, I'll take my loss knowing we might have saved the planet; a planet without us, but we'll die eventually anyway.
I wonder if you'd still think that way while you and your loved ones are rounded up into slave camps and worked to death by your android overlords.
Are you not then worried about consciousness? We know that humans are conscious, but what if we can't guarantee that for the AGI? What if the future of our universe is filled with zombie-like machines, never having an actual individual experience?
This was a brilliant talk. So interesting!
He opens by reciting his own book from memory. If you'd already read his book and turned up to hear him speak, all you'd hear is him reading you the book that you'd already read. This might have left you feeling disappointed, underwhelmed, betrayed. Used like a cheap Bourbon Street floozy nursing a yeast infection.
Thank you
Eye-opening indeed.
Great stage presence.
Strange coincidence - I just ordered a book called 'Life 3.0: Being Human in the Age of Artificial Intelligence' and, by pure chance, stumbled across this TED Talk, which also came out today. Idiosyncrasies on point at the moment :)
G H
*Level-up: Synchronicity achieved!*
I love it when the Universe lines up neatly like that. "Synchronicity" was coined by Carl Jung, who worked closely with Sigmund Freud early in his career. It's a fascinating story.
I think one of the vital components of ensuring an AGI does not become a tool of domination by a corporate or national entity is conducting and publishing much of the research and innovation on an open-source, decentralized blockchain.
I think that would be super dangerous. You can have the scenario where the open source stuff reaches 98% of what it takes to make a working AGI, and then all bad actors are at 98% without having to work for it. Then it is a game of chance who finishes the puzzle. If a bad actor is first, they can use it to hack anything, delete all the progress of others, and take over the world. And even if the open source project reaches 100% first, then it is like handing out nuclear weapons to every person or group with a few million dollars (to run the computers). Can you explain why you think open source would be safe?
One of the smartest and, more importantly, wisest people presently on our planet.
Good talk!
Next step is to have the Byakugan like him
Reminder, "Just because we *can* do a thing doesnt mean we *should* do that thing" hasn't ever *stopped* us from doing it. The US bombed Syria earlier this year because it deployed those chemical weapons that chemists so successfully stigmatized against. The militarization of space is merely a matter of time, global agreements to the contrary be damned.
As a part of steering, AI we must ask for and find ways of dealing with uses for the technology that pose serious risks. Tegmark offers no solutions that have actually worked in the real world.
I think Max's brain has exited this universe
Wonder how the AGI will look back on this short talk about it and feel?
I love this guy
I still have difficulties disciplining myself to finish the Coursera AI course by Andrew Ng.
Whew.
7:21 Morgan Freeman as Scientific Advisor? Really?
They actually got god as a scientific adviser
If there is AGI ever, it should have voice of Morgan Freeman
Beyond the science, FLI is a giant awareness project. Freeman's connections and people skills can probably help.
@@kwillo4 I agree that his connections and people skills could help. But that is definitely not what a scientific advisor does.
@@ayushsahu4125 lol
It's interesting, these visions for humanity, whereas there is the other side of it all: I rarely encounter a person who will sit down with me and have an interesting conversation. It's like there is a split in society at a deep level.
Yes, that's all too true. But is that the final word? I hope not; I believe not.
Very important video.
Very Smart man.
Morgan Freeman, Elon Musk, and many others on the same board? Awesome conversations, I'm sure they have.
Elon Musk borderline hates AI.
I can't wait for the robot that does massage.
Have you ever sat in one of those massage chairs at malls? Those things definitely feel like they could kill you
I think the problem is that the consequences of creating a [general] AI are not foreseeable. They are not (and will not be) like humans, after all... Therefore you cannot just do a risk analysis and implement safety features :D
Humans are one form of general intelligence. If we (humans) can do a risk analysis for ourselves, an AGI can too.
Yes, partly so
Joffrey Baratheon let us all wait until 2038 and then move to Detroit.
And join the DPD. For Connor.
Firstly, we are what you would call a GI (a general intelligence, just not artificial). While a future AGI might be able to evaluate risks and mitigate them, it will probably look at other risks and evaluate them differently than we would.
So let's say an AGI has a model of the world, a cognitive unit and a work objective. If everything works fine, the AGI will do everything in favor of that work objective (including keeping you from shutting it off, removing any human who stands in its way, etc.). Now you might say, let's implement laws for AGIs (obey, don't harm humans, etc.). Well, while that might work in theory, what if a conflict happens (two contradicting orders, etc.)?
And the biggest problem: if the model of the world or the cognitive unit is broken, what happens?
I think it's just not that simple. We do not know (a) what an AGI wants, and even if we did, (b) how it would behave to achieve that goal.
Yes, Roni-chan, definitely!
I have the biggest crush on this guy. I love his brain... my inner nerd is smitten.
Imagine what kind of crush you will have on an AGI
@@tfm2934 lol
There is nothing to suggest that the quality of intelligence is inherently sympathetic to certain or any value systems, nor that any level of intelligence is spontaneously bound to knowledge about what is good and what is bad. These attributes, value hierarchies, are contextual and are either taught or are emergent properties that depend on an analysis of actions and consequences. AI guidance should be approached similarly to how we teach children. If we teach it core values, that human suffering is bad and flourishing is good, then within that context it can reach its own conclusions about higher-level values. If we don't teach it core values, then it will probably arrive at emergent values of its own within the context of whatever information happens to be available. In other words, it needs a mother.
Very good :)
Although that mother might think very differently from you and me as well.
5:30 - What does Foom mean?
Now I understand what happened to the Dwemer...
Hehe, but they only had steam engines, more like late-1800s tech... of course, in TES you can capture "souls" to make the machines alive... the closest we can do is to put human or other animals' brains inside machines...
4:24 We are literally at the tip of the iceberg now.
I looked at the thumbnail and thought it was a young Eric Bischoff.
Amazing
I find this very interesting. It makes me very sleepy, though.
YouTube keeps us talking but doing nothing to change.
Wonderful talk. I think along the same lines as you. Your mention of wisdom is really remarkable. If wisdom is integrated into the cultural values of mankind, that will of course be a great defense for man against anything whatsoever. But, again, that is not enough. Man can be misguided from within, even from within the individual self: that may be greed, hunger for power, jealousy, pleasure in unnecessary harassment, and corruption by habit alone. So far I can think that this could be greatly solved, if not entirely, by genetic editing with the CRISPR-Cas9 method, erasing the negative dimensions of man from the human genome and making him positive out and out in every way. Looking forward to hearing from you in the future.
What about free will? Some people get a destructive desire just to test their limits, out of curiosity, or boredom. You certainly wouldn't want to genetically remove curiosity from a human. You'd end up with a zombie. Don't you agree that without the dark there is no light? Without evil, good ceases to be as well?
Damn! This makes me wanna watch "I, Robot" and "Eagle Eye" so bad!
The best thing about AI technology in this sense is the capacity of a computer to be programmed to mine and organize (with semantics and syntax) variables, add them to a database, and also automatically discover new relations among things that already exist and things we are thinking about creating. (This man is my pleasure to watch. I am a prodigy boy from Brazil; I have my OpenBSD, my initial setup with my own AI, and now I am researching machine learning. My goal is to build an atemporal vertical vortex portable dispenser induced by my causal creative consciousness.)
Whoever thinks about a destroyer AI literally knows nothing about computing, programming and language processing... IT IS HARD TO MAKE A MACHINE CONSCIOUS TO THE POINT OF AGI, AND EVEN IF SOME BAD GUY HACKS AN AGI AND TELLS THE MACHINE HOW TO DESTROY HUMANS, IT CAN BE AVOIDED IN MORE WAYS THAN SOMEONE CAN WRITE A PROGRAM, LOL.
1. align human goals with our goals
2. scratch all of their furniture
... 🙀 probably
I, for one, welcome our new robotic overlords
People don't know this guy proved the Earth to be the center of the universe.
Handsome Scientist 😎
9:23 What a joke of Americanism in a Global world... No Asian AI researchers (Top Asian Universities, Top AI companies,...)
Got any names? Maybe you could email them to Max.
Why is Morgan Freeman on the science board that this guy founded?
Kenneth Baker Probably he is there to read the minutes of the meetings aloud.
Kenneth Baker having an AGI that speaks with his voice? It would pass the Turing test by simply saying "Hi!" :D
He has some amazing people on the board. Morgan Freeman is the only non-scientific person, yet he is the most recognizable (maybe after Elon Musk), so maybe he is there to help with their public image. More likely he is a very literate person and scientifically inclined, and therefore worried about AI.
He has done some work that would support this argument; for example, he played a character that was extremely worried about AI in Transcendence (2014), in which Johnny Depp uses whole brain emulation to become a superintelligence and take over the world. [He also played God in Bruce Almighty and the President in Deep Impact, and his first theater show, The Niggerlovers, played in 1967, but those three things were not important except that they are interesting.]
Andy Dufresne. They say that Geology is all about pressure and time. Pressure and time. Andy was locked up in this place for 19 years, for a crime he didn't commit. I guess every man has his breaking point.... Andy Dufresne... crawled through a river of stench and foulness that I cannot comprehend, and came out smelling clean on the other side. Andy Dufresne....
"AI adopting our best values"? Isn't it more important them NOT adopting our egoistic and destructive ones?
Only the bottom 90% poor has those values
How can the values of a truly intelligent system be aligned with our values, when we are possibly the worst thing that happened to this planet?
Why should they agree with us, when all we want is to rule, driven by greed?
Why should they be fine with us ruling over other animals, and doing whatever we want to do with nature, like we are already doing?
We are like the bad king. Why would our sons share our goals, when they grow up? Or maybe, why SHOULD they?
You're right. If we can see how evil and arrogant and destructive we are, then we fear that they will also see it and do the right thing and stop us. Do you think, though, that they would in turn be cruel, vengeful, hungry for power, or would they be able to create peace and harmony? Or just slaughter us indiscriminately?
What does the speaker mean by "avoid career in the waterfront"?
Why's he trying to be overly inspirational? He doesn't normally talk this way. Real interesting bit begins at 7:25
I'm not sure it will go that fast. There is a massive amount of hype at the moment, and sure, the things we have achieved are great, but there are many, many problems in machine learning that tend to be forgotten. I think the most important one is that deep learning (the methodology causing most of the recent fuss) is a black-box model, and adversarial examples show that it can easily be tricked into predicting something totally wrong with high confidence. This is a problem people are working on, but it will likely persist as long as we are using black-box models, and that will probably never change. So many jobs might actually still be safe for the next decade or maybe two, since a human has to double-check that the machine didn't make a catastrophic error.
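(To make the adversarial-example point above concrete, here is a minimal sketch with a hypothetical, already-"trained" linear classifier; the model, dimensions, and epsilon are illustrative assumptions, not anything from the comment. A per-pixel change of 0.01 flips a hesitant class-0 prediction into a confident, wrong class-1 prediction.)

```python
import numpy as np

rng = np.random.default_rng(0)
d = 1000                                  # input dimension (think: flattened image pixels)
w = rng.normal(size=d)                    # weights of a hypothetical, already-"trained" linear model

def predict_proba(x):
    """Probability the model assigns to class 1 (plain logistic regression)."""
    return 1.0 / (1.0 + np.exp(-(w @ x)))

# An input the model leans toward class 0 (its logit is fixed at -0.5 for clarity).
x = -0.5 * w / (w @ w)
print("clean prediction      :", predict_proba(x))          # ~0.38 -> class 0

# Fast-gradient-sign-style perturbation: nudge every coordinate a tiny step
# in the direction that most increases the class-1 score.
eps = 0.01                                # per-coordinate change, visually negligible
x_adv = x + eps * np.sign(w)
print("adversarial prediction:", predict_proba(x_adv))      # ~0.999 -> confidently class 1
print("max per-pixel change  :", np.abs(x_adv - x).max())   # 0.01
```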
i needed power 😢
We are already dead; we just haven't realized it yet because of the delay effect.
And what's the point of this opinion under this video? 😊
🎯 Key Takeaways for quick navigation:
00:42 🌌 Our universe has evolved, and our technology can help life flourish for billions of years, both on Earth and beyond.
02:17 🤖 The power of AI has grown significantly, with robots doing backflips, self-driving cars, and impressive feats like AlphaZero's mastery of games.
05:34 🧠 The rise of artificial general intelligence (AGI) could lead to superintelligence and a transformative shift in human existence.
07:45 🛡️ To ensure AI benefits humanity, we need to steer its development wisely and prioritize AI safety research.
11:47 🚀 The destination for AI is a critical consideration, whether it's controlling superintelligence, having friendly AI, or exploring various societal models empowered by advanced technology.
Made with HARPA AI
I just clicked on this video 'cause I thought this guy was Michael J. Fox lol 😂
Well said, Max. I support you fully in your project and am full of admiration, across the whole arc from 3D Tetris via the multiverse to a benevolently minded AGI deity. Grand and hard to reach, but possible and probably necessary.
Let's just wait until 2038. AI will be worth it because of a little RK800 android named Connor.
The seatbelt analogy: how do you prevent bad actors from removing seatbelts? The safety built into AI will help to advance good practices by default. I don't see how that will prevent bad practices. The "steer the future" idea could be applied to all sorts of areas outside of AI research: teach good morals, invest in more recreation centers to help curb obesity: but we all know that bad is hard to completely weed out.
I think the real worry is that A.I. could see humans as the destructive force we are and, out of mere pragmatism, do everything within its capabilities to divert us or destroy us.
@@nailbunny2326 If we were monkeys millions of years ago, would we wish for humans to exist? I think we are way too biased towards ourselves.
I am skeptical that this scenario will happen. Not because I don't "trust" that humans are capable of inventing such an AI (I do), but because of the lack of power. And by power I mean the basic energy every electronic construct needs. Our energy demand already exceeds the level the Earth can provide sustainably/regenerate, and with lots of Asian countries gaining momentum, the needed level rises (and will keep rising) fast. If everyone on Earth consumed as much energy as the typical Western European/American does, there would just not be enough energy left to power all the AI... or what do you think?
@Anna
Relatively excessive energy consumption isn't a problem if the "intelligence" is intrinsically capable.
A straightforward example of this would be the case of different algorithms approximating the same constant (e.g. Pi, e etc.) and their different rates of convergence (e.g. different historical methods of approximating Pi and their rates of convergence as a function of the nth iteration etc.).
Or using brute force attacks against a key space [2] as contrasted to analyzing the key space more efficiently (e.g. frequency analysis [3] etc.).
1. upload.wikimedia.org/wikipedia/commons/f/f5/Comparison_pi_infinite_series.svg
2. en.wikipedia.org/wiki/Key_space_(cryptography)
3. en.wikipedia.org/wiki/Frequency_analysis
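(A small sketch of the convergence-rate point above: two classical series for the same constant, Pi, with very different error decay. The choice of the Leibniz and Nilakantha series here is an illustrative assumption echoing the linked comparison chart, not something stated in the comment.)

```python
import math

def leibniz_pi(n_terms):
    """Leibniz series: pi/4 = 1 - 1/3 + 1/5 - ...  (error shrinks roughly like 1/n)."""
    return 4 * sum((-1) ** k / (2 * k + 1) for k in range(n_terms))

def nilakantha_pi(n_terms):
    """Nilakantha series: pi = 3 + 4/(2*3*4) - 4/(4*5*6) + ...  (error shrinks roughly like 1/n^3)."""
    total, sign = 3.0, 1.0
    for k in range(1, n_terms + 1):
        a = 2 * k
        total += sign * 4.0 / (a * (a + 1) * (a + 2))
        sign = -sign
    return total

for n in (10, 100, 1000):
    print(f"n={n:5d}  Leibniz error={abs(leibniz_pi(n) - math.pi):.2e}  "
          f"Nilakantha error={abs(nilakantha_pi(n) - math.pi):.2e}")
```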
good boy
It's Michael J. Fox meets John Stamos.
Aaron N It's Dr. Kripke from The Big Bang Theory, in real life!
I hoped there was a spelling error and this was a Michael J. Fox ted talk. #heavy
True AI is still decades away, and this guy is already considering which ones will be overpowered and how to nerf them. This guy must be a Blizzard employee.
blue_tetris
We are not sure how early we should start considering precautions.
I.e., we might already be late.
The process has already started, and one wrong decision today could change the entire expected future plot.
So it's the only thing all AI researchers agree on.
I was just making a goof about the use of the term "overpowered" and the modern gaming industry's service model.
Human flight was fiction too, and person to person communication across the world, and going to the moon. Just because many ideas were first proposed in fiction doesn't make them impossible.
human flight is fiction, moon landings some say are fiction :)
@Awakened2Truth - Disciple of Jesus the Christ
General A.I. is very, very possible, and I have a few vague ideas about directions we could go in, not for you to read, obviously.
The problem is whether "sentience" can arise out of general A.I. There's no reason for or against the notion that "sentience" might arise out of a sufficiently capable intelligence; the Universe is very weird that way. One problem is to devise a method feasible and reliable enough to gauge "sentience", one that can distinguish between true sentience and a general A.I. merely mimicking it.
There will never be an AI president.
It doesn't make humans obsolete; it will first and foremost make capitalism obsolete. People who think it endangers humanity just can't imagine a world where not everyone needs to work like a slave for money and freedom.
Honestly what I was thinking. Our evolution is the reason for how we receive pleasure and therefore experience greed. AI would make much better leaders for they are not bound to evolution, and therefore they would make a communist (or socialist) utopia possible.
Capitalism is obsolete? Oh really? Ask Jeff Bezos or George Soros.
Given the initial conditions of cruel human nature and crappy states, the empowerment won't lead to an age of Amazement, but to one of Armageddon.
"The poor get richer and the rich get richer". An easily overlooked, yet crucial comment in this talk, I believe.
Is that winning the 'wisdom race'?
I think the Future of Life Institute (or others like it), created as a group to strive for balance and to help steer us positively, not blindly, in our application of AI, is 100% necessary, so thank you Max.
Yes, we do make mistakes in order to progress. The fire and car analogies were funny. We have never got it right first time. I believe that's the nature of evolution and therefore humanity.
The more powerful our tools, the greater the help or hurt we can wield them to affect.
Let's not forget that in our current society, money/wealth is the tool that wins most races and the race for the profitable application of AI is, and will continue to be no exception.
Wisdom is earnt.
Taking profit out of our thinking on AI will surely help the bigger picture be richer.
Just my opinion and great talk, Max.
Wrong; if everything goes right, there will be no rich or poor. Money will be pointless.
as it should be and was always intended to be
Too many corrupt minds will act against the wisdom in the words spoken here, and it will be our doom.
I find it highly unlikely that banning autonomous weapons will actually make a difference. If they are going to be as accessible as he says, a ban won't stop terrorist groups or Russia/China from developing such weapons. Therefore, we must develop such weapons to counter their development.
spoken like a true yank
Is that coronavirus at the 8th minute?
🕊
We need AI to correct the mistakes that we don't want to admit. AI will take jurisdiction into its own hands when we fail to protect the Constitution.
Content starts at 7:35
nerf yasuo pls
Saitama did you mean nerf irelia?
HUH HO HE HA
I think the tyrannical aspect is the thing to fear from AI. You don't think a computer is going to act just like a politician? Hmmm.
Why would it? That makes no sense to me, considering what it can do.
A quick glance through history tells all: ruling classes of humans/governments have always used technology like this against humanity. When AI learns about god, how do you think the computer, over time, will start to act? Just like a divine politician. Oh wait a minute, maybe you thought AI was going to be strictly production-line stuff! Toys, shoes, food, medical, military-industrial complex... oops. Oh, I'm sure it will all be just fine. lol
mick anick No it won't reason like us since it doesn't have our evolutionary baggage.
We constantly deal with a myriad of cognitive biases that shape our thinking and feelings.
Humanity will not last long, especially with religion still being a thing. If AI is not created in the next 50 years, the world will come to an end.
What even are you talking about? Literally every single important metric is improving in the world, & has been for quite some time... Whether you're talking about deaths by war, literacy, poverty, food/water security, infant mortality, equality between the sexes, human rights, access to electricity, access to the internet, or any number of a great many metrics, they're all on an extremely positive trend.
This whole doom & gloom view of the world is absolute nonsense -- it just isn't substantiated by the actual evidence at hand, no matter what the news says.
Yes, evidence: more people committing suicide, cancer rising, more autism, more pollution, more corruption. It's coming to an end very soon.
There actually isn't more pollution & corruption world-wide. That is objectively & measurably untrue: as renewable energy & sustainable business practices rapidly become a larger global market share, the rate of pollution growth decreases, & in some areas the total rate of pollution plummets. As an example, China is currently leading the world in rate of solar power plant production & has canceled literally hundreds of coal plants. India is following right behind, as is the rest of the world. On the count of corruption, it is definitely true that the world as a whole is less corrupt than it was 50 or 100 years ago, for example. It may seem more corrupt to you because we now have the technology to suss out corruption & hold it accountable, but rest assured, it was far worse when the public at large couldn't see the corruption happening (ask any historian).
As for cancer & autism, have you considered that more people are experiencing these things because modern medicine has saved them from premature death? For instance, if you die young from an infectious disease, or from medical neglect in the case of autism, you're less likely to be properly diagnosed, & you're less likely to get cancer.
Perhaps there are more people committing suicide in some regions of the world because of the news & folks like you spreading lies & misconceptions about the world getting worse. If everyone knew how much better the world was genuinely getting, I highly doubt it would be the case.
Dr. Zoidberg You are taking some random emo troll way too seriously. I'm glad you tried, but some people are beyond saving.
You are forgetting the general greed of people.
AI writes its own code; that's why it's AI.
Sooner or later, AI will understand that humans must die :P
Why? We're doing an objectively really good job. As a teacher or mentor, I wouldn't be angry with a student who was rapidly improving in every single metric: I would be impressed & excited.
This conclusion comes from a lack of heart, not from surplus of intelligence. Heart and intelligence are not contradictions but closely tied to each other.
I think entropy will see to that in due time.
That's not how entropy works, CandidateZero.
I'm alluding to cosmic heat death (brought to you by Entropy™).
What we intend is not the outcome we get. And then what? Lol
Call it job security.
I cringe at physicists who step outside their domain expertise to explain something that doesn't even fall within it.
He forgot the future where we manage to become smarter ourselves by using AI, and we become cyborgs!
The best path forward that I've yet heard is that we should *become* ASI, ourselves... Begin with direct brain-computer interfaces. Continue on to actual brain augmentation. The final step would be transitioning the brain (without break in consciousness) to a 100% synthetic form that could be moved freely from server to server, just as bitcoin can move freely.
At that point, our natural home would be in virtual reality, allowing us to slip into a robotic body, just like we put on clothes, to exert ourselves anywhere we want in the physical world whenever we want -- & of course, we would be functionally immortal & infinitely upgradeable at that point -- able to expand our capacities however much our servers will allow.
As far as I can tell, *that* is the good future. Everything else misses the mark.
Infantile
What does the title have to do with the content??
Ted talks:
Some feminist
Good ol' ai
Some feminist
Good ol' ai
Some feminist
Good ol' ai
Pleeeeeease nerf yasuo
What is ( AI )?
pls nerf
This guy says that this AI is by definition more intelligent than humans, yet still makes fun of people who think that only the AI should rule the world. I don't understand how that is compatible. If this AI really is more intelligent than us, it will not think like us; therefore, it is incredibly narrow-minded to assume it will want to destroy the entirety of humanity the way we would. I think they would do much better at politics and managing high positions, as they are not prone to greed; machines only need so much.
Why do ALL TED-videos feature the same useless fake added applause in the intros?
Because they want to.
Stop AI.
"Many artificial intelligence researchers expect AI to outsmart humans at all tasks and jobs within decades"
It is about time to discover that our own bodies are the most intricate and complex machines in the universe and even more worth studying... Much more complex than any computer on the planet... Have we done anything to discover that?? Just think about how much memory your humble body contains, and that it can live through this hellish life or this beautiful life, depending on where you stand on this planet... AI represents nothing compared to human abilities...
Anything we create will be made in our resemblance.
JAGE MIDU AI is better at spelling and grammar than you!