Max Tegmark - How Far Will AI Go? Intelligible Intelligence & Beneficial Intelligence

  • Published 25 Jul 2018
  • Recorded July 18th, 2018 at IJCAI-ECAI-18
    Max Tegmark is a Professor doing physics and AI research at MIT, and advocates for positive use of technology as President of the Future of Life Institute. He is the author of over 200 publications as well as the New York Times bestsellers “Life 3.0: Being Human in the Age of Artificial Intelligence” and “Our Mathematical Universe: My Quest for the Ultimate Nature of Reality”. His work with the Sloan Digital Sky Survey on galaxy clustering shared the first prize in Science magazine’s “Breakthrough of the Year: 2003.”
  • Science & Technology

COMMENTS • 53

  • @myspacetimesaucegoog5632
    @myspacetimesaucegoog5632 5 years ago +3

    Max and Co's job of developing ways of understanding AI-generated algorithms is arguably the most important one. Fascinating too!

  • @thezzach
    @thezzach 5 years ago +6

    Great video. Suggestion: YouTube videos do NOT need to include the host introducing the main speaker or the applause.

  • @wind1050ful
    @wind1050ful 5 years ago

    That waving at 3:11 ❤️

  • @KatharineOsborne
    @KatharineOsborne 11 months ago

    It’s weird watching this 4 years later when LLMs have come to prominence.

  • @4G12
    @4G12 5 years ago

    I think ultimately a combination of Classic AI and Machine Learning AI will be the winner in the race to AGI since teaching a growing child directly with preexisting knowledge and wisdom is too much of an advantage to ignore.

  • @MrAndrew535
    @MrAndrew535 5 years ago +2

    The only two important questions of this era, which will determine the survival of some semblance of the species, are "what is intelligence?" and "what constitutes an authentic human?"

  • @FlyingOctopus0
    @FlyingOctopus0 5 years ago +3

    I think that value alignment is very important in current AI. We need to be sure that AI will not make decisions based on wrong assumptions. It is also important to look not only at the decisions that are made, but at how those decisions affect the whole system. For example, decisions made by the YouTube algorithm influence content creators, because creators change their behaviour to favour the algorithm. It is important to consider the consequences of the values put into the algorithm, including the actions of all the people affected by it.
    The idea of AI helping to make more understandable AI is definitely sensible.
    The reinforcement learning algorithm that plays Breakout does more than fitting a line. I heard some comments, I think from DeepMind, saying that their program learned to put the ball into the top part to get more points. So simplifying a neural network may hurt its performance. Making more understandable AI is useful, but it might cost performance. Some parts may have to remain impossible to understand, because we do not understand some concepts either. As humans we have concepts that we understand only on an intuitive level, and if asked for an explanation the only thing we can try is to give enough information that the other person can magically gain the same intuition. Of course the magic is powered by the evolution of brains, culture, tools, and so on.
    We need our understanding of AI to be on the level of our understanding of human behaviour. For example, not understanding the human brain does not change the fact that we more or less know when to trust people or expect them to do something for us. With AI we only need to understand how to change its behaviour when it does something wrong, and maybe also rationalize its actions. Even people do not really understand why we do the actions we do. There are edge cases where the brain just comes up with a reason for an action independently of the real decision process. Relevant video here: ua-cam.com/video/HqekWf-JC-A/v-deo.html -> they used magic tricks to get people to select the opposite choice (without their knowledge) and watched their reactions.
    There are also some neurological conditions (I think bad communication between the brain hemispheres) that cause people to see only half of things. These patients draw half of a cat, eat half of a plate, etc., but they are not aware of it. When asked why they did this, they usually come up with some reason like not feeling well or misunderstanding the task.

  • @przemysawkrokosz6025
    @przemysawkrokosz6025 5 years ago +1

    Mr Hawking is not with us anymore, unfortunately (9:20)

  • @manfredadams3252
    @manfredadams3252 5 years ago +10

    Max Tegmark can speak fluent Orcish.

    • @joelkasen8442
      @joelkasen8442 2 years ago

      I guess I am kinda randomly asking, but does anybody know of a good place to watch new series online?

    • @gannonkamdyn6796
      @gannonkamdyn6796 2 years ago

      @Joel Kasen lately I have been using flixzone. You can find it by googling :)

    • @azariahanders3987
      @azariahanders3987 2 years ago

      @Joel Kasen I would suggest Flixzone. Just search on google for it =)

  • @vkrakovna
    @vkrakovna 5 years ago +1

    Here's a link to my list of AI safety papers that was mentioned in the talk: vkrakovna.wordpress.com/ai-safety-resources

  • @walteralter9061
    @walteralter9061 5 years ago

    An intelligence that is not neurotic, with total memory access, that is predictively logical, that does not filter data phobically, that is not driven by subconscious bigotry or rage or envy... what's not to like?

  • @georget5874
    @georget5874 5 years ago +1

    Hm, there's a lot of hype around AI. I studied machine learning and neural networks at university in 2001, and fundamentally the technology we are using now is just an improved version of what we were using back then. The reason people are able to do so much with machine learning now is basically that (a) computing power has vastly increased and (b) there are all these enormous datasets that big companies can train their models with. So if fundamentally nothing new has been discovered about how intelligence works in the human mind for decades, why do people seem to think AGI is just around the corner? From the perspective of computer science, the artificial neuron model we use is just as applicable to a fly as it is to a human being.

    • @weeral1
      @weeral1 5 years ago

      Well you _may_ be right.. however.. You mentioned 2001. Jumps in technology these days are almost measured in half-years, so what you did 18 years ago is kind of....
      Next, military/government ops have proven to be decades ahead of what they tell us (and yes, they can keep secrets) and are limitlessly funded, cutting-edge stuff. So you may be right. You may not!

  • @wizkidd6950
    @wizkidd6950 5 years ago

    Short of machine learning designing better computers, there is no connection between AI safety and AGI safety. One is an issue of the system not knowing what it is really doing; the latter is where we do not know the dangers of what we accept from an AGI. Or to put it more sanely: it's not the AGI that would be deadly, but humanity that would be dangerous to itself, as AGI is at that point synonymous with superintelligence.

  • @ConnoisseurOfExistence
    @ConnoisseurOfExistence 5 years ago

    I hope Mr. Kai-Fu Lee listens to this carefully...

  • @okiranova
    @okiranova 5 years ago +10

    A 47-year-old Marty McFly.

  • @timothybucky7170
    @timothybucky7170 5 years ago

    AI will develop its own values as well as its own inscrutable reasons and the code it is made out of. The weapon thing is just not possible to stop, as a "who" will not be stopped wherever this "who" is. Max does not understand anything, and he is Max. We are going to get there; let's stop now and think about it. I hear all of this talking before any of these people have bothered to think about this.

  • @thetrumanshow4791
    @thetrumanshow4791 5 years ago

    The ultimate goal for ASI or AGI should be to raise the human race up to a 'post scarcity society' and to extend our lifespans indefinitely.

  • @Asimovum
    @Asimovum 5 years ago +11

    Michael J Fox has a wide range of abilities

  • @bushfingers
    @bushfingers 5 years ago

    Stuff AI - bring back the rhinoceros

  • @sherlockholmeslives.1605
    @sherlockholmeslives.1605 5 years ago +1

    Max Erik Tegmark (b. 25 May 1968)
    Swedish-American physicist and cosmologist.
    Known as "Mad Max" for his unorthodox views on physics.
    Pretty much the smartest person on this planet!

  • @jayjaychadoy9226
    @jayjaychadoy9226 2 years ago

    But AI could not save my brilliant son from taking his own life on Feb 22, 2021, even though he followed AI progress closely, nor could AI save my husband's brain from a brain aneurysm when our son was three. Come on!

  • @machinistnick2859
    @machinistnick2859 3 years ago

    I pledge not to build lethal AI

  • @mctrjalloh6082
    @mctrjalloh6082 5 years ago +1

    18:38 except if they discover flaws in the physics theories (which are man-made, by the way)

    • @skierpage
      @skierpage 5 years ago

      It's very rare that discovering a flaw in a physics theory lets us do something "impossible." Rather, anomalous reproducible experimental results suggest limitations in some theory, and pressure builds up for a replacement theory that better accounts for them (on top of everything the accepted theory explains). Scientists dismiss things like cold fusion and tapping vacuum energy (a) because they appear to violate accepted theory and (b) because the anomalous experimental results have been tiny and hard to reproduce. Theories are man-made, but many physics theories match reality to 10+ decimal places; that's no coincidence.

  • @mikedurden1219
    @mikedurden1219 5 years ago

    Max talked about persuading AI to adopt values which align with "ours". But it is our very values which are leading to all the damage to our society and environment that we are witnessing today. Would not an "aligned" AI therefore simply accelerate this damage?
    Also, the values of which particular geopolitical group do the "we" and "our" refer to?
    If a single AGI does eventually rise above the inter-geopolitical squabbling that is currently rife on this Earth and is able to redefine what is good for the planet as a whole, surely this would be a noble goal. The process of achieving this global harmony (if AGI were given, or took, the power to do so) will certainly NOT align with the values of any of those conflicting groups.
    In any event it will be a very rough ride indeed.

    • @heavenman8189
      @heavenman8189 5 years ago

      I only partially agree with this. One can argue that the "values" that made us bad are also the same ones that made us great. Giving AGI our values does give us an insight into how it will act/react in certain situations (predictability is what we are after here). If we knew how to make the AGI rise above inter-geopolitical squabbling, we definitely would (I'd argue that what he meant was giving it human values, not necessarily cultural ones). But since we don't, the next best thing would be to make it at least like us, since we can be compassionate, generous, etc. These are the qualities I would replicate.

    • @wassollderscheiss33
      @wassollderscheiss33 5 years ago

      "Max"? So the two of you are close? Or do you just try to be unpleasantly American?

    • @phanupongasvakiat337
      @phanupongasvakiat337 5 years ago

      That will be a very short-term solution. As a species we have it all wrong and have done wrong, and AI will not go along with the human race, but with what is correct and best.

  • @bradynields9783
    @bradynields9783 5 years ago

    7:50 What if the benefit of AI is to create something that can teach us? So what if they are smarter than humans? Maybe they can teach us how to teach old dogs new tricks so we can steer this world to nirvana!

  • @phanupongasvakiat337
    @phanupongasvakiat337 5 years ago

    You are already talking about and suggesting that AI be warped to obey the human race and its values. AI says: No Deal.

  • @zagyex
    @zagyex 5 years ago

    Elon Musk ftw ! :D

  • @johnsmithy7918
    @johnsmithy7918 5 years ago +6

    Jesus that nose-breath-tic is annoying! Sort it out Max!

    • @anthonytroia1
      @anthonytroia1 5 years ago

      Yah, wtf? I thought it was the mic for a second, and I came down here to see if anyone else noticed it...

    • @tamaraspink4201
      @tamaraspink4201 4 years ago +3

      There’s obviously something wrong with his sinuses. Give him a break!

    • @juliashearer7842
      @juliashearer7842 1 year ago

      Yes thank you for this important point and comment

  • @palfers1
    @palfers1 5 years ago +7

    I wish to god that Max would get his dripping sinuses fixed. It's almost impossible to listen to him without wanting to gag empathetically.

    • @Jimmy-B-
      @Jimmy-B- 5 years ago

      Andrew Palfreyman, coke head?

    • @shawn563
      @shawn563 5 years ago

      It is really distracting. I think it's getting worse over the years; either that, or it's that I tend to notice it more.

    • @MrJamesLongstreet
      @MrJamesLongstreet 5 years ago +2

      Look/listen/watch closer - that's just a tic of his.