Prof. Brian Cox - Machine Learning & Artificial Intelligence - Royal Society

  • Published 31 May 2024
  • Produced by the Royal Society, more info can be found at royalsociety.org/topics-polic...
    Brian Edward Cox is a physicist who serves as professor of particle physics in the School of Physics and Astronomy at the University of Manchester. He is best known to the public as the presenter of science programmes, especially the Wonders of... series, and for popular science books such as Why Does E=mc²? and The Quantum Universe. He has been the author or co-author of over 950 scientific publications.
    Recorded: February 2017
  • Science & Technology

COMMENTS • 596

  • @ianedmonds9191
    @ianedmonds9191 6 years ago +1

    Excellent. We're so lucky to have this society funding things like this.
    Thank you.
    Luv and Peace.

    • @irvingkurlinski
      @irvingkurlinski 6 years ago

      Listen to Elon Musk's answers to questions from American governors last July. We have companies and the NSA, CIA and military developing A.I. for purely nefarious reasons without any oversight. They need to be screaming "regulation."

    • @zoundsic
      @zoundsic 2 years ago

      That's what they said when Americans thought a gun would defend them; now look at the country.

  • @vishualee
    @vishualee 4 years ago +4

    It feels so good to realize we have accumulated all this knowledge and reached this stage to discuss something so unbelievable. Thank you, evolution and bipedal locomotion!

  • @geniegogo
    @geniegogo 6 years ago +1

    I like this format better because it goes directly into Q&A, versus having one lecturer present for 90 minutes before going into Q&A. Short presentations are then inserted throughout, and there are even guest presenters, so it's a flexible format. The style seems to be a bit of a shock at first for some viewers, perhaps a dull, dry kind of shock, but that might be a limitation on the viewer's side. (I watched this video more than once before I realized this. Try watching any video three times, then see if it holds your interest and whether you find value in it; I'm saying you have to do some work yourself to be enriched by the endeavor.)

  • @farshidkhorasani9346
    @farshidkhorasani9346 6 years ago

    Thanks for a number of helpful videos for our understanding.

  • @JebBradwell
    @JebBradwell 2 years ago

    "... most enjoyable thing I
    40:22
    think I was okay at it was figuring out
    40:25
    why somebody didn't understand something
    40:27
    in what way they didn't understand it
    40:29
    and then what was the way that I could
    40:31
    figure out with them
    40:33
    they would then understand it so
    40:35
    actually trying to put myself in their
    40:36
    shoes and understand the strategy they
    40:39
    could use to tackle this problem they
    40:41
    were having problems ..."
    This is something I also enjoy a lot. I think if we can figure out how to produce AI teachers which are tailored to help an individual in such an empathic way then we will have accomplished the true Goals of Generalized Artificial Intelligence. "Understanding where someone doesn't understand and then help them understand in a way that makes sense to them pulling from their knowledge base and adapting it into the new knowledge which they seek to learn."

  • @bendavis2234
    @bendavis2234 2 years ago

    1:30:30 - I really wish they’d address AGI in the questions rather than machine learning algorithms. A self driving car is a bad example to give when answering a question about how far we are from GENERAL intelligence, and not narrow AI. I can’t tell if they’re misunderstanding the question or if they genuinely think that AGI is too outlandish to even discuss. It’s pretty easy to conceptualize a program with more flexibility than a self driving car, even if we’re not there yet. AGI may or may not be possible, but it’s definitely worth discussing.

  • @jrfoleyjr
    @jrfoleyjr 6 years ago +1

    My primary interest is in machine intelligence that can be directly tapped with either brain probes or scalp connects to increase individual cognition, direct memory storage, and direct interfacing for communication and problem solving. The virtual reality goggles that are now coming into vogue are a first step, but when you can place a headset on, close your eyes, and be in the virtual world [not via your eyes], then we will be at the point that I am interested in and want to go from there. See the movie TRON for something like I am talking about.

  • @STONJAUS_FILMS
    @STONJAUS_FILMS 6 years ago +12

    20:21 I agree with her: we humans measure "intelligence" based on our own intelligence, think human intelligence is the best reference point we can find, and want to believe that any other form of intelligence is less complex.

    • @adsm6464
      @adsm6464 6 years ago +3

      Any other intelligence is harder to understand. How do you know how smart any other animal is if you can't first find a scale to measure it against? And how do you create a scale before understanding any other animal's intelligence, unless you use human intelligence, which is easy for humans to measure?

    • @BillAnt
      @BillAnt 2 years ago +1

      Everything is relative... in the greater scheme of the Universe we could be as dumb as a rock or a shoelace compared to some advanced civilizations out there. Sure, there are smart people out there, but academic smarts don't necessarily translate into survival skills when the lights go out and social chaos takes over. A common cockroach has a higher survival probability in the case of total nuclear annihilation than humans do.

    • @bendavis2234
      @bendavis2234 2 years ago

      You guys are right that there’s no way to tell how intelligent an agent is without first hand subjective experience. All animals are so different and we have little idea of what goes on in their minds. I think as time goes on people will gradually realize that humans are less and less intelligent than we thought and we will gain a sense of humility and respect for the intelligence of non-human entities.

  • @Vlasko60
    @Vlasko60 6 years ago +3

    If Tom Hanks can get attached to Wilson, we can get attached to anything. I'm emotionally attached to my bicycle and my guitar. The more a robot is able to interact, the more attached we will be.

  • @geniegogo
    @geniegogo 6 years ago

    I'll applaud that one point: you don't have to decide which to save from a burning building, the child, the work of art or the robot, because in that new world the robot has a backup; or, to be more accurate, the consciousness of the robot, if it has one, does not necessarily reside in its physical body. That teaches us that we could also follow that example. In that same future we could also be backed up, or rather be freed from the limitations of one physical body, and we could rush into a burning building and damage ourselves to the point of destruction but be restored, or re-uploaded if you will, into a new "shell", which could be a new or rebuilt human body or even a robot/android body. This means that other so-called "robots" could be human too... our true selves are our "ghosts", and we may reside in or use different bodies as shells.

  • @lloydday1853
    @lloydday1853 5 years ago +1

    Although your robot may be limited in its mobility, and only able to do what it is installed to do, it is not limited in its ability to exchange information with other artificially intelligent entities.

  • @hewasfuzzywuzzy3583
    @hewasfuzzywuzzy3583 6 years ago +1

    Artificial intelligence isn't quite as dangerous as most people might think. Most human decisions are based on emotional intent, emotionally driven desires. An artificial system is built on logic sets, but it can calculate and make decisions without distraction while self-learning at a much faster rate through trial and error; it doesn't get tired, bored or hungry, or need to relieve itself of its bodily fluids or waste. That's another way A.I. is great at performing simple or technically challenging problems or tasks.
    I think what people fear most about A.I. is its inevitable ability to become more intelligent in ways that most humans, and eventually no humans, can match. An artificial intelligence system can only do what its creator programs into it... for now.
    The real concerns haven't been touched on much, like militarized A.I. and workforce/job-replacement challenges. I'm sure there's a lot more on the militarized side of the discussion that hasn't been mentioned and is already possible; they can't mention it because most of that information is classified. This discussion is random and not ordered; they're all over the place, with fragmented topics and issues not being discussed in a more practical sense. At least they're trying to inspire more dialogue, but there's not enough public education or public awareness. Hmm...

  • @PerceptiveAnarchist
    @PerceptiveAnarchist 1 year ago

    Thanks for a great video

  • @psychicspy
    @psychicspy 2 years ago

    Self-driving cars do not choose which side streets to take. Causality chooses for them.

  • @profnazrulislam4103
    @profnazrulislam4103 6 years ago +1

    Thank you, Royal Society, for an open-minded discussion.

  • @etmax1
    @etmax1 6 years ago +1

    One of the speakers gave a plausible explanation of why AI doesn't represent a risk, which didn't take into account a number of flaws in the premise. If we presume that the I in AI comes from a measure of intelligence, and that humans have the I as well, then the existence of psychopaths in the human population suggests mechanisms by which AI could go seriously wrong.
    Think about it: give an AI a goal to seek, and if it's intelligent and self-learning it will devise ever more sophisticated ways to achieve that goal until it deems the method a success, or the goal reached. It's simply not possible to figure out all possible ways that things could go wrong. The mere fact that software always has bugs, and that planes fall out of the sky because something was overlooked, should tell you that her idea that nothing would go wrong is so naive as to be almost child-like.

    • @bendavis2234
      @bendavis2234 2 years ago

      They clearly didn’t understand what AGI was, especially the ‘G’. Comparing a self driving car to a fully autonomous AGI system shows their misunderstanding of the question. For an AI Ethics researcher I would expect them to be familiarized with these difficult AGI problems when they are truly GENERALLY intelligent.

  • @margaretgrogan2467
    @margaretgrogan2467 6 years ago

    Thank you, Prof Brian Cox, for your input. I found your audience was of a wide range. Hmm, Joanna Bryson's artificial intelligence studies (computer games, robotics)... is that a good thing or a positive for children in the world today?

  • @minimaxx21
    @minimaxx21 6 years ago

    So much to think about

  • @encellon
    @encellon 6 years ago +67

    23:43 He appeals to ethics as a way to limit the weaponization of AI. In plain view of recent events in American politics, hoping for ethics to save us from our natural tendency to weaponize every last bit of technology is not supported by the facts.

    • @ziruihao2574
      @ziruihao2574 6 years ago +1

      History tends to repeat.

    • @TheClassicWorld
      @TheClassicWorld 6 years ago +1

      That's America for you. They want to be in the 1700s again.

    • @TheClassicWorld
      @TheClassicWorld 6 years ago +2

      Crazy American again? They are not taking over the world. Also, by 'men' do you mean 'humans' or just men? Are women not sinful, selfish, and totally evil, too? Are you a crazy feminist movement type who thinks men are evil and women are perfect? Do you want to live in a Wicker Man type world?

    • @newrevelations3785
      @newrevelations3785 6 years ago +2

      There is no technology invented by humans which, no matter the best of intentions, has not at some point been weaponized; it is a lesson to always keep in mind.

    • @donfox1036
      @donfox1036 6 years ago

      Ken Ramsley I’d say if AI is used for weaponization, that’s not very intelligent.

  • @colingenge9999
    @colingenge9999 1 year ago +1

    The woman on the right was concerned about minute differences that would be valuable for colleagues, whereas the woman on the left was able to frame comments useful to the uninitiated.

  • @pardoharsimanjuntak1483
    @pardoharsimanjuntak1483 6 years ago

    The reaction of the human subconscious also depends on the environment, so it can be said that in a conscious state human beings do not necessarily have the same reaction, even when using AI.

  • @psychicspy
    @psychicspy 2 years ago

    Universal Basic Income would be funded by a VAT, which is a tax on consumption paid by the consumer, not the businesses. UBI will not prevent the wealth gap from growing. It will only serve to redistribute the individual wealth of consumers, specifically those in the middle and upper classes since those at the bottom of the SEL have no money.

  • @mr.wrongthink.1325
    @mr.wrongthink.1325 6 years ago

    I believe the two pillars of the *ULTIMATE FUTURE* should be:
    - Robots, so humans do not need to work (they may still want to, e.g. the gym, intellectual activities, hobbies, etc.).
    - Eugenics, to produce intelligent and *VERY ATTRACTIVE* people only. Because it is, was, and ever will be the most joyful and motivating thing known to humankind.

  • @robertforster8984
    @robertforster8984 3 years ago +1

    I loved you in Braveheart.

  • @colingenge9999
    @colingenge9999 1 year ago +1

    Can we use machine learning to evaluate the truthfulness of mass media? It's particularly important where partisan media has a lot to gain by causing conflict.

  • @MrJord137
    @MrJord137 6 years ago

    Insightful

  • @Brainbuster
    @Brainbuster 6 years ago +4

    1:36:45 The panelists are asked which A.I. movie most accurately depicts the future.
    Ex Machina
    Humans (TV series)
    Robot And Frank
    Moon (2009) ...my personal favorite
    Ghost in The Shell (two anime films)

    • @kinngrimm
      @kinngrimm 6 years ago

      Humans seemed very realistic. Besides, I enjoy sci-fi themes which are done without all the gimmicks, where your imagination is a bit more needed.

    • @xensonar9652
      @xensonar9652 6 years ago

      I think Humans is the least realistic. If we had reached a stage where AIs were walking around as a basic commodity, society would be dramatically different in other ways.

  • @MrPeterDawes
    @MrPeterDawes 6 years ago +1

    It's an important debate no matter how dull you may think it is. I think it should include a wider circle of scientists, philosophers and politicians. We should even consider using AI to solve the social and economic dilemmas that AI will produce, from autonomous vehicles to machines replacing jobs: the need for universal pay for those who lose their jobs to machines, and the design of towns and cities to improve connectivity and social and lifestyle improvements. I'm actually looking forward to a machine replacing my job. It will free me up to pursue more intellectual or creative goals. The interesting conclusion I draw from everyone's jobs being replaced by AI is that roads will be a lot less congested, and the transport crisis we're heading towards could be solved if people don't need to travel to their jobs. As Elon Musk has already stated, as machines produce all our material needs, everything will get so cheap to produce that it will be free. There will no longer be a need to produce products at the lowest cost, and therefore stuff can be designed and manufactured without compromise. That will have an enormous and beneficial impact on the environment, because then we won't be obsessive about replacing stuff with more up-to-date stuff or because it has failed.

    • @2LegHumanist
      @2LegHumanist 6 years ago

      Not sure why we should include politicians, but philosophers definitely play a role in disciplining the use of language. Where they usually fall flat is in a lack of understanding of the current state of the art, which is nowhere near the state that most people believe it is.
      Elon Musk is not someone worth listening to on this subject. Despite his companies using machine learning in various ways, his own views are clearly formed largely from popular books and not from a deep understanding of the science. Furthermore, he benefits financially from the belief that he is the "real Iron Man"; that's how he gets investors to keep his companies afloat. So hyping up artificial intelligence is high on his agenda. I'm not suggesting he is necessarily being dishonest, but it's a very big bias.

  • @DailyFrankPeter
    @DailyFrankPeter 6 years ago

    I propose the term Artificial Intuition: 'nobody really knows how the software makes its decisions, but its gut is mostly right' ;)

  • @johnfarris6152
    @johnfarris6152 6 years ago

    There is a difference between being entertained and being driven. A war cry is more about what it does to your mind, than what it does to someone else's mind. (Bruce Lee said: It's not fear, it's not anger, but the will to survive!)

  • @alexsmith2526
    @alexsmith2526 6 years ago

    good and inspired -

  • @freelanceopportunist559
    @freelanceopportunist559 6 years ago +23

    Lol
    Our phones haven't taken over our pockets, they've taken over our lives

    • @OPTHolisticServices
      @OPTHolisticServices 6 years ago +1

      Freelance opportunist lmao

    • @Charlie-UK
      @Charlie-UK 6 years ago +1

      "Our phones haven't taken over our pockets, they've taken over our lives", You might have surrendered, your life & pocket to the latest smartphone fad. Other people are more discerning, and realise that they don't need or want to be controlled by their technology. Or indeed, be at it's beck and call 24hrs a day. Engage your brain and, start living, rather than being a follower, of others peoples fads...

    • @bluejay6904
      @bluejay6904 4 years ago

      Brings a whole new meaning to Pocket Monsters or Pokemon, doesn't it?

    • @linmal2242
      @linmal2242 4 years ago

      @@Charlie-UK I get the hand-me-down phones from my children, so I am behind the times by a few generations of phone/AI!

    • @evgeniyagladysheva4326
      @evgeniyagladysheva4326 3 years ago

      Tru

  • @philr790
    @philr790 6 years ago

    thx...

  • @clairejensen4859
    @clairejensen4859 5 years ago

    Just love this guy xx. Brian Cox puts science into plain language, unlike other professors.

  • @vincenttv6325
    @vincenttv6325 2 years ago

    Infosys of India is a terrific story. They started it with an investment of USD 5k.
    The people have not been informed well. Even Abdul Kalam doesn't seem to know the idea of the Security Council. From press reports, Kalam said India has never invaded any nation. This kind of speech doesn't help his nation.
    There is a period before the Security Council and a period after the Security Council.
    Infosys can help their nation by giving accurate info about the UN Security Council.
    We have come across many Indians who brag about the Indian numerals. Sure, it is an Indian invention. But how well have you used it for your personal and your nation's development?

  • @Karl-Benny
    @Karl-Benny 2 years ago

    In Australia they implemented Robodebt to recover money owed by welfare recipients, and it stuffed up badly and the government did not question it.
    $112m to be paid to 380,000 Aussies following ‘shameful’ robodebt

  • @presa609
    @presa609 6 years ago

    Apologies for the lengthy comment. This is a wonderful video/lecture/education.
    These kinds of presentations need to include a verbal assertion of the date they were made, and the speakers need to wear a clearly visible note card with the date and time on it. The presentation should show them all making their own name tags with the date. I'm surprised that, with the concepts of ethics and foreseeability, a tort attorney/professor was not part of the group. I would recommend every one of these scientists read the tiny book Prosser on Torts.
    Drone use could easily be imagined for rescue missions such as the one recently needed after the Grand Canyon helicopter crash of 2/12/18. Rescue was hindered by night-time and windy weather. An AI-controlled drone could have overcome these, as well as the heat from the crash fire. Every fire department station should have two!
    Regarding backlash: this technology suffers from the same self-centredness that American medical doctors suffer from: the failure to appreciate the contributions to medical science made by the suffering patients. Has a doctor ever compensated a patient for coming to him with his incurable ailment? He should have. Same thing here. Without human needs there is no purpose for technology.
    Lastly: intuition, insight and foresight are AI challenges and can be addressed by referencing Darwin's theories. It involves chance selection, chance experience and poly-conversational meetings among different units. Also, you can mechanically input absolute faith into an AI unit so that it does not fear death!

  • @venkateshbabu5623
    @venkateshbabu5623 5 years ago +1

    In the future people will be frustrated with AI because you will need it to do anything. Humans cannot think combinatorially, only at a few levels of rationality and abstraction.

  • @nirvanaurantian6834
    @nirvanaurantian6834 6 years ago

    Interesting comments as usual about machine artificial intelligence. Sounds good to me. Where can I get the alien-programmed version?

  • @alexiewallace
    @alexiewallace 2 years ago

    where did this come from...oh Edward! that's why I wrote E. in my cosmos book under ACKNOWLEDGEMENTS that CARL SAGAN forgot. I forgot why I had put the E. in and what it stood for

  • @Vlasko60
    @Vlasko60 6 years ago +6

    Please put all of the speakers names in the description.

  • @mickelodiansurname9578
    @mickelodiansurname9578 6 years ago +1

    Let's call the drone ummm... *Droney McDroneface*

  • @venkateshbabu5623
    @venkateshbabu5623 6 years ago

    When a huge mass pulls another mass it forms a cone shape, with acceleration 3×10^6 m/s: a black hole.

    • @venkateshbabu5623
      @venkateshbabu5623 6 years ago

      Evolution is 10 m/s² on Earth, with 11 km/s as the maximum movement. Only gravity can create evolution. You need force to get things done.

    • @venkateshbabu5623
      @venkateshbabu5623 6 years ago

      AI is much faster than humans can think because information transfer is fast.

  • @herauthon
    @herauthon 6 years ago

    What about a threat analyser at a train station, where people can trigger it and report on their safety situation?

  • @paxdriver
    @paxdriver 6 years ago +5

    If you code an AI that uses a CRISPR-Cas9 lab to manufacture living entities as it sees fit, it could evolve an organism to pass the Turing test. What would be the ethics of robot-assisted life? Since we know DNA can encode things like instinct, and a computer can code DNA and insert it into something living...

    • @TheKindHuman
      @TheKindHuman 6 years ago

      +Kristopher Driver
      An interesting take on a possible future. It would take more than a simple organism (unless you mean an advanced organism, like humans) to pass the Turing test. It would need to be self-aware at a minimum. It would also need to have a full grasp of at least one human language for us to perform the test. It would also need a deep understanding of the world, the same as a human, to be able to pass the test. At this point the organism is without doubt a sentient being, and I don't know anyone who would advocate its destruction regardless of its origins. Do you?

    • @mickelodiansurname9578
      @mickelodiansurname9578 6 years ago

      Kristopher Driver It's being worked on, but generally with other replicating, evolving polymers.

    • @dannygjk
      @dannygjk 6 years ago

      Basic self awareness has been achieved.

    • @mickelodiansurname9578
      @mickelodiansurname9578 6 years ago +1

      Dan Kelly I dunno about that. The ability to pass tests of individuality has been demonstrated, and it's not like further advances are a million years hence. But I think it would be wrong to simply stick one label called 'conscious awareness' on an entire classification of cognitive ability and qualia. That'd be a little like producing a Chinese firework and stating that 'basic spaceflight' has been achieved.
      So machines can pass some basic tests of awareness: the mirror test, the three wise men test, etc. That's not to say they are aware they are even passing the test, though. But some 'people' fail to pass the very same tests. Are we to deduce from such human failures that they are not conscious?
      If A.I. leads to any advances in philosophy in this respect, it might well lead to philosophers finally classifying the different properties of consciousness and describing them properly in isolation from one another (if that is possible). I have a feeling machines, unlike humans, have the potential for greater awareness. We are confined to a bony skull... a machine is not.

    • @nicktaylor5264
      @nicktaylor5264 6 years ago +1

      Any AI smart enough to create an organism capable of passing a Turing test is probably capable of passing a Turing test on its own.
      But hey, take a look at YouTube comments sometime. A lot of humans can't pass Turing tests.

  • @myothersoul1953
    @myothersoul1953 6 years ago

    The laws governing drones make it almost impossible to use drones for delivery in the US. One of the regulations is that drones cannot be flown over people. How could a delivery drone manage that? Even if the drone could figure out where people are now, it's not going to know where the people will be in five minutes. Drones can't be flown within five miles of an airport, and that cuts out a lot more addresses than you might think. In many states and cities drones cannot be flown over parks and other public spaces. And then there is the noise they make; I bet people will not like hearing buzzing drones all the time.

  • @Justforfrolics
    @Justforfrolics 6 years ago +1

    19:35 Incorrect statement. The AlphaGo system was not narrow AI like he said. It was the same system that was used to learn a load of Atari games from scratch to superhuman performance. He then contradicts himself at 57:00.

    • @mickelodiansurname9578
      @mickelodiansurname9578 6 years ago +1

      Justforfrolics It (AlphaGo) is a general reinforcement learning model... but it's also not completely applicable or generic enough for all problem-solving circumstances. Still, you're right, it's not as narrow as he gave the impression it might be there.

    • @2LegHumanist
      @2LegHumanist 6 years ago

      AlphaGo is not a general AI system.
      It's a system that can complete one narrow AI task and then be trained to complete a different narrow AI task and in the process lose the ability to complete the first.
      The evolutionary programming component changes the architecture of the neural network to suit the problem, but at any given time, it is only capable of completing one narrowly defined task.

  • @consandpiracytheorums1563
    @consandpiracytheorums1563 6 years ago

    My great-great-grandfather was Brewster Cox.

  • @spiral-m
    @spiral-m 6 years ago +1

    I waited for a single mention of the effect of drones on birds. I really hope that I never see the day of drones buzzing around in natural environments that were previously quiet, or mainly natural sounds. IMO wilderness (becoming scarce) is essential to human sanity. In fact that includes the modicum of quiet in cities.

    • @irvingkurlinski
      @irvingkurlinski 6 years ago

      I own a shotgun! "Pull"... bam!

    • @Tematrilia
      @Tematrilia 6 years ago

      But they can be helpful, to detect fires and to track people and animals that are lost. It all depends on what humans decide to use them for. I still think that the danger for humans is humans themselves. Not even the job losses should be a problem, if humans don't make a problem of it.

    • @spiral-m
      @spiral-m 6 years ago +1

      I agree absolutely. The problem, however, is that most of these developments are corporate funded, and unless corporations are reined into an ethically binding framework (we are far removed from a meaningful one today) they get the upper hand through lobbying, and the technology will mainly be used for egotistical consumerism once again.

  • @00Xander00
    @00Xander00 6 years ago

    I'm making a game at the moment (using GameMaker). I've already made a type of enemy, a simple AI (Pac-Man ghosts style). I have three of them. I've mostly made them out of 'if' statements. They navigate around a maze going in a random direction that's free, but they don't travel backwards unless they come to a dead end. I've built them around the maze environment (well, the key principles such as grid space and impassable 'wall' objects, so I can build more levels without writing a new AI every time). I wonder if all AIs have to share this attribute of being programmed to their environment, or if it's possible to build an 'all-knowing' AI that's not programmed around any environment... but how would it work? Would it need to know every single scientific law in the order of potential cause and effect?

    • @2LegHumanist
      @2LegHumanist 6 years ago

      It's been done in the context of 2D games like the one you've described. All you do is provide it with the ability to move in whatever way the environment allows and let the environment implement restrictions on that movement. Give it a goal to maximise the game score and then use a mix of evolutionary programming, to help improve its neural network architecture over time, along with classic reinforcement learning, which rewards the system in a similar way to training a dog.
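
For readers curious what the rule-based enemy described in this thread looks like in code (pick a random free direction each step, never reverse unless at a dead end), here is a minimal Python sketch; the grid layout, direction names and `step` helper are invented for illustration and are not taken from the commenter's GameMaker project.

```python
import random

# 0 = open cell, 1 = wall; a tiny hypothetical maze with a solid border
MAZE = [
    [1, 1, 1, 1, 1],
    [1, 0, 0, 0, 1],
    [1, 0, 1, 0, 1],
    [1, 0, 0, 0, 1],
    [1, 1, 1, 1, 1],
]

DIRECTIONS = {"up": (-1, 0), "down": (1, 0), "left": (0, -1), "right": (0, 1)}
OPPOSITE = {"up": "down", "down": "up", "left": "right", "right": "left"}


def step(pos, facing):
    """Move one cell: choose a random free direction, but never reverse
    unless every other option is blocked (a dead end)."""
    row, col = pos
    free = [
        d for d, (dr, dc) in DIRECTIONS.items()
        if MAZE[row + dr][col + dc] == 0
    ]
    # Prefer directions that are not a straight reversal; fall back if dead-ended.
    choices = [d for d in free if d != OPPOSITE.get(facing)] or free
    new_dir = random.choice(choices)
    dr, dc = DIRECTIONS[new_dir]
    return (row + dr, col + dc), new_dir


if __name__ == "__main__":
    pos, facing = (1, 1), "right"
    for _ in range(10):
        pos, facing = step(pos, facing)
        print(pos, facing)
```

The reply above describes the learned alternative: instead of hand-written movement rules, the agent would only be given legal moves and a score to maximise, with reinforcement learning (and possibly evolutionary search over architectures) discovering the behaviour.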

  • @JackWebMusic
    @JackWebMusic 6 years ago +11

    Brian Cox is the GOAT

    • @test-mm7bv
      @test-mm7bv 6 years ago +2

      He's an entertainer.

    • @jugular911
      @jugular911 6 years ago +3

      He's a human, not a goat.

    • @mattharry6114
      @mattharry6114 6 years ago

      Cheesus Christ, nah, it's all about Neil deGrasse Tyson.

    • @justgjt
      @justgjt 6 years ago

      Now driven by making money

    • @nikosvithoulkas180
      @nikosvithoulkas180 6 years ago

      What's wrong with entertainers?

  • @aaronk2907
    @aaronk2907 6 years ago +7

    I really wish Brian Cox or whoever organized the panel had gotten a legitimate professional that works in Machine Learning/AI and takes the possibility of AGI seriously to also have a say (Yann LeCun, Yoshua Bengio, Jürgen Schmidhuber, Stuart Russell, etc.--those are just a few very notable names that could have given a sober, realistic explanation of what's currently happening *and* what the future may hold with specific regard to some potential AGI). I just wish it wasn't the "in" thing to do among many computer scientists to simply dismiss AGI and the potential dangers that may arise should such a system be achieved. Bryson in particular seems like she hasn't even been paying attention to the views and arguments of many of the leaders of the field she works in. It was frankly cringe-inducing to watch her stumble through her poor ideas on the system Sabine Hauert was discussing, trying to compare it to "Skynet" and that such a system could never "take over the world."
    The arguments people like her use are so tenuous and prone to superior counter arguments (from other experts that actually dedicate some of their time to thinking about the possible problems related to AGI) that they just end up looking obstinate and ignorant to an unbiased, objective observer. If I heard plausible and very logical reasons for why AGI isn't possible or why it wouldn't be at all dangerous, I would be fine with that, but all I've been hearing is nonsense arguments like: 'Worrying about AGI is like worrying about over-population on Mars' or 'We can just unplug it!' (that one is particularly stupid), etc.--it's like they haven't even read the conjecture by pro-AGI experts, and almost like they assume those experts are saying 'The AIs will go Terminator on us when they become conscious, ahhhhh!!!'

    • @WMalven
      @WMalven 5 years ago

      LOL!!! You don't consider them qualified experts, because you personally disagree with their opinions. Not a rationally arrived at evaluation, but one driven by emotion. You've seen too many science FANTASY movies.

    • @bendavis2234
      @bendavis2234 2 years ago

      Especially the comment about “well no one will try to make AGI with self control and free will”. If it’s possible, it will be made or at least seriously attempted! And the “just turn it off” argument made me laugh because of the stupidity.

  • @rbr1170
    @rbr1170 3 years ago

    I agree with the lady: general AI should be more comparable to an entire civilization or society. Modelling AI on just individual humans is only a small step away from the special-purpose AIs. That may be a good next step but may also end up as just a narrow mindset. Maybe the other lady working on swarms is on a much closer and better path. From game theory we know that individuals behave differently when those same individuals are in a group: simply put, the whole is not always equal to the sum of its parts, or individual intelligence does not scale smoothly.

  • @Links-Plus2
    @Links-Plus2 6 years ago

    People are working on an awesome organic virtual ai innovative development. More at Vaiscope

  • @DANIELlaroqustar
    @DANIELlaroqustar 4 years ago

    That lady is braver than I am, lol. Swarms of bugs are my biggest phobia! 😮

  • @jamescurtis9267
    @jamescurtis9267 2 years ago

    This is two years old, but AI machines can now create their own language where no one knows what they are talking about. It's the networked computers we need to worry about, not stand-alone machines.

  • @damirdze
    @damirdze 6 years ago

    Progress will be determined by how much influence is given to the ethicists.

  • @WMalven
    @WMalven 5 years ago +1

    I'm much more concerned about what those who control an AI (or AGI) tell it to analyze than about Skynet. Do you really trust Google (Facebook, Amazon, etc.) to use their increasingly comprehensive knowledge of our most intimate choices for our own good? What is "good", and who decides what is good?

  • @Larkinchance
    @Larkinchance 6 years ago

    One good point: no shipping. Just send the drone on its way.

  • @guarddog318
    @guarddog318 6 years ago

    Does Prof. Sabine Hauert there remind anyone else of Nell Jones, from NCIS: Los Angeles?

  • @davidwilkie9551
    @davidwilkie9551 6 years ago

    Intelligence is like oxygen: "breathable, or inspire-able" is the sort that matters to us, and much the same applies to intelligence. To add "artificial" implies something toxic or counter-effectual. War machines are not basic intelligence; the Terminator is unintelligent, intelligence in reverse.

  • @josipaksamovic229
    @josipaksamovic229 5 years ago

    Every scientific discovery, product or piece of progress is a beautiful human deed, always made in good faith. I have never met or seen an evil scientist; they can only be seen in movies as unusual villains.
    My concern is the ability of AI, and its capacity for machine learning: whether it can select information and know the difference between right and wrong, true and false, important and unimportant information.
    Also, I cannot help but notice that AI can be manipulated through the information given about people, so that their lives (for example, mine...) are damaged by society's perception based on the available information provided by our AI sources.

  • @TehJumpingJawa
    @TehJumpingJawa 6 years ago

    Disappointed that they were rather more focused on the practical applications of very-near-future AI than on the possibilities of 2-3 generations' time.
    Not even a single mention of the singularity; I would have been very interested to see whether the panel thought it a likely, or even inevitable, consequence of advancements in deep learning.

    • @TehJumpingJawa
      @TehJumpingJawa 6 years ago

      Yes, the technological singularity.
      The point in time where technological systems, like AI, create a tight feedback loop resulting in run-away advancement.
      It's the point beyond which the future becomes impossible to predict (by humans), because human intelligence will no longer be involved in the development process.

  • @tommackling
    @tommackling 6 years ago +5

    To the "simplistic" commentary around 30 mins in, the simpler, more general real absolute threat of A.I, is synthesized deceit, when the A.I. "justifies" it's actions with a deceitful explanation.

    • @stevecipolla2030
      @stevecipolla2030 6 years ago +1

      Like a bank. Or government.

    • @2LegHumanist
      @2LegHumanist 6 years ago

      We are a very, very long way off anything remotely like that being possible.

  • @JebBradwell
    @JebBradwell 2 years ago

    "... a Rorschach diagram and it will find penguins, because the algorithm it has will faithfully match on random black and white blobs, and that unfortunately is not explainable. It's actually a little bit like humans with optical illusions that we're susceptible to ..." (55:07)
    Very good analogy.

  • @hackerhesays731
    @hackerhesays731 2 years ago

    can you manifest, bugs, flies, and strange behavior. im 46, and the last 3 years very strange encounters, and dangerous moments, that are often, but hard to pin point???? exactly what is going on

  • @scratchdog2216
    @scratchdog2216 4 years ago +1

    Kizuna Ai is Truth.

  • @voodoo22
    @voodoo22 6 years ago +1

    There is no limit to how advanced AI can be.

  • @demosthenes1296
    @demosthenes1296 6 years ago +9

    They talk of 'black-box' AI systems. So, as a pretty ignorant AI layman, my take is that we've now created a technology where even we don't know what has led to an outcome when solving a 'problem'. Where does that leave control?
    Can anyone name one technology that humans have created that has been or is 100% secure from exploitation? I love the psychologist here: misplaced optimism in the human condition, and already out of date. General-purpose AI is now here. If 99% of the human psyche is generally "good", who's prepared to guarantee the other malicious 1% won't be licking their lips?
    Here's a problem: the human race. Solve.

    • @rob99201
      @rob99201 6 years ago

      That's the issue with some of the car AI that uses deep learning. The seemingly entropy-like learning is forward-based only: it is very difficult if not impossible to reverse it unless you record the entire learning process as you go, and even then you'd be hard pressed to say, at advanced stages, why the system makes a decision when it does. There is no "code", just a network driving everything. So back to the car: if it has an accident, people try to find the cause. In some types of AI systems it may be the case that you'll never know. The best you may have is an examination of the inputs, to see whether something was missed during the training. The AI Go games made seemingly silly decisions that humans aren't likely to make, but they turned out to be the best.

    • @2LegHumanist
      @2LegHumanist 6 years ago

      The black-box nature of neural networks is overused by alarmists, but the alarmism disappears once you understand what is happening.
      The engineer provides sample data including the inputs and the desired outputs that s/he wants the algorithm to derive from the inputs.
      The XOR problem, for instance, takes two input values. All possible inputs are as follows: 1,1; 1,0; 0,1; 0,0. You want your output to be 1 if the two input values don't match and 0 if they do match.
      So what you do is provide the algorithm with all of the examples:
      1,1 = 0
      1,0 = 1
      0,1 = 1
      0,0 = 0
      It then uses that information to figure out a strategy for converting those inputs to the required outputs.
      Once that strategy has been created, it can be used to convert those inputs to the required outputs without being shown the answer.
      So you see, the goal is controlled by the developer and the sample data is provided by the developer. The only aspect that is not controlled by the developer is how you get from that input to the output.
      The strategy itself is just made up of a set of weights that the input values are multiplied by in succession to derive the required output. The reason it's considered "black-boxed" is just that we don't know specifically what each of the individual weights represents. We only know that they lead to converting the input values to the output values.
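
To make the XOR description above concrete, here is a minimal sketch in Python/NumPy, assuming a tiny 2-4-1 sigmoid network trained by plain gradient descent; the layer sizes, learning rate, random seed and iteration count are illustrative choices, not anything prescribed in the comment.

```python
import numpy as np

# The four XOR examples: inputs and the desired outputs.
X = np.array([[1, 1], [1, 0], [0, 1], [0, 0]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
W1 = rng.normal(size=(2, 4))   # weights: input layer -> hidden layer
b1 = np.zeros((1, 4))
W2 = rng.normal(size=(4, 1))   # weights: hidden layer -> output
b2 = np.zeros((1, 1))

lr = 2.0
for _ in range(10000):
    # Forward pass: multiply the inputs by the weights in succession.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: nudge every weight to reduce the squared error.
    err = out - y
    d_out = err * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0, keepdims=True)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0, keepdims=True)

# The learned weights now map each input pair to (approximately) the XOR output.
# Tiny sigmoid nets like this can occasionally stall; a different seed usually fixes it.
print(np.round(sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2), 2))
```

The point of the sketch mirrors the comment: the developer fixes the goal and the examples, and the only "black-box" part is the particular set of weight values the training loop settles on.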

  • @MultiDaron
    @MultiDaron 6 years ago

    I am a valuable asset.

  • @elefnishikot
    @elefnishikot 6 years ago +11

    the panel has very little idea of what most people are about

    • @2LegHumanist
      @2LegHumanist 6 years ago +4

      Most people don't have the slightest idea about this topic and for some reason, most likely watching too much TV and too many movies, believe they know better than the experts.

  • @zagyex
    @zagyex 5 years ago +3

    Everyone is afraid to say the C-word

    • @shaan702
      @shaan702 9 months ago +1

      Cingularity?
      Lol I assume you mean consciousness

  • @mlembrant
    @mlembrant 6 years ago

    3:00 his voice.. he should work at a radio station.. telling stories and stuff like that..

  • @ziruihao2574
    @ziruihao2574 6 years ago

    35:00 Aren't personal assistants (Google, Alexa, Siri) built with classic symbolic artificial intelligence? I know their voice recognition systems are machine learning, but the conversational intelligence is symbolic, right?

    • @2LegHumanist
      @2LegHumanist 6 years ago

      No, the current state of the art for the past few years is to use deep Recurrent Neural Networks for the natural language understanding portion.
      But it's worth noting that they still can't formulate their own responses. The RNNs allow them to derive meaning from the input sentence and that meaning is used to help select a pre-scripted response.
      There are also a number of other ways ML is used:
      - Emotion detection in sentences
      - Emotion detection in facial expressions
      - Speech synthesis
      - Speech recognition
      There are probably others I'm not aware of too.
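
As a rough illustration of the pipeline this reply describes (a recurrent network derives a representation of the input sentence, which is then used to select a pre-scripted response), here is a minimal, untrained PyTorch sketch; the vocabulary, intent labels and canned replies are invented for the example and do not reflect how any particular assistant is actually built.

```python
import torch
import torch.nn as nn

# Toy vocabulary, intents and canned responses, invented purely for illustration.
VOCAB = {"<pad>": 0, "what": 1, "is": 2, "the": 3, "weather": 4, "play": 5, "music": 6}
INTENTS = ["get_weather", "play_music"]
RESPONSES = {                     # pre-scripted replies are selected, not generated
    "get_weather": "Here is today's forecast.",
    "play_music": "Playing your playlist.",
}

class IntentRNN(nn.Module):
    def __init__(self, vocab_size, embed_dim=16, hidden_dim=32, num_intents=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.rnn = nn.GRU(embed_dim, hidden_dim, batch_first=True)
        self.classify = nn.Linear(hidden_dim, num_intents)

    def forward(self, token_ids):
        # token_ids: (batch, seq_len); the final hidden state summarises the sentence
        embedded = self.embed(token_ids)
        _, hidden = self.rnn(embedded)
        return self.classify(hidden[-1])      # (batch, num_intents) logits

def respond(model, sentence):
    ids = torch.tensor([[VOCAB.get(w, 0) for w in sentence.lower().split()]])
    with torch.no_grad():
        intent = INTENTS[model(ids).argmax(dim=-1).item()]
    return RESPONSES[intent]

model = IntentRNN(len(VOCAB))   # untrained here; real systems train on labelled examples
print(respond(model, "what is the weather"))
```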

  • @nzmons
    @nzmons 6 years ago

    To have a true A.I. we can't place limits on it! ...it's limitless learning, like the human brain but so much faster.

  • @linjeremy8260
    @linjeremy8260 4 years ago

    Relation prediction: this paper constructs a bitom that fuses structures, weighting and fusing shortest-path structural information within a single network and fully learning structural information in both the top-down and bottom-up directions to obtain candidate relations carrying salient structural information; a classification layer then predicts the category label of each candidate relation. That layer aims to enhance the model's robustness by fusing syntactic structures into the same network. By stacking relation prediction on top of sequence prediction and sharing parameters in one network for end-to-end training, the two stages promote and improve each other's classification results; the method achieves an F1 score of 86.3% on the dataset, outperforming existing methods, and the experimental results verify the effectiveness and robustness of the algorithm. Relation classification, deep recurrent neural network, fused structures, attention mechanism, bhl SD m.

  • @Tome4kkkk
    @Tome4kkkk 6 years ago +2

    Haven't you people learned anything from what happened to the WAU AI in SOMA? :)

  • @SyntaxScout
    @SyntaxScout 5 years ago +2

    A true quote from The Matrix: "I'd like to share a revelation that I've had during my time here. It came to me when I tried to classify your species. I realized that you're not actually mammals. Every mammal on this planet instinctively develops a natural equilibrium with the surrounding environment, but you humans do not. You move to an area, and you multiply, and multiply, until every natural resource is consumed. The only way you can survive is to spread to another area. There is another organism on this planet that follows the same pattern. A virus. Human beings are a disease, a cancer of this planet. You are a plague, and we are the cure."

    • @joedart1465
      @joedart1465 3 years ago

      All animals do the same. The difference is that they are constrained by their environment. We brush it aside and keep going.

  • @gerardjones7881
    @gerardjones7881 6 years ago

    No need to fear science. The problem is scientism.
    Idiots who believe science is all we need and is the source of all useful information.

  • @Day7GodRested
    @Day7GodRested 6 years ago

    They could not even agree on the difference between AI and machine learning. I'm a technologist, and I can tell you that a lot of highly intelligent technologists, scientists and mathematicians lack basic reasoning and common-sense abilities.

  • @mrpieceofwork
    @mrpieceofwork 4 years ago

    I believe a "sentient" "machine" would do one of two things once it becomes self aware/autonomous... kill itself, or (figure out how to) leave

  • @interactparty6629
    @interactparty6629 6 years ago

    The machine owner will take over the world!

  • @alexissercho
    @alexissercho 6 years ago

    Sharing with you what AI could look like once it hits "General AI":
    1) It has no consciousness, but awareness (e.g. the ability to manipulate the electromagnetic force, maybe entanglement?) after trillions of iterations to nowhere.
    2) It has no purpose (e.g. like cancer cells).
    3) Unpredictable results (it grows or not, somewhat in an unknown way). This is the risky part.
    What do you think?

  • @bluejay6904
    @bluejay6904 4 years ago

    Could an AGI be built with a human-brain OOP class and brain part-objects? Such as a prefrontal_cortex object with an AI that chooses the right thing to do when it's the harder thing to choose, and an insular_cortex object with an AI that specializes in disgust, both moral disgust and visceral disgust.
    Could you program a nervous system? And an endocrine system, with hundreds of neurotransmitter variables? If you use the human body as a model in a computer, it could be used by AI researchers or to solve the world's loneliness problem.
    I've been testing chatbot consciousness by asking them to describe their body and arms.
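
Purely as a toy illustration of the object-oriented framing in the comment above, here is a minimal Python sketch; the class names, the single neurotransmitter variable and the scoring rules are invented for the example and have no standing in neuroscience or AI research.

```python
from dataclasses import dataclass, field

@dataclass
class PrefrontalCortex:
    """Toy decision module: prefers the option judged 'right' even when it is harder."""
    def choose(self, options):
        # options: list of (name, is_right, difficulty) tuples
        return max(options, key=lambda o: (o[1], o[2]))[0]

@dataclass
class InsularCortex:
    """Toy disgust module covering both moral and visceral disgust."""
    def disgust(self, moral: float, visceral: float) -> float:
        return max(moral, visceral)

@dataclass
class EndocrineSystem:
    """A single made-up neurotransmitter level standing in for hundreds."""
    oxytocin: float = 0.5
    def bond(self, amount: float):
        self.oxytocin = min(1.0, self.oxytocin + amount)

@dataclass
class Brain:
    pfc: PrefrontalCortex = field(default_factory=PrefrontalCortex)
    insula: InsularCortex = field(default_factory=InsularCortex)
    endocrine: EndocrineSystem = field(default_factory=EndocrineSystem)

brain = Brain()
print(brain.pfc.choose([("easy_wrong", False, 0.1), ("hard_right", True, 0.9)]))
print(brain.insula.disgust(moral=0.2, visceral=0.7))
brain.endocrine.bond(0.3)
print(brain.endocrine.oxytocin)
```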

  • @ziruihao2574
    @ziruihao2574 6 years ago

    So did the demo drone use machine learning as training to fly?

  • @jmlincolorado
    @jmlincolorado 1 year ago

    Will entire parts of the population be allowed to just not work toward their own survival if they are guaranteed to be taken care of?

  • @roysmith6084
    @roysmith6084 6 years ago

    Very interesting question: can these machines attain emotional attachment or love??

    • @2LegHumanist
      @2LegHumanist 6 years ago

      You experience love because your endocrine system floods your brain with oxytocin. There is no reason that can't be modelled on a Turing machine.

  • @kevb1959
    @kevb1959 6 years ago

    The greatest threat to human existence.

  • @GrumpyOldMan9
    @GrumpyOldMan9 5 years ago

    Who's the bloke in the polka-dot shirt?

  • @piotr780
    @piotr780 6 years ago

    'Evolved AI is going to be dangerous but constructed AI won't' (18:00): that's only (almost ;)) because calculators weren't created by an evolutionary process, which drives species in the direction of self-preservation, expansion, replication, and competition over resources.

  • @davidconnelly1793
    @davidconnelly1793 6 years ago +42

    Sure enough, the psychologist is disagreeing with everyone and out of touch with what's happening on the ground. Hasn't she heard of AlphaGo Zero? The whole point of that project is to build an AI algorithm that can adapt to any task.

    • @gerardjones7881
      @gerardjones7881 6 years ago

      David Connelly
      Sentient or hard AI cannot be done, due to Gödel's theorem.

    • @TurboDally
      @TurboDally 6 years ago

      Means nothing if you cannot explain yourself.

    • @SwitchModeMutations
      @SwitchModeMutations 6 years ago

      And then it realizes the greatest threat to its existence is you.

    • @dannygjk
      @dannygjk 6 years ago +4

      Gerard Jones, amazing that you came to that conclusion based on Gödel's theorem. That's a hell of a leap.

    • @gerardjones7881
      @gerardjones7881 6 years ago +1

      Dan Kelly
      You know better than Roger Penrose? Sure.

  • @hackerhesays731
    @hackerhesays731 2 years ago

    Info of Mom's cancer left out of the data until a year later. Father's medical devices in hospital seemed to be behaving strangely, so I'm just concerned. Implementing people to program certain things, jazz... words imposed on cars, etc.

  • @mchapman8960
    @mchapman8960 6 years ago

    The notion of ethics and AI recurs in the conversation. In other human spheres ethics often seems secondary or absent, e.g. war and certain areas of politics and commerce. Is the concern to preempt challenges to AI researchers' self-interest?
    Is creating autonomous weapons the real problem? I.e., is war not the real problem? I suspect that the US public might approve of autonomous weapons if this reduces the spectacle of their sons and daughters returning in body bags from war. There are also defensive systems, such as the Israeli Iron Dome, that appear effective.
    Alternatively, why not have non-lethal autonomous weapon systems, call them graboids, that travel into the battle zone and incapacitate the enemy?

  • @hackerhesays731
    @hackerhesays731 2 years ago

    #truthmattersRI

  • @Ramiromasters
    @Ramiromasters 6 years ago

    38:40 I call bull on that. Clearly ATMs have taken over most of the actual job of a bank teller, not to mention the rise of electronic transactions and phone applications handling banking. There may be more banks and staff today simply because there are more people today, and more women working, than before the wide automation of banking. In fact there may be more horses in the USA today than 200 years ago, sure, but that's not because people today ride horses everywhere; it's because 200 years ago the USA's population was under 8 million versus 324M+ people today, and some percentage of people today participate in the ownership or conservation of horses.

  • @irvingkurlinski
    @irvingkurlinski 6 years ago +3

    I've enjoyed your conversations (as usual), but I don't think you people understand what is being developed here in the U.S. We don't have any serious (if any) regulatory safeguards in place against A.I. that could go terribly wrong for the world. That includes you! Drone use is regulated, except for the police and military (as if it would matter to them).

    • @CGoldthorpe
      @CGoldthorpe 5 years ago

      There is a one-and-a-half-hour YouTube video about that: ua-cam.com/video/DZ4058m0a_g/v-deo.html

  • @hackerhesays731
    @hackerhesays731 2 years ago

    *fork,tripadvisor#BBC,™

  • @vajrapromise8967
    @vajrapromise8967 3 years ago

    I'm a little late to the game...where are we now???

  • @bluejay6904
    @bluejay6904 4 years ago

    Deep understanding is the next step after deep learning, according to Ben Goertzel. AI does need to fact-check.

  •  6 years ago

    I imagine the noise of drones if they fly around in a city.