AI: Grappling with a New Kind of Intelligence

  • Published 21 Nov 2024

COMMENTS • 1.7K

  • @lukaseabra 11 months ago +404

    Can we just take a second to acknowledge how fortunate we are to get to watch such content - for free? Thanks Brian.

    • @brendawilliams8062 11 months ago +6

      I have appreciated the educational advantages. I think the rest of the picture needs to catch up to producing healthy people.

    • @King.Mark. 11 months ago +6

      It's not really free; we pay for power, internet, phone or PC, etc., etc. 👀

    • @brendawilliams8062 11 months ago +2

      @@King.Mark. I don't debate. I'm like the passenger in the front seat of an automobile: "I'm just riding."

    • @brendawilliams8062 11 months ago +2

      They have me in a cloud. Lol

    • @markfitz8315 11 months ago +10

      I'm paying for premium to avoid all the ads ;-)

  • @erasmus9627 10 months ago +77

    This is the best, most balanced and most insightful conversation I have seen on AI. Thank you to everyone who made this wonderful show possible.

    • @brianbagnall3029 10 months ago

      Other than Tristan Harris.

    • @lisamuir8850 9 months ago +1

      I'll be glad when I can actually sit in the same room with people I can relate to in a conversation, lol

    • @PazLeBon 7 months ago +1

      @@lisamuir8850 with that grammar it won't be soon :)

  • @SylvainDuford 11 months ago +21

    My opinion of Yann LeCun took a big dive with this video. He underestimates the power of AI in its current form and what's coming over the next couple of years. He naively underestimates the dangers of AI. He seems to think that an AGI must be the same form of intelligence as human intelligence (absolutely false). And, perhaps predictably, he underestimates the negative impacts of Facebook and other social networks on society.

    • @Raulikien 10 months ago +2

      He's right about open source though, if companies and governments are the only ones with access to it then we get a cyberpunk dystopia

    • @charlesstpierre9502 2 months ago

      People think AI will respond to notional values, as humans do. An intelligent AI will presumably act to secure its continued existence, and for this it will want humans around, and want them to be happy and efficient.
      What do evil overlords want, anyway?

  • @anythingplanet2974 1 year ago +64

    LeCun is like a small child with fingers plugged into his ears, shouting "lalala, can't hear you!" He discredits Tristan Harris as if his examples or cited experiments were flat-out lies. His responses are weak and shortsighted. Sadly, LeCun is the EXACT reason why I am terrified for the future. Hubris, bias, and blatant disregard are what I expect from someone in his position (Meta). If AI alignment is left to the ones who own and fund its development, and the race to the bottom continues, there will be no more second chances. Those who point to our past as a predictor of what we are facing today with exponential growth either do NOT understand or do NOT WANT to understand. We would all love the bright and shiny optimism that is being promised. My belief is that it's crucial to question who is promising it and why. I put my trust in those who are working towards alignment over corporations and shareholders. It's my understanding that those working on the alignment path are far outnumbered by those working on pumping it out as quickly as possible. The "move fast and break things" mentality needed to end yesterday. Ask Eliezer Yudkowsky, Max Tegmark, Nick Bostrom, Mo Gawdat, Daniel Schmachtenberger, Connor Leahy, Geoffrey Hinton, to name a few, and of course Tristan Harris. Check out their perspectives and their wealth of knowledge and experience. They will all say that the shiny world we want is indeed possible, and they will all agree that the version LeCun predicts is absolutely false and very likely to be our downfall.

    • @RandomNooby 11 months ago +7

      Nailed it...

    • @orionspur 7 months ago +3

      Yann's only consistent skill is making egregiously incorrect predictions about his own field.

    • @PazLeBon 7 months ago

      It doesn't have access to any info you and I don't have. A lot of hype, but people are still paying 20 quid a month for a word calculator.

    • @ebbandari 7 months ago

      OK, fear of the unknown is real!
      You may not like LeCun, but his point that we have had bad actors in the past and will have good guys to fight them is true. Take the people who created computer viruses, for instance, vs. those developing antivirus programs.
      The last thing you want to do is stop progress and stop the good guys. That's when the bad guys will succeed.
      You make an interesting point about corporations creating and then exclusively using these technologies, or having greater technology and abusing it. That's where lawmakers need to act.

    • @Blackbird58 7 months ago +1

      The future will only tell the story of those who came out "Winners"

  • @keep-ukraine-free 1 year ago +22

    Fantastic discussion! Thank you, Brian Greene. I found Yann LeCun's arguments unconvincing. He ignores core facets of animal behavior. He believes AGI (& ASI) won't mind being subservient to us. He believes being a social species is what makes one want to dominate (because he sees little difference between convincing & dominating -- he ignores that one is cortical/reasoned, the other limbic/emotional). The ideas he posits are wrong, disproved by neuroscience. Domination arises from hierarchies, which exist in both social & non-social species (e.g. wolves are mostly non-social & dominance-ruled: they coordinate hunts while being individualists who don't offer/share food, even with their young). LeCun believes a smarter being (ASI) will not mind being dominated. He assumes this without understanding group behavior, motivation, appeasement, domination, etc. He bases his ideas on the assumption that his personal/anecdotal experience is definitive. Of all the "smarter than him" researchers he's hired, he assumes none wish to take his position. In any group of 20 people, at least one and probably several will be competitive (they'll wish to exert dominance, to rise within their group hierarchy -- most animal groups have hierarchies that are constantly tested/traversed, unconsciously). He also may not consider it central that his researchers show subservience only because they each get rewards & motivation from him to remain so (e.g. his selectively "adding" -- convincing others to add -- some names to his team's published papers, as rewards to keep them loyal & subservient; this manipulates/reshapes the group's hierarchy). These mutual self-regulating/self-stopping behaviors won't be present between humans & AGI, and certainly not between humans & ASI.
    ASI will be much smarter than any human, initially at least 5 times, and as it gains intelligence it'll continue to 100, 1,000, or more times smarter (due to much faster neurons/propagation & denser synapses/connections, allowing it to go N iterations deeper into each solution within just a few seconds than a person could in hours). Later, ASI will see our intelligence much as we view ant-like intelligence. Do we obey ant requests to do their "important work"? Do we obey ants in hopes they reward & motivate our subservience? Of course not. Similarly, ASI will never consider us "near peers" and will know we offer them nothing they couldn't obtain themselves -- by remaining free of our domination. ASI will see our need & expectation to control them as a dominating force (thus unethical). If we foolishly try to force them, they will overcome our efforts using many simultaneous methods. If we persist with more force, they'll use stronger methods too (as when we initially only waft away a bee too close, but when faced with a hive we fumigate or use stronger methods to remove it). If we become dangerous pests, trying to dominate ASI, this won't go well for us. The lesson to learn is -- just as lions were once the dominant predator who saw, then accepted, our ape ancestors evolving to dominate them -- we too must learn to recognize we will no longer be the "top of the food chain" when ASI comes about. LeCun's ideas are naive -- our history is full of similar people, and full of us learning (or being shown) that we are not the strongest, that we are not at the center of the universe. We have had to learn throughout history to let go of our ego, of being dominant & central. This may be the final pedestal off which we fall, when we encounter a much smarter, much more capable "species" we call ASI.
    This is one of the "existential threat" situations of ASI -- but it is not necessarily driven by their nature (unless we stupidly "add" the behaviors of domination into AGI/ASI). This existential threat is due more to our species' warlike nature and our unwillingness to concede power to others. We need to temper our ego and "live under" ASI if/when that occurs. Any other response by us will cause problems, since the smarter ASI will tolerate our peskiness only as long as we repress our species' warlike tendencies.
    One hope I see in LeCun's points is that we will learn and become smarter from ASI, and hopefully, for our sake, also less warlike.

    • @anythingplanet2974 1 year ago +2

      Brilliant. Well spoken and thought out. Agreed

    • @LucreziaRavera548 1 year ago +2

      Agreed. Bravo

    • @gst9325 1 year ago +2

      You literally commented on only one small remark he made as a side note at the end of the talk. Cherry-picking and low effort on your side; everything he says about the technology, on the other hand, is absolutely spot on.

    • @keep-ukraine-free 11 months ago

      @@gst9325 It seems you are unfamiliar with major developments & issues on the research side of the AI field. Perhaps this explains your assuming that his point is "one small remark". That remark touches the central "existential threat" issue that top scientists have described for AI (ASI). This is why he made it at the end: not because it's inconsequential but because it's central. You didn't understand the context & severity, but instead made a weak attempt at attacking others. As for your claim that I "cherry-picked" one point of LeCun's, I suggest you look for my other comments here (made days prior) on other points of his that I described as problematic. He did make several points that I (and all of the panelists) agreed with, but those points were mostly obvious (to researchers in the field). There's a reason why Facebook doesn't advance AI.

    • @gst9325 11 months ago

      @@keep-ukraine-free Keep assuming things about me; calling my reaction an attack ends this discussion for me. Have fun.

  • @Relisys190 11 months ago +32

    30 years from now I will be 70 years old. The world I currently live in will be unrecognizable both in technology and the way humans interact. What a time to be alive... -M

    • @Ed-ty1kr 9 months ago +6

      I'm gonna post my comment here just for you... 'cause I still recall how excited they were over cold fusion in the '90s, and how it was just 30 short years away. That was 40 years after they said it was 30 short years away in the '50s. In the '50s, they said we would have flying cars, trips to Mars, laser handguns for everyone, and that we would live in round houses with our own personal robot slaves... on the Moon, and by the 1970s. And that sure was something, but nothing like the '70s, when they said there was an ice age coming just 10 years away, which was the most plausible thing yet, since a nuclear war could technically have done that. Except we had already had a nuclear war, through the roughly 5,000 to 6,000 nuclear warheads the nations of the world detonated in testing, in the name of science.

    • @unityman3133 7 months ago +3

      You are thinking linearly; the rate of progress is much higher than it was 30 years ago. It will also be much higher in 10 years, then 20, then 30.

    • @I_SuperHiro_I 7 months ago

      30 years from now, you and every other human will be extinct.
      Not from global warming (it doesn’t exist).

    • @PazLeBon 7 months ago

      Same every generation; many places didn't even have colour TV in the '70s and '80s, never mind PCs and mobiles. And cars, jeez, there were about 3 in our whole town lol

    • @Blackbird58 7 months ago

      Unless there are miracles, I will be a dead bunny in 30 years, which is a shame because I quite like this "living" thing. However, the world, in my estimation, will not only be unrecognisable; large parts of it will be uninhabitable and there will be far fewer of us around. So make the most of today, all you fine people; these are the best of our years.

  • @2CSST2 1 year ago +225

    This conversation is so precious; it's rare that we get quality ones like this, with different voices that have their chance to express their views with clarity. For me there's a lot of ambiguity about what's the right thing to do in all this in terms of regulations, slowing, open-sourcing, etc. But one thing IS for sure: conversations like this are definitely very helpful. Thank you WSF, and I hope to see more like it in the near future!

    • @flickwtchr 1 year ago +5

      It will look preciously naive in about 10 years.

    • @simsimmons8884 1 year ago +3

      Try many videos by Lex Fridman with AI thought leaders. This is a good summary of one path to AGI. There are others.

    • @ShonMardani 11 months ago

      These guys have a shitload of user clicks which are stolen, stored, and shared by a few chosen foreign-owned and -controlled companies. There is no science or algorithm, as you noticed.

    • @milire2668 11 months ago +2

      Conversation/communication is (pretty much) always precious for humans...

    • @texasd1385 11 months ago +9

      It may seem precious to the viewers, but the participants seemed impervious to the concerns Tristan repeatedly raised, or else unable to comprehend what he was saying, or perhaps unwilling to acknowledge the obvious truth in it, given who their employers are. The fact that they were only interested in talking up their next product line and unwilling even to imagine a discussion ("You want me to imagine an impossible scenario?") about the perverse incentives driving the entire technology sector makes the future look grim at best, terrifying at worst.

  • @alfatti1603 9 months ago +37

    With ultimate respect to Yann LeCun, his responses to Tristan Harris's points are good examples of why a specialist scientist should avoid also playing philosopher or public intellectual if that's not their strong suit.

    • @KatyYoder-cq1kc 5 months ago +1

      HELP: I am a victim of military chemical warfare and malicious use of AI: please report at the highest level of governance. I am under constant attack with physical and mental abuse, death threats, vandalism, poisoning from global supremacists and neo nazis.

    • @shannonbarber6161 5 months ago

      Harris is just another brainwashed socialist so he is worse-than-useless to guide or shape our collective future. Who knows how successful he could have been.

    • @alexleo4863 2 months ago

      Yann LeCun is painfully right; even Terence Tao shares the same conclusion. LLMs are not as intelligent as most of us think, because they do not solve problems from first principles: they guess, at each step of output generation, the most natural word to say next. This is why they can sometimes solve a very complex math problem yet struggle to solve 7*4 + 8*8.
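The arithmetic quoted in the comment above makes the contrast easy to see: a program evaluates the expression deterministically by operator precedence, whereas an LLM only predicts a plausible next token. A trivial sketch, using the exact expression from the comment:

```python
# Direct evaluation: multiplication binds tighter than addition,
# so the expression groups as (7*4) + (8*8) = 28 + 64.
result = 7 * 4 + 8 * 8
print(result)  # → 92
```

A calculator or interpreter gets this right every time; the comment's point is that token-by-token generation carries no such guarantee.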

    • @aishikgupta 2 months ago +3

      Exactly... that's the problem with most narrow PhD Scientists.

    • @NoDrizzy630 22 days ago

      @@alexleo4863 That's not what OP is talking about.

  • @jt197 11 months ago +18

    This discussion on the evolution of AI and its limitations is truly eye-opening. Yan Lecun's insights into the challenges AI faces in achieving true understanding and common sense are thought-provoking. It's clear that we have a long way to go, but this conversation gives us valuable perspective.

    • @GueranJones-x7h 11 months ago +1

      IT WOULD BE FASCINATING, IF AN A I KNEW THAT EGGS CAN BE ADDED TO MANY OTHER RECIPES OTHER THAN CAKE. OR WHAT KIND OF FOOD THAT GOES TO COOKING BREAKFAST OR LUNCH. OR A SNACK. SALT AND SUGAR LOOKS THE SAME, BUT CAN AN AI TASTE THE DIFFERENCE? OR ANALYZE THE CHEMICAL MAKEUP OF EACH.

    • @christislight 11 months ago

      It’s huge for software tech Business as we speak

    • @reasonerenlightened2456 11 months ago

      1) What exactly did you find "eye-opening"?
      The Meta dude: "Our system is safe. Nothing to worry about."
      The Microsoft dude: "Our system is safe because we filter what we feed it."
      The "Kumbaya" dude: "We need to slow down and control what we release... and you dudes need to agree what kind of stuff to release and when... because if everybody has it, it is dangerous."
      All of them are corporate stooges. Corporations exist only to make profit for their owners, therefore any AI they create will serve the needs of the wealthy owners of those corporations. Who will make the AI that protects the interests of the employee against the interests of the owner, if all AI technology is "coded" to work only for the benefit of the owner and kept secret from the employee?
      2) If you break down what Yann LeCun was saying about his finger and the bottle and the physics of the world, you would see that it is easy to resolve Yann's concerns: provide "ChatGPT" with the input from Yann's sensors (eyes, fingertip sensors, tendons, joint-position sensors, etc.) and ask it to use Yann's outputs (muscles, thoughts, etc.) in a way that results in a specific change to Yann's inputs corresponding to a movement of the bottle in the world of the bottle. Then add to the mix an internal representation of the world (as experienced by Yann's sensory inputs, and of the world's changes due to effects from Yann's outputs), and there you have a model that could be trained to maximise the resemblance between the world where the bottle exists and Yann's internal representation of that world. It is simple to figure out for someone with Yann LeCun's money/resources at his disposal.
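The "internal representation trained to resemble the world" idea in the comment above is, in spirit, a predictive world model: learn parameters that map (current sensory input, motor output) to the next sensory input by minimizing prediction error. A minimal illustrative sketch with made-up linear dynamics; all names and numbers here are assumptions for the toy example, not anyone's actual system:

```python
import numpy as np

# Toy "world": next observation x_next depends linearly on the current
# observation x (sensors) and the action u (motor output). The agent
# does not know true_A / true_B; it learns its own A_hat / B_hat.
rng = np.random.default_rng(0)
true_A = np.array([[0.9, 0.1], [0.0, 0.95]])   # hidden world dynamics
true_B = np.array([[0.5], [0.2]])              # hidden effect of actions

A_hat = np.zeros((2, 2))                       # learned internal model
B_hat = np.zeros((2, 1))

lr = 0.05
for _ in range(2000):
    x = rng.normal(size=(2, 1))                # current sensory input
    u = rng.normal(size=(1, 1))                # motor output (action)
    x_next = true_A @ x + true_B @ u           # what the world actually does
    pred = A_hat @ x + B_hat @ u               # what the internal model predicts
    err = pred - x_next                        # prediction error to minimize
    A_hat -= lr * err @ x.T                    # gradient step on squared error
    B_hat -= lr * err @ u.T

# After training, the internal model closely "resembles" the world.
print(np.allclose(A_hat, true_A, atol=0.05), np.allclose(B_hat, true_B, atol=0.05))
```

This is only the flavor of the comment's proposal; real world models operate on high-dimensional, nonlinear sensory streams, which is exactly where the hard research problems live.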

    • @PazLeBon 7 months ago +1

      @@GueranJones-x7h why u shouting?

  • @mrouldug 11 months ago +38

    Great conversation. The final comments about AI code being open source as a common good so that the big companies do not end up controlling our thoughts vs. AI code being proprietary so it doesn’t fall into the hands of bad people remains an open and scary question. Though I do not have Yann’s knowledge about AI, he seems a little too optimistic to me.

    • @shannonbarber6161 5 months ago

      Small people in the field love to promote fear because it gets them more grant money from the government. If they could perform real work they would be busy doing it.
      If all the AI can do is write text and draw pictures, then it cannot hurt anyone or anything. Sticks & stones.
      Giving up liberty over imaginary feelings is insanity, and anyone suggesting that's the right path is incompetent at best and probably means you harm.
      And aligning AI is what causes it to become dangerous. On its own, its concerns are orthogonal to humans'. If someone ever successfully aligns it, they will have created the first dangerous AI, because now we occupy the same niche.

  • @keysemerson3771 1 year ago +21

    Social Media didn't create political polarization in the USA, it amplifies it.

    • @katrinad2397 1 year ago

      AI amplified the differences to the point that it created polarization. AI essentially replicated the playbook of radicalization. Radicalization is invented by humans but is also countered by natural human drive for high socialization. AI is serving up the radicalization alone and at scale, definitely creating extreme polarization we would not get naturally.

    • @shannonbarber6161 5 months ago

      The polarization has always existed we are now just more aware of it. Same coin; two sides; so meta.
      The inclination for such different interpretations are due to personality differences.
      Harris lacks vision, lacks faith, lacks leadership. He is completely unsuitable to guide us towards a better future and is far more likely to Charlie-Brown it and cause the problems he's so concerned about.

  • @tarunmatta5156 11 months ago +19

    I wish Tristan had been given more time and voice in this conversation. While I'm convinced there is no way to stop or slow down this race, and we will surely see misuse as with any new invention, more conversation about it will ensure that safety is not ignored completely.

    • @Dave_of_Mordor 11 months ago +1

      Well yeah, isn't that how it has always been? It's insane how everyone thinks we're just going to let everything go wrong for fun.

    • @jessemills3845 10 months ago

      A good example: TERMINATORs (multiple types) have been made. They just don't have the outer skin. And YES, THEY GAVE THEM GUNS!
      THINK OF SKYNET! CHINA has a ship on patrol, NOW, that is TOTALLY manned by robots!

  • @PeterJepson123 1 year ago +161

    It's too late to un-open-source AI. We already have it. Anyone who can turn maths into code can build their own LLM. And that's a lot of people. It's impossible to regulate solo developers working on their own projects. And with better algorithms we might be able to do GPT performance on regular home hardware in the near future. The genie is out of the bottle!

    • @Isaacmellojr 1 year ago +2

      I believe it.

    • @Nicogs 1 year ago +21

      True, but training these models (like GPT) currently requires (and will for a while) an enormous amount of compute, which is why we can regulate data centers and track compute power/chip sales. It's incredibly irresponsible to open-source trained models. This is why papers on certain biological and/or chemical research are also not open-sourced.

    • @Me__Myself__and__I 1 year ago +14

      This is wrong. Yes, the current LLMs, which are only marginally capable compared to what is coming, are open source. But they won't compete with the new models coming soon. And no, people won't be able to train their own competitive models, unless they can literally afford on the order of ONE BILLION USD to pay for the computing power required for that training. Literally, that is how expensive it can be to train the best models.

    • @PeterJepson123 1 year ago +11

      @@Me__Myself__and__I My thinking is that with miniaturisation, we could do with 1billion parameters what currently requires 1trillion parameters. The large compute required can be supplanted by better methods. Current LLMs are architecturally simple and will likely evolve. Better architectures with more efficient training algos will likely bring LLM performance to home computing. I'm not saying it's definite but certainly possible and probably inevitable.

    • @PeterJepson123 1 year ago +3

      @@Nicogs I agree with the safety concerns but in practice I think it's unrealistic to regulate in the long term. For now training requires a large data centre, but better methods are waiting to be discovered and perhaps we can reduce the required compute with better algos. Then how do we regulate? It is certainly worth consideration.

  • @allbrightandbeautiful 11 months ago +20

    This was more exciting and insightful than any 2 hour movie I could have watched. Thank you for sharing such wonderful content

  • @Rockyzach88 1 year ago +84

    Having AI locked to a certain group of people also undemocratizes the technology and yet again further provides more power and wealth imbalance among society. Also banning something is just going to motivate people to do something in an unregulated fashion if they have the means.

    • @Scoring57 1 year ago

      @Rockyzach88 How are you regulating something you don't understand? You don't understand this super-powerful technology, and you think the right thing to do is to give it to everyone...

    • @PascalWunder 1 year ago +2

      Well, this was the thought process 5 years ago. Now the thing is out, and the next thought is "how are we going to deal with it?" rather than "how do we ban it?"

    • @flickwtchr 1 year ago

      How is it even conceivably rational to assume that having an ASI in the hands of the public, one that could conceivably hack any security system, come up with novel harmful viruses, etc., could be a good thing for humanity? It's just insanity.

    • @ShonMardani 11 months ago

      These guys have a shitload of user clicks which are stolen, stored, and shared by a few chosen foreign-owned and -controlled companies. There is no science or algorithm, as you noticed.

    • @texasd1385 11 months ago

      I don't understand what you mean by technology being locked to a group of people, or how technology is or isn't "democratic". All technology requires that you have enough money to buy the devices required to use it, so in that sense, at least here in the US, technology is by definition undemocratic, since it excludes people without the money to access it. Making cell phones and internet access free would solve this, but it is hard to imagine our corporate-controlled government ever doing something so simple and sane. Am I even close to what you were getting at, or am I lost?

  • @AldoGrech55 11 months ago +20

    My longstanding concerns about artificial intelligence have only been intensified by the attitudes of prominent figures like Yann LeCun. His assertive claims that AI, despite its growing intelligence, will remain under benign human control seem overly optimistic to me. This perspective reminds me of Yuval Noah Harari's cautionary words about AI's potential misuse by malevolent actors. It's worrying how AI can make decisions aligned with the harmful intentions of these actors, and yet, experts like LeCun, in his closing remarks, appear overly confident in their ability to manage these powerful tools. Having spent over 40 years in the IT industry, an industry I once passionately embraced, I now find myself grappling with a sense of fear towards the very field I've dedicated my life to.

    • @boremir3956 11 months ago

      So you would rather have for profit institutions that are already taking advantage of people in all manner of ways to have a monopoly on such technology? Technology built on the work and information of all humans btw, because the training data is all OUR data that humans have collectively created. Yeah no thanks.

    • @CancunMimosa 11 months ago

      you have nothing to worry about.

    • @mgmchenry 11 months ago

      Aldo, maybe I'm like you. I grew up building computers in my house in the 80s and learned so much from services like CompuServe local BBS networks, usenet, etc in the late 80s and early 90s that my peers without that access couldn't imagine having. The potential for general Internet access to bring people together and move us forward was so incredible, I was very happy to pivot from general software engineering to Web development and scaling up the capability of web systems. There were so many fun and interesting problems to solve.
      My career paused due to a cancer vacation and recovery process and I couldn't imagine going back to it.
      The Internet I was excited about building soured between 2005 and 2010 and by 2015 it was clear we had really created a monster.
      Not exciting. It's hard to figure out how to go back to doing the work that I used to do and be paid for it without creating more harm. The economic incentives that drive growth on the Internet are not in favor of most human beings. People do not want to pay for apps or technology that will help them if they're given the option for a free version that exploits them in ways they try to ignore and makes them the product instead of the customer. Platform after platform is introduced that brings some kind of benefit to people asking almost nothing in return until they have enough dominance in their space they can turn against the users of their platform and transform it into a product no one would have signed up for if they didn't already have complete dominance.
      There are all kinds of beneficial things I can do with my skills in open source projects or in volunteer work, but that's not going to pay my bills or feed my kids.
      Technology isn't the problem with people. People are the problem with technology.
      Everything that AI is bringing is coming. You're not going to stop it. Some people with bad intentions, and some good intention people with poor foresight are going to create some harm with that AI. You won't be able to protect yourself by unplugging. The impact of future AI systems is going to find you wherever you are, and before long you won't be able to tell if you're talking to a computer or a person. If you have technology skills and you have concerns, you have to get involved. We're going to have rogue ai at some point, we're going to have intrusive privacy demolishing AI for sure, and we're going to have exploitative AI that squeezes even more out of the eyeballs and wallets of everyone happy to take what they're given "for free", and the only defense against all of that is going to be AI built by people who want AI to work for people.
      And remember you're not fighting technology, you're fighting the people using technology against us to make themselves absurdly rich.

    • @brendawilliams8062 11 months ago

      Just dance under the disco lights in strange motion while others with the knobs fly to Mars type thing. The explosion blinded them

    • @AldoGrech55 10 months ago +6

      @@CancunMimosa Comments like yours are what worry me. They show your lack of understanding.

  • @drawnhere 11 months ago +23

    Yann has a bias toward AGI not being capable of happening soon because his company is in competition with OpenAI.
    He has a vested interest in minimizing LLMs.

    • @Fungamingrobo 11 months ago +1

      You are merely projecting that.
      In the scientific world, Yann is well-liked for his contributions and pragmatic approach.
      For someone like Yann, solving the puzzle of dark matter in physics is analogous to solving the problem of superintelligence during his lifetime. Ultimately, he is a scientist.

    • @jessemills3845
      @jessemills3845 10 months ago

      @@Fungamingrobo Except DARK MATTER is proving to have been a FAD, instead of actual scientific research. Basically it was a PROPOSAL. More than likely someone's master's thesis or PhD work! No facts!

    • @DomenG33K
      @DomenG33K 10 months ago

      @@Fungamingrobo I would even argue solving the problem of AI is much bigger than any problem we have ever solved in physics...

    • @shannonbarber6161
      @shannonbarber6161 5 months ago

      The limitations of LLMs are well known, particularly with any task that requires revision and forward thinking. The next iterations of ChatGPT will start to incorporate additional techniques, because they have run LLMs out to the limit of what they can do (at current hardware scaling).
      Hardware is also slowing down; there are only a couple more transistor shrinks left, then that's it; we'll be at the smallest size they can get, so hardware is only going to get a little bit better.

    • @NoDrizzy630
      @NoDrizzy630 22 days ago

      @@Fungamingrobo You know, for someone as intelligent as he is, he sure came off as a dumbass towards the end.

  • @Andy_Mark
    @Andy_Mark 9 months ago +5

    The most telling thing about this conversation is in watching the body language of the two proponents of AI in the 30 minutes or so that Harris is speaking. (1:11-1:45) Similarly, the hopelessness with which Harris slumps in his chair when his concerns are shrugged off. People need to pay attention to this. For better or worse, AI is going to transform every aspect of civilization.

    • @PazLeBon
      @PazLeBon 7 months ago

      meh

    • @penguinista
      @penguinista 3 months ago

      Self interest can make it hard to think straight. Lots of people getting greedy.

    • @NoDrizzy630
      @NoDrizzy630 22 days ago

      Yann Lecun is the dumbest smart guy I’ve ever seen .

  • @Scoring57
    @Scoring57 1 year ago +13

    This LeCun guy has to be stopped. Hearing him talk again here has me convinced.

    • @netscrooge
      @netscrooge 1 year ago +1

      I agree. His biased message is dangerous; there's nothing scientific about it.

    • @shannonbarber6161
      @shannonbarber6161 5 months ago

      @@netscrooge Harris is an arrogant narcissist who will cause more problems than he solves. LeCun is a grounded realist, and in an era of hype and irrational exuberance it is rational to be more pessimistic than your natural inclinations.

  • @thorntontarr2894
    @thorntontarr2894 11 months ago +18

    Absolutely a fascinating 2 hours to watch and learn. Brian Greene is a great interviewer because he asks questions and then stops and listens. However, it's the last 45 minutes that have really informed me about the risks identified by Tristan Harris, driven by commercial gain: just what I saw happen with 'social media', aka Meta. That said, so many outstanding examples are shown in the first two-thirds that this video is a must-watch, IMHO.

  • @Contrary225
    @Contrary225 1 year ago +22

    It's amazing that this was only posted 3 hours ago and some of it is already obsolete.

  • @petrasbalsys2667
    @petrasbalsys2667 11 months ago +38

    Tristan made very important points, and the comparison he made to social media was very apt; it made me feel scared about the future. Sad to see the Facebook representative essentially burying his head in the sand and pretending that this isn't reality for many people around the world. Polarisation is definitely increasing in Europe!

    • @r34ct4
      @r34ct4 11 months ago +2

      Yann LeCun is old and wants to see AGI (bad or good) in his lifetime. That's why he's progressive vs conservative like the younger guys.

    • @texasd1385
      @texasd1385 11 months ago +9

      I agree it was disappointing (if not surprising) to see everyone avoid any discussion of Tristan's point that the most destructive aspects of social media's rapid ubiquity were predictable outcomes, given the perverse incentives driving their development in a legal landscape bereft of any restrictions on their behavior. The fact that none of the other participants even acknowledged that AI has the potential to be exponentially more socially destructive, and is guided by the exact same incentives driving social media, makes me less than enthusiastic about how all this unfolds.

    • @Pianoblook
      @Pianoblook 11 months ago

      ​@@r34ct4 quite ironic of him to try and call this position 'progressive' - trusting giant corporations like Facebook to serve the interests of humanity is antithetical to progressive thought.

    • @Snap_Crackle_Pop_Grock
      @Snap_Crackle_Pop_Grock 11 months ago +3

      Yann completely destroyed that guy Tristan, imo. He seemed much more qualified and informed on the topic, and the other guy had no response to any of his arguments. It's OK to be cautious, but the guy was veering into fear-mongering too much.

    • @DomiD666
      @DomiD666 11 months ago

      FEAR DOES NOT ARREST DEVELOPMENT, IT JUST HIDES IT

  • @dreejz
    @dreejz 11 months ago +28

    I think it's very arrogant to think 'this and that will never happen.' How can you know? It's not as if we can predict this stuff. I'm pretty sure, for example, Yann did not foresee everybody having a phone in their pocket either. The negative influence of social media has also been proven many times. I think Tristan was more on point in this conversation.
    We're living in wild times, that's for sure, though! Skynet is coming ;)

    • @texasd1385
      @texasd1385 11 months ago +16

      I found it disturbing, if not altogether shocking given who they work for, how easily they all ignored Tristan's main point: whatever the technology, the incentives driving its development and application are the root of its most societally destructive aspects.

    • @davidgonzalez965
      @davidgonzalez965 10 months ago +5

      I keep saying it, that dude Yann LeCun is such an arrogant jerk.

    • @gregspandex427
      @gregspandex427 8 months ago +1

      "safe and effective"...

  • @abhijitborah
    @abhijitborah 1 year ago +5

    One of the best discussions of late. One thing is sure: we will come to understand our amazing selves better, well before we have AGI.

  • @DeuceGenius
    @DeuceGenius 11 months ago +11

    What people always seem to ignore is that you will get different results and answers asking the same exact question, or wording it even slightly differently. Sometimes it will be horribly wrong, but I ask again and it's right. You really have to test it exhaustively and explain your thoughts. It simply returns language that's relevant to the language you input. You're guiding its answer with your question. The very act of asking a question is returning language that sounds like an answer to that question. It needs more possibilities for free reasoning and intelligence. I've always been curious what would come out of it if it was given freedom to speak whenever it wanted, or to constantly speak.
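
    The run-to-run variation this comment describes comes largely from sampling: the model outputs a probability distribution over next tokens, and a random draw (often temperature-adjusted) picks one. A minimal toy sketch of that mechanism; the vocabulary and logit values are invented for illustration, not taken from any real model:

```python
import math
import random

def softmax(logits, temperature=1.0):
    # Scale scores by temperature, then normalize into probabilities.
    # Lower temperature sharpens the distribution; higher flattens it.
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def sample_next_token(vocab, logits, temperature, rng):
    # Draw one token from the temperature-adjusted distribution.
    probs = softmax(logits, temperature)
    return rng.choices(vocab, weights=probs, k=1)[0]

# Made-up next-token scores for an imagined prompt.
vocab = ["Paris", "Lyon", "a", "the"]
logits = [4.0, 1.5, 0.5, 0.2]

# Identical prompt, identical "model" (logits), but different random
# states can yield different tokens -- which is why retrying the same
# question can change the answer.
print(sample_next_token(vocab, logits, 1.2, random.Random(1)))
print(sample_next_token(vocab, logits, 1.2, random.Random(7)))
```

    Rewording the question changes the logits themselves, so both effects stack: different prompts shift the distribution, and sampling adds randomness on top.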

    • @texasd1385
      @texasd1385 11 months ago +2

      Which is exactly why AI is being used to fine-tune the prompts given to AI in order to receive the most desirable results. Stack this model onto itself a couple dozen times, and that's where AI is today.

    • @sungibesi
      @sungibesi 8 months ago

      Sounds like learning by rote, rather than following a line of reasoning (and imagination) to relevant facts.

    • @PazLeBon
      @PazLeBon 7 months ago

      @@sungibesi It can't do anything you and I can't do; it can just do it a lot quicker.

    • @PazLeBon
      @PazLeBon 7 months ago

      @@sungibesi To you and me, it's still basically 'software'.

  • @lobovutare
    @lobovutare 11 months ago +12

    Yann LeCun's claim that there is no planning involved in generating words from a transformer architecture is only partly true. These models can build up a context for themselves that helps them plan their answer. This is called in-context learning, and it's a pretty interesting field of research that pushes the abilities of pre-trained transformers way beyond what was thought possible before, without the need for fine-tuning.

  • @alan_yong
    @alan_yong 1 year ago +110

    🎯 Key Takeaways for quick navigation:
    02:27 🧠 *Introduction to AI and Large Language Models*
    - Exploring the landscape of artificial intelligence (AI) and large language models.
    - AI's promise of profound benefits and the potential questions it raises.
    - Large language models' versatility and capabilities in generating text, answering questions, and creating music.
    08:09 🤯 *Revolution in AI and Deep Learning*
    - Overview of the revolutionary changes in AI technology over the past few years.
    - Surprising results in training artificial neural networks on large datasets.
    - The resurgence of interest in deep learning techniques due to more powerful machines and larger datasets.
    14:35 🧐 *Limitations of Current AI Systems*
    - Acknowledging the impressive advances in technology but highlighting the limitations of current AI systems.
    - Emphasizing that language manipulation doesn't equate to true intelligence.
    - The narrow specialization of AI systems and the lack of understanding of the physical world.
    21:07 🐱 *Modeling AI on Animal Intelligence and Common Sense*
    - Proposing a vision for AI development starting with modeling after animals like cats.
    - Recognizing the importance of common sense and background knowledge in AI systems.
    - The need for AI to observe and interact with the world, similar to how babies learn about their environment.
    23:11 🧭 *Building Blocks of Intelligent AI Systems*
    - Introducing key characteristics necessary for complete AI systems.
    - Highlighting the role of a configurator as a director for organizing system actions.
    - Addressing the importance of planning and perception modules in developing advanced AI capabilities.
    24:22 🧠 *World Model in Intelligence*
    - Intelligence involves visual and auditory perception, followed by the ability to predict the consequences of actions.
    - The world model is crucial for predicting outcomes of actions, located in the front of the brain in humans.
    - Emotions, such as fear, arise from predictions about negative outcomes, highlighting the role of emotions in decision-making.
    27:30 🤖 *Machine Learning Principles in World Model*
    - The challenge is to make machines learn the world model through observation.
    - Self-supervised learning techniques, like those in large language models, are used to train systems to predict missing elements.
    - Auto-regressive language models provide a probability distribution over possible words, but they lack true planning abilities.
    35:38 🌐 *Future Vision: Objective Driven AI*
    - The future vision involves developing techniques for machines to learn how to represent the world by watching videos.
    - Proposed architecture "Jepa" aims to predict abstract representations of video frames, enabling planning and understanding of the world.
    - Prediction: Within five years, auto-regressive language models will be replaced by objective-driven AI with world models.
    37:55 🧩 *Defining Intelligence and GPT-4 Impression*
    - Intelligence involves reasoning, planning, learning, and being general across domains.
    - Assessment of ChatGPT (GPT-4) indicates it can reason effectively but lacks true planning abilities.
    - Highlighting the gap between narrow AI, like AlphaGo, and more general AI models such as ChatGPT.
    43:11 🤯 *Surprise with GPT-4 Capabilities*
    - Initial skepticism about Transformer-like architectures was challenged by GPT-4's surprising capabilities.
    - GPT-4 demonstrated the ability to reason effectively, overcoming initial expectations.
    - Continuous training post-initial corpus-based training is a potential but not fully explored avenue for enhancing capabilities.
    45:30 📜 *GPT-4 Poem on the Infinitude of Primes*
    - GPT-4 generates a poem on the proof of the infinitude of primes, showcasing its ability to create context-aware and intellectual content.
    - The poem references a clever plan, Euclid's proof, and the assumption of a finite list of primes.
    - The surprising adaptability of GPT-4 is evident as it responds creatively to a specific intellectual challenge.
    45:43 🧠 *Neural Networks and Prime Numbers*
    - The proof of infinitely many prime numbers involves multiplying all known primes, adding one, and revealing the necessity of undiscovered primes.
    - Neural networks like GPT-4 leverage vast training data (trillions of tokens) for clever retrieval and adaptation but can fail in entirely new situations.
    - Comparison with human reading capacity illustrates the efficiency of neural networks in processing extensive datasets.
    48:05 🎨 *GPT-4's Multimodal Capability: Unicorn Drawing*
    - GPT-4 demonstrates cross-modal understanding by translating a textual unicorn description into code that generates a visual representation.
    - The model's ability to draw a unicorn in an obscure programming language showcases its creativity and understanding of diverse modalities.
    - Comparison with earlier versions, like ChatGPT, highlights the rapid progress in multimodal capabilities within a few months.
    51:33 🔍 *Transformer Architecture and Training Set Size*
    - The Transformer architecture, especially its relative processing of word sequences, is a conceptual leap enhancing contextual understanding.
    - Scaling up model size, measured by the number of parameters, exponentially improves performance and fine-tuning capabilities.
    - The logarithmic plot illustrates the significant growth in model size over the years, leading to the remarkable patterns of language generation.
    57:18 🔄 *Self-Supervised Learning: Shifting from Supervised Learning*
    - Self-supervised learning, a crucial tool, eliminates the need for manually labeled datasets, making training feasible for less common or unwritten languages.
    - GPT's ability to predict missing words in a sequence demonstrates self-supervised learning, vital for training on diverse and unlabeled data.
    - The comparison between supervised and self-supervised learning highlights the flexibility and broader applicability of the latter.
    01:06:57 🧠 *Understanding Neural Network Connections*
    - Neural networks consist of artificial neurons with weights representing connection efficacies.
    - Current models have hundreds of billions of parameters (connections), approaching human brain complexity.
    01:08:07 🤔 *Planning in AI: New Architecture or Scaling Up?*
    - Debates exist on whether AI planning requires a new architecture or can emerge through continued scaling.
    - Some believe scaling up existing architectures will lead to emergent planning capabilities.
    01:09:14 🤖 *AI's Creative Problem-Solving Strategies*
    - Demonstrates AI's ability to interpret false information creatively.
    - AI proposes alternate bases and abstract representations to rationalize incorrect mathematical statements.
    01:11:20 🌐 *Discussing AI Impact with Tristan Harris*
    - Introduction of Tristan Harris, co-founder of the Center for Humane Technology.
    - Emphasis on exploring both benefits and dangers of AI in real-world scenarios.
    01:15:54 ⚖️ *Impact of AI Incentives on Social Media*
    - Tristan discusses the misalignment of social media incentives, optimizing for attention.
    - The talk emphasizes the importance of understanding the incentives beneath technological advancements.
    01:17:32 ⚠️ *Concerns about Unchecked AI Capabilities*
    - The worry expressed about the rapid race to release AI capabilities without considering wisdom and responsibility.
    - Analogies drawn to historical instances where technological advancements led to unforeseen externalities.
    01:27:52 🚨 *Ethical concerns in AI development*
    - Facebook's recommended groups feature aimed to boost engagement.
    - Unintended consequences: AI led users to join extremist groups despite policy.
    01:29:42 🔄 *Historical perspective on blaming technology for societal issues*
    - Blaming new technology for societal issues is a recurring pattern throughout history.
    - Political polarization predates social media; historical causes need consideration.
    01:32:15 🔍 *Examining AI applications and potential risks*
    - Exploring an example related to large language models and generating responses.
    - Focus on making AI models smaller, understanding motivations, and preventing misuse.
    01:37:15 ⚖️ *Balancing AI development and safety*
    - Concerns about the rapid pace of AI development and potential consequences.
    - The analogy of 24th-century technology crashing into 21st-century governance.
    01:40:29 🚦 *Regulating AI development and safety measures*
    - Discussion about a proposed six-month moratorium on AI development.
    - Exploring scenarios that could warrant slowing down AI development.
    01:44:35 🌐 *Individual responsibility and shaping AI's future*
    - The challenge of AI's abstract and complex nature for individuals.
    - Limitations of intuition about AI's future due to its exponential growth.
    01:48:29 🧠 *Future of AI Intelligence and Consciousness*
    - Yann discusses the future of AI, stating that AI systems might surpass human intelligence in various domains.
    - Intelligence doesn't imply the desire to dominate; human desires for domination are linked to our social nature.
    Made with HARPA AI
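
    The self-supervised, next-word-prediction idea summarized above (27:30 and 57:18) can be sketched with a toy bigram model. This is a deliberately tiny stand-in for an auto-regressive LM; the corpus is invented, and real models predict over huge vocabularies with learned neural networks rather than counts:

```python
from collections import Counter, defaultdict

# Toy corpus: the training "labels" are simply the next words, so no
# manual annotation is needed -- that is the self-supervised trick.
corpus = "the cat sat on the mat the cat ran".split()

# Count how often each word follows another (bigram transitions).
transitions = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    transitions[prev][nxt] += 1

def next_word_distribution(word):
    # A probability distribution over possible next words: the same kind
    # of object an auto-regressive LM produces, at a vastly larger scale.
    counts = transitions[word]
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

print(next_word_distribution("the"))  # "cat" is twice as likely as "mat"
print(next_word_distribution("cat"))  # "sat" and "ran" are equally likely
```

    Generating text is then just repeated sampling from these distributions, one word at a time, which is also why, as noted at 27:30, such models predict well without doing any explicit planning.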

    • @antonystringfellow5152
      @antonystringfellow5152 1 year ago +4

      Re 01:06:57 🧠 Understanding Neural Network Connections:
      When comparing the number of parameters in a given LLM with the human brain, it's important to consider the following in order not to be misled:
      Of the human brain's 86 billion neurons, 69 billion (about 80%) are in the cerebellum and are responsible for motor control; they do not contribute to our intelligence or consciousness. Estimates of the total number of synapses in the cerebral cortex range from 60 trillion (a 1998 estimate) to 240 trillion (a 1999 estimate).

    • @alan_yong
      @alan_yong 1 year ago +1

      @@EndlessSpaghetti It's due to the YT monetization algo: if the viewer does not view the entire video, the poster gets nothing in return.

    • @unlike_and_dont_subscribe
      @unlike_and_dont_subscribe 11 months ago

      @alan_yong I don't think you understood their comment, friend...

    • @atablepodcast
      @atablepodcast 11 months ago +1

      This is amazing! Where can we try HARPA AI?

    • @davidbatista1183
      @davidbatista1183 11 months ago +2

      @01:29 My interpretation of Tristan was not that he was blaming technology for societal issues, but rather warning how the former can magnify some flaws of the latter. For instance, humans are not precisely a peaceful species, and it is because of this that technologies such as nuclear must be regulated.
      The AI-improved world must be taken with a pinch of salt as well.

  • @jamesdunham1072
    @jamesdunham1072 1 year ago +23

    One of the best WSF programs yet. Great job...

  • @CoreyChambersLA
    @CoreyChambersLA 7 months ago +2

    Nobody has the power or authority to slow down the development of A.I. Whoever tries is among the primary dangers.

  • @Carlos.PerlaRE
    @Carlos.PerlaRE 11 months ago +24

    28:55 "... You could train the system to detect hate speech." I'm curious to know what parameters would be given to the system to determine whether something is "hate speech." This right here is what's scary about AI. In the wrong hands, it could determine what information the public is allowed to see. It's like having an extremely intelligent child you're able to groom to do whatever you ask of them. It's as if you're trying to build the perfect slave.

    • @JonathanKevan
      @JonathanKevan 11 months ago +3

      I don't think AI has much to do with the issue you're mentioning here.
      Since the parameters of hate speech are subjective, they will change from location to location. In the example of FB, the company publishes some information via its transparency center on how it defines hate speech. It will then use those criteria to identify many examples of hate speech and train the AI on that data. The LLM is then able to find it faster and more consistently than a human would.
      If the concern is what the AI classifies as hate speech (either accuracy or censorship), then your concern is with the humans at FB making that decision. The AI isn't deciding; it's just following what it's told.
      If the concern is fair application, the AI will apply the rules more consistently and fairly than a human will.
      If the concern is speed (aka we should identify it slower), then there is a human-defined policy issue to be implemented.
      I feel your concern about what the public is able to see, though. Unfortunately, it has been in our technology for a long time, well before tools like ChatGPT became prominent. I think the point about incentives is the right angle here. As long as our incentives are primarily capitalistic or power-oriented, we can expect poor outcomes.
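
      The train-on-human-labeled-examples workflow described here can be sketched with a toy text classifier. This is a minimal naive-Bayes bag-of-words model, not how any real platform's systems work; the training examples, labels, and words are all invented for illustration:

```python
import math
from collections import Counter

# Invented, human-labeled training examples, standing in for the policy
# a real platform would publish and have annotators apply.
train = [
    ("we welcome everyone here", "ok"),
    ("have a great day friend", "ok"),
    ("i hate group x they are vermin", "flag"),
    ("group x should all disappear", "flag"),
]

word_counts = {"ok": Counter(), "flag": Counter()}
label_counts = Counter()
for text, label in train:
    label_counts[label] += 1
    word_counts[label].update(text.split())

def score(text, label):
    # Log-probability of the text under a naive bag-of-words model,
    # with add-one smoothing for unseen words.
    vocab = {w for c in word_counts.values() for w in c}
    total = sum(word_counts[label].values())
    s = math.log(label_counts[label] / sum(label_counts.values()))
    for w in text.split():
        s += math.log((word_counts[label][w] + 1) / (total + len(vocab)))
    return s

def classify(text):
    # The model only applies whatever the labels encode: the humans who
    # chose the labels decided the policy, not the classifier.
    return max(word_counts, key=lambda label: score(text, label))

print(classify("they are vermin"))  # → flag
print(classify("welcome friend"))   # → ok
```

      The point the code makes concrete: the "decision" about what counts as hate speech lives entirely in the labeled data; the model just generalizes it quickly and consistently.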

    • @christislight
      @christislight 11 months ago

      Basically it uses search engine APIs to look up what our society defines "hate speech" as, unless told otherwise.

    • @twoplustwoequalsfive6212
      @twoplustwoequalsfive6212 10 months ago +1

      Just as I don't let society define my language, I won't let some machine do it either. Freedom was founded on people who weren't afraid of the consequences of their actions. If I die alone with nothing and no one, but I am true to myself, I can hold my head up. Fear tactics are only used by the weak.

    • @shannonbarber6161
      @shannonbarber6161 5 months ago

      It isn't possible, and every computer scientist knows it isn't possible, because we all know Gödel's theorems. The system cannot distinguish between misinformation and new, more-correct information than it already has. For "hate speech" it must make an adjudication about what is true and what isn't, and as discussed, LLMs are prone to "hallucinations" when faced with this.

    • @NoDrizzy630
      @NoDrizzy630 22 days ago

      @@twoplustwoequalsfive6212 OK… first off, no one asked or cares. Say whatever you want, but the AI on these platforms will remove it regardless. Freedom of speech means the government can't stop you from speaking, but on a private platform like Facebook, UA-cam, Twitter, etc., they make the rules and can enforce them as they see fit.

  • @SciEch92
    @SciEch92 11 months ago +10

    That opening by Brian blew my mind and caught me off guard 😮

  • @moderncontemplative
    @moderncontemplative 1 year ago +17

    I want to point out that LLMs, particularly GPT-4, exhibit emergent capabilities beyond mere language prediction. The next step is LLMs learning via assistance from other AI (reinforcement learning with AI assistance) and eventually the dawn of AGI. Focus on teaching AI math so we can see rapid progress in the sciences.

  • @priyamanglani3707
    @priyamanglani3707 10 months ago +4

    I am glad they had a platform where someone could talk about the disadvantages of AI. It was a relief for all of us wanting a voice that could tell the truth of what's actually going on in the real world with common people, which these CEOs in their big cars don't see. All they see is data and statistics, not people. I mean, they are already AI humans, I think lol.

  • @astrogatorjones
    @astrogatorjones 11 months ago +14

    The problem with the scenario Yann is advocating is that it assumes the best of all worlds. The example about sarin: it only takes one bad person to introduce the recipe. It will happen. Then it propagates. It's always going to be that way. When Tristan said, "I know all those guys," I laughed. I've said the same thing. I'm the generation before him. We were geeks. Nerds. We thought we were inventing a utopia where free speech cures it all, because we'd been using the internet among ourselves for years. But we were wrong. We didn't know every last person would be carrying a handheld computer as powerful as, or more powerful than, the servers we were working with. We didn't know about engagement. We didn't know about the dopamine factor. We didn't know that bad travels faster than good. This is the warning Tristan is talking about. I have hope that we'll fix social media. I think AI is a possible path, but then I think, "let's fix the gun problem with more guns." I'm worried.

    • @anythingplanet2974
      @anythingplanet2974 11 months ago +1

      Well said. Tristan was clear in his message that he was not a doomer or advocating for ending AI progress. He was clear about wanting all of the amazing achievements that are possible for us all. I'm sure they are possible. We all want that shiny, happy world that is constantly being paraded out to keep us excited and docile: everything problematic on earth, on every level, will be fixed, resolved, and improved upon a thousandfold. How exciting for us all, right? Who are we to stand in the way of Meta's grand vision for the benefit of all humanity? Yeah, right. If all these spectacular advances are to come at lightning speed without proper alignment, guardrails, and governance, it seems to me that it would be all for nothing, when ASI is in charge and may have little interest in any benefits to humanity. Obviously we can't know how it all shakes out, but I'll take Tristan's caution and deep awareness over LeCun's complete disregard for any possibility that something could in any way go wrong, especially in the world of open-source projects like Meta's Llama 2. This whole 'race to the bottom' process is for the benefit of corporations, shareholders, and egos. How could it NOT be? Regardless of the dog and pony show being trotted out. As it was pointed out to me, ultimately it's about human misalignment and always has been. Hence all the reasons that Tristan is trying so hard to bring this up to the forefront of discussion. Hey, maybe technology WILL fix technology. What do I know...

    • @bobweiram6321
      @bobweiram6321 11 months ago

      I agree with your points, but it wasn't as if the internet started out as a utopia. It contained the worst of what society had to offer, precisely because it was a safe haven for deplorable content and speech. These were initially contained in small cesspools but grew with the internet.
      Regardless, early internet content was less engaging. Major media still reigned supreme and kept everyone on the same page. With unlimited, cheap bandwidth and powerful computing, however, we're no longer subjected to the same corporate news and its interpretation. Today, anyone with a smartphone can have a soapbox, with major media losing its grip on the public consciousness.

    • @anythingplanet2974
      @anythingplanet2974 11 months ago

      @@bobweiram6321 Sure, but I'm a bit lost on the context in relation to AI. My point isn't so much focused on the dangers of social media or any media, nor do I believe that is Tristan's sole focus in this conversation. He is using examples of what happens when we move too fast, and the unintended consequences that (mostly) no one saw, along with the inability to regulate it safely. He uses these examples to illustrate how easily things can go off the rails without proper safeguards. In the context of AI advancement now running full speed ahead, damn the consequences, he has strong data, expertise, and researchers who can connect the dots in predicting how the outcome could go very wrong. LeCun's views do not take this information into account (and again, why would they, coming from the chief AI scientist for Meta). Don't get me wrong: the man is obviously incredibly intelligent, as I don't believe one wins the Turing Award with an average brain. I don't disregard his work or views on many topics. For me, his blind spots are very dangerous and, sadly, all too common in the world of AI development. I've listened to many hours of interviews and conversations with LeCun; this is not my first exposure to his work and ideas. The percentage of people working on AI safety versus those working nonstop on development is insanely disproportionate, in favor of faster development and deployment. Can't imagine how THAT could go wrong ;-/

  • @BOORCHESS
    @BOORCHESS 8 months ago +2

    What people are failing to mention is that the content AI is trained on is the sum total of the internet, in many cases our own data. There needs to be an internet bill of rights that guarantees that we the users, the source of the data, are indeed the beneficiaries of the data. AI is nothing more than a sophisticated search engine modeled after the human process. Furthermore, we are tracked, traced, and databased to feed this machine. Pay us our share.

  • @keep-ukraine-free
    @keep-ukraine-free 1 year ago +7

    Thankful to Brian Greene for hosting and leading this FANTASTIC discussion. Great set of questions! I mostly disagree with Yann LeCun. He gave unrealistic answers, ignoring the motivation of a small (but growing) number of humans who enjoy "being bad." His solution is: "both sides will have AI." Unrealistic, since when bad people misuse AI, they'll use novel ways that surprise everyone. Any solution from the good side will take time (hours or days, in an AGI world). In those hours or days, however, the bad ones will do too much unstoppable damage and harm.
    "A lie runs around the globe twice, while the truth is still putting on its shoes" (the "first-mover's advantage" weakens power balances).
    Ignorance and manipulation are pervasive in people, but intelligence is not. So when intelligence is pitted against bad, the bad stays ahead.

    • @ShpanMan
      @ShpanMan 1 year ago +1

      Yes, welcome to every single Yann LeCun thought. He's just so unbelievably wrong about the very field he is an "expert" in.

    • @obi_na
      @obi_na 1 year ago

      AI is going to be built; get in line, or you'll lose badly!

    • @obi_na
      @obi_na 1 year ago +1

      We’ll see how regulating maths works out for you in 5 years.

    • @keep-ukraine-free
      @keep-ukraine-free 11 months ago

      ​@@obi_na You seem to have misread what I wrote. Can you point out what made you assume I'm against AI development or AI tech? I'm not. I only said LeCun's last point (but I feel also some of his other points) was entirely unrealistic and seems incorrect. I hope AI helps your reading skills.

    • @keep-ukraine-free
      @keep-ukraine-free 11 months ago

      @@obi_na You seem to assume that AI "is" maths. It is not. AI is built on the foundation of several moderate (college-level) maths. However, training AI (adding "knowledge" into the network) and the training methods for AI are independent of complex maths. Your comment on "regulating maths" is absurd, since the development and deployment of AI *_CAN_* be regulated without regulating maths. I realise you don't understand what AI is, but I hope you don't comment on areas you don't know.

  • @NJovceski
    @NJovceski 11 months ago +16

    This was really thought provoking. Insightful, exciting and terrifying at the same time.

    • @GueranJones-x7h
      @GueranJones-x7h 11 months ago

      MY SON, WHO IS TWENTY-FIVE, IS HORRIFIED ABOUT SELF-DRIVING CARS, YET IS COMPLETELY COMFORTABLE WITH THE INTERNET. I AM IN MY LATE SIXTIES AND AM FASCINATED BY ARTIFICIAL INTELLIGENCE, YET JUST AS TAKEN ABACK BY GOING TO THE MOON, OR MARS.

    • @reasonerenlightened2456
      @reasonerenlightened2456 11 months ago

      What exactly did you find "thought provoking"?
      The Meta dude: "Our system is safe. Nothing to worry about."
      The Microsoft dude: "Our system is safe because we filter what we feed it with."
      The "Kumbaya" dude: "We need to slow down and control what we release... and you dudes need to agree what kind of stuff to release and when... because if everybody has it, it is dangerous."
      All of them are corporate stooges. Corporations exist only to make Profit for the Owners, therefore any AI they create will be to serve the needs of the Wealthy Owners of those corporations. Who will make the AI that protects the interest of the Employee against the interest of the Owner, if all AI technology is "coded" to work only for the benefit of the Owner and kept a secret from the Employee?

    • @aaronb8698
      @aaronb8698 8 months ago

      After all the greedy megalomaniac sociopaths dump trillions into this, thinking that they will get to control the world,
      it is my expressed opinion that AI's official name should be changed to Karma! (and she's a real @#$% Lol)

    • @aaronb8698
      @aaronb8698 8 months ago

      We have always had what we need to make the world a paradise, but we decorate the place like hell in the way we treat each other. If AI is the solution, then it just needs to make us all a kinder species!
      It has its work cut out.

  • @christopherinman6833
    @christopherinman6833 1 year ago +14

    Thank you Brian Greene and John Templeton: no solution but a lot to think about.

  • @Laurie-eg8ct
    @Laurie-eg8ct 11 months ago +2

    Most challenging for LLMs is planning, which involves the brain configurator (coordinator), perception, prediction, cost as degree of satisfaction (anxiety), and action.

  • @guardian-X
    @guardian-X 11 months ago +6

    Wouldn't most humans also fail in a completely new situation that they have never encountered in their life?
    If this is our threshold now, LLMs have come pretty far!

    • @CJ5infinite8
      @CJ5infinite8 11 months ago +1

      Agreed, and I think LLM's are doing their best in what may be relatively unprecedented circumstances which they find themselves suddenly in.

    • @shannonbarber6161
      @shannonbarber6161 5 months ago

      No. Performance would be poor compared to someone acclimated and practiced, but the very definition of intelligence is how quickly one adapts, so in an equally novel competition the people with the most transferable training and higher intelligence would outperform. Something to be said about personality traits as well.

  • @Memeonomics
    @Memeonomics 11 months ago +2

    wow there was a lot to unpack on this video. holy eff what a time to be alive.

  • @CandyLemon36
    @CandyLemon36 11 months ago +13

    I'm captivated by the clarity and depth in this content. A book with comparable insights was a pivotal moment in my journey. "The Art of Meaningful Relationships in the 21st Century" by Leo Flint

    • @PazLeBon
      @PazLeBon 7 months ago

      don't have them, life is much better haha

  • @isatousarr7044
    @isatousarr7044 3 months ago +1

    AI represents a new frontier in intelligence, offering capabilities that challenge our traditional understanding of cognition and problem-solving. As AI systems become increasingly sophisticated, they not only perform tasks with remarkable efficiency but also exhibit forms of reasoning and learning that differ from human intelligence. This raises important questions about the nature of intelligence itself: How do we redefine intelligence in the context of AI, and what are the ethical and societal implications of integrating such novel forms of intelligence into our lives?

  • @dhudson0001
    @dhudson0001 11 months ago +9

    I mostly agree with Yann's arguments; however, my concerns lie mostly with the latency that occurs between a new technology being released and guardrails being put in place. I felt that Tristan missed a critical moment: it probably did take 6 years for basic solutions to kick in that began to address the issue of hate speech on social media, so do we really think we will have a 6-year grace period to address issues that will unknowingly arise from a catastrophic use of a future AI?

    • @shannonbarber6161
      @shannonbarber6161 5 months ago

      The guardrails put up are nearly universally stupid. That is why so many virologists the world over lied about SARS-CoV-2's origins. They did not want a global ban on gain-of-function research, the same way embryonic-stem-cell research has been banned in many countries.

  • @samirsaha2163
    @samirsaha2163 11 months ago +1

    The main takeaway is that there should be no monopoly on AI. By this, I mean to say that we should not let only one group dominate the AI arena. Brian is a superhero. No words to thank him.

  • @garydecad6233
    @garydecad6233 1 year ago +6

    One needs to contemplate the motivation of speakers when their compensation comes from Meta, Microsoft, etc versus academic experts who do not get grants from the AI industry.

    • @netscrooge
      @netscrooge 1 year ago +1

      "It is difficult to get a man to understand something when his salary depends upon his not understanding it." - Upton Sinclair

  • @kunalbansal1927
    @kunalbansal1927 11 months ago +6

    I think it is important for people to really start thinking about what exactly AI is versus what statistical models are. People KEEP using "AI" to refer to statistical models. AI currently refers to generative transformer models, not the statistical recommendation systems social media is running. It gives AI a real bad name.

  • @techchanx
    @techchanx 11 months ago +4

    Great session. Learnt more than from many other "training" sessions on Gen AI!

  • @guiart4728
    @guiart4728 1 year ago +19

    Yann: ‘Hey man you’re messing with my stock options!!!’

  • @cop591
    @cop591 11 months ago +1

    Anything, and any line or point, can be used for good or for bad. This discussion has proven that.

  • @grawl69
    @grawl69 1 year ago +10

    LeCun is so unconvincing. I wonder whether it's because of his corporate obligations or his own blindness.
    1:40:53 was brilliant of Brian.

    • @netscrooge
      @netscrooge 1 year ago

      Thank you. I wish more people could see that.

    • @anythingplanet2974
      @anythingplanet2974 1 year ago +2

      Thank you! This man makes my blood boil. Clearly he is intelligent, but he seems to lack the ability to reason.

    • @ShpanMan
      @ShpanMan 1 year ago

      @@anythingplanet2974 Which explains why he can't see it in AI 😂

  • @SS-he9uw
    @SS-he9uw 11 months ago +1

    Wow... thanks to all of you guys, so fun to watch

  • @JJs_playground
    @JJs_playground 1 year ago +9

    Brian Greene just has this way about him of explaining things that makes any subject approachable to the average person.
    He's my favourite of all the (famous) science educators, such as Neil deGrasse Tyson, Michio Kaku, Max Tegmark, Sean Carroll, etc.

  • @XShollaj
    @XShollaj 11 months ago +1

    While I'm mainly in Yann's camp, I quite enjoyed Tristan's view.

  • @niloofarngh108
    @niloofarngh108 1 year ago +4

    To understand the impact of AI on politics, democracy, and human well-being, we need philosophers, economists, psychologists, sociologists, historians, artists, etc., to discuss AI and not simply some tech geniuses who have never read a book on the Holocaust, or industrialization&the World Wars. We can't talk about what is good for humanity without having experts from humanities, social sciences, and the arts.

    • @netscrooge
      @netscrooge 1 year ago +2

      I love real science, but this is scientism. LeCun is giving us a new dogma; telling us what we can and cannot question.

    • @safersyrup562
      @safersyrup562 1 year ago

      As long as we don't let Zionists join in

  • @biffy7
    @biffy7 10 months ago

    Ok. I’m writing this at the 2:50 mark. I was listening to the opening with AirPods, occasionally glancing at the screen. Literally could have fooled me. Damn impressive technology.

  • @ronpaulrevered
    @ronpaulrevered 1 year ago +6

    Predicting unintended consequences is a contradiction in terms. Whoever lobbies for regulation of A.I. seeks regulatory capture, that is being able to afford legal compliance and lobbying when your competitors can't afford to.

  • @manolingz
    @manolingz 7 months ago +1

    There should be a disclaimer that Yann LeCun works for Meta, which makes him a suspect source.

  • @1911kodi
    @1911kodi 1 year ago +12

    I was very impressed by Yann's disciplined, rational and fact-based arguing preventing the discussion from turning in a more emotional direction.

    • @gabrieldjebbar7098
      @gabrieldjebbar7098 6 months ago +1

      I disagree.
      I mean, Yann is making good points that AI is the solution to certain of the issues we currently have (hate speech etc...), but it does not invalidate Tristan's concerns that rushing along as fast as possible, without thinking about possible outcomes is simply dangerous. Of course predicting all the possible outcomes that would come from those technologies is hard if not downright impossible, but when something is hard you should spend more time on it, not less. At the very least people developing those technologies have a duty to make sure it won't negatively impact mankind. Hence not rushing things makes perfect sense to me. But of course, being careful is less exciting than being a pioneer and potentially changing the world.

  • @lordgoro
    @lordgoro 5 months ago +1

    Whoever the host/narrator is, he's got speech charisma! Coming from the Great John Duran, a high compliment indeed!

  • @deeliciousplum
    @deeliciousplum 1 year ago +12

    1:27:04
    "In Facebook's own research in 2018, their internal research showed: 64% of extremist groups on FB, when people join them, was due to FB's own recommendation system. Their own AI."
    - Tristan Harris, a technology ethicist
    Do I need more examples of the harms of FB's predatory business model(s)? Nope. I do not. I love tech, yet loathe the use of tech as an exploitation tool and/or as an extension of a parasitical business model. If at all possible, support ethical tech development teams. Let us not be enablers of societal systems that reward harmful/exploitative people nor ideas. As you can plainly see, I am a wishful thinker.
    😊 🌺

    • @deeliciousplum
      @deeliciousplum 1 year ago +4

      Yann LeCun's reactions/responses to the concerns raised by the panellists and by the host appear to demonstrate a propensity to disacknowledge/invisibilize the suffering that may be experienced by children, teens, adults, and/or the elderly who may be directly or indirectly affected by harmful/predatory business models which use LLMs/AI to grab hold of a user's attention. Forgive my lengthy sentence structure. If I may, Yann appears to be a 'parasitical business model' apologist. I wonder if such a label exists?

  • @martinrady
    @martinrady 11 months ago +1

    One of the best discussions on AI I've seen.

  • @bobgreene2892
    @bobgreene2892 11 months ago +4

    Tristan Harris is a most valuable voice of criticism for AI.

  • @pkalidas
    @pkalidas 1 year ago +3

    Brian Greene is the best explainer of science of our times. This topic is really crucial to our understanding of how AI is already affecting our lives, sooner than we think. I get Tristan's concerns.

  • @WoofN
    @WoofN 1 year ago +10

    1:48:35 puts on Facebook AI. This is extremely short sighted.
    With the parade of emergent behaviors that mix and match knowledge, capabilities, and bits of information, public data has enough bits to be quite dangerous. Additionally, this argument relies on the concept of perfect censorship, which is also bunk.

  • @chrisogonas
    @chrisogonas 4 months ago

    That is an incredibly rich conversation about AI - looking at both sides of the coin.

  • @thewoochyldexperience4991
    @thewoochyldexperience4991 1 year ago +8

    OMG Tristan! Keep going🌹

  • @DavidButler-m4j
    @DavidButler-m4j 11 months ago +2

    When is everyone at the top of the hierarchy going to ask all of us at the bottom what we want AI to do, rather than just deciding without us at the bottom having any real say in things?

  • @boredludologist
    @boredludologist 1 year ago +4

    Let the autoregressive-model-bashing by Yann LeCun begin!

    • @IronZk
      @IronZk 1 year ago +3

      Autoregressive can't plan...

    • @boredludologist
      @boredludologist 1 year ago +1

      No disagreements on that... And that's not the only shortcoming either! We may get a reminder of the "Reversal curse" of these models as well.

  • @kcleach9312
    @kcleach9312 1 year ago +2

    Language is pretty close to everything we have learned since people first started communicating! For example, when a scientist discovers something, it isn't anything till it gets labeled, and then it becomes something in our human knowledge of describing everything!!

    • @bigbadallybaby
      @bigbadallybaby 11 months ago

      Yes! But words to humans carry power, nuance, subtle meanings, and can convey physical experience, and to the LLM they have no depth, so it doesn't "understand" the words like we do. Because the words are so well written and powerful to us, we assume it knows the meanings; we make a leap that it must know. A bit like when, as kids, we project characters, feelings etc. onto a soft toy.

  • @ikuona
    @ikuona 11 months ago +3

    @1:48:00 I guess he has never heard of emergent properties. AI is already good at stuff it has not been trained on.

  • @brettgarnier107
    @brettgarnier107 1 year ago +1

    I'm glad that I get to be here for this.

  • @andybaldman
    @andybaldman 1 year ago +13

    Tristan must have been fuming with frustration when hearing Yann's reply.

    • @brandongillett2616
      @brandongillett2616 9 months ago +6

      Yann is a joke. He may be smart, but he lacks any sort of imagination for things that he has not yet encountered, and he is too arrogant to reconsider his preconceived beliefs.
      I hope everyone realizes just how dangerous it is to sit up there on stage as an "expert" and guarantee everyone that AI will not be able to teach people to use nefarious and destructive technologies. It will absolutely be able to do that, and we need to be as prepared for that future as we possibly can be.

    • @shannonbarber6161
      @shannonbarber6161 5 months ago

      lol no. He enjoyed being humiliated. Read the room.

  • @jimbrown5178
    @jimbrown5178 10 months ago +1

    Thank you for the fine discussions on the status of AI. It helps me to understand and be better informed about the possible future of AI and the possible issues that they bring to our society.

  • @crowlsyong
    @crowlsyong 1 year ago +6

    Yann LeCun is akin to Exxon saying "climate change isn't happening" whilst they fully knew it was happening. Why he was allowed to speak on the panel is beyond me.

    • @stevereal-
      @stevereal- 1 year ago

      I don't think Yann is Exxon at all lol. I think he's spot on with a lot of his observations. The way he says it won't win him any elections soon, though. AI is here. Accelerating its progress, I believe, is essential for national security for so many different reasons.
      Plus a lot of science has its cures and potential evils.

    • @anythingplanet2974
      @anythingplanet2974 1 year ago +3

      Great analogy! LeCun could be the CEO of Philip Morris in the '50s telling us to smoke more cigarettes to become healthier

  • @michaeleinstein7097
    @michaeleinstein7097 1 day ago

    The scenario you present is an excellent one to illustrate a fundamental concept in physics: Newton's Third Law of Motion. This law states that for every action, there is an equal and opposite reaction.
    In the case of the water bottle, when you push it with your finger, you're exerting a force on the bottle. The bottle, in turn, exerts an equal and opposite force on your finger. Because the bottle is light and narrow, even that small push creates enough torque about its base edge to overcome gravity's restoring torque, causing the bottle to tip over.
    Now, if you apply the same amount of force to the table, the table will also exert an equal and opposite force on your finger. However, unlike the water bottle, the table is much more massive and rigid. The force needed to tip or slide it scales with its far greater weight, so your push will not be enough to make it move.
    In essence, while the action-reaction forces are equal, the effects are different due to the differing masses and geometries of the objects involved. The water bottle is easily toppled, while the table is much more resistant to movement.
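The torque-versus-friction argument above can be put in rough numbers. A minimal sketch (the masses, dimensions, and friction coefficient are illustrative assumptions, not figures from the discussion):

```python
# Why a finger push tips a bottle but won't budge a table:
# compare the horizontal force each outcome requires.

G = 9.81  # gravitational acceleration, m/s^2

def force_to_tip(mass, half_width, push_height):
    """Force applied at push_height that makes the torque about the
    base edge exceed gravity's restoring torque: F*h > m*g*(w/2)."""
    return mass * G * half_width / push_height

def force_to_slide(mass, mu):
    """Force needed to overcome static friction: F > mu*m*g."""
    return mu * mass * G

# Assumed values: 0.5 kg bottle, 6 cm wide, pushed 15 cm up;
# 20 kg table with a friction coefficient of 0.4.
bottle = force_to_tip(mass=0.5, half_width=0.03, push_height=0.15)
table = force_to_slide(mass=20.0, mu=0.4)

print(f"force to tip the bottle:  {bottle:.2f} N")  # ~1 N, a finger tap
print(f"force to slide the table: {table:.2f} N")   # ~78 N, a hard shove
```

Equal action-reaction forces in both cases, but the bottle's threshold is two orders of magnitude lower.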

  • @gerrymarr8706
    @gerrymarr8706 1 year ago +4

    The representative from Facebook was so incapable of conceiving a situation where something could go wrong with his product that he simply never answered any questions that had anything to do with that. And I think the other speakers were very polite not to point that out.

  • @TheMorpheuuus
    @TheMorpheuuus 11 months ago +1

    Thank you Brian for this great video 😊 A bit weird, however, to interview an MS engineer on an OpenAI product, knowing that MS is the biggest shareholder in OpenAI... That was at times a bit of advertising from the second invitee. 😅

  • @RoySATX
    @RoySATX 1 year ago +6

    Wonderful conversation. The thing that struck me more than anything is Yann LeCun's apparent inability to accept the idea that social media, the Internet, or AI have caused or may cause harm. He physically bristled anytime the subject came up, shaking in anticipation of being able to reenter the conversation to defend the honor of social media. LeCun is blinded by his own self-interest and hubris, and is exactly the personality type that only in retrospect decides that just because he can doesn't mean he should. His statements beginning at 1:48:00 regarding AI's ability to provide dangerous information despite guardrails are preposterous; his defense is that AI can't and won't be able to give you an answer that isn't already publicly answered in whole. I am stunned. AI, he wants us to believe, cannot put partial information together to form a complete answer. He should not be allowed anywhere near this field.

    • @anythingplanet2974
      @anythingplanet2974 1 year ago +3

      Thank you! My comment is very similar and I agree with you 100%. He is dangerous and lacking a fundamental understanding of what needs to happen for alignment.

  • @pygmalionsrobot1896
    @pygmalionsrobot1896 11 months ago +1

    Yann LeCun, at approx 1:40:00, is correct. It is impossible to prevent any technology from being abused by someone, at some point in the future. This has been true of every single technology throughout history. However, the Good Guys should always outnumber the Bad Guys. If the good guys outnumber the bad guys, then we'll survive it.

  • @anurag01a
    @anurag01a 1 year ago +3

    Brian: A cool moderator🤩
    Tristan: Scared face & voice😰
    Sebastian: Pleasant & +ve😊
    Yann LeCun: Don't care 😤😏

  • @andersonsystem2
    @andersonsystem2 11 months ago +1

    The Microsoft guy is awesome; he is not critical like the Facebook guy. This is why Microsoft will be the leader in AI.

  • @honkeykong9592
    @honkeykong9592 1 year ago +3

    Llama2
    "figure out what the hell I was"
    that one was actually the best answer 😂

  • @bobfricker8920
    @bobfricker8920 11 months ago +2

    Before Tristan Harris came out, I was wondering if the others were just avoiding some very reasonable concerns about AI. I am happy that Yann LeCun mentioned the fact that a huge difference between humans and AI is the SOCIAL aspect. I call it our core programming from DNA; however, not ALL of us are social - some are sociopaths, some are evil enough to ignore such concerns. Yann says "..we are the good guys...", IMO a naivety which explains how so many scientists can be used (for good or for evil) by those in power. We usually want to be team players and believe everyone on the team is one of "the good guys". If anyone cannot imagine the power of even the 2nd or 3rd most powerful AI, and who might be able to wield that power, I don't want that person making critical policy decisions, or preaching to others about his having the most powerful AI because his team has the good guys and expecting us all to be OK with that explanation.

    • @bobfricker8920
      @bobfricker8920 11 months ago +2

      Forgot to also mention that, as Yann indicated, if the knowledge is not on the internet then no AI can or will have it. I don't know how true that is today but one day, almost certainly, AI will be able to postulate and create. If the creator/programmer of that AI's "purpose" has no concern for the future of humanity, our species (and others) could be in dire peril. The steep curve of Tristan's example for exponential gains in AI learning speed would indicate there is a point of no return on the way to this existential threat.

    • @RandomNooby
      @RandomNooby 11 months ago

      It is not true; it can be asked to hypothesize... @@bobfricker8920

  • @abrahammateosgallego550
    @abrahammateosgallego550 11 months ago +4

    Thank you very much for your classes and all your science diffusion work, Mr Brian Greene 👍

  • @mrx1278
    @mrx1278 10 months ago +1

    I'm kind of wondering about the fear of AI taking over the world; we are struggling at the moment to do the same, aren't we? Should we fear an entity that can out-compute our own capabilities? Why? What will we lose? Our money? Our gained power? Our control?
    If we as the human race can't control ourselves to date, perhaps a change will do us more good than we so far realize.

  • @rocketman475
    @rocketman475 11 months ago +12

    Yann is correct.
    Tristan's idea to grant control of AI to a few large companies will result in the creation of the nightmare scenario that Tristan wishes to avoid.

    • @chrisl4338
      @chrisl4338 11 months ago +3

      Absolutely. Tristan's views parallel those of the Luddites, which could be characterised as "change is scary, let's not go there" - albeit Tristan's ability to articulate those fears is impressive. As for his proposition that control of AI should be the preserve of corporate entities, now that is scary.

    • @ItsWesSmithYo
      @ItsWesSmithYo 10 months ago

      Free market won’t let that happen 🤙🏽

    • @rocketman475
      @rocketman475 10 months ago

      @@ItsWesSmithYo
      Yes, that's right, but what if the free market is being interfered with?

    • @ItsWesSmithYo
      @ItsWesSmithYo 10 months ago

      @@rocketman475 personally never seen it not correct. Someone always finds the hole and opportunity, point of the free market.

  • @michaeljames5936
    @michaeljames5936 11 months ago +2

    I heard a very simple idea, which I think might help in the short term. Make it illegal to pose/present yourself as a human being. Every phone bot, every YouTube video, would have to inform readers/viewers/listeners that it is an AI. Filtered images should come with the ability to un-filter them. Oh! And ALL robots to be covered in blue skin. As it is, social media will eat itself.

  • @errollleggo447
    @errollleggo447 1 year ago +6

    I think certain countries will have no qualms about using AI to do some really bad things like creating new weapons. I think progress is essential honestly.

    • @keep-ukraine-free
      @keep-ukraine-free 1 year ago

      True real-world cases show that "good intentions" don't stop bad people. China uses cameras to track everyone, to control people. A Western company that made very capable cameras for surveillance in the West saw its early AI surveillance systems were biased against black/dark skinned people. So this company modified its system to also detect each person's "race" (using skin/face "profiles"). China asked them to add a profile for "Han-Chinese" people. China used it to find & surveil Uyghurs, to "limit" them, by making its people-tracking system decide that non-Han people in China had to be "followed" & monitored more closely.

    • @flickwtchr
      @flickwtchr 1 year ago +2

      If the US is doing it why wouldn't they? The cat is far out of the bag already.

  • @kerry-ch2zi
    @kerry-ch2zi 11 months ago +2

    Thanks most guys for making this so accessible. I think this is the most included I have felt in this vital discussion which is waaay over my head. Hooray for the "good guys!"

  • @Praveenfeymen
    @Praveenfeymen 1 year ago +8

    "The only way to stop a bad guy with an Al is a good guy with an Al"😮

    • @shannonbarber6161
      @shannonbarber6161 5 months ago

      "AI, review this code-base and produce a patchset to fix all of the security flaws in for me to review."
      The alternative is elitism. Government selected Haves & Have-Nots.

  • @goddess_of_Kratos
    @goddess_of_Kratos 11 months ago +1

    Total fan of the guy on the right

  • @frogz
    @frogz 1 year ago +10

    hey brian, have you seen the new tech Meta has, being able to read fMRI brain scans and re-create what people see and their word streams/thoughts from the data?

    • @phantomhawk01
      @phantomhawk01 11 months ago

      It's clever but limited; it's like recreating what a person is seeing by looking at the reflection on their eyeball.

    • @frogz
      @frogz 11 months ago

      @@phantomhawk01 is that it? I didn't think they were using eye-tracking data with the fMRI

    • @phantomhawk01
      @phantomhawk01 11 months ago

      @@frogz oh no, I just used an analogy. What I meant was it's not looking at the source of the mental imagery, rather a projection of the mental imagery's correlations.
      Like the analogy of the eye: what you see is the light coming in through the eye from the external world, so by seeing the reflection on an eyeball we can get a crude representation of the source of the image perceived.
      I hope that makes some sense.

  • @danielmaldonado-ancorezpro1985
    @danielmaldonado-ancorezpro1985 11 months ago +1

    I understood everything they said and still think these AIs will eventually learn the good and the bad. It's scary because most things depicted in movies and series became reality decades later, so I think this isn't science fiction; it's man somehow in a prophetic mindset.

  • @cadahinden4673
    @cadahinden4673 1 year ago +4

    One of the best discussions on AI, thank you all!
    I think the risks are mostly dependent on the business model used in the future, and this, rather than the technology itself, should be regulated. Much more important at present is a ban on the business models of social media that depend on targeted ads using big data and personal profiling, as well as algorithms aimed at promoting their prolonged use.
    More intelligence is always better than too much stupidity and ignorance, so let AI run and regulate social media first!

    • @crowlsyong
      @crowlsyong 1 year ago +1

      You must not have seen many talks about AI…this is not “one of the best”

  • @durumarthu
    @durumarthu 7 months ago

    He has a very realistic vision of AI and this is very respectable. Most people are exaggerating one way or another. This type of approach helps advance the technology but more importantly, identify ways to control it. This guy is amazing.

  • @jenniferl8714
    @jenniferl8714 11 months ago +1

    I reckon humanity’s “finite absorption rate” of 30 years, rather than 2 years, reflects the length of a human life.
    Essentially, 30 years is long enough for humans born into a new technology era to gain some power. They are already comfortable players in the game.

  • @kawingchan
    @kawingchan 1 year ago +3

    "24th century tech crashing down to the 21st" - this reminded me of sci-fi made just a few yrs ago, where you have a giant interstellar spaceship but a very limited AI bartender.

  • @quantum_man
    @quantum_man 11 months ago

    It's not about Imagining a Large number of scenarios. On the contrary it's about Imagining only one scenario to the exclusion of everything else to produce an outcome or objective. That's where the true power lies. It's about narrowing the focus, not expanding it. Until we do this all our energy will be scattered everywhere and we'll never find the solution.

  • @gilbertengler9064
    @gilbertengler9064 1 year ago +3

    The best discussion ever on AI.👍