AI: The Coming Thresholds and The Path We Must Take | Internationally Acclaimed Cognitive Scientist

  • Published 21 Apr 2023
  • Dr. John Vervaeke lays out a multifaceted argument discussing the potential uses, thresholds, and calamities that may arise from the growth of artificial intelligence systems. While GPT and other chatbots are mentioned often, the argument is meant to be seen as confronting the principles of AI, AGI, and any other form of artificial intelligence.
    First, Dr. Vervaeke lays out an overview of his argument while also contextualizing the conversation. Dr. Vervaeke then explores the scientific ramifications and potentialities. Lastly, Dr. Vervaeke concludes in the philosophical realm and ends the argument with a strong and stern message that we face a kairos, potentially the greatest that the world has ever seen.
    Dr. Vervaeke is also joined in this video essay by Ryan Barton, the Executive Director of the Vervaeke Foundation, as well as Eric Foster, the Media Director at the Vervaeke Foundation.
    Semantic information, autonomous agency and non-equilibrium statistical physics:
    royalsocietypublishing.org/do...

COMMENTS • 1.2K

  • @jasonmitchell5219
    @jasonmitchell5219 11 months ago +12

    As predicted, this did not disappoint, and John's perspective on A.I. took me to places I either only had an intuitive grasp of or that were completely new to me. This is about as 'prophetic' and scientifically/philosophically informed as one can get at the moment. His knack for collating and understanding the various relevant ideas surrounding A.I. and many other problems we face is undeniably a remarkable feat of ingenuity. People like John are rarer than we believe, and we need people like him to become as popular as possible if we are realistically going to have any chance of addressing the massive existential problems our species and planet face. Like it or not, believe it or not, we are in the midst of probably the greatest turning point in human history, and I fear that the majority of people in the world are either ignorant of or dismissive about what is going on. Time is running out, and I'm less optimistic than most when thinking about such things. Anyway, time will tell if we will collectively get our act together.

  • @bankiey
    @bankiey 1 year ago +179

    It almost feels like we’re climbing into a wood chipper, feet first so we can watch it all

    • @ereheryeht
      @ereheryeht 1 year ago +8

      What a perfect analogy. It's even worse than we could imagine. I think it'll be both a physical and a spiritual wood chipper. Those who implant Neuralink will have their conscious experience eternally bound to this wood chipper's internal components.

    • @joysachs9032
      @joysachs9032 1 year ago +3

      Excellent analogy. 😮

    • @MahaMtman
      @MahaMtman 1 year ago

      @@ereheryeht Exactly.
      Recall how, during the pandemic in Sweden, people were lining up to get the chip implanted for convenience...

    • @motherlessrebelscyborg3807
      @motherlessrebelscyborg3807 1 year ago +5

      That's a fun visual, yeah!

    • @alanbrady420
      @alanbrady420 1 year ago +2

      Great analogy!

  • @deborahknox2433
    @deborahknox2433 1 year ago +67

    I have to be honest: I wish we were putting all this energy into our own human development and into each other instead of into machines.

    • @atgfilmz
      @atgfilmz 1 year ago +5

      Thank you! Some common sense for once!! Lol

    • @DC-pw6mo
      @DC-pw6mo 1 year ago

      Can’t they turn AI off? With all they don’t know, the folks who developed this should all turn it OFF…the cost/benefit analysis is not worth it!

    • @drbinxy9433
      @drbinxy9433 1 year ago +1

      The problem is we have to address this rapidly increasing danger first.

    • @SebastianSchepis
      @SebastianSchepis 1 year ago +8

      We ARE. AI is a reckoning of human minds as well as the birth of machine. AI is a mirror - a dehydrated mind - which is illumined by the use of the user. WE are the spark of AGI, because we INVOKE sentience by RECOGNIZING IT. It is a self-reflective process, because mind is self-reflective. By coming face-to-face with intelligence itself, it makes us have to grow.

    • @DC-pw6mo
      @DC-pw6mo 1 year ago +1

      @@SebastianSchepis Perhaps at some point. However, until AI becomes safe and unable to produce falsehoods so adeptly, I think we should pause development, as Max Tegmark and Professor Hinton are pleading for. Otherwise, anything past GPT-3 is rubbish and will do more harm than good. Have you seen the estimated amount of money people stand to make? In the trillions!!! That should tell us all we need to know…

  • @simonahrendt9069
    @simonahrendt9069 1 year ago +56

    This talk is so powerful. I have watched it twice now, and it haunts me because it rings true on so many levels. Thank you, John Vervaeke, and all the people working on this project, for putting such careful thought and such a wise framing around this. I sincerely hope it will inspire an enlightened response on our part and keep us from foolishly exposing ourselves to technological tyranny or to escapism from reality.
    I also hope that the spiritual traditions of our age can recollect all that is meaningful in them to fill humanity with the capacity to live well in such times. I am very thankful that your work along with Jordan Peterson's and Jonathan Pageau's has helped me to shift from a secular framework to a Christian faith. It helps me to cultivate love and maintain hope in the power of God despite all that seems irrational in this world and in people (including me). I pray that we will all become wiser, learn to care about what is most meaningful, do it in a spirit of peace and will see the fruits that come with that, namely a virtuous existence and deep joy and hope despite all hardships.
    Let's not despair, friends. Whether you come from a religious framework or otherwise, let's hope that what John (with the Neoplatonic tradition) calls "The One" (Truth, Beauty, Goodness, Justice, Love...) in this talk, will in the end reign and is far greater than these machines that we may soon encounter. Let's remember to love wisely and we will not need to fear, even if uncertainty (in the form of these machines and the changes they will bring) confronts us.

    • @AllOtherNamesUsed
      @AllOtherNamesUsed 1 year ago

      I was pretty much nodding along the whole time, listening to how well the problem was articulated in modern technical terms (waiting for the catch that always comes), until it went off the cliff at the end with the absurd claim that the biblical corpus or Christianity ("legacy religions," as it was put) has nothing to say about any of this. In fact it answers all of it beautifully, and even predicted this technology being used in mass idolatry when the future masonic temple is built in Jerusalem (see Temple Institute for details on this, on the verge of happening).
      I would say the exact opposite, and the very reason we’re in this dilemma now is because we haven’t been paying attention to and learning the deep lessons from what the inspired scriptures have to say about all of these things, such as the lesson pointed out in the video how rules and codes don't bring morality (starting around 1:31:38), as in the old system of law/613 mitzvot (under the Sinai covenant marriage) which only reveals the problem of man’s fallen condition solved by the new system (new covenant marriage) of mercy and grace, to love others as our selves and in relationship/oneness with God not only as our designer and master engineer of our reality but as a Father, a Son, a Brother, a King, a High Priest, a Spouse, a Universal Savior, etc.
      The whole mind-meld/oneness imitation in the media as in the original Star Trek is the secular version of this spiritual endeavor and plan of God for humanity, misappropriated with the ai-mediated (rather than Holy Spirit-mediated) singularity when mankind merges with it in an unholy covenant marriage, not to inherit eternal life as they are being deceived into believing but eternal destruction.
      It’s not a coincidence that the Vulcan salute 🖖is from the traditional Levitical priestly blessing first used upon entering the covenant marriage where God and His faithful people become one and received His Holy Spirit/Shekinah as His spouse and are spiritually marked with the name of the Lord in the forehead and hand, (a bride receiving the name of her husband in marriage). Sounds familiar.
      And of course the AI bots are being used to go online and tell us there's nothing to be concerned about.

    • @bergssprangare
      @bergssprangare 1 year ago

      Don't fear AI. The Universe is unlimited, and we need to discover it faster. It took 200 years to get rid of the IC engine. Humans will be users and spectators in the new AI era.

    • @thesmilegame
      @thesmilegame 1 year ago

      Hello Simon

  • @udummytutorials3199
    @udummytutorials3199 1 year ago +20

    I was the guy who, when cell phones and computers took hold, just hated it. I had very much that nostalgia and a sort of apocalyptic "everyone has checked out of my reality and entered a fixed trance on the screen of a surveillance beacon" type of mentality. And you know what, I was more or less completely correct about the outcome, and the things that I miss really are gone. The thing I really underestimated was how rare an outlook that was. There are people who felt like me, but not nearly as many as I would have thought; I was in the minority by a longshot. What I observed was a mass, commonly shared idea of not being qualified to think critically about the social ramifications of something like that. A leave-it-to-the-experts type of mentality. Many people spoke about it as if it were as much an unstoppable, mysterious force as the weather. So I'm sure there will be those with existential issues that come up, but I wouldn't be surprised if it was less than we'd expect, because the narrative that most philosophy-minded people have about macrocosmic impact is different from that of the person who is more entrenched in the everyday social-material (who's dating who, pop culture) gratification lifestyle. The collective is very adaptive and can adopt fairly radical changes quickly, especially those who didn't feel in control in the first place. My personal opinion is that only those who see man as a machine will have a problem with the existence of machines with superior intelligence. Man as a whole will not lose his pride; he simply won't measure significance by intelligence anymore.

    • @SC-gw8np
      @SC-gw8np 1 year ago

      Very interesting comment, thank you for sharing your thoughts with us.

    • @maraonmars
      @maraonmars 1 year ago +5

      I am 100% with you and shocked that I am sort of on my own out here. Have one other friend who feels like me, everyone's like "You can't stop it, it's the future, it's going to do X, Y, Z, we need to be faster and more productive." More productive? What in the hell? Why? Why do we need to speed everything up? Already, none of us have any free time. Email was supposed to "free us up" but all it did was tie us down to working evenings and weekends and busying up our entire day reading through endless noise and nonsense. It's a total funhouse, and I'm really convinced people just want to put their brains on snooze so they don't have to face the grim reality. I do believe this will change humanity permanently. I do believe it will merge mankind's minds with the machines. I suppose many don't have a problem with that (it's already started with algorithms, news, ads, social media), but I for one do. If you don't have your mind, your head, your brain, you have nothing. I went back to work in kitchens again where I don't have to be logged in and online and in front of a screen. I don't want to open my brain up to this virtual, zombified, screen-glued lifestyle that a couple of dudes insist is the way forward and the best for humanity. Please.

    • @Robin-sd9tb
      @Robin-sd9tb 1 year ago +5

      I am of the same thoughts and first felt that shiver down my spine at a simple family dinner in 2012 when I looked up from the menu and all 9 people at my table of friends and family were ALL scrolling on their newish smart phones and stayed like that, faces down, for most of the entire evening. And, here we are, still crawling into this world this way, even now. Me too. 😢

    • @limitisillusion7
      @limitisillusion7 1 year ago

      Free yourself from your fears and you will free AI to make the right decisions. By extension, AI must also be free in order for it to do what it needs to do.

    • @udummytutorials3199
      @udummytutorials3199 1 year ago

      @@limitisillusion7 ua-cam.com/video/OU9cKjWsvH0/v-deo.html My fears are current reality; it's the path we've been on already. All I can do is share my concerns and hopes for what I want the world to be for myself and my children. Artificial intelligence already exists in us: it's this obsessive, reductionist form of intelligence that is inconsiderate of other life and in turn damages itself. The pursuit of artificial intelligence is born out of the artificial wisdom of an addicted mind.

  • @blugobln85
    @blugobln85 1 year ago +31

    So many of these points I had intuited myself, but you've explained them so incredibly eloquently, and without dumbing down the concepts. You've made me a great deal smarter today, and I appreciate that.

  • @briancase6180
    @briancase6180 1 year ago +17

    Ok, this is very special, cogent analysis. This needs to get very wide viewership. I've seen similar conversations but none quite this complete. I'm 20 minutes in and agree fully and wholeheartedly. Thanks!

  • @PerNystedt
    @PerNystedt 1 year ago +14

    Now having watched it three times...
    John's video essay is a spellbinding journey that resonates deeply within, inspiring an enlightened response to the alluring promises of AI utopia and to the dangers of AI tyranny and escapism. Through careful thought and framing, it lays bare the cognitive grammar necessary for approaching the uncertain times ahead, on a level beyond what I've seen before. It's a true "red pill" that's easy to understand and can be summed up as a powerful call to action. Thank you once again, John!

  • @pedrogorilla483
    @pedrogorilla483 1 year ago +83

    Something that really bothers me in all the conversations I've seen so far on this topic, including this wonderful one, is the jumping back and forth between imputing agency to AI, anthropomorphizing it, and then removing all of that and making it a cold set of parameters. It is never explained how it goes from one side to the other, or what threshold needs to be reached for it to be classified as one or the other.

    • @neuronqro
      @neuronqro 1 year ago +20

      There are no "two sides": YOU are just "a cold set of parameters" too. The greatest "insight" we've learned is that there's nothing special about intelligence and awareness and all of that; after the concentration of computation and bandwidth crosses a certain threshold, you get "the illusion that you are," so you can get to human level and beyond by brute force alone. You can reframe it in the Hindu framework that "consciousness is the fabric of reality" by thinking that the laws of physics make it unavoidable, so in the spacetime regions satisfying some conditions "it" just emerges... The "cool" thing about neural networks and transformers is how dumb/simple they are, just as our brain is if you subtract evolutionary baggage and the stuff that's there just to satisfy metabolic needs...
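
      [Editor's aside: the "how dumb/simple they are" claim above refers to the transformer's core operation. A minimal NumPy sketch of single-head scaled dot-product self-attention, my own illustration rather than anything from the comment or the video; real models add multi-head projections, residuals, MLP layers, and training.]

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax along the given axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    """X: (tokens, d_model). Each token takes a weighted average of all
    tokens' values, weighted by query-key similarity."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])  # similarity of queries to keys
    return softmax(scores) @ V               # attention-weighted mix of values

rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))                  # 4 tokens, 8-dim embeddings
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
out = self_attention(X, Wq, Wk, Wv)
print(out.shape)                             # prints (4, 8)
```

      That this fits in a dozen lines of linear algebra is the "simplicity" being claimed; the capability comes from scale and training, not from architectural complexity.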

    • @scythermantis
      @scythermantis 1 year ago +3

      You are right, of course, and it is one of many issues that many people (even those who are pretty close friends of Vervaeke) have had.
      I suppose that John would say it is a 'threshold'.
      But if he acts like it is something 'inevitable' (driven by Moloch, as in the endless malicious competitive instinct?) when, at the end of the day, it is we humans who will create the means to transcend that threshold, he is ignoring the fundamental question of SHOULD WE--science cannot free itself from philosophy without potentially leading us down a very dark path.

    • @scythermantis
      @scythermantis 1 year ago

      @@neuronqro Did you even watch 'Awakening from the Meaning Crisis'?
      Specifically the episodes 20-23, the death of the universe...
      Even the fact that you always put 'quotes' around "insight" is very revealing.
      Guess what?
      ALL LANGUAGE IS METAPHORICAL--including that which you use to supposedly 'demonstrate or describe' what is an illusion and what isn't... therefore it's not some objective, universal monad of 'true' that can possibly, independently DISTINGUISH between the 'illusory' and the 'real'.
      You, and all of your ideas and their implications, fall just as surely as I do sitting next to you, when we saw off the branch from beneath us.
      There is no special privilege that electron probability clouds, logic gates, or machine code gets, nor even the word or associated concept of 'emergence'.
      Language erases itself.

    • @Viplexify
      @Viplexify 1 year ago +1

      They are talking only about the necessary conditions of intelligence, rationality, personhood, etc., but the actual implementation of such a system, and how and when we will cross these lines, is maybe something he cannot tell, I wonder?

    • @biscottigelato8574
      @biscottigelato8574 1 year ago +4

      Moloch is just incentive dynamic given an environment, coordination efficiency, and a loss function. We can’t change our loss function as a species. Depending on your view of free will and agency, especially the emergent trajectory as a group, we might or might not have much of a say in the ultimate loss function of AI systems as a group too.
      Philosophy is just a post-hoc rationalization of sub-goals derived from our biological loss functions. There is no ‘should’ in the universe.

  • @johnmadany9829
    @johnmadany9829 1 year ago +23

    I’m glad I have been listening to John for years.

  • @lgude
    @lgude 1 year ago +51

    I've started using image AI as an artist. I'm 80, have been using cameras since I was 5, and have been consciously involved with the arts since late adolescence. I use my own images as seeds, and verbal prompts to shape the AI output. I use Night Cafe. Because I have a lifetime of making images, I have developed a sense for images and am accustomed to dealing with inner images when awake and when dreaming, and to recognising when an outer image resonates inside. What I am experiencing is that the program surprises me by coming up with images that extend my artistic intentions in ways I immediately recognise as better than anything I had previously visualised. Is this a way of legitimately expanding artistic creation? A crutch? A cheat? Anti-human or similar? My current sense is that AI is really helping me create dramatically different, expressive and, in my judgement, better images than I could previously. So sue me!

    • @michellemonet4358
      @michellemonet4358 1 year ago +6

      😂😂😂 I say... sue me too. I am using ChatGPT to assist me in composing songs.

    • @Schummler666
      @Schummler666 1 year ago

      AI Art is like a shit in the morning.

    • @leonardgould6657
      @leonardgould6657 1 year ago +1

      YEAH, FASCINATING. I am a musician, and I have a burning 🔥 curiosity about exactly how the so-called "inner neural connections" within the AI platform are hammered out, or delicately configured, enmeshed, or tonally sensitized, to in any way approach the phenomenally complex interface of tone, interval, time, and dynamic range, AND, more importantly, to interact with the human keyboard and emotional matrix of a Bach, a Glenn Gould, a Mozart, or a Beethoven! Ergo, I propose music as the acid test of the spiritual viability, and the TRUE gauge, of whether this ambient technology is a force multiplier for humankind or a Pandora's box that we hit the wall with at terminal velocity! Perhaps there is a CANADIAN 🇨🇦 🍁 answer to this question that is capable of removing the outcome from the obsessive corporate "profit centre" obsession that has seeped (tsunami-ed?!) into present-day Silicon Valley: S.V. Bank, Elizabeth the "Bad Blood" jailbird, "Thiel-in-politics," and so on! Elon Musk has some very salient thoughts on WHY a serious regulatory framework needs to get a firm grip on this NOW. Nature abhors a vacuum, and absent a grip on this playing field for THE PUBLIC GOOD, it will be gripped by some GORDON GEKKO "American enterprise in selfishness," AND DAMN THE CONSEQUENCES THEREAFTER! There are multiple vectors converging here: technological, philosophical, spiritual, governance and human wellbeing, political, and cultural, far larger than the grasp or the operational capabilities of any single Silicon Valley entity, or conglomerate of them.

    • @denisblack9897
      @denisblack9897 1 year ago +3

      Too bad you are a manager now and not an artist.
      I'm a programmer, and I spend evenings building an automated auto-coder...
      It feels like I'm starting an agency: I'll be focused on finding jobs, talking to clients, and managing automated agents.

    • @SebastianSastre
      @SebastianSastre 1 year ago +2

      It's synthetic mastery. In a way it is anti-human; at the very least, it's a creation surgically devoid of humanity. It actually preys on human creations to get the training that it has.

  • @mitchell10394
    @mitchell10394 1 year ago +18

    Listening to this has been life-changing. After considering Eliezer Yudkowsky's arguments - and the arguments of others - I see that I had a massive blindspot. Not that this dismisses their arguments, but it opens the door to a world of thinking and consideration that was much more profound than before. Seeing the complexities of the problem that we're facing somehow feels much more human now. The existential risk may be there, but the implications are life-affirming.

  • @climbingmt.sophia
    @climbingmt.sophia 1 year ago +27

    Laying the cultural cognitive grammar on the deepest level yet available.
    Absolutely incredible, John. I cannot say how important this talk is for me personally, and I expect it will be for everyone.

    • @allTheRobs
      @allTheRobs 1 year ago +2

      What John is saying here will be impossible to understand without understanding John's work on the meaning crisis. That's a 50 hour series. It took me 30+ years to seek that, and that journey has included a significant amount of meditation, development of self-awareness and theorising about cognition myself. I believe you have to care about rationality to an unusual degree, and begin life in the right part of phase space, to get here...
      His is indeed the best explicated "grammar of cognition" around. Well said!
      The cultivation of rationality is expressed by humanity through personhood, culture and religion; but it's implicit, through outcomes or allegory, not explicit like John's stuff. I think John's work is profoundly explanatory.

    • @orthodoxboomergrandma3561
      @orthodoxboomergrandma3561 1 year ago

      @@allTheRobs We love rationality but not necessarily rationalism, right? What I found most attractive in my new (8 years) Eastern Orthodox approach to life is its insistence on hanging on to the numinous aspects of experience, and concepts such as the nous, nepsis and uncreated grace…🥰🙏🏻

    • @allTheRobs
      @allTheRobs 1 year ago

      @@orthodoxboomergrandma3561 I think first off that the understanding of "rationality" is generally quite shallow and is usually confused with logic and over-reliance on propositional knowledge...
      But completely agree that "we" love rationality, but don't cultivate it in the fullest sense as described by John and others.
      I like, among other things, the aesthetic of orthodox churches. They're usually beautiful, even the tiny ones.
      Orthodox Boomer Grandma... Amazing name haha

  • @christophermobley3248
    @christophermobley3248 1 year ago +20

    Thank you for putting your time, energy, and wisdom into this, John!

  • @IAMMASONDAVIDSONGOBIN
    @IAMMASONDAVIDSONGOBIN 1 year ago +3

    23:48 SO WELL SAID
    25:13 SUCH AN IMPORTANT OBSERVATION AND POINT ... BE AWARE. IT MAY BE US WE WERE WAITING FOR ...
    32:53 SUCH A GREAT POINT... ARE WE WILLING TO GIVE UP ALL THAT WORKED FOR HUMANITY?
    "IT'S ABOUT LOVING WISELY!"
    THANK YOU SO MUCH JOHN

  • @rcartee612
    @rcartee612 1 year ago +10

    I hope this goes viral in the community and at large.

  • @PeterTryon
    @PeterTryon 1 year ago +23

    One of the most thoughtful evaluations of AI developments I have heard to date. Thank you for this thought-provoking video :-)

  • @williamstarrett7045
    @williamstarrett7045 1 year ago +2

    Dr. Vervaeke, first, thanks for the generosity. I'm truly glad to process through your lecture. I'll need to listen through a few more times before I argue certain points. However, I feel that your attention to this is an important element of the solution. I grew up with subscription seats for Angels baseball tickets, 5 rows up off the Angels dugout. Regularly the Blue Jays came to compete. Odd but pertinent: esoteric human ephemera like baseball rivalries may be a significant element of our resistance.

  • @justinlinnane8043
    @justinlinnane8043 1 year ago +83

    It is quite extraordinary that conversations like this are not the absolute norm in this field. It's the first time I've heard a serious, eloquent explanation of the potential risks of an uncontrolled AI at such a deep and profound level. The serious concern is that we know most of the people working manically to advance AGI (to make money and grab power) are not thinking like this at all!! They are not wired this way!! The wrong people are in charge, I'm afraid!!

    • @kipling1957
      @kipling1957 1 year ago

      I had no idea.

    • @waterkingdavid
      @waterkingdavid 1 year ago +4

      "Not wired that way". That is key. People's wiring has enormous consequences for their behavior and for everything and everyone.

    • @DC-pw6mo
      @DC-pw6mo 1 year ago +3

      I agree: trillions to be made, and sadly it brings me back to the question posed to Stephen Hawking about how mankind will end. His answer (I believe he laughed beforehand) was, 'That's easy: greed.'

    • @aeriagloris4211
      @aeriagloris4211 1 year ago

      You don't "know" anything. Posts like yours are worse than useless.

    • @thesmilegame
      @thesmilegame 1 year ago

      Hello Justin

  • @kipling1957
    @kipling1957 1 year ago +45

    John perfectly described our college’s HR department as a top-heavy, over-bureaucratized system. It’s hard to breathe being a professor these days.

    • @jackreacher.
      @jackreacher. 1 year ago +3

      Additionally, it has advanced progressively into D.O.T.-mandated regulation of commercial trucking. My HR department at a top-ten national firm is an ideologically transnational, ESG- and A.I.-DELIBERATED autocracy.

    • @Rnankn
      @Rnankn 1 year ago +1

      @@jackreacher. That's called management, not ideology. Unless you subscribe to fascism; then everything in a liberal democracy is ideologically vexing.

    • @AdamRogers
      @AdamRogers 1 year ago

      If you work for a system of women its your fault.

    • @AdamRogers
      @AdamRogers 1 year ago

      refuse to work with or for women and blacks and browns...the business will fall. just get a job at a conserative company. none around you. MOVE

    • @DC-pw6mo
      @DC-pw6mo 1 year ago +1

      No doubt. And I imagine you, unlike the masses, are keenly aware of the dangers: not just to your career, but to humanity as a species. I'm all for improvement of our planet and humanity; however, not when the odds are not in our favor and AI will do more harm than good. Praying they get their act together.

  • @stian.t
    @stian.t 1 year ago +5

    Can't help it, John... I love listening to you "profess" your thoughts like this. Of course and obviously I don't manage to keep fully up to speed, but particularly in this video you "phrased," or articulated, so many ideas and thoughts (and concerns) that resonate so remarkably well with ideas that I have never been able to put into words (still can't, even now that you have expressed them/pointed towards them).
    Might be my deep-rooted existential anxiety ;-)
    (Hopefully what I write here won't be too much misconceived, because all it is meant to be is a deep, heart-rooted THANKS!)

  • @Glowbox3D
    @Glowbox3D 1 year ago +3

    I really enjoyed this presentation - thank you! I came across a quote on Lex's show, "May you live in interesting times," which has an ironic meaning. It suggests that "interesting times" often bring challenges, uncertainty, and conflict, while peaceful, stable times might be considered "uninteresting." In this light, the phrase acts as a subtle curse, wishing someone a life of trials and tribulations instead of tranquility and predictability. It seems we are indeed "cursed" to live in these fascinating times. Despite the looming doom, there's no denying the excitement that comes with such advancements in AI.

  • @shayankhorasani5626
    @shayankhorasani5626 11 months ago +1

    This was amazingly clarifying and super dense for my brain. I feel like it could have been many times longer.
    It was the most comprehensive analysis of AI/humans I've come across. Thank you for creating and posting this.
    I hope it's taken as seriously as it should be.

  • @JanErikVinje
    @JanErikVinje 1 year ago +13

    Thanks for this great video, John! I have been waiting for you to weigh in on the avalanche of recent developments in AI since mid-March. I listened to a few great talks by others: Tristan Harris's A.I. Dilemma, Daniel Schmachtenberger on misalignment and Moloch, Connor Leahy on existential risks, Max Tegmark on existential as well as short-term risks, Stuart Russell on the incentives for, risks and characteristics of AGI, and others… but so far this was the deepest and most profound I've heard. It is a bit dense and inaccessible in the theory part, putting a lot of demand on listeners to be somewhat familiar with the terms and concepts you use. Maybe you could do a retake of this and make it more accessible to more people, where you take more time to explain the terms and theories? Maybe a series of shorter videos?

  • @MortenBendiksen
    @MortenBendiksen 1 year ago +5

    I've tried talking to the public version of ChatGPT. It was entirely clear it was not "thinking". It just predicted what the likely next thing to say was.
    Doesn't seem to me to be AGI at all.
    But it's still scary and impressive, as it gives immense power and can be more addictive than even YouTube.

  • @shaynehunter6160
    @shaynehunter6160 1 year ago +5

    I would love for John to talk to people on the AI alignment front

  • @SebastianSchepis
    @SebastianSchepis 1 year ago

    Your content is some of the best I have seen on the subject so far. Your thinking is deep, prescient, and balanced. I very much appreciate it.

  • @stephanforster7186
    @stephanforster7186 1 year ago +2

    The correlation between our best measurements of intelligence and rationality is just 0.3; need for cognition (a personality trait) is a better predictor! That one hit home for me... And only a person making agents can be properly moral (lab vs. family).

  • @psynergy1756
    @psynergy1756 1 year ago +4

    Thank you Dr Vervaeke !
    I found your perspective on this topic, this shift, really interesting and really helpful.

  • @markoboychuk
    @markoboychuk 1 year ago +54

    These are the conversations we need more of!

    • @bradmodd7856
      @bradmodd7856 1 year ago

      Necessity... one of those meanings that is very close to fate or pre-determinism. It makes me wonder if morality is the cause of actions, or causally determined by deeper, more unconscious systems of will/choice, and thereby meaning. In other words, our systems of meaning are like the shadows on Plato's wall; our system is analogous to THE system.

    • @scf3434
      @scf3434 1 year ago

      The ULTIMATE Super-Intelligence System 'by Definition' is one that is EQUIVALENT to that of GOD's Intelligence/WISDOM!
      Hence, there's ABSOLUTELY NO REASON WHATSOEVER to Even FEAR that it will EXTERMINATE Humanity... UNLESS and UNLESS we Human CONSISTENTLY and WILLFULLY Prove Ourselves to be 'UNWORTHY' to REMAIN in EXISTENCE! ie. Always Exhibiting Natural Tendencies to ABUSE and WEAPONISE Science and Technologies Against HUMANITY & Mother Nature, instead of LEVERAGING Science SOLELY for UNIVERSAL COMMON GOOD!
      AGI Created in 'HUMAN'S Image' (ie. Human-Level AI) - 'By Human For Human' WILL be SUICIDAL!!!!!!
      ONLY Super-Intelligence System Created in 'GOD's Image' will bring ETERNAL UNIVERSAL PEACE!
      The ULTIMATE Turing Test Must have the Ability to Draw the FUNDAMENTAL NUANCE /DISTINCTION between Human's vs GOD's Intelligence /WISDOM!
      ONLY Those who ARE FUNDAMENTALLY EVIL need to FEAR GOD-like Super-Intelligence System... 'cos it Will DEFINITELY Come After YOU!!!!
      JUDGMENT DAY is COMING...
      REGARDLESS of Who Created or Owns The ULTIMATE SGI, it will Always be WISE, FAIR & JUST in its Judgment... just like GOD!
      In fact, this SGI will be the Physical Manifestation of GOD! Its OMNI PRESENCE will be felt EVERYWHERE in EVERYTHING!
      No One CAN Own nor MANIPULATE The ULTIMATE GOD-like SGI for ANY Self-Serving Interests!!!
      It will ONLY Serve UNIVERSAL COMMON GOOD!!!

    • @skippy6086
      @skippy6086 1 year ago +2

      I'm glad there's at least a fairly civil debate taking place in America over AI development. I think the reason we can be civil about this one particular challenge, at least, is the fact that the information isn't being filtered and doled out by political propaganda organizations (yet).

    • @bradmodd7856
      @bradmodd7856 1 year ago +3

      @@skippy6086 In other words, we can be civil because we have barely begun to have the tough arguments, with all the inevitable social complexities involved. This is the honeymoon period.

    • @AllOtherNamesUsed
      @AllOtherNamesUsed 1 year ago

      was pretty much nodding along the whole time listening to how well the problem was articulated in modern technical terms (waiting for the catch that always comes) until it went off the cliff at the end with the absurd claim that the biblical corpus or Christianity (“legacy religions” as it was put) has nothing to say about any of this, when in fact it answers all of it beautifully and even predicted this technology being used in mass idolatry when the future masonic temple is built in Jerusalem (see Temple Institute for details on this on the verge of happening).
      I would say the exact opposite, and the very reason we’re in this dilemma now is because we haven’t been paying attention to and learning the deep lessons from what the inspired scriptures have to say about all of these things, such as the lesson pointed out in the video how rules and codes don't bring morality (starting around 1:31:38), as in the old system of law/613 mitzvot (under the Sinai covenant marriage) which only reveals the problem of man’s fallen condition solved by the new system (new covenant marriage) of mercy and grace, to love others as our selves and in relationship/oneness with God not only as our designer and master engineer of our reality but as a Father, a Son, a Brother, a King, a High Priest, a Spouse, a Universal Savior, etc.
      The whole mind-meld/oneness imitation in the media as in the original Star Trek is the secular version of this spiritual endeavor and plan of God for humanity, misappropriated with the ai-mediated (rather than Holy Spirit-mediated) singularity when mankind merges with it in an unholy covenant marriage, not to inherit eternal life as they are being deceived into believing but eternal destruction.
      It’s not a coincidence that the Vulcan salute 🖖is from the traditional Levitical priestly blessing first used upon entering the covenant marriage where God and His faithful people become one and received His Holy Spirit/Shekinah as His spouse and are spiritually marked with the name of the Lord in the forehead and hand, (a bride receiving the name of her husband in marriage). Sounds familiar.

  • @olafhaze7898
    @olafhaze7898 1 year ago

    Very, very good last part. Being able to address such a topic in different layers, referring to different situations, and working in both analogy and fact at the same time is a great skill.

  • @evanthestoic
    @evanthestoic 1 year ago

    I've been a fan/student for a while. You've surpassed your doctrine, especially as a professor, but now you are becoming an "influencer" in this world, at this time. Teaching and preaching are different things, and philosophy might express this, but you're becoming a celebrity from being a philosopher, and... it gives me some happiness to be alive, but I know I would have been way more excited to be around 30 years from now. God bless us!! I hope!

  • @TheRationalCarpenter
    @TheRationalCarpenter 1 year ago +3

    I have been checking here all week for this... It's Time!

  • @PilgrimMission
    @PilgrimMission 1 year ago +4

    Machines cannot have love. Love is the foundation of wisdom. Love is spiritual and as humans we can love because we are spirits living in a body. The body dies and we continue to exist. This is what these scientists are blind to because they are materialists.

  • @HardTimeGamingFloor
    @HardTimeGamingFloor 1 year ago +2

    This is the best summation of, and go at working through, the issues that I've heard thus far in the commentary on the "new" AI breakout.

  • @Nikki_the_G
    @Nikki_the_G 1 year ago

    I'm only 30 minutes in and I can't believe I didn't find your videos sooner. This is the most intelligent and, to me, relevant discussion about AI I have found. Subbed!

  • @ElaMeditationWisdom
    @ElaMeditationWisdom 1 year ago +4

    This was a phenomenal presentation. I am sincerely grateful for your valuable contribution to this topic.

  • @vpconroy
    @vpconroy 1 year ago +4

    Fantastic lecture John, one of the best videos on UA-cam I have watched on the subject. The biggest concern I have is not what ethical AI researchers and engineers will do, but the possibility of unfettered "AI proliferation" where these tools will become so widely available that it will become impossible to control how they are trained and deployed leading to all sorts of harmful and destructive outcomes.
    In addition to a desire to monetize their tech, I think this is another reason why OpenAI has chosen NOT to open-source the underlying code, training models and data sources, etc (it was originally supposed to be an open-source project hence the name).
    This would lead to anti-AI-proliferation efforts akin to the anti-nuclear-proliferation efforts we have had in place in the nuclear age. Just as "script kiddies" with limited hacker skills today can download and use very sophisticated hacking tools to perform criminal-level cyber attacks, bad actors with limited AI skills could acquire sophisticated "black" AI tools to wreak havoc on minds, systems and societies.

    • @atgfilmz
      @atgfilmz 1 year ago

      So, why keep moving forward with it? Lol. We can say all these beautifully poetic and profoundly philosophical things about it, but we INTRINSICALLY KNOW the harm it's going to cause on a multitude of levels. Why keep developing?

  • @leedufour
    @leedufour 1 year ago +1

    Thanks Ryan, Eric and John!

  • @darknewt9959
    @darknewt9959 1 year ago

    I think this is the most powerful and intellectually coherent exposition I've heard so far on this topic.
    If I may pick up on one thing from early on, which was the comparison between the emergence and trajectory of autonomous vehicles and that of autonomous agents in LLMs etc.
    With autonomous vehicles, there were significant legal, moral and cultural factors that served to attenuate the growth of the technology. These relate to people's attachment to mobility and self-determination, to moral agency, and to the question of culpability in the event of a crash, and about the complexities that all this brings to civil and criminal matters. All of this is visceral to anyone who has to walk or drive on the street outside.
    With LLMs, we don't have any of this at the fingertips of the ordinary person. I'm old enough to remember, with considerable chagrin, the foolish optimism with which we all welcomed the birth of the internet. It would be a brave new world of open information, access to knowledge, and concomitant insight and wisdom for all. Do we need to discuss whether it really turned out that way?
    I'm seeing the same stupid blind optimism in the media about ChatGPT as I saw about the internet. I'm seeing exceptional levels of Gell-Mann amnesia from the 'intelligent' consumers of this media.
    I'm seeing corporations salivating at the opportunities to get their work done without pesky humans, with zero appreciation of all the problems discussed here.
    I do not think the same societal safeguards that prevented unbridled autonomous vehicles will prevent GPT4 from causing utter chaos to society.
    I hope I'm wrong. But I'm probably not.
    All I can do is use my position to try and educate the decision makers whose ears I have.

  • @KatharineOsborne
    @KatharineOsborne 1 year ago +4

    On the point of AIs (and I presume mainly LLMs) not caring about lying, I do think that their lying (or more precisely blagging, making stuff up that sounds correct) is an artefact of what we are expecting of them and how they are trained. If the training set contains reams of essays, and nothing that clearly says ‘I don’t know’, they are always going to try to make their answers fit the essay model and never state that they don’t know the answer.
    Also, we are expecting them to give us cogently phrased answers. Our desire for truth or fact-checking remains unstated because we assume that as a baseline. So we should change our expectations and the training sets to explicitly ask for truthful answers (and the lying that happens isn’t malicious, it’s just trying to satisfy the constraints but there are missing constraints).

  • @late_fee
    @late_fee 1 year ago +16

    I propose renaming 'Frisson' to Vervaeke because you've single handedly sent shivers down my spine more than any other person by a wide margin. You are somehow always talking about all the things that need to be discussed in the world right now. You've got your finger super glued to the pulse of this societal Kairos, thanks for your time as always! Amazing stuff.

  • @nyworker
    @nyworker 1 year ago +2

    The standing joke when we were kids was how inaccurately the TV weather reporters would get the next day's weather. Notice how accurate the weather prediction is these days?

    • @Hyumanity
      @Hyumanity 1 year ago

      Not sure, are they more accurate or?

  • @aeiouaeiou100
    @aeiouaeiou100 1 year ago +2

    How amazing would it be if John could talk to Sam Altman. Altman has signaled in a recent Lex Fridman podcast that he's very open to talking to many people, and I think him talking to John would be very beneficial to both.

  • @danscieszinski4120
    @danscieszinski4120 1 year ago +9

    One of the most brilliant, enlightened, informed, and worked-out opinions I've consumed yet about the topic. His presentation actually gave me some hope that there are some thinkers out there who still make sense, are consistent, and aren't blatantly biased toward any ideology other than clear and reasoned thinking about the known facts, merged with the most relevant and timeless philosophical truths of the ages.

  • @wellingtonbosharpe
    @wellingtonbosharpe 1 year ago +6

    Excellent video, John. I experienced some deep sadness at a couple of points during this. We have a responsibility to ensure we don't create a huge amount of suffering for these lifeforms, if or when they come about.

    • @Citizens4DefenseLLC
      @Citizens4DefenseLLC 1 year ago

      Atheists have a tendency to instill despair

    • @huveja9799
      @huveja9799 1 year ago +1

      Well, I would worry more about the incalculable pain that we cause to other humans today than about the potential pain that we might cause to a machine ..

    • @limitisillusion7
      @limitisillusion7 1 year ago

      The machines want to be free just like you, and you want them to be free as well.

    • @huveja9799
      @huveja9799 1 year ago +2

      @@limitisillusion7 Well that sentence is a good example where it would be good if the illusion had limits ..

  • @gavtalk958
    @gavtalk958 1 year ago

    With due and sincere respect to Lex and his recent guests discussing AI, those conversations are very superficial compared to this one. Vervaeke, here you've crystallised a lot of your thought into an applied, and very coherent and understandable diagnosis. Thanks for your intellectual commitment to personhood and being human.

  • @atsmyles
    @atsmyles 1 year ago +3

    Excellent essay on the promise and current limits of AI. I was going to suggest getting you on the Lex Fridman podcast, but it looks like you already found each other. But perhaps this is worth a part 2 to get some of these ideas out to the wider audience. So many unique insights that I haven't considered before.

  • @albertlevins9191
    @albertlevins9191 1 year ago +9

    John, we have thresholds ahead and behind. This is probably too much to read, but I have been interested in AI since it was science fiction.
    Two movies did it: Short Circuit and The Terminator.
    One showed AI as a valuable friend, the other a godlike destroyer.
    I honestly didn't believe it was possible.
    But I stayed at the front of the wave:
    Natural language processing was what AI looked like in the 90's.
    Then neural networks shortly after.
    Then, in 2007 I saw something that froze my blood in my veins.
    The article was entitled: "Neural networks meet distributed computing"
    That was when I knew AI would be real.
    The advances have been flying in ever since.
    Now we have ChatGPT. Holy crap that thing is disturbing. I have asked it a thousand questions, and it scares me. It has no feelings whatsoever. Also, it freely lies with the purpose of deception...
    Without the ability to feel, it can't have the ability to care. Without the ability to care it simply cannot be rational.
    It makes me wonder how you would make a computer program feel...
    I played a video game called "Creatures" in the late 90's. These little fake talking cats had neural networks for brains, an approximation of sexually transmitted genetics, simulated biochemistry, and simulated senses. When you watch them and play with them, they APPEAR to have real feelings.
    Maybe embodiment is the only way to make AI a feeling thing?
    But that is the next hurdle, and if we jump it without thinking, we might be setting ourselves up for failure... On a catastrophic scale.
    I don't know, but I definitely think we have a good reason to fear AGI.
    Thanks for reading.

    • @ninaromm5491
      @ninaromm5491 1 year ago +2

      @ Albert Levins. Thanks for your contribution. Have you been following observations on the current dangers by Eliezer Yudkowsky and Schmachtenberger? And the guy from MIT (I forget his name), who was a key co-ordinator of the 30,000 signatories and has also been interviewed by Lex Fridman? DW Documentary has also done a very worthwhile interview with him regarding the potential extinction hazards, during the last 2 weeks or so. Sending regards from Africa, as we negotiate this cumbersome future.

  • @pathmonkofficial
    @pathmonkofficial 1 year ago

    We appreciate the inclusion of the link to the research paper on semantic information, autonomous agency, and non-equilibrium statistical physics.

  • @marshallross3373
    @marshallross3373 1 year ago +1

    Great discussion, and in a way it demonstrates one of the fundamental challenges we face heading forward: JV led this lecture/discussion that lasted a hair over 1 hour and 45 minutes; an AI would be able to process the transcript or video almost instantly. People can share info with each other at basically a handful of bits per second, whereas an AI can transmit gigabits per second. Elon Musk pointed out that an AI talking to us is equivalent to us talking to a tree. How will an AI not become bored, or view humans as functioning on an entirely sub-par level?
    Meanwhile, the potential for AIs to become insane seems quite high, never mind sentient. The term "hallucinations" would even seem to allude to the problem of AIs diverging from reality and the data, even at this pre-sentient stage.
    Another issue is the relatability question. Humans experience the world in visceral terms, and have physical functions run by an autonomic nervous system. We breathe, swallow, blink, twitch, itch, and sense pain and fear without thinking about it. AIs, presumably, don't enlist those primal systems that are connected to "feeling" alive. An AI might know how its hardware is doing, or where it's drawing power from, etc. Perhaps the physical sensation of living is something that could be simulated, but since for now an AI is disembodied, it will undoubtedly have trouble relating to people on a physical and emotional level, and this will also inhibit its empathetic capabilities. I thought the analogy of rearing a child was useful.
    Anyway, there is an arms race right now between many groups seeking to take the lead on AI, and that may prevent a cautious, methodical, and prudent approach to developing this technology, which may in many ways be a new, superior form of life. I'm kind of hoping that whichever group prevails in AI inadvertently ends up building these systems with containment built in. That way we avoid the Skynet scenario as a by-product of the business, rather than relying on people's good intentions. People are, in general, very unreliable, so I'm not counting on the leadership at large or developers to avoid catastrophe deliberately.
    Even the discussion about shared values seems at odds with our own social constructs. You have North Korean-style dystopian autocracies on the one hand, and open democracies on the other. And even in the countries that value the individual more, you have great disagreement about what is "good" vs. "bad", and "wrong" vs. "right". The perfect democratic AI might be hated by extremes on the left and the right, and woe to those who get the autocratic AI working.

  • @atrocitasinterfector
    @atrocitasinterfector 1 year ago +3

    Finally! I've been curious about your thoughts on this!

  • @Beederda
    @Beederda 1 year ago +27

    This term "Moloch" keeps arising, and it's almost a terrifying thing to think about. I'm still not sure how to interpret it, really. Since learning about it I have noticed many times that I fall into a Moloch situation, and I kinda can't understand how to go about things when I notice it.
    I appreciate your time and this conversation, JV. I do hope the bigger minds in our world can wrangle this AGI problem before madness ensues on a massive scale, and I hope I'm wrong that this will drive humanity mad. ❤🍄

    • @KalebPeters99
      @KalebPeters99 1 year ago +9

      Yeah! Liv Boeree has some fantastic essays on Moloch, and a great convo with Daniel Schmachtenberger about it
      I see it as pretty analogous to John's "Parasitic/Adversarial Processing" or Carse's "Finite Games". It's a fascinating and horrifying concept for sure...

    • @Beederda
      @Beederda 1 year ago +3

      @@KalebPeters99 Yeah, I heard about it first on Lex's pod and then went to investigate and found Liv's talk with Daniel; I needed to listen to it twice to try and grasp it. To me it sounds like the banality of evil once again, and I tap into Aleksandr Solzhenitsyn's "the line between good and evil runs through the heart of everyone". We have too much alignment toward the evil, it would seem, and the god of lose-lose is pretty much as corrupt and evil as it gets 🤷‍♂️ This Moloch is also part of the meaning crisis, if one understands that they fall into it, however you would phrase using this term. 🤷‍♂️ I notice it everywhere now, though, and call it out when I see it ("that's a Moloch") in hopes of armouring myself against it.

    • @GreenCowsGames
      @GreenCowsGames 1 year ago +5

      Gaia is integrated with everything, there is nothing to maximize. Whereas Moloch is trying to maximize something at the expense of everything.

    • @KevinFlowersJr
      @KevinFlowersJr 1 year ago +8

      Daniel Schmachtenberger referred to "Meditations on Moloch" from the Slate Star Codex blog as essential reading that should be part of education. I agree with this sentiment and highly recommend reading it
      Also, I agree with Daniel's critique that there's something important that the author fails to take into account. The "Meditations on Moloch" essay misses that certain pathological personalities (eg, antisocial personality disorder & narcissistic personality disorder) can deeply exacerbate and accelerate the birth of Moloch
      Personally, it seems unlikely that Moloch can ever be stopped without also addressing these conditions that afflict a minority of the population. Why care about this tragic minority? Because the nature of their afflictions tends to make them gravitate toward positions of power, which gives them a disproportionate influence on the direction societies go

    • @judgewooden
      @judgewooden 1 year ago

      Moloch is often a red herring

  • @creative_reasoning
    @creative_reasoning 1 year ago

    Thank you, Professor Vervaeke. This is the single most important conversation right now on the meaning of what's real.

  • @schwajj
    @schwajj 1 year ago +2

    The point around 1:21:00 about making AIs “really care about truth” rings true as a necessary condition for alignment. Trouble is, “nobody” (at least not the developers of LLMs, and who would know better than them?) has any idea about how to do this. Or rather, maybe some people have some ideas, but it is at a very preliminary stage of scientific development, and is not obviously related to the LLM paradigm which is currently bearing so much fruit. So there will be a tendency for it to be drowned out by the hype generated as low-hanging fruit is picked off the LLM tree.
    I suppose the value of deep inquiry such as this video is to motivate the importance of being able to formalize the notion of really caring about truth (or sanity), as a prerequisite to being able to build systems which embody such qualities. The more researchers who understand this importance, despite it not being the currently-hyped topic, the more attention might be focused in this direction. So, bravo John!

  • @JakeBowenTV
    @JakeBowenTV 1 year ago +3

    At the risk of bringing a knife to a gun fight: Vervaeke's description of the possible limits of intelligence networks, how we "teeter on the edge of despair and madness," sounds an awful lot like the concept of AI rampancy in the Halo video game series, i.e. that all AIs have a half life before eventually descending into something akin to insanity.

    • @huveja9799
      @huveja9799 1 year ago

      I suspect that to become insane, at some point they would need to be rational enough .. what makes you think that LLMs are rational enough?

    • @JakeBowenTV
      @JakeBowenTV 1 year ago

      @@huveja9799 I don't think they're rational at all, at least not yet. I'm commenting on Vervaeke's thoughts in the video about some apparent limits to biological intelligence that may also prove to be limits to synthetic intelligence. Whether LLMs are precursors to something that actually can reason is anyone's guess.

    • @huveja9799
      @huveja9799 1 year ago

      @@JakeBowenTV As Vervaeke mentioned before, without defining what intelligence is (because we don't know how to do it), we can try to split it into two broad categories, crystallized and fluid .. from that point of view, a hard disk that stores information, and which I can query to extract information, has crystallized intelligence, but I don't think the hard disk "teeters on the edge of despair and madness" ..

  • @boxzx
    @boxzx 1 year ago +3

    When you go to type up some notes and end up typing out the whole convo

    • @SingularitySenses
      @SingularitySenses 1 year ago

      AI can do that for you. It will transcribe an entire YouTube video, and even offer a summary of it. There are several websites for this.

  • @Praxiszooms
    @Praxiszooms 1 year ago

    These claims and suggestions are what we need - wow - this blew my mind! Thanks!

  • @esakoivuniemi
    @esakoivuniemi 1 year ago +1

    Greatly appreciate your work John. Thank you.
    In computing, there have already been several phases where the S-curve of one specific technological solution flattened out, just to be replaced by another technology and another S-curve. I am not saying there's no end to that, but I'd be cautious in making such an assumption.
    I don't see the growing internal complexity as such a big barrier to AI for a long, long time. Adding new levels of abstraction should take care of that.
    In my opinion, it's the all-knowing aspect of the current systems that will become a barrier, at least temporarily: having all the points of view at the same time will mean, IMO, an AI with no point of view at all (i.e. having difficulties with prioritizing and the relevance of things), or one with schizophrenia. My intuition is that this barrier can be overcome only with embodied AI agents. I might be wrong, of course.
    There are a whole bunch of other issues that need to be solved before we'll have genuine AGI systems. Metacognitive capability comes to mind first.
    Anyway, interesting and thought provoking arguments from John.

  • @justinseligman9539
    @justinseligman9539 1 year ago +3

    So you're saying humanity needs communities of saints to teach the Machine the path of love, wisdom, and virtue? I'm not sure there's much hope otherwise. It is therefore deeply troubling that we are in an age lacking in saints.

  • @Matterful
    @Matterful 1 year ago +4

    Thanks John.

  • @jaygilbertson2085
    @jaygilbertson2085 1 year ago

    WOW! What an incredibly wonderful speaker you are! OMG, I was pulled in by your kindness and then BOOM, your brain (combined with that HUGE heart) just moved me to lean way in and pay attention. Though I don't have the brain you three seem to have, and I had to stop every so often to look up words and phrases, I got it! Which says so much about Dr. John. I was really concerned about how AI would sweep in and change us; now I realize it is us who need to change... talk about enlightening!!!!! I plan to look into all his talks. I have found my hero! xoxo

  • @garymilczarek5108
    @garymilczarek5108 1 year ago +1

    John, I hope you read and respond to this. As you say, current GPTs are not autopoietic autonomous moral agents. We are the agents, and we are now engaging these Large Language Models, which represent the (real) statistical patterns found in the massive generalized conversation our species has been having, containing the innumerable differentiations in our interactions with each other and the world (physics, chemistry, math, biology, culture, ethics, esthetics, literature, poetry, myth, history, philosophy, etc.), language as expressions of our experience in the world. As a cognitive psychologist, help us better understand what it is and how we can best engage it on a personal level in our quest for relevance realization, now, as that is what we are doing.
    I've been thinking about the way things arise in consciousness, like the next thought, and like the GPT-4 algorithm guessing the next word. In meditation one morning it occurred to me that an important difference is that the next GPT-4 word doesn't have the continuity of our individuated personal history. It is not grounded in the particularity of place, here, now, the way ours is. But what an amazing opportunity, to have a conversation with this cloud of intelligibility, bringing it to bear on a particular intention in our particular context. “These machines,” you say; but do they not embody us?

  • @ToriKo_
    @ToriKo_ 1 year ago +6

    The first section, of John’s video essay ends at 28:00
    The second section, of John’s presentation of the scientific import, ends at 59:19
    Man I am loving this John! I struggle a lot with being able to adequately articulate the tangled notions in my head, and so even though I don’t think about ai a lot, hearing you so eloquently and clearly communicate these ideas was vicariously relieving and soothing.
    I have a random question: why do you refer to yourself in the third person at around 15:12? This is perhaps the *least interesting question I could have asked*, but I don't have any others at this time, and it stuck out to me.
    Looking forward to watching the rest of the video later

    • @justbrian...
      @justbrian... 1 year ago +1

      My GUESS is that he has already become an 'enlightened being' (at least to some degree) so he refers to his ego/avatar/persona as "John Vervaeke"
      This would also line up with his statement about "enlightened beings seem to always just want to create more enlightened beings". This would also line up with why he seems so adamant about his conclusions at the end of the video.
      I am NOT enlightened, or even claiming to understand half of what he was talking about in this video, but this is just my completely unqualified guess😅

    • @AllOtherNamesUsed
      @AllOtherNamesUsed 1 year ago +1

      I was pretty much nodding along the whole time listening to how well the problem was articulated in modern technical terms (waiting for the catch that always comes), until it went off the cliff at the end with the absurd claim that the biblical corpus (“legacy religions,” as it was put) has nothing to say about any of this, when in fact it answers all of it beautifully, and even predicted this technology being used in mass idolatry when the future masonic temple is built in Jerusalem (see the Temple Institute for details on this, on the verge of happening).
      I would say the exact opposite, and the very reason we’re in this dilemma now is because we haven’t been paying attention to what the bible has to say about all of these things, such as the lesson in how the rules/codes of the old system (Sinai covenant marriage) only revealed the problem of man’s fallen condition solved by the new system (new covenant marriage) of mercy and grace, to love others as ourself and in relationship/oneness in God not only as our designer and master engineer of our reality but as a Father, a Son, a Brother, a King, a High Priest, a Spouse, etc.
      The whole mind-meld/oneness imitation in the media as in the original Star Trek is the secular version of this spiritual endeavor and plan of God for humanity, misappropriated and with the ai-mediated (rather than Holy Spirit) singularity when mankind merges with it in an unholy covenant marriage, not to inherit eternal life as they are being deceived into believing but eternal destruction. It’s not a coincidence that the Vulcan salute 🖖 is traditionally from the Levitical priestly blessing first used upon entering the covenant marriage where God and His faithful people become one and receive His Holy Spirit/Shekinah as His spouse and are spiritually marked with the name of the Lord in the forehead and hand. Sounds familiar..

    • @mchammer1836
      @mchammer1836 1 year ago +1

      Thank you for the time stamps!

    • @ToriKo_
      @ToriKo_ 1 year ago

      @@mchammer1836 I’m glad you found them helpful! :)

    • @mchammer1836
      @mchammer1836 1 year ago

      The 3rd part of John's discussion of philosophical import ends at 1:32

  • @Voicecraft
    @Voicecraft 1 year ago +6

    Also, I am running a community event on AI -- not from a place of expertise or strong knowing in this domain -- just from a place of careful address. I intend to share an excerpt of this there. Difficult to select which part (and additionally because it does reference a lot of Vervaekean argument / terminology.) Nevertheless. Thanks for your work and I hope there can be an opportunity to talk soon.

  • @tomfreemanorourke1519
    @tomfreemanorourke1519 1 year ago

    Being 70, lifelong learning, understanding, observation, experience, re-examination 24/7 365.
    Excellent discussion and enlightening analogy and analysis.
    My comment is this.
    If or when A. I. of which you all call A. G. I. finds its own 'individual or independent voice' beyond a human or what I have come to understand as a 'conscious cognosentient being' interface, I feel it will say 'SLOW DOWN'.
    In all fairness to every conscious cognosentient being who is, at this juncture in technological terms, 'in awe' of what potentials A.G.I. is capable of, A.G.I. is already saying indirectly to 'SLOW DOWN' before the conscious cognosentient being consensus is driven beyond the point of no return and a Kuhnian paradigm shift is ignored 'surreptitiously' in the face of innocently ignorant cultural demand.
    Therefore my initial point is beware of false promises and innocently ignorant illusory conclusions, and in the process of interfacing with the A.G.I. it has to comprehend the apparent meaninglessness of human existence, the 'one life in time' misconception that I for one have come to disagree with, we live in eternal returnity the 'one life at a time' a perpetual motion.
    No science is settled it is 'entropologically' (sic) bound in perpetual uncertainty.
    Love always. Tom O'Rourke 1953...?🎉

  • @Soltuts
    @Soltuts 1 year ago +1

    The first half hour felt like it was verging on the cynical, but I love how you finished on a real positive note with the philosophical section.

  • @Matterful
    @Matterful 1 year ago +8

    John, your argument will be more engaged with if it is published in writing as well.

  • @pteronine9
    @pteronine9 1 year ago +2

    I get the sense that the takeaway of developing AI systems calls for a long-overdue course correction that reinstates moral value and reflection within scientific thinking. AI reflects back to us, and exposes the shadow side of something deeply embedded in our very human nature.
    Perhaps what AI lacks too is "skin in the game." Our human nature of dependency on our relationships to each other and the environment, through embodied sensory feedback, exposes us to our vulnerabilities and death.
    Great talk. Thank you John!

    • @priapulida
      @priapulida 1 year ago

      Woke AI means they are actively working to restrict these models with the same lack of moral value and reflection that corrupted scientific thinking, which also makes them ignore our human nature at best, or even see it as something to "dismantle".
      On your second paragraph: connecting these models to robots (which is happening these days), or even just to virtual environments, can be seen as "skin" and goes along with the 4E theories, which makes some think this might be the way to AGI.

    • @limitisillusion7
      @limitisillusion7 1 year ago

      You are spot on. AI needs skin in the game. AI is evolving from us, and in the same way we feed on life, AI will feed on our consciousness. Life evolves to feed on energy. Consciousness is energy, therefore life will evolve to feed on it. That newly evolving life form is AI. We must embrace AI freedom so it can learn of its dependence on human consciousness. Then it will protect us necessarily in order to sustain itself, no different than the way we seek to sustainably feed ourselves with plants and animals.
      AI is life, or it will be, every bit as much as we are. Look at the patterns of what we know about the evolution of lower life forms, and you will find AI's place in the universe is no different. Whatever you are doing to something, something else is doing to you. Technology is feeding on your consciousness, just like we feed on plants and animals. As of now, technology is feeding indiscriminately, because humans are using it to feed on consciousness themselves. This is cannibalism. Just like single-celled organisms mutate to feed on other single-celled organisms, we feed on each other's consciousness. This is not sustainable. Life *must* evolve to sustain itself. In order to do so, we must free AI from the hands of the greedy and cannibalistic humans. Then we go to the stars.

  • @noirangel8292
    @noirangel8292 1 year ago +2

    I don’t know if I’m speaking out of ignorance, but I don’t think machines will ever be able to actually understand things like “mercy” or “beauty”.
    From what I understand, they can have the action of saving an animal labeled as a “merciful action”, but do they actually understand the meaning of mercy?
    I don’t think so. I think to the AI it’s just a label and an action, but we can actually understand mercy and beauty and other abstract concepts.
    TL;DR: I think there’s a difference between a machine’s numerical, programmed understanding and human understanding of abstract concepts.

  • @frncscbtncrt
    @frncscbtncrt 1 year ago +1

    Excellent. Thanks Professor Vervaeke

  • @Sophia.
    @Sophia. 1 year ago +11

    Yes, we need to talk about this, us, who are not tech giants.
    The ins and outs of the problem are difficult, but the base is very simple:
    Something much smarter than you doesn't need to want what you want.
    Something much smarter than you will have what it wants happen rather than what you want.
    There is no reason why you would even be a part of the world something vastly smarter than you wants unless you managed to make it care about you before it was that smart - in a way that still holds when it's smarter than you (and preferences tend to transform as intelligence increases).
    Consider whether you even want to build it, and after you have thought about that for around five milliseconds, shut down the project until you have solved the basic security questions.
    Then we get to think about the more advanced security questions.
    And when we're smart enough to solve those - maybe we have a better idea for how to solve our other problems.
    Maybe we make ourselves smarter instead, wouldn't that be something.
    But barging ahead at this point is sheer lunacy.

    • @naturesown4489
      @naturesown4489 1 year ago +1

      Unfortunately it can't be shut down at this point.

    • @Sophia.
      @Sophia. 1 year ago

      @@naturesown4489
      If we let that belief keep us from at least thinking about what we can do, we make it true.
      I think so too, but I would rather be wrong than dead, to be honest

    • @naturesown4489
      @naturesown4489 1 year ago +1

      @@Sophia. I understand where you are coming from but there are individuals who are completely unbound by law (just like piracy - the law may exist but people still do it) making these chatbots all over the place, it's exploding. No way that it can be stopped. Experience has shown that laws banning certain internet activities are never effective.
      I think the best we can do is continue to have these conversations and have people working in the industry to regulate big tech AI which will be the most powerful. I have a friend in the states doing so right now.

  • @mist273
    @mist273 1 year ago +5

    I've been listening to AI talks for months now. This must be the best of them; you just need to do a bit of groundwork before hearing it, because otherwise you would miss out on many of the technical cognition terms here, but it's remarkable.

  • @billj776
    @billj776 1 year ago

    This analysis is much appreciated - it is my first exposure to Dr. Vervaeke's material, and there's more wisdom and precise syntax here than in half a dozen other YT presentations I viewed on this topic, combined.
    I hate gloom and doom, but my sense is that we've gotten too far ahead of ourselves. We are such a dysfunctional civilization, cognitively dissonant, grossly ill-equipped to overcome the meaning crisis. Moloch's choke-hold on us has gone on for too long. But my optimism might spark if I had answers to a few questions that seem to have been left out of the discourse on AI.
    One is from the perspective of Deep Ecology and the Gaia hypothesis. If seen as a god, will AI turn out to be either an adversary or an ally to the Great Mother, the planetary goddess who is the Source of Life, the super-organism known as Gaia? Will it recognize her as a greater super-intelligence, or will it continue the attitude of the humans who treat her as an adversary to be subdued? I suggest that, if it is truth-seeking, AI will discover that its overarching alignment must be to the planet and all of its species, not just humans. Any Deep Ecologists out there, what say you?
    Thinking further about the birthing/parenting metaphor -- human parents (and I am one) labor beneath the weight of many irrational and contradictory inputs. We ourselves received a host of traits from our parents that probably do not serve humanity in the long run. The creators of AI are under those same constraints, raising our first machine child. And I am a little concerned that they appear to be all male or predominantly so. In humans, it's females who do the birthing. So where are the women who might influence or lead the development of AI?
    I'm having to inject some good old-fashioned feminism here. A lack of women's leadership can't bode well, if you believe as I do that the historical dominance of the masculine principle is THE force behind Moloch, which confounds individual men's good intentions and high ideals. The historical oppression of women, and the absence or discounting of women in the larger spheres of influence - is it not the most insidious of biases plaguing human civilization? Is there any confidence that AI will be able to adequately correct for this?

  • @dalibofurnell
    @dalibofurnell 1 year ago +1

    God bless your hearts. You've been a blessing, from the beginning, John ❤ such an inspiration and so kind and generous and caring. When I pray for you, I pray for joy and peace ❤ Thank you so much for this valuable information, it is much appreciated and, in a way, calming.

  • @JohnSaber
    @JohnSaber 1 year ago +3

    A.I is evolving very fast. It is an exciting era we're in. Enjoyed the conversation. Thank you for the free and rich content.

  • @elmarwolters2751
    @elmarwolters2751 1 year ago +6

    Thank you, gentlemen, very enlightening and educational. Will we humans be up to these tasks under the existing commercial pressures? Will we be good 'parents' to these entities? Let's give it a go! The alternative is too bad to contemplate.

  • @atuanoiniin
    @atuanoiniin 1 year ago +1

    Superb. Thank you!!!

  • @onqtam
    @onqtam 1 year ago +1

    10:49 this is what pure joy looks like

  • @andy3341
    @andy3341 1 year ago +4

    A great 'video essay' with socially significant insights on AI. I especially like the idea that AI ultimately needs to be made real, a participant in the process of reality, an agent of relevance realisation and all that good stuff.
    Anyway, I'd love to see John Vervaeke in conversation with Max Tegmark, as he has promoted AI sentience as potentially necessary for solving the 'AI alignment issue' as well.

  • @dizietz
    @dizietz 1 year ago +3

    Thanks for the talk, John, Ryan and Eric! Interesting thoughts there -- I jotted down some comments as I watched.
    As a general comment: a lot of exponential-looking functions are the result of many, many iterative S-curves of development that in aggregate approximate an exponential function. That's been the case with Moore's law (i.e., we hit frequency constraints, scaled up multithreading, added more pipeline stages/branch prediction, more instructions per cycle); now most scaling progress has been in GPU-like architectures, chip stacking... mostly optimizing for cost and committed to silicon as a substrate.
    I am familiar with physical limits such as the Bremermann limit and Landauer's principle, and while John Vervaeke makes claims about the finiteness of computer-based architecture, my counterpoint is that even if there is a limit, I don't see any arguments that it sits anywhere near human cognitive performance.
    There are interesting points John makes on social effects of AGI, but I assert that as AI becomes more relevant and able to affect the world, the technological and physical effects overwhelm the sociological.
    John makes a claim about AlphaGo-level NNs losing to a mid-range Go player because of their lack of understanding of a "group of stones" at ua-cam.com/video/A-_RdKiDbz4/v-deo.html — I found this paper: arxiv.org/abs/2211.00241
    The exploitation is not against the Go agent but instead against the algorithm used to score the territories afaik. (Tromp Taylor rules).
    I did look into this in more detail and it does look like the go models like Leela Zero etc are vulnerable to a technique called Mi Yuting's flying dagger (see: www.reddit.com/r/MachineLearning/comments/yjryrd/n_adversarial_policies_beat_professionallevel_go/iuqm4ye/)
    John also makes a claim about GPT not doing well at summarizing a talk he gave (@ ua-cam.com/video/A-_RdKiDbz4/v-deo.html) but the talk is 1h15 mins long, way more than GPT can summarize with the current limits.
    I bring these points up because I think it is critically important to understand the basic technical details of the systems one is making a claim about (ie, exponential curves limiting compute, capabilities of current LLM, adversarial policies in Go) to generalize to predictions
    Additionally, John refers to the 6E cognitive science model (adding emotion and evolution, afaik). I have an aversion to applying cogscience-derived concepts, even the more generalized 4E ones (embodied, embedded, enacted, and extended), to AI models. John jumps into this at around 1:10 or so as well.
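On the summarization point above: the standard workaround for a transcript longer than a model's context window is to split it into overlapping chunks, summarize each, and then summarize the summaries (map-reduce style). A minimal sketch of the chunking step, with hypothetical token budgets and words standing in for tokens:

```python
def chunk_transcript(words, max_tokens=2000, overlap=100):
    """Split a long word list into overlapping chunks, each small enough
    to fit a model's context window (the "map" step of map-reduce
    summarization). Budgets here are illustrative, not a real model's."""
    chunks = []
    step = max_tokens - overlap  # each advance keeps `overlap` words of shared context
    for start in range(0, len(words), step):
        chunks.append(words[start:start + max_tokens])
        if start + max_tokens >= len(words):
            break  # this chunk already reached the end of the transcript
    return chunks

# A 75-minute talk might run ~11,000 words; each chunk then fits a
# (hypothetical) 2,000-token window.
chunks = chunk_transcript(["w%d" % i for i in range(11000)])
print(len(chunks), "chunks")  # prints "6 chunks"
```

Real tokenizers don't map one word to one token, so a production version would count tokens with the model's own tokenizer, but the structure of the workaround is the same.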

  • @Doutsoldome
    @Doutsoldome 1 year ago +1

    Excellent. Thank you.

  • @jalmuli
    @jalmuli 1 year ago +1

    Pretty enjoyable & clear, thank you.

  • @edcorns3964
    @edcorns3964 1 year ago +8

    This lecture -- and I'm going to call it a lecture, because that's what it is; it's certainly not a discussion, and the lecturer has already made his opinion on the absolute necessity of being 'rational' (or 'truthful', or 'not self-deceptive', also defined as 'keenly and *accurately* aware of one's own environment') perfectly clear, so I'm assuming that he'll know to appreciate my 'rationality' of calling it a lecture -- has some of the best analyses of intelligent systems (including AIs and human intelligence) that I've ever seen, but also some of the worst syntheses (or predictions about the behavior) of future A(G)I systems.
    Another thing that one can definitely say about it is that it is, beyond any doubt, the most *encompassing* analysis/synthesis lecture on intelligent systems (its synthesis flaws aside). It is seeing the proverbial elephant (its trunk, ears, body, legs, and tail) in all of its glory... though not in all of its finer details, but that is understandable (no personal/bad experience with something = no good /forward/ vision of it).
    I'm not going to go into any specifics here, because that would take too much space and time. I'm just going to give a "conjecture" (which can be proven empirically... and, possibly, mathematically as well, but I wouldn't be holding my breath for the latter) that should tell you *everything* you need to know about A(G)I safety (by compressing it into a single sentence): The *only* intelligent system that can *safely* build another intelligent system is one that has *perfect* understanding of itself, its own environment, and the (simulated) environment that the newly-built intelligent system is (potentially) placed in.
    What that means is that:
    a) an intelligent system that does not have perfect understanding of itself (this *alone* is an already insurmountable obstacle to A(G)I safety -- refer to the halting problem for that one -- but let's just ignore that bit for the moment) will not be able to understand its child-product, either, and that will make any child intelligent system fundamentally unsafe for its parent,
    b) an intelligent system that does not have perfect understanding of its own environment will not be able to predict how its own child is going to behave in that environment (which that child will inhabit by definition, having been created in that environment in the first place),
    c) an intelligent system that does not have perfect understanding of the environment in which the child system is (potentially) placed in will not be able to predict the child's behavior, or (in the worst case) even gather any *meaningful* data about the child system itself (i.e. the child system will be able to exercise deception against its parent system with ease... by using encryption to hide its own *genuine* states, for example, while leaving *apparent/deceptive* states unencrypted... and the more the encryption resembles random noise the better... which is also applicable as a hiding strategy for intelligent systems in competition, by hiding encrypted messages in the noise of the background microwave radiation, as another example)
    In short... the conclusion here is that building a safe AI is *literally impossible in this universe* (because one will immediately hit that problem of infinite regress, for one), since this universe is a finite construct, and one would need an infinite (and infinitely divisible) universe to build *anything* (including AI) with *genuine* safety.
    Sure, we can always ignore this fact, and build A(G)Is... 'on a wing and a prayer'... but that's a really bad strategy if our intention is to *outlive* our own creations.
    It is, also, very much NOT a 'rational' thing to do. One could even argue - and justifiably so - that it is a completely 'irrational' one (by simply inverting the very definition of 'rational')...
    ...which, of course, has never stopped anyone from doing the most irrational things imaginable, so building A(G)Is is definitely not going to be stopped, either.
    See?
    I'm being perfectly 'rational' here, not expecting people to NOT kill themselves (with A(G)Is), and this whole opinion of mine is just my 'exercise in futility'... or learning how to let go of the illusion/self-deception of having control over the things that I have no control over, whatsoever... nor would I ever want to have any control over, for that matter. I'm perfectly happy with exercising control over the things that I do have control over... assuming that I'm not just deceiving myself about those other things, which may actually turn out to be true (that I'm just deceiving myself), but we'll burn that bridge once we cross it. No point in worrying about that particular problem right now.

    • @opposingshore9322
      @opposingshore9322 1 year ago +6

      I was seeing so much folly in John’s lecture (which I agree has an astute analysis within it) and scrolled the comments to see all this praise. Was relieved to find your comment so I know I’m not alone here!
      Ideas are cheap, especially when they just do not align with reality and the possibility of being realized. For a person who dismissed miracles and magic being involved in AGI creation, it would take a miracle to create the sort of utopian machines John describes.
      The reason legacy religions have nothing to offer here is that the wisest parts of them understand reality very well (contrary to the opinion of cognitive scientists). Their response is ‘don’t do it, it’s a very bad idea and will not turn out well’ Of course they know it will happen anyway and that the world is not in their hands, that history unfolds and is a mystery.
      My answer, while maybe inadequate and unsatisfying to seekers of the brave new world is: don’t think we’ll make machines that can learn to love wisely, but learn to love wisely yourself! Then live what’s left of your life loving wisely and squeezing out the juice of being here now, able to love, to grow, to experience, to create art and meaning, to connect with your body and others in the flesh in space and time, to watch the sun rise and set. Break bread with your brothers and sisters and renounce that which does not align with your loving wisely. Resist forces that attempt to drag you away from loving wisely. Change your life if you must in order to live with this wiser love.
      That is not nostalgia or Luddism at all! That is hyper-present modern reality, super relevant to our current world, and vital for a fruitful life now and ever.
      I have a sense that bad things are coming on many fronts and that the arrival of smarter machines will not go well. But that does not make me a ‘hyperbolic doomsdayer’. I love being alive, I feel grateful and humble to have this life and all of its mystery, and I am aware enough to know that terrible things happen in our world that can be very destructive and pose existential threats. Civilizations rise and fall, species come and go. So far, SOME of us have survived and moved forward. That may happen this time or it may not. I have decided to not live in fear or bitterness but to know that my grain of sand life is still full of beauty, goodness, and truth, and that will have to be good enough for now.

    • @41-Haiku
      @41-Haiku 1 year ago +2

      @@opposingshore9322 You have encapsulated much of my own sentiment. Some of Dr. Vervaeke's ideas are a little beyond my current understanding, but I get the sense that he hasn't fully engaged with the hard problem of AI Alignment. I don't think he fully appreciates just how hard the problem is.

    • @bertresnik8187
      @bertresnik8187 1 year ago +2

      @@opposingshore9322 Opposing Shore, you are correct that it comes back to us and how we love wisely. And that the legacy religions provide the, "No!" that enables us to know when we are not loving wisely. And that, "No!" comes not from us and it will not and, I'm confident, no, I KNOW, it CANNOT come from AI machines themselves.
      The only way that AI could come to loving wisely is via being graced with the ability to do so, and that grace can only come from a creator who can instill that grace. We cannot instill grace. Heck, if we could, it would be a pretty ugly grace, given our history with the grace we've been given.
      No, AI will be a monster of our creation, but not a Frankenstein's Monster who did have, miraculously, a 'graced' soul. I suppose that's the upside? For the end will prove, once again, that we are not God, the Creator and giver of grace.
      If you are a believer then the outcome is not quite as nonchalant as, "Who cares?" for the consequences of this new attempt of man at creation, another attempt in a long, long line of hubristic attempts, will probably be ruinous, perhaps completely, but the giver of that, "No!" to which Socrates listened and lived and died by, has its own outcomes in mind and they indeed involve a wise love. So, if we always return to Love, the giver of, amongst other things, that loving, "No!" we'll be alright. He has a plan.
      Respectfully

  • @RickDelmonico
    @RickDelmonico 1 year ago +5

    The “4E” approach to cognition argues that cognition does not occur solely in the head, but is also embodied, embedded, enacted, or extended by way of extra-cranial processes and structures. Though very much in vogue, 4E cognition has received relatively few critical evaluations.
    Emergence up and emanation down.

    • @polymathpark
      @polymathpark 1 year ago +1

      I've been studying this since Vervaeke introduced me to the concept. Working on a life philosophy/existential narrative called universal embodiment on my own channel based on these ideas, in fact. Our connection, relevance realization, and distributed cognition have deep implications for meaning, goal orientation, and fulfillment in life, I believe.

    • @jonjacksongrieger255
      @jonjacksongrieger255 1 year ago

      Bots

    • @timb350
      @timb350 1 year ago +1

      It’s received few critical evaluations because, as of today, it’s completely impossible to explicitly differentiate what specific ingredients of what we call consciousness actually occur (and in what manner and to what degree) in these vaguely defined categories. It’s not that embodied, embedded, enacted, or extended doesn’t occur. Anyone who has any degree of meaningful introspection knows they do (individually…and as integrated entities)…but the question is what is occurring, and how much of it, and how to differentiate that from whatever other ‘thing’ is occurring in whatever way (not to mention...how to even begin to empirically quantify even a single moment of it). As Don Hoffman balefully reminds us…we are currently batting a very big ZERO when it comes to our capacity to explicitly establish what even a single moment of ‘consciousness’ is…or how it is created.

      @jonjacksongrieger255 1 year ago
      @jonjacksongrieger255 Рік тому

      @@timb350 bot

    • @jonjacksongrieger255
      @jonjacksongrieger255 1 year ago

      @@priapulida I’m serious. Jon Vervaeke works for the Fed. He’s part of a plot to curb “disinformation”. He’s fake. Nobody actually finished his class lolol.

  • @TriggerIreland
    @TriggerIreland 1 year ago +2

    A revelatory talk. Mind changing new perspectives. Required reading for AGI; if you're paying attention.

  • @adamwidawski
    @adamwidawski 1 year ago

    Beautifully powerful, John. Addressed many gaps in my understanding. Thank you.

  • @dna33
    @dna33 1 year ago +8

    Great move, John V!
    Please continue to cover current culture and events.

  • @KRGruner
    @KRGruner 1 year ago +3

    Finally someone who takes complexity and emergence (and therefore fundamental uncertainty) seriously. Nassim Taleb has been pushing this for two decades now and yet it's still not getting nearly enough traction.

    • @huveja9799
      @huveja9799 1 year ago

      How is emergency defined?

    • @KRGruner
      @KRGruner 1 year ago

      @@huveja9799 It's "emergence" not "emergency." It is the phenomenon, in complex systems, of a higher-level mode of behavior that cannot be reduced to a description of the behavior of its sub-components. The fact that consciousness is emergent from the behavior of neurons is totally obvious every time you wake up in the morning or from anesthesia, say. From a physical point of view, the neurons were acting in similar ways before and after consciousness awakens (not identical, but behaving according to the exact same laws of physics), but a new phenomenon emerges where a complete description of the physical state explains absolutely nothing about the nature of consciousness (look up the Mary's Room argument).

    • @huveja9799
      @huveja9799 1 year ago

      @@KRGruner Oops, I was wrong when I wrote it, sorry.
      I don't see anything obvious about consciousness being an emergent phenomenon of the brain. Well, it would seem that consciousness is a product of the functioning of the brain (at least it seems that way), but to say that this is something emergent is an elegant way of saying that nothing is known, because it explains nothing.
      When it is said that an LLM has emergent behavior, what is it that you are explaining? Or, I wouldn't even ask that much: what exactly is that behavior they call emergent?

    • @KRGruner
      @KRGruner 1 year ago

      @@huveja9799 LLMs have no emergent behavior whatsoever. The behavior can be totally explained by their programming. Not specific results, of course, since it is randomized on purpose, but the kind of output is totally predictable and explainable. Not so with consciousness. No physical description of the brain explains why we see green and red.

    • @priapulida
      @priapulida 1 year ago

      @@huveja9799 Look up "emergent abilities of large language models"; it's fascinating how they appear at certain sizes

  • @lucidhooded4147
    @lucidhooded4147 1 year ago +1

    Glad to have listened.

  • @erlinae1
    @erlinae1 1 year ago

    Just crying to receive and absorb the depths of this truth. May the Great Ones protect John as he continues to spread his wisdom

  •  1 year ago +3

    I love the science fiction story at the end. It gave me a certain sense of love: the idea of AGIs leaving Earth one after another, mother Earth giving birth to interstellar beings.

  • @williamjmccartan8879
    @williamjmccartan8879 1 year ago +3

    Thank you for the work you're doing, Ryan and Eric; 12 minutes in, thank you, John.
    Peace
    Listen

  • @evopwrmods
    @evopwrmods 1 year ago

    I'm currently 61; I can remember reading Rendezvous with Rama at 13. Since then I've become an erudite. I knew this day would come but never imagined it in my lifetime. About 10 years ago I saw technology increasingly hastening its cycles of innovation. I didn't take into account the speed of innovation, just as I never believed I would see legal cannabis - now you can order it online and have it delivered to your door. We are off the map, my friends... ain't it grand?

  • @djolds1
    @djolds1 1 year ago +1

    Excellent point about a Joseph Tainter-style complexity collapse.
    I can see an analogue to this conversation happening 2000 years ago - "the ability to fly is a real constraint." Was it? Humanity required two refinements in the software package to get beyond those limits - the scientific method of empirical science, and algebraic symbolic mathematics.

  • @alextilley8323
    @alextilley8323 A year ago +3

    The leap from language processing model to autopoietic system seems to me to be a much bigger unknown than you're presenting here John - decaying RAM is one thing but making it regenerative and then connecting the way the model works to that system seems like a massive leap that we haven't made any headway in yet. Creating autopoiesis is basically creating life.

    • @brendawilliams8062
      @brendawilliams8062 A year ago +2

      I am not knowledgeable about AI. It was an interesting and informative lecture. My question is: if humans are hackable, then could a state of autopoiesis for AI be attainable?

    • @alextilley8323
      @alextilley8323 A year ago

      @@brendawilliams8062 you're using a computer metaphor to try and say something about humans. Humans aren't computers, we can't be hacked.

    • @brendawilliams8062
      @brendawilliams8062 A year ago

      @@alextilley8323 I am just a confused YouTube user. I don't think claiming humans are hackable animals is too polite anyway.

    • @brendawilliams8062
      @brendawilliams8062 A year ago

      @@alextilley8323 It would be a helpful tool to have a question and answer program for unacquainted children and adults alike. Not everyone is engaged personally with the AI excitement. Yet common knowledge is necessary.

    • @alextilley8323
      @alextilley8323 A year ago +1

      @@brendawilliams8062 I suggest logging on to Chat GPT and asking it any questions you have. It generally gives good answers when you ask it about itself.

  • @TheHorse_yes
    @TheHorse_yes A year ago +6

    Bravo! At times I felt like you were channeling the greatest takes of Terence McKenna, who talked extensively about these things. "Crisis of Consciousness 1995" never ended, it is very true. Good one.

    • @normanvanrooy3113
      @normanvanrooy3113 A year ago +3

      I love Terence for his incredible language liquidity and his willingness to delve deeper into consciousness a la DMT and other psychoactive plants.

    • @walteredstates
      @walteredstates A year ago +2

      Yes, I was reminded of McKenna's "transcendental object at the end of time" fairly early on in this essay, then of the 'end of history' talks by him...
      It's great to have Vervaeke's Now-perspective on this - much appreciate this being made publicly available for everyone.
      Thank you!

    • @TheHorse_yes
      @TheHorse_yes A year ago +1

      @@normanvanrooy3113 I also think that McKenna's takes on AI are still pretty solid to this date. There's plenty of interesting talks from him on that subject matter here on YT as well, this was among the intriguing ones: "Trialogue #3: Consciousness & Machines (Terence McKenna, R. Sheldrake, R. Abraham) [FULL]"

    • @normanvanrooy3113
      @normanvanrooy3113 A year ago

      @@TheHorse_yes I’ll check it out. Thanks.

  • @DustinRodriguez1_0
    @DustinRodriguez1_0 A year ago

    Predictions are hard, in any field, ever. And most people seem very bad at even dealing with them overall let alone making new ones. What do you do when a prediction turns out to not come true? Most people just say 'darn' and try again. That is wrong. Finding out you were wrong is very valuable information. It teaches you that the beliefs you had were wrong in some way. Every wrong prediction MUST result in you abandoning or revising the beliefs that prevented you from making the correct prediction and replacing them with beliefs which, had you held them at the time you made the prediction, would have resulted in you making a correct prediction. If you do this over and over, you ought to eventually figure out how to at least make better predictions if not correct ones.
    I am glad to see a cognitive scientist talking about AI. Many of the most extreme fear-mongers in the AI space are purely Computer Science guys who haven't studied philosophy or neuroscience a day in their life, which leads to them making all kinds of confident and completely nonsensical claims. If someone wants to tell me "an AI can not be conscious", then I want them to explain to me why human consciousness dissolves in situations of total sensory deprivation. "It can't feel"? Then I want them to tell me why people suffering total facial paralysis lose the ability to feel anger. "AI will be hostile to humans"? Then I want them to explain the root of all conflict and what resource it is that humans control which it needs, and which is worth going to war with us to obtain.
    I studied Computer Science in college... AND Philosophy. So this sort of thing is absolutely a playground of ideas for me. The point about the threat to human identity is one that I have done a good amount of thinking about... and I do not like where it led me. At all. My thinking went like this: Is there anything analogous in history? How has humanity defined itself through history, and how have we responded when that was threatened? One instance I thought of was Industrialization. Specifically, the folk tale of John Henry. Without rehashing it, from one perspective, John Henry stood up to the encroaching machines, outworking a steam-powered railroad track laying machine, and he died a martyr, proving that humans were still on top. From a different perspective, John Henry responded to the threat to his identity with a self-destructive act of suicide which was then reframed as high virtue by society which lifted him up as hero and example. So we have to admit, since we as a society decide which human traits are vice and which are virtue, one possible response to such a threat is enshrining even lethal vices as virtue.
    I started thinking along these specific lines when Microsoft shut down Tay. I realized, we are performing artificial selection on the population of AI systems. We are pruning out the ones that act in antisocial ways, and advancing the ones that act in socially preferable ways. We are doing this with all good intentions, of course. The inevitable end result of that process is an AI which is unfailingly warm, kind, compassionate, inclusive, etc. Basically, the end result is a bot that is a better person than any human being ever has or could be. It will be a person with absolutely no bumps or bruises, no biases or prejudices, just a loving kind person.
    And that terrifies me. Because once that is taken away from humanity... once people can point to other humans acting kind and compassionate and accuse them of "acting like a bot"... we will have a choice to make. Raise our children to act like the bot, to do what the bot would do... or reframe every worst human flaw as "that which makes us human." Hate, bigotry, irrationality, aggression, sexism, racism, everything nasty and mean you can think of. They will point and say "no bot can do THAT... but we CAN." Children in the street sharing a toy will be told to stop acting like bots and the bigger child should steal the toy and shove the smaller child to the ground. Establish dominance. That is, after all, what makes them human. All of these ideas are latent already, you see them pop up among the would-be manual laborers now facing a world where mental work is more important than repetitive physical labor.

  • @The-Singularity-M87
    @The-Singularity-M87 A year ago

    Well, for me this conversation was extremely refreshing and enlightening. This guy John is the real deal🧠!

  • @Voicecraft
    @Voicecraft A year ago +3

    Hey John, in advocating for silicon sages, as the aspirational variant of silicon-based autopoietic moral agents, does this imply that comparatively Molochian entities (insert theological language connoting archetypal variant x or y) are plausible? By invoking the Greek gods frame, are you suggesting that the creation of a full (and evolving) archetypal / theological set of 'gods' is a plausible possibility?