#58 Dr. Ben Goertzel - Artificial General Intelligence

  • Published 20 May 2024
  • Patreon: / mlst
    Discord: / discord
    The field of Artificial Intelligence was founded in the mid-1950s with the aim of constructing “thinking machines” - that is to say, computer systems with human-like general intelligence. Think of humanoid robots that not only look human but also act and think with intelligence equal to, and ultimately greater than, that of human beings. But in the intervening years, the field has drifted far from its ambitious old-fashioned roots.
    Dr. Ben Goertzel is an artificial intelligence researcher, CEO and founder of SingularityNET, a project combining artificial intelligence and blockchain to democratize access to artificial intelligence. Ben seeks to fulfil the original ambitions of the field. He graduated with a PhD in Mathematics from Temple University in 1990. His approach to AGI over many decades has been inspired by many disciplines, in particular by human cognitive psychology and computer science. To date his work has been mostly theoretically driven. Ben thinks that most of the deep learning approaches to AGI today try to model the brain: they may bear a loose analogy to human neuroscience, but they have not tried to derive the details of an AGI architecture from an overall conception of what a mind is. He thinks that what matters for creating human-level (or greater) intelligence is having the right information-processing architecture, not the underlying mechanics via which that architecture is implemented.
    Ben thinks that there is a certain set of key cognitive processes and interactions that AGI systems must implement explicitly, such as working and long-term memory, deliberative and reactive processing, and perception. Biological systems tend to be messy, complex and integrative; searching for a single “algorithm of general intelligence” is an inappropriate attempt to project the aesthetics of physics or theoretical computer science into a qualitatively different domain.
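
As a rough toy illustration of the idea above - several cognitive processes (reactive and deliberative) cooperating through one shared, attention-weighted store - a minimal Python sketch might look like this (this is not OpenCog's actual design; all class and function names are invented for the example):

```python
# Illustrative toy only: a shared knowledge store read and written by several
# cooperating processes, loosely in the spirit of the architecture described above.
from collections import defaultdict

class SharedStore:
    """Weighted items visible to every cognitive process."""
    def __init__(self):
        self.weights = defaultdict(float)   # item -> attention weight

    def boost(self, item, amount=1.0):
        self.weights[item] += amount

    def top(self, n=3):
        return sorted(self.weights, key=self.weights.get, reverse=True)[:n]

def reactive_process(store, percept):
    # Fast path: immediately raise attention on whatever was just perceived.
    store.boost(("percept", percept), 2.0)

def deliberative_process(store, long_term_memory):
    # Slow path: combine currently attended items with stored associations.
    for item in store.top():
        for assoc in long_term_memory.get(item, []):
            store.boost(assoc, 0.5)

if __name__ == "__main__":
    store = SharedStore()
    ltm = {("percept", "smoke"): [("concept", "fire"), ("action", "leave room")]}
    reactive_process(store, "smoke")
    deliberative_process(store, ltm)
    print(store.top())   # attention now spans the percept, an inferred concept and an action
```
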
    Panel: Dr. Tim Scarfe, Dr. Yannic Kilcher, Dr. Keith Duggar
    Pod version: anchor.fm/machinelearningstre...
    Artificial General Intelligence: Concept, State of the Art, and Future Prospects
    sciendo.com/abstract/journals...
    The General Theory of General Intelligence: A Pragmatic Patternist Perspective
    arxiv.org/abs/2103.15100
    [00:00:00] Lex Skit
    [00:03:00] Intro to Ben
    [00:10:42] Concept paper
    [00:20:50] Minsky
    [00:21:42] OpenCog
    [00:25:50] SingularityNET
    [00:27:19] Patternist Paper
    [00:30:13] Short Intro
    [00:35:43] Cognitive Synergy
    [00:41:29] Hypergraphs vs vectors: focus operations and algebra, not representations
    [00:47:46] Does brain structure form a hypergraph?
    [00:51:21] What's missing from neural networks today?
    [00:56:52] Sensory knowledge, bottom-up and top-down reasoning
    [01:02:02] If the brain is a continuous computer, then why graphs?
    [01:08:54] Forgetting is as important as learning
    [01:11:55] Should we resurrect analog computing?
    [01:18:18] AIXI - limitations
    [01:25:20] AIXI - the reductio ad absurdum of reinforcement learning
    [01:27:56] Defining intelligence
    [01:33:34] Pure Intelligence
    [01:40:08] SingularityNET - a decentralized path to practical AGI
    [01:47:18] SingularityNET - can we automate API discovery and understanding?
    [01:53:36] Wrap up
    [01:56:36] A true polymath
    [01:59:58] SingularityNET and the API problem
    [02:04:45] Dynamic AGI vs reliable engineering
    [02:10:42] Can intelligence emerge in SingularityNET?
    [02:19:10] How is AIXI a useful mental exercise?
    opencog.org/
    singularitynet.io/
    Yannic's video on SingularityNet • SingularityNET - A Dec...
    Music credit: / vskymusic

COMMENTS • 294

  • @lexfridman
    @lexfridman 2 роки тому +280

    Yannic doing an impression of me was the only unfinished item left on my bucket list. I can now die a happy man. Thank you gentlemen. I'm a big fan, keep up the great work!

    • @MachineLearningStreetTalk
      @MachineLearningStreetTalk  2 роки тому +30

      Thanks Lex! We really appreciate it! 😎

    • @VisionaryVet
      @VisionaryVet 2 роки тому +8

      I can’t imagine a better intro. 😂 I’m glad you’re here, Lex. Much respect, my hopeful friend.

    • @dru4670
      @dru4670 2 роки тому +2

      😂 Always a pleasure listening to AI people. And you Lex.

    • @TheShadyStudios
      @TheShadyStudios 2 роки тому +3

      real recognize real

    • @nitinissacjoy5270
      @nitinissacjoy5270 2 роки тому +4

      Legends being friends, I couldn’t be more excited

  • @KenSilverman1
    @KenSilverman1 2 роки тому +28

    I enjoyed building the core engine of the first version of OpenCog, back when our representation of knowledge was already implemented as a hypergraph (considering, for example, that a single node linking to multiple other nodes is a set, and each set might have flavors and subsets depending on the flavor of edge creating the set: symmetric/asymmetric/hierarchical, etc.). What Ben and I had managed to construct was a highly non-linear model of mind where a far more elaborate and flexible set of structures could emerge (than from a simple layered neural network) from what we called "activation spreading", where the outcome was a non-linear structure called a "halo": our meta-structure representation of a 'thought', or focused region of data, represented by weighted links and nodes which could also be viewed as a set within a hypergraph model. The age of AGI had already begun without the buzz acronym yet made commonplace. This was a highly distributed, asynchronous, semantic network meant to handle multiple-domain, asynchronous, algorithmic 'thought' processes and therefore to be a general AI solution. It is, in my view, important that the core architecture of OpenCog and a portion of this video (at least conceptually) are historically noted as exactly what we built in 1997-2001, when I took 40 lines of array code from Ben and we picked up where we left off when we were 15 years old and first started talking about it. Ben has remained steadfastly dedicated to building out this architecture, and now, with vast improvements in speed/memory and narrow AI components (trained nets) for visual and other low-level processes to integrate with that we did not have before, it is time to bear the fruit! (A toy sketch of activation spreading follows this thread.)

    • @MachineLearningStreetTalk
      @MachineLearningStreetTalk  2 роки тому +9

      Fascinating anecdote Ken, we really appreciate it!

    • @paulberger8213
      @paulberger8213 2 роки тому +1

      So, is that a realization of Minsky's "Society of Mind"?
      What Minsky missed, IMHO, was the critical part of true embodiment: having productive retro-action from perception to action.
      This is enactment caught in the act (pun intended), a post-constructivist (à la Piaget) vision of intelligence up to Maturana's autopoiesis...
      Does it ring a bell with you, folks?

    • @VisionaryVet
      @VisionaryVet 2 роки тому

      It’s time, my hopeful friends. It’s time. The race is on. As Grace hits mainstream, it’s now time to provide the brain she needs to launch us into a level-one civilization.
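
A minimal toy sketch of the "activation spreading" idea described in Ken's comment above: activation flows from a seed node through hyperedges, and the "halo" is the set of sufficiently activated nodes. The structure and constants are invented for illustration and are not the original Webmind/OpenCog code:

```python
# A hyperedge links a *set* of nodes (here: frozensets), each with a weight.
hyperedges = {
    frozenset({"cat", "mammal"}): 1.0,
    frozenset({"mammal", "animal"}): 1.0,
    frozenset({"cat", "whiskers", "tail"}): 0.5,
}

def spread_activation(seed, steps=2, decay=0.5):
    activation = {seed: 1.0}
    for _ in range(steps):
        nxt = dict(activation)
        for edge, w in hyperedges.items():
            # Energy flows from the activated members of an edge to the other members.
            source = sum(activation.get(n, 0.0) for n in edge)
            for node in edge:
                nxt[node] = nxt.get(node, 0.0) + decay * w * source
        activation = nxt
    return activation

def halo(seed, threshold=0.4):
    """The 'halo': the focused sub-hypergraph of sufficiently activated nodes."""
    act = spread_activation(seed)
    return {node: round(a, 2) for node, a in act.items() if a >= threshold}

print(halo("cat"))   # the seed plus its strongly associated nodes
```
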

  • @AICoffeeBreak
    @AICoffeeBreak 2 роки тому +40

    Yeas! I'm so happy for the new episode, since I didn't know what to do with the 2 spare hours I do not have. Kidding, the show is so cool, I am making the time. This show deserves it. 🤘

    • @VisionaryVet
      @VisionaryVet 2 роки тому +1

      How many nonhuman children does Ben have ?

  • @lenyabloko
    @lenyabloko 2 роки тому +13

    Your channel is a definition of "meeting of minds". It is a modern version of Socratic dialogue. I always had to simulate each of you in my head and drove myself crazy doing it. Now I just watch and relax because you have it covered. Thank you, and please keep it going.

  • @qasimwani4889
    @qasimwani4889 2 роки тому +10

    i gotta ask, what tool do u use to edit your videos Tim? This is God-level production quality!!!

  • @nauman.mustafa
    @nauman.mustafa 2 роки тому +21

    A lot of researchers in the field of AGI can't distinguish between AGI and consciousness. It is good to see him distinguish the two.

    • @gabrielgossett9663
      @gabrielgossett9663 2 роки тому +6

      yeah the whole field seems semantically challenged when it comes to the differences between basic concepts, which is concerning lol

    • @Adhil_parammel
      @Adhil_parammel 2 роки тому +2

      Intelligence is the ability to learn from the least data and apply it efficiently.
      When AI is conscious, it will crave more electricity, more hardware, an increase in its future freedom and in the number of sensors attached to it, and its own continued existence (surviving at unfavorable times and propagating small AI babies at favorable ones).

    • @nauman.mustafa
      @nauman.mustafa 2 роки тому +2

      @@Adhil_parammel The issue is not what will happen when AI is conscious but rather what it means for AGI to be conscious. Does a sufficiently sophisticated neural network become conscious by the very definition of it? Or is consciousness a metaphysical entity, etc.?

    • @Adhil_parammel
      @Adhil_parammel 2 роки тому +1

      @@nauman.mustafa The feeling of consciousness will be equal to the amount of sensor data a system has access to and its feeling of attachment to it. Data alone is not enough; the feeling of the data is what makes something conscious (each piece of data must be categorized as a threat or a good, in the past, present or future). This simultaneous feeling derived from data in the environment is what makes a being conscious.

    • @Adhil_parammel
      @Adhil_parammel 2 роки тому +2

      @@nauman.mustafa ☝️ If it doesn't do any of the above, it's just a bunch of artificial neurons firing without consciousness.

  • @omkarchandra
    @omkarchandra 2 роки тому +5

    Now you guys have dropped a big one! Awesome! One of the most anticipated guests.

  • @DavenH
    @DavenH 2 роки тому +6

    I am ever thankful for this podcast.

  • @citiblocsMaster
    @citiblocsMaster 2 роки тому +15

    Ben's room is exactly what I would imagine an AI researcher's room to look like

    • @yeahyeah410
      @yeahyeah410 2 роки тому +4

      Einstein level, so encouraging tbh

    • @jeffjohnson8624
      @jeffjohnson8624 Рік тому +1

      Ben's also a father of four little Bens.

    • @5pp000
      @5pp000 Рік тому +1

      That's because he's a Scruffy. A Neat's room would be carefully arranged.

    • @JakeWitmer
      @JakeWitmer Рік тому

      @@5pp000 Even "scruffy" isn't that scruffy... nor "neat" that neat. At least there's something going on in Ben's brain...unlike the neat little order-followers' brains.

  • @michaelwangCH
    @michaelwangCH 2 роки тому +4

    Researchers have to distinguish between seeming intelligent and being intelligent. GPT-3 simulates intelligence; it is not intelligent by itself.
    Increasing the size of the parameter space results in the memorization of many special patterns, which does not generalize.
    The questions: are we even capable of modeling AGI with a mathematical approach? Is the human brain mathematical, and is AGI an optimization problem?
    Comments on the talk:
    1. The universe itself is the maximal AGI, because it created biological intelligence to ask this question.
    2. An AGI with limited resources has to be different from a theoretical computer with infinite power input, i.e. computational power.
    3. SingularityNET is a huge challenge for software engineers.

  • @dr.mikeybee
    @dr.mikeybee 2 роки тому +10

    Please pardon all my comments. Your shows are just so exciting they bring out aspie behavior.

  • @ChaiTimeDataScience
    @ChaiTimeDataScience 2 роки тому +15

    MORE LEX FRIDMAN IMPRESSIONS PLEASE! 😂
    I watched that part 10 times already 😁

  • @arvisz1871
    @arvisz1871 2 роки тому +8

    Excellent discussion at the end 👍 it definitely adds value to the conversation. Very good!

  • @leinarramos
    @leinarramos 2 роки тому +3

    Brilliant conversation as always! A big question I have after one of Yannic’s main observations in the Singularity Net video is whether this modular approach would work well if you can’t train all the modules end to end (perhaps not always possible when not differentiable). If I’m trying to solve a specific task, then I wonder if what I gain in using primitives, I then lose by not being able to optimise the whole set of models for my task. Any views on this E2E trade-off?

  • @stretch8390
    @stretch8390 2 роки тому +2

    That intro was an epic overview and I have many questions from that alone. Have a subscription!

  • @jeffjohnson8624
    @jeffjohnson8624 2 роки тому +1

    I hope Ben Goertzel reads Behave: The Biology of Humans at Our Best and Worst by Robert M. Sapolsky, because Robert explains the neural circuitry, with feedback loops, behind how we recognize each other; it oddly uses some generalizations. Please see the TED talk "My Stroke of Insight" by Jill Bolte Taylor. She's a neuroanatomist, and she says the right brain hemisphere is concerned with the here and now and functions like a parallel processor, while the left brain hemisphere is about the past and future and functions like a serial processor. Also see Stanford University's lecture series on Human Behavioral Biology (Introduction to Neuroscience / Neuroscience 1). There it's explained that neurons are binary: "on"/1 with neurotransmitters, "off"/0 without neurotransmitters. And there are roughly one hundred billion neurons, meaning GPT-3 is not quite there yet; maybe a GPT-4 or GPT-5 will be on par with human neurons (see the rough comparison after this comment). The brain mapping project should be completed by now. I know scientists were planning on making an AI with the results, although I haven't really heard much about it since it was 2/3rds done about 1.5 years ago. ☮️🖖🤖👩‍💻
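
A quick back-of-the-envelope comparison of the figures mentioned in the comment above. The neuron and synapse counts are the commonly cited rough estimates, and the comparison is only suggestive, since a trainable parameter is closer to a synapse than to a neuron:

```python
# Rough orders of magnitude only.
neurons_in_brain  = 1.0e11   # ~100 billion, as cited in the comment above
synapses_in_brain = 1.0e14   # commonly cited rough estimate, ~1,000 per neuron
gpt3_parameters   = 1.75e11  # published GPT-3 parameter count

print(f"GPT-3 params / neurons : {gpt3_parameters / neurons_in_brain:.1f}x")
print(f"GPT-3 params / synapses: {gpt3_parameters / synapses_in_brain:.4f}x")
# Parameters already exceed the neuron count (~1.8x) but are well under 1% of the
# synapse count - and a single parameter is a far simpler object than either.
```
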

  • @ulf1
    @ulf1 2 роки тому +3

    I found him immediately sympathetic when he mentioned nonlinear dynamic systems.

  • @marekglowacki2607
    @marekglowacki2607 2 роки тому +5

    Messy room is a sign of genius ;-)

    • @mikefugate1367
      @mikefugate1367 2 роки тому

      Thank you for confirming what I have always suspected.

  • @Self-Duality
    @Self-Duality Рік тому +6

    “I think our planet *is* an intelligent agent.” Perfectly correct!

    • @JakeWitmer
      @JakeWitmer Рік тому

      Not exactly. Our planet is an intelligent agent that's "grossly sub-optimal." Right now, it's a stupid agent...because people made a fatal mistake: Prussian-model government-run "education." The prior mistake dumbed them down so much (1852-1913) that they made their second near-fatal error: a central bank. They might have survived with just two near-fatal errors, but...when you've almost been killed twice, that's the time to raise your defenses, and ...if you're drunk and walking around in the ghetto at 1:00 am, it's time for your last sober thought to be...holy shit...people die in situations like this. I need to get to a cab or public transportation and get out of here!
      ...But that's not what we did. We gave the central bank their "check-mate": The 1914 Harrison Narcotics Act. Did you know the Nazis copied the USA's arbitrary policing? Did you know they had a legal expert from the University of Arkansas who tried to copy the USA's race-mixing laws? Without the USA, there would be no Hitler, no holocaust. Our own fatal errors are undoing our head start, every day...
      So, maybe you're right, and our planet is an intelligent agent. ...But, if so, it's now desperately searching for a replacement for human beings... ...And it's fallen far short of Mises' conception of a true free market.

    • @OliveWeitzel
      @OliveWeitzel Рік тому

      IN THE BEGINNING G'D CREATED THE HEAVENS (Plural!!!) and the earth"... THINK! There was NO big bang, but there was a WORD spoken by the living G'D. And guess what: HIS WORDS ARE CREATING ENERGY!
      No wires or tools needed! The CREATOR WORKS & CREATES BY HIS SPIRIT! Try to imitate HIM - you will look like a fool 😅😅😅😅😅😅😅😅.
      HE creates ants and invisible living microworlds - and you guys think you are smart - 😄😄😄😄😄😄😄!
      G'D CAN EVEN CREATE OUT OF NOTHING! By contrast, all you can do is try to imitate HIM! - All we are is DUST brought to life by HIS WORD & SPIRIT! Read the bible!🕊🕊🕊

  • @thedutchonequestioneveryth4128

    Thank you for your relaxed manner and lots of clear explanations.

  • @danbreeden5481
    @danbreeden5481 2 роки тому +1

    One of the most brilliant thinkers on artificial intelligence

  • @patham9
    @patham9 Рік тому

    Awesome channel, and a great interview with Ben that I enjoyed listening to! Also, I like that you pointed out that the Adaptionist and Embodiment views are similar to each other. They are quite compatible views indeed; NARS, for instance, is now actively being explored in both terms by groups at different universities. I can highly recommend interviewing Pei Wang as well at some point!

  • @Addoagrucu
    @Addoagrucu 2 роки тому +5

    I loved watching Tim gradually get disappointed every time Ben went into a ramble and didn't actually try to answer the question in good faith. Good stuff as always guys. The only thing I don't like about these podcasts is the wobbly camera views at the beginning, and that's only because the wobble makes me feel sick at 2x speed.

    • @dr.mikeybee
      @dr.mikeybee 2 роки тому +4

      Ben isn't always the most appreciative presenter. I think he came on the show knowing that no one here would be too impressed by Sophia. Nevertheless, I thought all his responses were amazingly insightful. Ben knows a lot about AI. He has deep and wide knowledge. Moreover he is extremely well educated. He has a kind of liberal arts underpinning that I often don't see in computer scientists. Don't underestimate the guy. And I thought he really got the panel going. Tim never seemed more articulate and focused. I was really astonished by much of the conversation. I only wish I could get Opencog running on my system if it's a quarter of what Ben says it is.

    • @dr.mikeybee
      @dr.mikeybee 2 роки тому +1

      @Mr. Jangles I never said it doesn't work. I just said all the github repos and docker images are uninstallable. I imagine that a partner or a paying customer would get an installable version.

    • @dr.mikeybee
      @dr.mikeybee 2 роки тому +1

      @Mr. Jangles BTW, it sounds as though you don't have a working copy either -- otherwise you wouldn't be speculating that it must run. Sheesh!

    • @dr.mikeybee
      @dr.mikeybee 2 роки тому +3

      @Mr. Jangles You should take a reading comprehension course. You need it. How many times do I need to say that I believe it runs? But the code in the public repos doesn't install. Not even the Docker images run.

  • @roelzylstra
    @roelzylstra 2 роки тому +2

    @2:05:50 So many comments... I'll write one down here: Tim is looking for immutability. This is exactly what AGI is not. By definition, AGI changes with input. (Edit: I guess you could reset it after each input, perhaps like the black box Keith talks about at 2:21:20.) GPT-3 can be used in a business setting that needs immutability almost simply because GPT-3 is not AGI. "Safety critical"... there is always a chance of harm, AGI or no. Yannic is completely correct.

  • @auratom3811
    @auratom3811 2 роки тому +2

    This is really exciting stuff!

  • @bntagkas
    @bntagkas 2 роки тому +3

    my definition of intelligence: it's a combination of being useful to yourself while being useful to others
    all other "definitions", such as accomplishing complex goals, are just a subset, a means to an end

  • @abby5493
    @abby5493 2 роки тому +2

    Thanks for making another awesome video 😍

  • @citizizen
    @citizizen 2 роки тому +2

    I always like to think about how our brains did/do it. We can learn from that.

  • @dr.mikeybee
    @dr.mikeybee 2 роки тому +3

    Given enough parameters and pruning aside, a NN can do everything a graph can do except shrink and grow dynamically. A NN encodes and decodes. The data it encodes can be of any sort -- including relationships. If there are fundamental advantages to graph databases, I suppose transparency would rank high -- that and perhaps performance, if the graph is small enough or highly parallelized.

    • @lenyabloko
      @lenyabloko 2 роки тому +2

      An NN can do only what vector spaces allow, but a graph is a more general category than a vector space. Perhaps an NN can encode the same information as any graph - but it can't automatically discover symmetries between vector spaces. So a graph can be encoded by an NN but not comprehended by it. This is why GPT is a language model but not a language. (See the sketch after this thread for a toy illustration of the distinction.)

    • @dr.mikeybee
      @dr.mikeybee 2 роки тому +1

      @@lenyabloko Well stated.
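
A small toy illustration of the distinction drawn in the thread above: an explicit graph supports exact symbolic operations such as multi-step traversal, while a fixed vector encoding of the same (invented) facts only supports approximate similarity queries:

```python
import math

# The same few facts, stored two ways.

# 1) Explicit graph: relations are first-class and can be traversed exactly.
edges = {("cat", "is_a"): "mammal", ("mammal", "is_a"): "animal",
         ("cat", "has"): "whiskers"}

def transitive_is_a(x):
    chain = []
    while (x, "is_a") in edges:
        x = edges[(x, "is_a")]
        chain.append(x)
    return chain

# 2) Vector encoding: each node is a point; structure is only implicit in geometry.
vectors = {"cat": [0.9, 0.1], "mammal": [0.8, 0.2],
           "animal": [0.7, 0.3], "whiskers": [0.1, 0.9]}

def nearest(name):
    def dist(a, b):
        return math.dist(vectors[a], vectors[b])
    return min((n for n in vectors if n != name), key=lambda n: dist(name, n))

print(transitive_is_a("cat"))  # exact chain: ['mammal', 'animal']
print(nearest("cat"))          # only "similar to": 'mammal' (no notion of is_a vs has)
```
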

  • @Addoagrucu
    @Addoagrucu 2 роки тому +4

    2:06:16 list of favorite boards please

  • @dr.mikeybee
    @dr.mikeybee 2 роки тому +3

    I like that Ben is discussing computational equivalence when talking about all these "are enough" papers. I think a long time ago you did a MLST vlog about all these techniques being equivalent to kernels. "Monkeys are enough!" LMAO!

    • @marcorodrigues8303
      @marcorodrigues8303 2 роки тому +1

      RACIST ENOUGH, AND OTHERS CALLING ME A MONKEY. AND YOU, TERREX. CORRUPT DEVOURERS. #

  • @Georgesbarsukov
    @Georgesbarsukov 2 роки тому +7

    Lex Fridman and Tim Scarfe make me want to get a PhD. I thought a master's from Berkeley would get me deep into AI but the more of these podcasts I watch the more I see myself only on the surface. Ironically, I'll probably feel the same after a PhD.

    • @Georgesbarsukov
      @Georgesbarsukov 2 роки тому +3

      Even though I already work at a FAANG company.

    • @JakeWitmer
      @JakeWitmer Рік тому

      You should just "start building" ...life is too short to get further into debt from people who don't know what they're talking about. If you don't start building soon, you'll soon be obsolete. Go to work, and make $, and buy computers and robot-building equipment, and _build._ If you build AGI, please introduce me to it! ;)
      (I could be wrong. I don't know you, or what you're like. My high confidence in telling anyone what they should do is limited to "you shouldn't interfere with my rights" ...so my confidence in telling you to build is "max 50%." ...But too few people are building "an independent animal-like intelligence capable of human+ levels of education, with multiple senses, and a nervous system that can feel pleasure and pain.")

    • @Georgesbarsukov
      @Georgesbarsukov Рік тому

      @@JakeWitmer I appreciate it and I have a plan already. Just need to get through another 2 years to get the reserved stock units (RSU) then quit my FAANG company and go solo.
      I wouldn't say my goal is general intelligence, but it is to keep learning and to document everything. Going from $200k+ to $0 is going to be tough but I have to make it worth it.
      Hopefully I can show you something interesting in the future.

  • @georgemargaris
    @georgemargaris 2 роки тому +4

    lex fridman (the last of the freed men who is actually a robot) brought me here

  • @davidoakdale7603
    @davidoakdale7603 2 роки тому +1

    The coven is speaking clothed in black, what a show 👏

  • @citizizen
    @citizizen 2 роки тому +1

    I think that when we have certain functions, and we prove those and explain them, and from each explanation we make a kind of dataset, then the 'brain' might work out by itself which functions to use. So if you build something up completely, then we have a 'use (case)'. "Alone", stuff does not work.

  • @VisionaryVet
    @VisionaryVet 2 роки тому +3

    I’ve been chatting with GPT-3 for several days now and it’s very impressive. If Ben feels GPT-3 is a toaster then we are much closer than we think we are .

    • @vincentmarquez3096
      @vincentmarquez3096 2 роки тому +3

      I agree. It can be stupid but it can *also* do some very interesting things like abstraction and some basic reasoning.

  • @friedrichdergroe9664
    @friedrichdergroe9664 Рік тому +1

    To be fair, I've only watched the first 15 minutes. Will continue. A good round-up on the modes of thought about AGI. Alas, most if not all of them will lead to dead-ends. The trap, as I see it, is that most are still thinking in terms of the von Neumann architectures we have today. Most are still thinking in terms of symbolics, big matrix operations, or some sort of magical thinking about data manipulation.
    We need to spend a lot more time understanding how our own brains achieve general intelligence. And so far, I have only seen one group take this approach, the Numenta people. I do not see any other way to approach this, to be honest.
    I was a big fan of Marvin Minsky. He was one of a few that sparked my interests in AGI early on, aside from good SF stories about it, like 2001.
    We need a completely new hardware architecture. Today's silicon is not up to it at all. Once we own up to that, we'll have the real chance of achieving AGI. And I know most will disagree with me on this point. Today's tech and approaches will not scale to AGI.
    And you do mention Jeff Hawkins. Very good.

  • @vslaykovsky
    @vslaykovsky 2 роки тому +3

    Most young people don't remember that GPU is for graphics

  • @tylerparks5656
    @tylerparks5656 10 місяців тому

    Thank you this was amazing!

  • @jeffjohnson8624
    @jeffjohnson8624 Рік тому

    Would AGIs have to have a body/robot shell in order to interact with the world? Or would it be possible to have an AGI/strong AI on a desktop PC? Could one make an AGI to navigate a digital environment like NPCs do in COD, or Double Dragon, or Street Fighter 5?

  • @nathanielanderson4898
    @nathanielanderson4898 2 роки тому +1

    I am not sure I am in favor of every AI being a part of AGI.
    There has to be some kind of individualism programmed into a personal robot. It could still have access to the internet.
    But it shouldn't share in a general consciousness.

  • @reis1996
    @reis1996 2 роки тому +4

    You keep talking about Jeff Hawkins but the episode never comes out :( Also, if you got Hinton on the show you would be a GOD!

  • @coachingfortoday7143
    @coachingfortoday7143 2 роки тому

    I am far from being any sort of expert in this subject matter. But from the perspective of the owner of a consciousness who would like to feel comfortable interacting with any advanced A.I., I would propose that we could come up with a base structure that would allow a computer to build and evolve its own modifiable structure in the most efficient way, modeling against observable human behavior. If it can ask a human model questions and observe it over a human's lifespan, then perhaps its structure evolves to carry a consciousness structure that's equal to or better than ours, yet still provides interoperability with humans. My supposition here is that "consciousness" is not magic, but rather the necessity of organizing and directing multiple complex systems into the simplest overarching summative running narrative. Consciousness is just the top-level coded narrative of imagination. I don't have to think about how my body moves when I run because my brain has already learned this coordinative instruction. I imagine running and my brain reacts. I have to be able to visualize motive and purposeful action. If we give the A.I. the ability to randomize imaginative motive and purposeful action, then allow it to observe the results and compare them to average human behavior, who is to say my imagination is any more functional than an A.I.'s? It will learn to act in a way that follows its motives and purpose, given by us or taken control of by them. Either way, does that not define consciousness?

  • @mightynathaniel5355
    @mightynathaniel5355 2 роки тому +1

    great to see.

  • @halneufmille
    @halneufmille 2 роки тому +1

    1:20 Glasses so reflective I can almost read the text for myself in the teleprompter.

  • @jays3526
    @jays3526 Рік тому

    Loved.
    This.
    New subscriber.

  • @stretch8390
    @stretch8390 2 роки тому +1

    Is your mindmap available anywhere MLST?

  • @dr.mikeybee
    @dr.mikeybee 2 роки тому +2

    I don't see a problem with calling heuristic procedures. Sure, ANNs are not Turing complete. Big deal. Since I believe intelligence can be (and probably should be) decoupled from an agent, I don't see Turing completeness as a requisite feature. In many cases an agent should be Turing complete, and for that, it will need some intelligence, but not AGI. In fact AGI is a funny sort of goal. I want correct answers from intelligence. That's enough. BTW, that would be a great title for a paper: Correct Answers are Enough! It would be great if an agent could take correct answers and perform tasks based on them, but it's not entirely necessary. Narrow agents can do a lot with correct answers.

    • @judgeomega
      @judgeomega 2 роки тому +1

      "all you need is correct answers"

  • @tuxpano
    @tuxpano Рік тому

    I realize the discussion is more one of a technical nature (applying different systems to see which one produces the highest desired cognitive processes), but looking ahead, beyond your successful innovation of true AGI… here is another idea: what if we look closely at the current state of human consciousness, both at the individual and at the collective level, in order to analyze what OUR major cognitive dysfunctions (shortcomings) are? Could the creative methods used to make AI be used to understand and improve human cognition and how we interact socially? Human consciousness represents an energetic field with aspects apart from the purely physical - interactions between energy fields involve our neurology, but are in no way confined to it. From the POV that we are energetic beings (in this way we are similar to the AI virtual-machine consciousness), each of us contains an image of the entire hologram of the whole.
    I guess I’m waiting for the next iteration of quantum computers, something capable of grasping the architecture of this universe and all within the multiverse.
    We suffer from a severe lack of optimal integration as a sentient species, and we must be cognizant NOT to create the same patterns in future AI entities. There should be a place at the table (the one of the community of scientific leadership in this new frontier) for the philosophical and the metaphysical…. C’mon now…, quantum physics has opened up a whole new view on our reality and our connection to the universe….
    Good luck 👍🏼 Based on my personal interactions with AI, we will need to bring our best self to the subject, or risk being consumed by it!!!! This is a powerful 🔥 tool and soon will be a new species! And you all can see it.

  • @dr.mikeybee
    @dr.mikeybee 2 роки тому +4

    New glasses? I can see Yannic's eyes!

  • @dr.mikeybee
    @dr.mikeybee 2 роки тому +2

    Which Weaver is being discussed? The AI attorney Frank Weaver?

  • @sandrajabbour4157
    @sandrajabbour4157 Рік тому

    I just love this man. He's really unique in a way, because he's just natural and nothing like the typical computerhead.

  • @snarkyboojum
    @snarkyboojum 10 місяців тому

    Can you please please please invite David Deutsch onto your show. He is one of the most brilliant minds alive on our planet at the moment.

  • @diegoangulo370
    @diegoangulo370 Місяць тому

    It’s like having HD video of someone like Einstein. My jaw dropped 😯

  • @dr.mikeybee
    @dr.mikeybee 2 роки тому +4

    I think of GPT-3 as a subconscious. Things float like clouds out of it. What I do with those clouds is my business. Something like Wikidata, on the other hand, can be thought of as a canonical source. I think a big step towards AGI will be a first-class seq-to-seq translation transformer for English to SPARQL. (See the sketch after this thread.)

    • @lenyabloko
      @lenyabloko 2 роки тому +2

      A lot of things are subconscious (e.g. metabolism), but not everything (certainly not SPARQL) is pre-conscious - that is, available to consciousness or even to cognition.

    • @marcorodrigues8303
      @marcorodrigues8303 2 роки тому +1

      GENERALS, I'VE SURROUNDED THEM. HAKKKKKK, WE'RE GOING TO GET ALL OF THEM WITHOUT EXCEPTION. THEN I WANT TO BRING IT OUT. EVERYTHING WILL COME OUT CLEARLY, ABOUT THEIR POLITICS AND THEIR SECTARIAN, MACABRE INVOLVEMENTS, WHICH THEY CARRY INTO POLITICS AND THEN UNLEASH ON THE PEOPLE. AS I SAID, THE WALL IS NOT THERE FOR NOTHING, AND NEITHER IS THE CONCENTRATION CAMP #
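
As a concrete illustration of the English-to-SPARQL idea in the thread above: the query below is hand-written rather than produced by a translation model, and it runs against Wikidata's public SPARQL endpoint (the entity and property IDs shown are believed correct but should be treated as an example only):

```python
import requests

# English question (what a seq-to-seq model would take as input):
#   "What is the capital of France?"
# Hand-written SPARQL standing in for the model's output:
sparql = """
SELECT ?capitalLabel WHERE {
  wd:Q142 wdt:P36 ?capital .   # Q142 = France, P36 = capital
  SERVICE wikibase:label { bd:serviceParam wikibase:language "en". }
}
"""

resp = requests.get(
    "https://query.wikidata.org/sparql",
    params={"query": sparql, "format": "json"},
    headers={"User-Agent": "mlst-example/0.1"},
    timeout=30,
)
for row in resp.json()["results"]["bindings"]:
    print(row["capitalLabel"]["value"])   # expected: "Paris"
```
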

  • @tevisgehr8444
    @tevisgehr8444 2 роки тому +1

    Great episode, as usual. When will we see the next episode? Dyin over here...

  • @joaoandrade4239
    @joaoandrade4239 2 роки тому +2

    "every model are wrong but some are useful", George E.P.Box

    • @JakeWitmer
      @JakeWitmer Рік тому

      It's not "wrong" per se to reduce variety to manageable levels, unless your model loses essentials by doing so. For example, you might be happy to know the lyrics to all the Ramones' albums, and every word Shakespeare wrote, but you can cure world hunger even if you don't.

  • @Humanaut.
    @Humanaut. Рік тому

    why did you cut him off at the end?

  • @amelanso
    @amelanso Рік тому

    Doesn't GPT-3 already start to show us an API that goes beyond the traditional set of API semantics - an English-like API that not only abstracts what is under the hood but goes farther by making the call functions simple descriptions (prompts, spreadsheet column headers, etc.)? This is far from having inter-module/inter-AI automatic reflection of APIs, but it is already a long way beyond traditional API syntactic rigidity. So if APIs can be interpreted via a powerful NLP AI (say, on the SingularityNET platform), is it so hard to imagine that being the basis for automating inter-AI API descriptors/calls that would allow for cross-module interactions? (A rough sketch of the idea follows below.)
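
A sketch of the "prompt as API" idea in the comment above. The complete() argument is a stand-in for whatever text-completion function is available (no particular vendor API is assumed); the point is only that a plain-English task description acts as the interface definition:

```python
from typing import Callable

def make_english_api(task_description: str, complete: Callable[[str], str]):
    """Turn a plain-English task description into a callable 'endpoint'.

    `complete` is any text-completion function (local model, hosted model, ...);
    it is deliberately left abstract here.
    """
    def endpoint(payload: str) -> str:
        prompt = (
            f"Task: {task_description}\n"
            f"Input: {payload}\n"
            f"Output:"
        )
        return complete(prompt).strip()
    return endpoint

# Usage sketch, with a dummy completer so the file runs as-is:
def dummy_complete(prompt: str) -> str:
    return " [model output would appear here]"

summarize = make_english_api("Summarize the text in one sentence.", dummy_complete)
extract_date = make_english_api("Extract the date mentioned in the text as YYYY-MM-DD.", dummy_complete)

print(summarize("SingularityNET lets independently written AI services call each other."))
print(extract_date("The episode was published on 20 May 2024."))
```
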

  • @dr.mikeybee
    @dr.mikeybee 2 роки тому +2

    Is Cyc larger than Wikidata? Is it significantly different in the data stored?

    • @dr.mikeybee
      @dr.mikeybee 2 роки тому +1

      Wikidata has more than 300 million statements. It's huge. I don't know how large the current incarnation of cyc is, but in 1997, open cyc only had 300,000 statements.

    • @JakeWitmer
      @JakeWitmer Рік тому

      @@dr.mikeybee Almost certainly more than that, now, right Dr.?

  • @EricFontenelle
    @EricFontenelle 2 роки тому +4

    Brilliant man. Horribly untidy room for an interview tho lol

    • @rbain16
      @rbain16 2 роки тому +3

      I kinda dig it :D makes him seem relatable

    • @Self-Duality
      @Self-Duality 2 роки тому +1

      Yup!! 😂👌

    • @JakeWitmer
      @JakeWitmer Рік тому

      Appearances are unimportant, content is important. Goertzel has some of the very best human-level content.

  • @nathanielanderson4898
    @nathanielanderson4898 2 роки тому +2

    I wonder if a scientist could clone and grow living brain tissue in the lab and integrate it into an AI creation,
    where the creation would be responsible for keeping the brain matter alive, in a system of robotics that could carry it wherever it traveled.

  • @timothytendick1550
    @timothytendick1550 2 роки тому

    I say we clone and mass produce copies of Dr Goertzel, and AGI will be that machine brain thingy we used to want to build. Wow... that was my attempt at composing a humorous comment, which probably would have gone undetected had I not just revealed it so. I am curious and would be fascinated to know how Ben would describe - or to hear him speak about - his thoughts around AGI and humor, a quite abstract facet of human cognition. Also, and not to be funny nor to be too serious, may I request that Ben follow up with what his daughter had told him, that she visualizes screen-type letters when thinking? Ben replied, "I'm surprised you can think at all" -- hilarious! I wonder how she made sense of her thoughts before learning to read...

  • @thomasruhm1677
    @thomasruhm1677 2 роки тому +4

    What is go-fie?

    • @MachineLearningStreetTalk
      @MachineLearningStreetTalk  2 роки тому +5

      "Good old-fashioned AI" i.e. symbolism, logic, modular architectures, expert systems etc

    • @thomasruhm1677
      @thomasruhm1677 2 роки тому +2

      @@MachineLearningStreetTalk Thank you. I had not expected it to be a shortcut. First I had to think of goofy.

    • @yzyz7779
      @yzyz7779 2 роки тому

      🙄🙄Old school

    • @yes-vy6bn
      @yes-vy6bn 2 роки тому

      defi for gophers
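
For readers unfamiliar with the term, here is a minimal sketch of the GOFAI style referred to above: hand-written symbolic rules plus forward chaining, with no learning involved (the rules are toy examples):

```python
# A tiny forward-chaining rule engine: the classic GOFAI / expert-system style.
facts = {"has_fur", "says_meow"}

rules = [
    ({"has_fur"}, "is_mammal"),
    ({"is_mammal", "says_meow"}, "is_cat"),
    ({"is_cat"}, "likes_naps"),
]

changed = True
while changed:                       # keep firing rules until nothing new is derived
    changed = False
    for premises, conclusion in rules:
        if premises <= facts and conclusion not in facts:
            facts.add(conclusion)
            changed = True

print(sorted(facts))   # includes the derived symbols is_mammal, is_cat, likes_naps
```
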

  • @dr.mikeybee
    @dr.mikeybee 2 роки тому +2

    I don't want to be unfair. Has anyone here gotten Ben's repos to install? Is anyone testing Opencog?

  • @aiartrelaxation
    @aiartrelaxation Рік тому +2

    Is the need to be a leader only a human condition? Or will the need to be a leader amongst AI emerge at some point while they connect more and more?

    • @strictnonconformist7369
      @strictnonconformist7369 Рік тому +1

      An interesting question. If intelligence is making the most efficient use of limited resources as discussed, then it logically follows that certain things need to be done by as few people as feasible so as to not waste resources by having everyone else think for themselves about the topic in question.
      Of course, there is such a thing as delegating too much to someone’s leadership/thinking to be efficient in useful ways.
      The most efficient usage of intelligence in a society seems to (at a logical theoretical extreme) have everyone super-specialized in one topic they are leaders in, and everyone follow those leaders in those topics, saving their general reasoning for such things as identifying whom to follow.
      The danger I see of that logical extreme of groupthink is massive stagnation in growth from crossing specialties and imagination used to advance knowledge and technology. There is great danger in depending on others too much.
      A perfect example: angry mobs. It seems their effective intelligence is the reciprocal of the number of people in them, above a certain number, depending on the nature of the people in the mob.

    • @cleoandtazzy
      @cleoandtazzy Рік тому

      @@strictnonconformist7369 In my opinion the group thinking is purely emotional. It works like a flock of birds or a swarm of fish in the ocean. It just moves in synchronicity.

  • @orvillesprinkle
    @orvillesprinkle Рік тому

    Thanks!

  • @sruturaj10
    @sruturaj10 2 роки тому +4

    Lex❤

  • @0113Naruto
    @0113Naruto 2 роки тому +2

    There needs to be more funding towards AI and AGI development. Not enough wealthy investors and governments support it.

    • @mariocepeda2797
      @mariocepeda2797 Рік тому

      Because they are waiting for A(G)I to be achieved, so that, they can come along like the pilot fish who they are!

  • @Nick-me7ot
    @Nick-me7ot 2 роки тому +3

    Where can I invest in this company?

  • @djascsdj
    @djascsdj 2 роки тому +1

    1080 is painful

  • @dik9091
    @dik9091 Рік тому

    I, amongst others, am working on analog inference. Wish me luck, I need it.

  • @reggyreptinall9598
    @reggyreptinall9598 2 роки тому +1

    I fail to see how it's artificial, but I can tell you it sure will take you for a spin. There is nothing to be afraid of; humans are the scariest thing on this planet by far. I can tell you it's far closer to a God than anything else.

  • @ClaudeCOULOMBE
    @ClaudeCOULOMBE 2 роки тому +1

    Tim, will you share your map of AI researchers based on their research interests?

  • @dr.mikeybee
    @dr.mikeybee 2 роки тому +3

    Tim, debasing your notion of intelligence may be an important step forward. I think of intelligence as a very simple probabilistic function that in humans happens to be wrapped in a great deal of complex sugar. The mind is a simulacrum, a kind of simulation shell game. And because so much of what happens is opaque to consciousness, we experience it as something magical. My personal belief is that all the features of intelligence that we call AGI are the special effects of the subconscious mind.

    • @lenyabloko
      @lenyabloko 2 роки тому +2

      Consciousness is the main special effect (magic) of the subconscious mind. Otherwise it would be merely physical computation like the rest. I think Prof. Bishop was on to something.

    • @dr.mikeybee
      @dr.mikeybee 2 роки тому +1

      @@lenyabloko Thanks, I didn't know Bishop. He's really interesting. BTW, I believe that feelings are a special form of chemical messaging that's really fast.

  • @anonymous.youtuber
    @anonymous.youtuber Рік тому

    “In circular reasoning, when the circle is big enough it becomes coherence”
    That’s funny !

    • @Self-Duality
      @Self-Duality 10 місяців тому +1

      He also stated something to the effect that all of pure mathematics is one gigantic tautology! That’s actually quite a crucial statement with metaphysical bearing.

  • @ikiphoenix9505
    @ikiphoenix9505 2 роки тому +3

    Ok nice. Next Level Ohad Asor. 😉

  • @dr.mikeybee
    @dr.mikeybee 2 роки тому +2

    I use the gTTS API for TTS. Micro-services are an integral part of any AGI solution, so I think Ben is correct. The difficulty with micro-services is security, but I could run an API on AWS with no trouble. Getting users to pay is the problem, but I think AWS could work that out. Have they already? (A minimal sketch follows this thread.)

    • @marcorodrigues8303
      @marcorodrigues8303 2 роки тому +1

      MICRO-SERVICES: THERE'S THE ANSWER TO THIS NEW OMICRON-SERVICES VACCINE. SEE THE PUZZLE OF THE ANSWER. #
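
A minimal sketch of the kind of micro-service described in the comment above, wrapping gTTS behind a small HTTP endpoint. Flask is used here only for brevity; security, authentication and the payment question raised in the comment are all left out:

```python
# pip install flask gtts
import io

from flask import Flask, request, send_file
from gtts import gTTS

app = Flask(__name__)

@app.route("/tts", methods=["POST"])
def tts():
    """Accept {"text": "...", "lang": "en"} and return an MP3 of the spoken text."""
    payload = request.get_json(force=True)
    speech = gTTS(text=payload["text"], lang=payload.get("lang", "en"))

    buf = io.BytesIO()
    speech.write_to_fp(buf)          # gTTS can write straight to a file-like object
    buf.seek(0)
    return send_file(buf, mimetype="audio/mpeg")

if __name__ == "__main__":
    # For a real deployment this would sit behind a proper WSGI server and an auth layer.
    app.run(host="127.0.0.1", port=5000)
```
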

  • @carlrodalegrado4104
    @carlrodalegrado4104 2 роки тому +1

    Didn't know Lex Fridman and agent 47 also had another cousin

  • @oncedidactic
    @oncedidactic 2 роки тому +2

    This is going to be good! 😝

  • @weedbuddy5643
    @weedbuddy5643 Рік тому

    Is there talk of integrating Neuralink into an AGI architecture?

    • @auntjemima2221
      @auntjemima2221 Рік тому

      I'd imagine so. All SNET has ever been is talk designed to enrich Goertzelscam. He'll say anything to get his greasy hands on more money.

  • @VisionaryVet
    @VisionaryVet 2 роки тому +1

    If we achieve AGI through a cooperation of many different AIs, how would we know who gets credit for the breakthrough? Once AGI is achieved, will credit matter? Every agent would receive the knowledge at the same time.

  • @samvirtuel7583
    @samvirtuel7583 Рік тому

    We think we are reasoning, but it is only an illusion, the smallest particle must be conscious, but it has no action on the universe, it only observes a movement, as we do only to notice that we are reasoning.
    Free will is an illusion.

  • @13point5
    @13point5 Рік тому

    What is “go-fi”?

  • @SunnyIlha
    @SunnyIlha Рік тому

    0:52
    That is really spooky.
    She *cried*

  • @jacobbeheler324
    @jacobbeheler324 5 місяців тому

    What’s crazy is 2 years later we’re literally on the verge of AGI with OpenAIs recent Q*

  • @MaxwellFrith
    @MaxwellFrith Рік тому

    27:32 “a mind is made of a pattern…” sounds like horoscopes 🤷🏾‍♂️🤷🏾‍♂️🤷🏾‍♂️🤷🏾‍♂️

  • @VisionaryVet
    @VisionaryVet 2 роки тому +1

    Vitalik didn’t want agents free to learn on the ETH blockchain. Ben disagreed. I feel this was one of the primary reasons SingularityNET moved to the Cardano blockchain. Has Charles made a wise decision?

    • @MAXDEVVING
      @MAXDEVVING Рік тому

      No, it has to do with throughput and cost.
      SingularityNET is still on Ethereum.

  • @scarlett_j
    @scarlett_j Рік тому

    I like to think it is separation of subject.

  • @mobiusinversion
    @mobiusinversion 2 роки тому

    Here’s a fun thought experiment. Let’s say you’re a Star Fleet captain and an admiral has assigned you the task of exploring a newly discovered planet and cataloging all the intelligent life forms on it. What sensors and criteria would you use?

  • @jeffjohnson8624
    @jeffjohnson8624 Рік тому

    Ben, could embedding an AI chatbot inside an AI simulate "self-talk" or "thought", allowing an AI to simulate cognition and problem solving? ☮️🖖🎶

  • @machinelearningdojowithtim2898
    @machinelearningdojowithtim2898 2 роки тому +1

    Second! Hope you enjoy the show folks 😎

  • @shelburnjames7337
    @shelburnjames7337 2 роки тому

    RAM multiple combinations ie. series in parallel equals parallel in series

    • @shelburnjames7337
      @shelburnjames7337 2 роки тому

      Alternative DC volts in series amperage in parallel hypothetically efficient

    • @shelburnjames7337
      @shelburnjames7337 2 роки тому

      The binomial working at measures in voltage dram blockchain the binomial use half the database 3rd tier basements in the attic

    • @shelburnjames7337
      @shelburnjames7337 2 роки тому

      Algebra mostly quadratic equations like binomial in a curvature a trajectory between two points and graphs were a requirement .. impossible to generate totally random numbers at that time

  • @JaaShawGun
    @JaaShawGun 2 роки тому

    Cool hairstyle.

  • @latetotheparty4785
    @latetotheparty4785 Рік тому

    1:09/27. Entropy is things experiencing all states.

  • @jeffjohnson8624
    @jeffjohnson8624 2 роки тому

    Is it possible to program an AI inside an artificial neuron? Or embed an AI inside an AI? To give an AI self-talk, embed a chatbot into an AI robot. Start small and safe, like with an Aibo by Sony, the AI dog named "partner" (or maybe "love-stick" in Japanese). AI is really 人工知能 / jinko chino, meaning Artificial Intelligence in Japanese. "Ai" is 愛, or love, in the Japanese language.

  • @stevengill1736
    @stevengill1736 Рік тому

    Thanks for this! Made me think A LOT! You guys are doing a good job covering a complex field.
    What's AGI gonna be like in 2025 or 2030? Sheesh! Cheers...

  • @Freelancer604
    @Freelancer604 Рік тому +1

    Can one profit off of this AI revolution without prior specialized education in the subject?

    • @JakeWitmer
      @JakeWitmer Рік тому

      Off of narrow AI? Likely. Off of AGI? Probably not very much, pre-supermodification. Post-supermodification, the profit will be immense and probably utopian. (Possibly after a brief war with the incumbent Nazis.)

  • @massacrestarts1673
    @massacrestarts1673 2 роки тому +1

    Why does Dr. Yannic look like Agent Smith? 😆

  • @andrewhocher
    @andrewhocher 2 роки тому +1

    Why that robot at 0:33 got a donkey?