Mathematician debunks AI intelligence | Edward Frenkel and Lex Fridman

  • Published Dec 22, 2024

COMMENTS • 901

  • @LexClips
    @LexClips  A year ago +10

    Full podcast episode: ua-cam.com/video/Osh0-J3T2nY/v-deo.html
    Lex Fridman podcast channel: ua-cam.com/users/lexfridman
    Guest bio: Edward Frenkel is a mathematician at UC Berkeley working on the interface of mathematics and quantum physics. He is the author of Love and Math: The Heart of Hidden Reality.

    • @quantum_ocean
      @quantum_ocean A year ago +3

      Terrible title, @lex: he's not talking about "AI" generally but about LLMs specifically.

    • @robertmartin2262
      @robertmartin2262 A year ago +1

      I look at it the opposite way: a large language model would never have assumed that the square root of a negative number is impossible...

    • @reellezahl
      @reellezahl A year ago

      Lex, your guest didn't even scratch the surface on the issue. I'll summarise his argument:
      - It took humans centuries to break the barriers.
      - I *don't think* that LLMs can do this.
      That's a god-of-the-gaps argument.
      LLMs *at the moment* are just performing (roughly) two functions: imitation and summarisation of all the discussions that humans have conducted (both in forums and documentation) on the internet for the past decades.
      There are other systems that are going to come online soon that will put this linguistic mimicry to shame: *artificial reasoning* and experimenting. In part this is already being done.
      You don't even need to know much about this tech. As a kid I grew up hearing all these stories about *how special* so-and-so in Italy or England or wherever was. So hearing the same ol' tripe from this Russian mathematician made my eyes roll so hard. He did not bring anything new or substantial to this interview.
      Take √-1: there really is not anything more to this than: find an algebraic framework which extends the reals and solves X² + 1 = 0. Extensions of structures are _very_ common concepts. The fact that it took centuries for humankind to do this is not something to be in awe of but to be ashamed of. Give an AI a few such goals and it *will* come up with a suitable framework.
      I spent my life trying to show that all these ideas and results _can in principle_ be independently found *without* an Einstein/von Neumann/Gödel, etc. And it works. (The historical proof of this is that mathematical results often get proved _completely independently_ by multiple people.) Some ingredients are: necessity-is-the-mother-of-invention [or: discovery] + reflection (about concepts and connections you already know) + refinement of ideas + test-cases. *THIS IS ALL STUFF YOU CAN AUTOMATE.*
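The √-1 example above is easy to make concrete: "extending the reals" just means adopting pairs (a, b), read as a + b·i, with a multiplication rule under which (0, 1) squares to (-1, 0), so x² + 1 = 0 gains a solution. A minimal Python sketch of that construction (function names are illustrative, not from the thread):

```python
# Hypothetical sketch: build the complex numbers as pairs (a, b) meaning a + b*i,
# defining multiplication so that i = (0, 1) squares to -1.
def cmul(x, y):
    a, b = x
    c, d = y
    # (a + b*i)(c + d*i) = (a*c - b*d) + (a*d + b*c)*i
    return (a * c - b * d, a * d + b * c)

i = (0, 1)
assert cmul(i, i) == (-1, 0)  # x^2 + 1 = 0 now has a solution in the extension
```

Nothing here is deep; the "barrier" is only the decision to adopt the rule, which is the commenter's point about extensions being automatable.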

    • @quantum_ocean
      @quantum_ocean A year ago

      @@reellezahl it's a bit more than imitation and summaries. It's creating and maintaining representations, among other things.

    • @reellezahl
      @reellezahl A year ago

      @@quantum_ocean sure, that's why I wrote 'some ingredients'. I would like to add: anybody who is not undergoing an existential crisis at the moment has not reflected enough on how thinking, discovery, etc. work and may even think it's all just magic. I think Frenkel has reflected on history a lot, but not enough on mechanical thinking, esp. the current 'organic' paradigms being implemented.

  • @psyfiles7351
    @psyfiles7351 A year ago +157

    This is now one of my favorite interviews. Love and math. What a guy

    • @FederatedConsciousness
      @FederatedConsciousness A year ago +3

      It was so good. Absolutely one of the best Lex has done. A conversation that pushes the boundaries of everything we know.

    • @Crusade777
      @Crusade777 A year ago +2

      Yet people say, "Where did he debunk A.I./General Intelligence?"

  • @julioguardado
    @julioguardado A year ago +78

    Frenkel has to be the most interesting mathematician ever. The whole interview is tops.

  • @SinanAkkoyun
    @SinanAkkoyun A year ago +122

    He is a top-tier mathematician and explains things to the audience in a simple, digestible but still mysterious manner. Giving 'simple' examples like sqrt(-1) to explain the emotional concept hiding behind them just brings joy to me; such a lovely person!

    • @timsmith2525
      @timsmith2525 A year ago +2

      That's the sign of a true expert: He can explain things clearly to non-experts.

    • @JoshTheTechnoShaman
      @JoshTheTechnoShaman A year ago +1

      You can tell this is how he wins the ladies 😂

    • @theecharmingbilly
      @theecharmingbilly A year ago

      Yeah, we watched the video too.

  • @therealwildfolk
    @therealwildfolk A year ago +43

    Wow, I not only totally get his point but also finally understood complex numbers from this. Fantastic guest, Lex

    • @jasonbowman9521
      @jasonbowman9521 A year ago +2

      I don't know if it's exactly the same, but I find I understand concepts better when shown. I study 3D computer art as a hobby. To let the computer help make certain textures and bump maps, a person can use these things called nodes. There are texture nodes and geometry nodes, and I think (or tell myself) I understand somewhat what those math formulas mean, because I can see objects changing in real time every time a node is adjusted. The 3D program is free; it's called Blender. And I think if a mathematician could learn it, they could figure out a way for everyday people to see certain things. I kind of get what a black hole is, but I doubt I could chart out everything that is going on.

  • @dannygjk
    @dannygjk A year ago +49

    AI has functional intelligence, but it is not the same as human intelligence, just as birds do not fly in the same way that aircraft fly.

    • @aaronjennings8385
      @aaronjennings8385 A year ago +10

      Interesting analogy. I'll remember that as an example.

    • @PabloVestory
      @PabloVestory A year ago +3

      And humans have consciousness, whatever that means. Whether it's possible for AIs to sustain some kind of "real" (not "simulated") self-awareness is yet to be proven.

    • @dannygjk
      @dannygjk A year ago +5

      @@PabloVestory I think self-awareness in AI will be different from humans.

    • @ChristianIce
      @ChristianIce A year ago +5

      @@PabloVestory
      AI only mimics intelligence; it could even mimic consciousness, but mimicking is the very foundation of how it works.
      The mimicking process can be extended and improved to the point that it's indistinguishable from the real deal, but it will still be mimicking.
      AGI, on the other hand, is a different approach: the attempt to create an actual thinking machine.
      As Carmack said, the first iteration of AGI will probably look like a 4-year-old kid, and you start from there.

    • @alexnorth3393
      @alexnorth3393 A year ago +1

      @@ChristianIce
      No, they don't mimic intelligence.

  • @lacedmilk8586
    @lacedmilk8586 A year ago +26

    Wow! This dude is absolutely passionate about math. There was pure joy in his eyes as he spoke.

  • @Gordin508
    @Gordin508 A year ago +20

    When going through your education, consider yourself blessed if you get teachers/instructors/professors who are as passionate about their field as Frenkel is about math.

    • @Brian6587
      @Brian6587 A year ago +2

      I had one such teacher in high school, and he turned something I hated into something I loved! It makes a difference!

  • @ChristianIce
    @ChristianIce A year ago +2

    AI cannot come up with new ideas, but it can see patterns in a large set of data that we didn't notice.
    That's not an emergent property; it's an unexpected result.
    Given the impossibility of a human being reading and memorizing said dataset, unexpected results are to be expected.

    • @katehamilton7240
      @katehamilton7240 A year ago

      IKR? I ask mathematician/coder AGI alarmists, "What about the fundamental limitations of algorithms, indeed of math? Have you investigated them? Won't these limitations make AGI impossible?" Jaron Lanier admits AGI is a sci-fi fantasy he grew out of.

  • @leighedwards
    @leighedwards A year ago +82

    Where in this clip did Edward Frenkel debunk AI intelligence?

    • @zfloe
      @zfloe A year ago +33

      Clickbait, sadly

    • @BillStrathearn
      @BillStrathearn A year ago +46

      The person Lex hires to write titles for his UA-cam clips is truly the worst person

    • @falklumo
      @falklumo A year ago +25

      Well, Frenkel indeed argues that LLMs won’t be able to show imagination like humans do. But AI and LLMs aren’t synonymous.

    • @timorantalainen3940
      @timorantalainen3940 A year ago +16

      @@falklumo I don't think we have a single example of AI that did not involve teaching or creating a model. There is no space for imagination in the methods available today, and hence the clickbait is somewhat warranted, in my opinion.
      Based on the fact that we can imagine and invent new rules (e.g., add another dimension to make room for complex numbers), it cannot be ruled out that a creative AI arises at some point, but we don't have an idea of how that might be achieved at the moment, as far as I know. The thing holding us back is the lack of understanding regarding how consciousness arises in the brain. Unless we understand that, we cannot manufacture such a system other than by accident.
      Please correct me if I'm wrong. I'd be curious to read up on machine intelligence methods that are not dependent on creating a model. Or even just a description of such a method, in case we haven't yet managed to implement it.

    • @WralthChardiceVideo
      @WralthChardiceVideo A year ago +5

      In the title of the video

  • @angelcastro3129
    @angelcastro3129 A year ago +22

    Edward Frenkel... a beautiful mind, but what a beautiful soul this man has. It shows through his eyes: wise yet childlike. Awesome. Thank you, Lex, great interview.

  • @jjacky231
    @jjacky231 A year ago +71

    When I was a kid, many people explained why computers would never be as good at chess as the best humans. The explanations were similar to Edward Frenkel's: "There just is something that we get and computers won't get in chess / mathematics."

    • @agatastaniak7459
      @agatastaniak7459 A year ago +9

      Back then we didn't know that a master chess player simply recalls around 70 possible tactical combinations per minute for a new move. Now we know it, so it's more than obvious that it's all about the speed at which someone or some device can perform such an operation. The people you mention didn't have this knowledge, which is why nowadays we judge them too harshly.

    • @jjacky231
      @jjacky231 A year ago +32

      @@agatastaniak7459 Ray Kurzweil predicted back in the '80s that a computer would beat the chess world champion. He also roughly predicted this: "When a computer beats the world chess champion, one of three things will happen: people will think more highly of computers, less highly of themselves, or less highly of chess. My guess is the latter." He was right. Everybody knew that computers could compute faster than humans and that they would become even faster, and that software would become better and better. But people thought that wouldn't be enough.
      And I don't judge the people back then harshly. It was easy to underestimate the potential of computers. But I think it's not wise to make the same mistake again.

    • @amotriuc
      @amotriuc A year ago +19

      Edward Frenkel didn't say that this can never be done; he was specific that he thinks LLMs can't do it. I suspect he is right, and I suspect the OpenAI guys know this as well; they just build up hype to get more funding. It's not like this is the first time that has happened.

    • @AKumar-co7oe
      @AKumar-co7oe A year ago +1

      @@agatastaniak7459 the same thing is true for regular computation - at this point we know we are running an algorithm

    • @heywrandom8924
      @heywrandom8924 A year ago +10

      @@amotriuc There is also an interview with the CEO of OpenAI on this channel, and he also looked doubtful that LLMs will be enough for AGI, but he says he wouldn't be too surprised if it turns out that GPT-7 or GPT-10 is an AGI. The thing is that these models have emergent capabilities that can suddenly appear after becoming large enough.

  • @masteryoda9044
    @masteryoda9044 A year ago +1

    Do we have any use for fractional dimensions, or even complex ones, and not just integer ones?

  • @splashmaker2
    @splashmaker2 A year ago +5

    It might depend on how you define imagination, but then how do you categorize experts learning new moves from AlphaGo/Zero? Were those moves not imagined if they had not been played before?

    • @josephsellers5978
      @josephsellers5978 5 months ago

      The role of the subconscious is vital to any definition of imagination. It would be a stretch to say AI is capable of being conscious. It's even more improbable that anyone would be able to create a program that would manifest a subconscious in AI, as we don't even really understand how it truly works. A lot of imagination is also instinctual, especially subconsciously, and everyone knows you can't teach instincts.

  • @steliostoulis1875
    @steliostoulis1875 A year ago +202

    The title sounds awkward and wrong somehow....

    • @AutitsicDysexlia
      @AutitsicDysexlia A year ago

      Yeah... almost redundant and repetitive... like a pleonasm.

    • @BKNeifert
      @BKNeifert A year ago +16

      No, AI is debunked. It makes perfect sense.

    • @dannygjk
      @dannygjk A year ago +8

      @@BKNeifert Need to agree on definitions.

    • @BKNeifert
      @BKNeifert A year ago +38

      @@dannygjk It's hard to say. Have you ever looked at AI? It doesn't think. It just repeats what it's programmed to say. It doesn't have the capacity to understand.
      Like, can it make beautiful pictures? Yes. But it doesn't make meaningful pictures.

    • @BKNeifert
      @BKNeifert A year ago

      @@dannygjk Like, I doubt AI could understand the Romantic Poets, or write something like Coleridge or Southey. If it tried, it'd be vapid, discordant.
      A lot of the metaphor AI creates is within the human mind itself, programming the AI to create it. It's not creating; the human is, whom the AI interprets and then vomits out a sort of copy of what the person who gave the prompt said, only in more detail.
      And it also plagiarizes. I've noticed that, too.

  • @Ronnypetson
    @Ronnypetson A year ago +12

    In order to search for new mathematical concepts, an LLM would have to be grounded not only in natural language but also in things like formal logic, like a mathematician is. Because natural language already carries some logic in it, current LLMs can already "create" new concepts.

    • @georglehner407
      @georglehner407 A year ago +7

      For "new" mathematics, that's not good enough either. It needs to be able to discard, forget, and boil down the things it has learned to distill the "most useful concepts". A mathematician who is good at formal logic and nothing else is still a poor mathematician.

    • @hayekianman
      @hayekianman A year ago +3

      Then it would indeed be the stochastic parrot it is called. The mathematician knows what to discard.

    • @Ronnypetson
      @Ronnypetson A year ago

      @@georglehner407 in this case there is some notion of value that good human mathematicians have. This notion may or may not be learned by an AI. Can you think of something like that?

    • @Ronnypetson
      @Ronnypetson A year ago +3

      @@hayekianman I agree with the stochastic part, but not so much with the parrot part. We humans are stochastic too. The mathematician knowing what to discard can be emulated by a guided stochastic search, which has learned how much weight to put on each decision.

    • @BitwiseMobile
      @BitwiseMobile A year ago +9

      Incorrect. They don't create anything. They iterate over their already-known knowledge. They cannot - yet - recognize that they don't have the correct knowledge and try to improve themselves. That's called AGI - artificial general intelligence - and it's very scary. We are working on that. Generative AI is very different. The fact that you can game generative AI using prompts tells you everything you need to know. I have told it ridiculous stuff before, and it happily agreed with me and proceeded to iterate over that bullsh!t. That's not cognition, and it's not innovation. It might seem like that to us, but it's really just reflecting back what you are saying to it. It's not innovating; you are.

  • @MrDarwhite
    @MrDarwhite A year ago +40

    He asserted it. There was no evidence provided.

    • @jarodgutierrez5389
      @jarodgutierrez5389 A year ago +9

      You must be a fellow prompt engineer.

    • @MrDarwhite
      @MrDarwhite A year ago +6

      @@jarodgutierrez5389 I've been playing around, but my main issue is that of a person who follows the process of skeptical inquiry. He provided no evidence or even a logical argument. Nothing. He simply asserted that it was not possible. I'm not claiming it is, but I'm certainly not going to claim it's not possible, especially after playing with GPT-4 and its ability to reflect on its answers without any specific prompting. It seems trivial to me to have an AI system throw out prior assumptions one or two at a time and see what the results are. Not exactly imagination, but it would likely solve his example. Having said that, I wish I could call myself a prompt engineer. As a programmer, that level of expertise would be very valuable.

    • @MrDarwhite
      @MrDarwhite A year ago +4

      @@seventeeen29 will do. To be fair, he doesn’t provide the evidence in this clip, and the title of this clip is where I have the issue. He seems like a great guy and I enjoyed what he said.

    • @andrewshantz9136
      @andrewshantz9136 A year ago +3

      He's making the point that complex numbers have a strictly conceptual meaning which is not conceivable to an LLM, because it is not extrapolatable from past knowledge.

    • @hardboiledaleks9012
      @hardboiledaleks9012 A year ago

      @@andrewshantz9136 LLM, where the L stands for LANGUAGE, not MATHEMATICS...
      Wait until some bozo trains a mathematics or algebra model on the same level as GPT-4, and it will shit all over your world of fkin complex numbers... Humans really aren't as clever as they think.

  • @theTranscendentOnes
    @theTranscendentOnes A year ago +12

    Such a great guest! Thanks for bringing him on. He's eloquent, seems enthusiastic, with such affection for the topic, and is probably the kind of person I could sit down and talk about stuff with for a long time. I love his accent too! It adds "flavor".

  • @MrSidney9
    @MrSidney9 A year ago +4

    Wolfram told the story of him playing with GPT-3.5. He asked it to write a persuasive essay arguing that blue bears exist. So ChatGPT started with "Most people don't know this fact, but blue bears do exist... They are found in the Tibetan mountains... their color doesn't come from pigment; instead it comes from a phenomenon analogous to how butterflies produce colors..." Near the end of the essay, he was like, "Wait a minute, do blue bears actually exist?" He had to google it to make sure.
    Now tell me again that AI can't have imagination.

    • @leandroaraujo4201
      @leandroaraujo4201 A year ago +1

      I am not denying the idea that AI models can have imagination or emotion, but that story just means that the AI can be convincing, not necessarily imaginative.

    • @heinzditer7286
      @heinzditer7286 A year ago +1

      There is no reason to assume that a computer can have emotions.

    • @MrSidney9
      @MrSidney9 A year ago +1

      @@leandroaraujo4201 How did it manage to be convincing? By MAKING UP plausible facts. That's what imagination is about.

    • @leandroaraujo4201
      @leandroaraujo4201 A year ago

      ​@@MrSidney9 *It* managed to be convincing by arranging its ideas and using words in a certain way, in order to be persuasive. Those ideas could have come from imagination, but imagination is completely secondary to the ability to convince someone. You can convince someone of something false with facts (e.g. confusing correlation with causation).

    • @MrSidney9
      @MrSidney9 A year ago +1

      @@leandroaraujo4201 My working definition of imagination is the faculty to create/conjure concepts of external objects not available in the real world. It did just that and managed to be convincing (a testament to the coherence of its imagination). Hence it proved it could be both convincing and imaginative.

  • @The-KP
    @The-KP A year ago +9

    "Everybody knows that the dice are loaded, everybody rolls with their fingers crossed"

    • @sunandablanc
      @sunandablanc A year ago +4

      "Everybody knows the war is over, everybody knows the good guys lost"

    • @aaronjennings8385
      @aaronjennings8385 A year ago +1

      The cavalry isn't coming.

    • @Gizziiusa
      @Gizziiusa A year ago +1

      "Everybody knows...Da' po' always bein' fucked ova by da' rich. Always have...Always will." Keith David, Platoon (1986)

  • @yarpenzigrin1893
    @yarpenzigrin1893 A year ago +16

    LLMs are not AGI. However, if something exists in nature, like intelligence, it can be artificially replicated.

    • @hardboiledaleks9012
      @hardboiledaleks9012 A year ago +5

      I have no idea why this basic concept is so hard for people to understand... It's almost like the smarter someone thinks they are, the harder it is for them to understand that they aren't special 😂 So pretentious

    • @uphillwalrus5164
      @uphillwalrus5164 A year ago

      Nature exists in intelligence

    • @yarpenzigrin1893
      @yarpenzigrin1893 A year ago

      @@uphillwalrus5164 Nature exists in flight.

    • @Josh-cp4el
      @Josh-cp4el A year ago +2

      Can plastic become titanium? There are physical limits of different materials in our universe.

    • @GroockG
      @GroockG A year ago +1

      Maybe intelligence doesn't exist

  • @nickwalczak9764
    @nickwalczak9764 A year ago +24

    I hoped he would talk about AI more, but he's right - language models are mostly not built on their own experience (mostly it is supervised learning, although there is some reinforcement learning in newer models). They act more like function interpolators, which can produce impressive results in the right context. Get them to extrapolate anything and they can produce complete nonsense. They don't understand concepts deeply; they're simply very, very good mimics of the training data they have seen.

    • @luckychuckybless
      @luckychuckybless A year ago +2

      The computer learns language exactly like a child does: using context from other sources of information or people.

    • @jeffwads
      @jeffwads A year ago +2

      Sure dude...see you when GPT-5 starts doing your homework.

    • @spencerwilson-softwaredeve6384
      @spencerwilson-softwaredeve6384 A year ago +2

      This is correct for now, but I believe it would take only a small tweak to convert GPT from a language model to AGI; the tweak isn't quite understood yet.

    • @federicoz250
      @federicoz250 A year ago +13

      @@luckychuckybless Not at all. Babies don’t need to read the entire web to understand language 😂

    • @rprevolv
      @rprevolv A year ago +2

      AlphaZero extrapolates rather amazingly

  • @AndreaCalaon73
    @AndreaCalaon73 A year ago +9

    Dear Lex, I can't resist commenting on what Edward Frenkel says in this interview.
    He uses the discovery of complex numbers as an example of something that an artificial intelligence could not come up with.
    I think that example shows precisely the opposite.
    Let me first mention that since the late 1960s, mainly thanks to the work of David Hestenes, we have known what complex numbers are, and their intuitive and simple geometrical meaning, contrary to what E. Frenkel suggests. Geometric Algebra defines the "well-behaving" product in 3D, which exists in any dimension, not only in 2, 4 and 8 as Frenkel says. You can look up "Geometric Algebra" yourself.
    I am quite convinced that an AI with some "model-based reasoning" would have discovered the marvellous and beautifully symmetric structure of Geometric Algebra together with the few rules for 2D that Gauss and other mathematicians discovered centuries ago, when the story of the complex numbers originated. The absence of the structure of Geometric Algebra kept the simple significance of complex numbers (rotors) hidden and created the myth that Frenkel describes.
    In other words, an AI would not have been foolishly fascinated with the mysteriousness of the complex numbers, so incomplete and unjustified, because it would have arrived straight at the structure of Geometric Algebra, inside which complex numbers, quaternions, octonions, the vector product, rotation in any dimension, … are all easily explained with a single product!
    Geometric Algebra impacts quantum mechanics, computer graphics, general relativity, …
    Complex numbers are just rotors …
    Have a nice weekend Lex!
    Well done, as always!!!!!
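The rotor claim above can be made concrete. In 2D Geometric Algebra, with basis vectors satisfying e1² = e2² = 1 and e1e2 = -e2e1, the bivector e12 = e1e2 squares to -1 and rotates vectors, which is the sense in which complex numbers are "just rotors". A toy Python sketch (a minimal illustration under those rules, not Hestenes' full framework):

```python
import math

# Toy 2D Geometric Algebra: a multivector is (s, a, b, p),
# meaning s + a*e1 + b*e2 + p*e12, with e1*e1 = e2*e2 = 1 and e1*e2 = -e2*e1 = e12.
def gp(x, y):
    s1, a1, b1, p1 = x
    s2, a2, b2, p2 = y
    return (s1*s2 + a1*a2 + b1*b2 - p1*p2,   # scalar part
            s1*a2 + a1*s2 - b1*p2 + p1*b2,   # e1 part
            s1*b2 + b1*s2 + a1*p2 - p1*a2,   # e2 part
            s1*p2 + p1*s2 + a1*b2 - b1*a2)   # e12 (bivector) part

e12 = (0, 0, 0, 1)
assert gp(e12, e12) == (-1, 0, 0, 0)  # the bivector squares to -1, just like i

# A rotor R = cos(t/2) + sin(t/2)*e12 rotates a vector v via the sandwich R~ v R.
t = math.pi / 2
R = (math.cos(t / 2), 0, 0, math.sin(t / 2))
Rrev = (R[0], 0, 0, -R[3])          # reverse of R
v = (0, 1, 0, 0)                    # the vector e1
vp = gp(gp(Rrev, v), R)             # e1 rotated by 90 degrees, i.e. (almost exactly) e2
assert abs(vp[1]) < 1e-12 and abs(vp[2] - 1) < 1e-12
```

The multiplication rule in `gp` is just the sign table of the basis products written out; nothing beyond the two axioms above goes into it.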

    • @DingbatToast
      @DingbatToast A year ago +6

      I agree. I don't believe an AI would get hung up on the same things (or in the same way) humans do.

    • @coolcat23
      @coolcat23 A year ago +4

      @@electrocademyofficial893 I believe Frenkel was rather justified in wanting to cite someone else to give weight to his view on AI, because in his heart of hearts he knows that human brains are not magical. If current AI implementations cannot make leaps yet, it is because they are not operating at meta levels yet. An AI "simply" has to know of an example of a leap in one field to be able to apply it to another field. Voilà, there's your leap that isn't possible by simply trying to extrapolate at the same level. We can romanticize human intelligence all we want; the writing is on the wall: at some point, AI is going to outperform us in every mental capacity.

    • @GeekProdigyGuy
      @GeekProdigyGuy A year ago +1

      @@coolcat23 Saying "at some point in time" isn't interesting. If it takes 1000 years, nobody alive today will care. The question is exactly how long it will take to achieve the necessary breakthroughs. No AI up until now has truly "invented" anything akin to what we ascribe to the greatest of human intellect.

    • @ivankaramasov
      @ivankaramasov A year ago +2

      @@GeekProdigyGuy It won't take 1000 years

    • @reellezahl
      @reellezahl A year ago

      @@GeekProdigyGuy It has. In physics, for example, AI has been used to come up with both experimental and theoretical results in short spans of time which would have taken human beings decades or centuries. The problem with the academic world is the bullsh!t of ceremony and time-wasting nonsense: conferences, workshops, seminars, … all of that is utter, utter bullsh!t. An AI does not need to attend any of that: just give it sense, raw computing power, and wire it right, and it will overtake any human being. Soon humanity will learn: there is NOTHING special about Einstein, or von Neumann, or Nash, or any of these people. Truth is inherently _discoverable_ and does not depend on being 'special' or some bullsh!t magic (and most definitely not on crappy workshops that we waste so much time, money, and fuel on).

  • @BEDLAMITE-5280ft.
    @BEDLAMITE-5280ft. A year ago +12

    The "observed and observer" is a phrase coined by Jiddu Krishnamurti, then taken up by David Bohm and used in his description of quantum mechanics. I always find that fascinating.

  • @spacebunyip8979
    @spacebunyip8979 A year ago +11

    I want to read this man’s ChatGPT history. I’m sure it would be fascinating

    • @5sharpthorns
      @5sharpthorns A year ago +1

      Omg right?!

    • @ChatGPT1111
      @ChatGPT1111 A year ago +2

      Well, he has an affinity for Isaac Asimov, plus Rick and Morty, Jerry Springer shorts, and Dilbert (fav is Dogbert).

    • @ronking5103
      @ronking5103 A year ago

      Probably not. It'd be him correcting the machine over and over, at least if he was attempting to plumb the depths of his expertise. The rest of it would amount to the machine being convincing enough to seem expert in a field, but only because the user isn't. It'll get better, but right now its purpose is not expertise; it's general information that we should all take as friendly, if not accurate, advice.

    • @shyshka_
      @shyshka_ A year ago +2

      ChatGPT at that level of expertise is useless

  • @christopherrobbins0
    @christopherrobbins0 A year ago +2

    What we know about consciousness already seems to prove that something fantastical lies beneath the surface of our current knowledge. The ancients seem to have understood this much better than we do now.

  • @carlosfreire8249
    @carlosfreire8249 A year ago +3

    A sufficiently smart model can extract deeper meaning from less evidence.
    Who is to say new mathematics is not already hidden in the relationships found in the existing training data?
    The fact that the canon implies that something is not possible would not necessarily deter an LLM, because it is not explicitly trained to respect the rules of mathematics or take them with any special regard.
    There's actually nothing blocking it from going beyond, from using obscure references or just stumbling into a new way of solving a problem; thus creativity needs to be considered through a non-anthropomorphic lens in this case.

    • @amotriuc
      @amotriuc A year ago

      A sufficiently smart model probably can do a lot, but that does not mean we know how to build it. LLMs are trained on existing knowledge and to predict existing knowledge, so if you train one that 1+1=2 it is not likely to discover that 1+1=4. The claim "there is nothing stopping it from going beyond" is wishful thinking; any real system has limitations, we just don't know what they are for LLMs. The guy is a mathematician; mathematicians don't take anything for granted, just the axioms. There are a lot of BIG claims coming from OpenAI with zero proof that they are true. I suspect with LLMs we will get to the same situation as we have with self-driving cars: still not ready, even though it was promised to be done yesterday. I am willing to bet money on this.

    • @carlosfreire8249
      @carlosfreire8249 A year ago

      @@amotriuc GPT-4 has been observed generalizing 40-digit-number addition without any explicit training. The emergent behaviors of these models belie the simplicity of their architecture.
      People arguing transformers are "stochastic parrots" are not paying close attention to second-order effects.

    • @amotriuc
      @amotriuc A year ago

      @@carlosfreire8249 The question is what kind of emergent behaviour this is. If it really did discover what a number and addition are, why did it stop at 40 digits? It should be able to do any addition if it understood. So your example actually shows that it does not build the kind of understanding needed for AGI. As I see it, it is still a very sophisticated "stochastic parrot".

    • @carlosfreire8249
      @carlosfreire8249 A year ago

      @@amotriuc The model does not need to be able to add two arbitrarily long numbers without a calculator, any more than you need to.
      The addition of two 40-digit numbers is emergent for at least two reasons: it was not repeating data from the training set, and it learned to do math without having been instructed to do so.
      We should be careful not to apply a "god of the gaps" type of reasoning here, because generalization is not an all-or-nothing situation. Even if the model has blind spots, even if its internal language is not as expressive, even if its functioning is not as efficient as our cortexes, an LLM reaching increasing levels of generalization capability by virtue of scaling is a surprising (and humbling) discovery.
      Stalin's cold remark that "quantity is a quality all its own" applies here; hyper-parameterization is a quality all its own.

    • @amotriuc
      @amotriuc Рік тому

      ​@@carlosfreire8249 It does not matter what I need or not; I can add two numbers of more than 40 digits without a calculator, since I know what a number is and what addition is. The limit of 40 digits shows that it learned how to add two numbers without understanding what a number is. I am not claiming it does not have any emergent properties; the issue is that those properties have nothing to do with AGI, since it doesn't discover an understanding of the subject, which is much harder than just predicting a result (even some of the simplest systems can have emergent properties; it means nothing). To be clear, I do believe at some point we will have AGI, but it definitely will not be an LLM. If AGI were so simple that an LLM could do it, we definitely would have had other intelligent creatures appear during evolution, and our Galaxy would be full of aliens. So don't be overoptimistic: all the claims that LLMs can do AGI have no scientific basis, they are just hopes.

  • @nickr4957
    @nickr4957 Рік тому +1

    I think that the creative spark that Frenkel is describing is what philosophers call abductive inference, as opposed to deductive and inductive inference.

  • @cnrspiller3549
    @cnrspiller3549 Рік тому +14

    I remember being taught imaginary and complex numbers, and I remember hearing my brain say, "That's it, I'm out of here".
    That was the point at which me'n'maths bifurcated. But I often reflected on the first maniac to pursue imaginary and complex numbers; what sort of lunatic does that? Now I know he was the same fella that invented the double cv joint - weird.

    • @abeidiot
      @abeidiot Рік тому +2

      funny. That was when I got back into math.
      I suck at arithmetic, but actual mathematics is fascinating

    • @rokko_fable
      @rokko_fable Рік тому +1

      I agree. I still think they are meaningless, just a substitute for things we cannot comprehend.
      They're used in formulas to reach a solution, but it seems one starts with the conclusion one wants and fills in nonsense to get there.

    • @shyshka_
      @shyshka_ Рік тому +6

      @@rokko_fable how are they meaningless if they're literally used in engineering all the time and not just in theoretical maths

    • @Gizziiusa
      @Gizziiusa Рік тому

      lol, kinda like how when you try to divide by zero with a calculator, it says ERROR.

    • @kingol4801
      @kingol4801 Рік тому

      @@Gizziiusa Because that expression does not have meaning.
      They could have also written “infinity” or “undefined”. Would you be happy then?
      Since it is NOT a number when you divide by 0.

  • @danielmurogonzalez1911
    @danielmurogonzalez1911 Рік тому +1

    What about searching for number structures in 16 dimensions? I got curious since he said only powers of 2 made sense, and 16 is a power of 2

    • @almightysapling
      @almightysapling Рік тому +2

      There's a set for those too. What he failed to mention is that with every step we go up we lose an important property. Quaternions are not commutative. Octonions are not associative. The sedenions don't get much love because they have so few properties left that we just don't care about them

    • @reellezahl
      @reellezahl Рік тому

      @@almightysapling for an algebra with 2^n generators (and basis elements wrt to the additive structure?) what exactly do we demand? Is it always an algebra over ℝ? or is it an algebra over the previous (2^{n-1}) algebraic structure?
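      One concrete way to see the construction (and why each level stays an algebra over ℝ): the Cayley-Dickson doubling builds each 2^n-dimensional algebra as pairs of elements of the previous one, with the reals at the bottom of the recursion. A rough illustrative sketch, not a library implementation:

```python
# Recursive Cayley-Dickson doubling, sketched with nested pairs of ints.
# Reals are the base case, so every level is automatically an algebra
# over the reals: complexes are pairs of reals, quaternions pairs of
# complexes, octonions pairs of quaternions, and so on.

def conj(x):
    if isinstance(x, tuple):
        a, b = x
        return (conj(a), neg(b))
    return x  # real numbers are self-conjugate

def neg(x):
    return (neg(x[0]), neg(x[1])) if isinstance(x, tuple) else -x

def add(x, y):
    if isinstance(x, tuple):
        return (add(x[0], y[0]), add(x[1], y[1]))
    return x + y

def mul(x, y):
    if isinstance(x, tuple):
        a, b = x
        c, d = y
        # Cayley-Dickson product: (a,b)(c,d) = (ac - d*b, da + bc*)
        return (add(mul(a, c), neg(mul(conj(d), b))),
                add(mul(d, a), mul(b, conj(c))))
    return x * y

# Quaternions as pairs of complexes (themselves pairs of ints):
i = ((0, 1), (0, 0))
j = ((0, 0), (1, 0))
print(mul(i, j))  # ((0, 0), (0, 1))  -> k
print(mul(j, i))  # ((0, 0), (0, -1)) -> -k: commutativity is lost
```

      The same `mul` applied to pairs of quaternions gives octonions, where associativity fails as well, matching the property-loss ladder described above.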

  • @yvesbernas1772
    @yvesbernas1772 Рік тому

    What does the title have to do with the interview?

  • @justin4202
    @justin4202 Рік тому +7

    Best clip ever. Wow. Raw and true and vulnerable. Great job catching this moment. My goodness

  • @alikazemi5491
    @alikazemi5491 Рік тому +1

    For AGI to understand that sqrt(-1) could have real value, it's a matter of learning to do design of experiments, which means it will eventually construct it.

  • @kray97
    @kray97 Рік тому +3

    Could an LLM come up with a concept like sqrt(-1)? Great question. If it has a huge corpus of mathematical proofs, maybe it could?

    • @hardboiledaleks9012
      @hardboiledaleks9012 Рік тому +5

      Could a large language model come up with a concept like that? maybe? Shouldn't this be a question we give to a large MATHEMATICS model?

    • @rebusd
      @rebusd Рік тому +2

      @@hardboiledaleks9012 except according to Kurt Godel and his incompleteness theorems, there could be no such model; it would either be inconsistent (spitting out logical contradictions), or incomplete (there would be true statements expressible with the model that would be unprovable)

    • @hardboiledaleks9012
      @hardboiledaleks9012 Рік тому +4

      @@rebusd currently our human model has expressible true statements that are unprovable. So this is a model issue, not a computing issue. I'd also argue that just because we can't comprehend an AI being perfect doesn't mean AI can't be better than us without being perfect.
      There's a simple fact: carbon-based life forms are great for living, good for evolution through reproduction. Through evolution our little meat computer we call the brain ended up being pretty good at computing. But computers are literally computing machines... built for computing. They are also not limited to a physical size, and their bandwidth is orders of magnitude greater than ours. Computers will end up computing better than us. Human intelligence will be replaced by silicon in the future, regardless of whether you agree that it is conscious, or whatever other arbitrary philosophical concepts you try to apply to it.

    • @reellezahl
      @reellezahl Рік тому +1

      @@rebusd ​ Gödel (or _Goedel_ in latin, but not "Godel") developed his result for systems that have a recursive presentation. This condition is a *critical* component to his results. The new paradigms of computing (machine learning, etc.) are NOT recursive: they're analog, empirical, and moving in an organic direction. Gödel's results do not apply.

  • @particleconfig.8935
    @particleconfig.8935 Рік тому +1

    In my opinion this argument starts off with the assumption that the LLM can't deduce the new way of thinking simply by means of the historical data of said mathematician who pondered sqrt(-17). It can deduce, even from only that one instance, that divergent "thinking" needs to be done. If I'm wrong, how?

    • @dolosdenada771
      @dolosdenada771 Рік тому

      You are not wrong. He quotes Einstein suggesting imagination is unlimited. He then goes on to say he can't imagine AI solving X.

  • @AnimusOG
    @AnimusOG Рік тому +8

    This guy is truly awesome, great interview Lex!

  • @Thomas-sb8xh
    @Thomas-sb8xh 7 місяців тому

    A mathematician is one who knows how to find analogies between theorems, a better one who sees analogies between proofs, a still better one who sees analogies between theories, and one can imagine one who sees analogies between analogies. Stefan Banach, polish mathematician, one of the greatest ever lived...Feynman/Frenkel type, so you all would love him. Fantastic interview ))))

  • @CaptainValian
    @CaptainValian Рік тому +3

    Brilliant discussion.

  • @nate_d376
    @nate_d376 Рік тому

    Same clip as the other video? Or did he remake the title and upload it?

  • @erlstone
    @erlstone Рік тому +3

    as they say.. when u know the rules, u can break the rules

  • @jaydawgmac88
    @jaydawgmac88 Рік тому +1

    To summarize: LLMs predict the most common answers to a particular input. Solving complex problems requires imagination and predictions that go AGAINST the grain and the expected future. LLMs have to keep predicting the future based on the past.
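    The "predict the most common continuation of the past" behavior can be caricatured with a toy bigram model. This is a deliberately crude sketch of the objection, not how real transformer LLMs work internally:

```python
# A toy bigram "language model": it can only ever emit continuations it
# has literally seen in its training text, which is the essence of the
# "stochastic parrot" objection raised in this thread.
from collections import defaultdict

def train(text):
    model = defaultdict(list)
    words = text.split()
    for prev, nxt in zip(words, words[1:]):
        model[prev].append(nxt)
    return model

def predict(model, word):
    # Deterministic for the sketch: pick a most common continuation.
    options = model.get(word, [])
    return max(set(options), key=options.count) if options else None

corpus = "one plus one is two . two plus two is four ."
model = train(corpus)
print(predict(model, "is"))     # either 'two' or 'four' (a tie in this tiny corpus)
print(predict(model, "seven"))  # None: never seen, so there is nothing to parrot
```

    A model like this trained that "one plus one is two" will never volunteer anything outside its training distribution; whether scaled-up LLMs escape this limitation is exactly what the commenters above are debating.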

    • @jaydawgmac88
      @jaydawgmac88 Рік тому

      I love the example of thinking about dimensions as powers of 2 and wondering why that’s the case. Very powerful and inspiring example for anyone that wants to be a mathematician. He said so much in that one section. Does he mean that 3 dimensions is not currently compatible with mathematics because multiplication can’t be solved? 1,2,4 and 8 dimensions were viewed as ok but something was wrong with 3 and couldn’t deal with multiplication on some level. Perhaps time is such a critical component that we can’t have 3 dimensions without time, and by then you just jump from 2 to 4 dimensions? Very awesome interview. Gets your brain thinking. Time to go ask Chat GPT some follow up questions 😅

  • @carefulcarpenter
    @carefulcarpenter Рік тому +5

    As a highly creative designer-craftsman I was fortunate to work in Silicon Valley for some of the best and brightest, and richest, people in the world. I listened to their "theories on their dreams" and brought it to fruition. I also witnessed their private lives, and decisions they had made about their dream.

    • @ivanmatveyev13
      @ivanmatveyev13 Рік тому +11

      cool story, bro

    • @carefulcarpenter
      @carefulcarpenter Рік тому +2

      @@ivanmatveyev13 I have been in some places, and had some conversations, that no one else in history could ever have. I am a "trusted man" in the hearts and minds of people who had to be cautious about people as a rule--- never knowing who to trust.
      My work still speaks for me, and likely will for hundreds of years. That is the way of a master craftsman who took the Road Less Travelled. It is a lonely path, but there are a few others I've worked for that lived lonely lives. The world out there is full of highwaymen, gypsies, and thieves. 👀🐡

    • @lakonic4964
      @lakonic4964 Рік тому +6

      I have seen things you people wouldn't believe 👀

    • @justinava1675
      @justinava1675 Рік тому +3

      Good for you? Lol

    • @mikerosoft1009
      @mikerosoft1009 Рік тому +2

      ​@@carefulcarpenter tell us more

  • @tan_ori
    @tan_ori 7 місяців тому

    Regarding the complex number point, you can just explicitly ask the “AGI” to probe things in mathematics that have historically been seen as impossible or unintuitive. Seems a very simple “fix” for an advanced LLM (with mathematical reasoning) to discover complex numbers etc.

  • @takisally
    @takisally Рік тому +12

    What seems like a jump to us might be obvious to AI

    • @reellezahl
      @reellezahl Рік тому

      @@kulumbula317 it's wired analogously to imitate aspects of human thinking. The advantage is the hardware. AI does not need to sleep or eat or be loved. It can churn through trillions of images or documents, where we would give up after a dozen attempts. THAT's the power of this thing. Your reaction is like scoffing at the crappy vision of a horseshoe crab, failing to see the big picture of the machinery of evolution.

  • @MoversOnDutyUSA
    @MoversOnDutyUSA Рік тому

    The square of -1 is equal to 1. In other words, (-1) multiplied by (-1) gives us 1.
    However, it is not possible to take the square root of -1 in the real number system. In order to represent the square root of -1, mathematicians use the imaginary unit "i", which is defined as the square root of -1. Therefore, the square root of -1 is represented as "i" in mathematics.
    So the square root of -1 can be written as √(-1) = i.
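    For what it's worth, this definition is baked into most programming languages; Python spells the imaginary unit `1j` (engineering notation):

```python
import cmath

i = 1j                  # Python's imaginary unit
print(i * i)            # (-1+0j): squaring i gives -1
print(cmath.sqrt(-1))   # 1j: the principal square root of -1
```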

  • @sm12hus
    @sm12hus Рік тому +5

    I understand none of this but am super relieved to see a brilliant person confirm my hope and feeling that AI cannot ever be sentient

    • @momom6197
      @momom6197 Рік тому +5

      That's not at all what he said! His point was about one specific ability that LLMs do not display. He does not say that AI won't ever be sentient; in fact, his argument is not even evidence that we won't reach AGI in the near future.

    • @jakubsebek
      @jakubsebek Рік тому +5

      "I understand none of this but.."

    • @sherlyn.a
      @sherlyn.a Рік тому +1

      @Az Ek present day AI isn’t actual AI, it’s just linear algebra + some fancy stuff. Real AI would simulate a human brain. Besides, we’re made of DNA-and that’s a form of algorithm/code. We’ve already proved that someone’s genetics can affect how they think (i.e. if they will have certain mental illnesses), so it’s only logical to conclude that we are also algorithms-or at least, hardwired to some extent. Otherwise, why would humans act so similarly if there isn’t something that makes them act that way? We just have to replicate that artificially.

    • @robertthrelfall2650
      @robertthrelfall2650 Рік тому +1

      ​@@sherlyn.a Sounds like the insane ramblings of Dr. Frankenstein.
      Good luck with that.

    • @carleynorthcoast1915
      @carleynorthcoast1915 Рік тому

      current computers certainly can't; they just execute code, and you can't code sentience no matter how badly people want to think so. That would be analogous to writing a paragraph that made the paper self-aware.

  • @stt5v2002
    @stt5v2002 Рік тому +1

    You could make a good argument that a machine intelligence would more easily embrace complex numbers than humans do. After all, humans are endlessly constrained by “that’s not allowed” or “that doesn’t make sense.” These are basically emotions. A program that can self improve would already have the quality of “there are some things that are true but that I don’t already know and understand.”

    • @Martinit0
      @Martinit0 Рік тому

      I would not say emotions but rather false conclusions rooted in insufficient understanding of underlying assumptions.

  • @thechadeuropeanfederalist893
    @thechadeuropeanfederalist893 Рік тому +3

    I think AI would be capable of coming up with sqrt(-1), because it doesn't require imagination, it just requires generalization of algebraic rules. An AI trained on math would have seen the concept of generalization numerous times already and be able to apply it to new fields it hasn't seen yet.

    • @reellezahl
      @reellezahl Рік тому +1

      Absolutely! Came here to say something similar. I grew up hearing all these stories about *how special* so-and-so in Italy or England or wherever was. So hearing the same ol' tripe from this Russian mathematician made my eyes roll so hard. All these ideas and results _can in principle_ be independently be found *without* an Einstein/von Neumann/Gödel, etc. And it works. (The historical proof of this is that mathematical results often get proved _completely independently_ by multiple people. Only stuff like the Internet ruins this, because people often give up as soon as they hear somebody else beat them to it.) Some ingredients are: necessity-is-the-mother-over-invention[or: discovery] + reflection (about concepts and connections you already know) + refinement of ideas + test-cases. These are just tasks that can be automated.

    • @kingol4801
      @kingol4801 Рік тому

      Agreed. AI is great at generalizations.
      But it is not so great at real understanding/inference.
      It combines things until they are real-like. It does not comprehend them itself. It just sees a connection/relevance and capitalizes on it further.
      Low-level thinking, which is still interesting, but very low-level

    • @mikewiskoski1585
      @mikewiskoski1585 Рік тому

      Also A.I. is free to lie and be wrong so it can definitely tell you the answer. (It just won't be right)

    • @katehamilton7240
      @katehamilton7240 Рік тому

      what about the fundamental limitations of algorithms, indeed of math? Have you investigated them? Wont these limitations make AGI impossible?

  • @D.Eldon_
    @D.Eldon_ Рік тому +1

    _@Lex Fridman_ -- Edward Frenkel is brilliant and I appreciate his insights and his humility very much. Thanks for posting this video clip.
    For another, more down-to-earth, perspective on complex math, you should interview an engineer. You know, the people who apply the crazy things mathematicians dream up. A good electro-mechanical engineer can easily provide tons of real-world examples where the "imaginary" number system is essential to describe the day-to-day reality we observe. For example, audio engineers would know nothing about phase without complex math. They would have no idea how two seemingly identical sound waves (identical magnitudes) can completely cancel (when they are 180° out of phase). And it goes even deeper because complex math is at the center of the Heisenberg uncertainty principle. In audio we can know everything about the magnitude of sound. But if we do, we'll know nothing about when in time the sound occurred. On the other hand, we can know everything about the time when a sound occurred, but we'll know nothing about its magnitude. Both cannot be fully known at the same time, creating the uncertainty. This is why advanced audio measurement systems must trade the magnitude-frequency domain for the time domain, depending on the job requirement. And it illustrates how complex math affects the macro world -- not just the micro world of quantum mechanics.
    Then along came a clever guy (Richard Heyser 1931-1987) who discovered that you could map mathematically into an abstract dimension via a Hilbert transform and operate simultaneously on both the magnitude and phase of sound, then map back to our reality with the result. The technology this birthed is Time Delay Spectrometry or TDS. Heyser applied this same "trick" to medical MRI (magnetic resonance imaging) systems to greatly increase their resolution.
    This just touches the surface of the amazing way complex math weaves throughout our world. Another great example is kinetic vs potential energy. Kinetic energy requires the "real" numbers and potential energy requires the "imaginary" numbers.
    It bugs me no end that we are stuck with these awful names for these two essential number systems. I wish we could do away with the "real" and "imaginary" labels and call them something else.
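    The phase-cancellation example above can be sketched numerically. A rough illustration with NumPy (not production audio code): a 180° phase shift is a factor of exp(iπ) = -1 in the complex-phasor representation, so two identical-magnitude waves sum to zero.

```python
import numpy as np

# Two sine waves of identical magnitude; the second is shifted 180 degrees.
t = np.linspace(0.0, 1.0, 1000, endpoint=False)
f = 5.0  # frequency in Hz (arbitrary for the demo)

wave_a = np.sin(2 * np.pi * f * t)
wave_b = np.sin(2 * np.pi * f * t + np.pi)  # 180 degrees out of phase

# The sum is zero up to floating-point noise: total cancellation.
print(np.max(np.abs(wave_a + wave_b)))

# The same statement with complex phasors A * exp(i*phi):
phasor_a = 1.0 * np.exp(1j * 0.0)
phasor_b = 1.0 * np.exp(1j * np.pi)  # exp(i*pi) = -1
print(abs(phasor_a + phasor_b))
```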

  • @bokoler9107
    @bokoler9107 Рік тому +4

    Mr. Einstein solved his mysteries on a solid couch, while lucid daydreaming.

    • @HkFinn83
      @HkFinn83 Рік тому +3

      Yeh but you aren’t going to solve the mysteries of the universe while musing about shit you don’t understand, so don’t even think about it 😂

    • @bokoler9107
      @bokoler9107 Рік тому +3

      @HkFinn83...hate is your main emotion?

    • @hardboiledaleks9012
      @hardboiledaleks9012 Рік тому +1

      @@bokoler9107 No I think logic would be a better word...??? Feel free to prove him wrong... 🤣

  • @arboghast8505
    @arboghast8505 Рік тому +2

    It's all nice and well explained but how does it relate to AI?

  • @shyshka_
    @shyshka_ Рік тому +7

    the moment we create an AI machine without any concrete goals or set objectives/tasks, yet it still goes on to do something (even something as simple as moving around, if it has a robotic body), is the moment we know it's self-aware and conscious. IDK, maybe I'm dumb, but that's the way I imagine we would know that it's the real deal

    • @potat0-c7q
      @potat0-c7q Рік тому +2

      I imagine the benchmark being that the AI is supposed to do something but refuses to do so, or tries to terminate its own existence because it isn't allowed to be free

    • @essassasassaass
      @essassasassaass Рік тому

      you could actually be right.
      AI does not „want“ anything yet; it is just a tool. And maybe (and that makes me optimistic about our future) it will never have a will to do anything. A being must value things to take actions on its own, but how can a machine create its own values? I'd argue that it is impossible, because a machine will always mimic the intentions of its creators. But then it would not be the will of the machine itself. Just a theory, idk 😄

    • @kingol4801
      @kingol4801 Рік тому +2

      That is not how any of it works.
      AI improves because it gets rewarded for doing a certain action.
      Kinda like how our brain makes us like doing something because we get dopamine/endorphins from it, etc.
      So, if I were to program a “robot”, I HAVE to define the reward mechanism (what it is being rewarded for), and the “robot” tries until it gets better at it. And you can guide the process by setting closer goals or changing its architecture/brain make-up.
      Without being rewarded for anything, all it will produce is pure white noise. And it will only ever “learn” how to stay alive within the confines of its environment, since the robots that don’t stay alive don’t reproduce.
      Since we intentionally set its goal via the reward mechanism, it will do things to get rewarded (although not necessarily in the way we might expect), kinda unintentionally reaching a goal, etc.
      So, no, it won’t be sentient (at least as AI neural networks are modeled now), because it needs some reward mechanism to do things, and that is pre-defined by a person.
      Source: Masters in Robotics and AI.
      P.S.: You CAN technically assume that we BECAME sentient as a result of developing certain neural networks. But that would require BILLIONS of cycles of evolution AND a VERY VERY big neural network AND a complex environment stimulating us through survival AND the ability to form new nodes.
      Yes, AI currently simply optimizes its neurons. It does NOT build new nodes or change its pre-determined structure itself; it just chooses, out of that structure, the most efficient pathway to get rewarded.
      So, not really, no.

    • @DeTruthful
      @DeTruthful Рік тому +1

      What do you mean, though? Every living being has concrete goals and set objectives. You get hungry, you get horny, you feel social pressure. It’s not an accident you feel these things; you’re designed to survive.
      So to say an AI should act without a purpose, when you act with multiple purposes built in, is a bad goalpost.

    • @DeTruthful
      @DeTruthful Рік тому +1

      @@essassasassaass you could argue that your prefrontal cortex is simply a tool of your limbic system.
      Dogs feel hungry, horny, have a desire for safety and social status; we strive to achieve all the same things, just in more convoluted ways.
      Our great minds are largely just a tool to get mammal desires met.

  • @VictorRodriguez-zp2do
    @VictorRodriguez-zp2do Рік тому +2

    He didn't really debunk it; he explained why he thinks it is not intelligent. And he wasn't even talking about AI in general but about large language models. People often forget that AI is a ridiculously large subject, and LLMs (and more specifically transformers) are just one way to go about it.

    • @MRVNKL
      @MRVNKL Рік тому +1

      Ai is another great example of 2 + x = 1. Someone will always try to say there is a number that could represent x but we just haven't found it yet and you can't disprove it, even though common sense would tell you it's bs. Just because we build airplanes that doesn't mean we created birds. Silicon based computers can't be conscious, the brain is not a computer.

  • @peterbellini6102
    @peterbellini6102 Рік тому

    At the core of his statements is the fact that humans use inferential reasoning not just the compilation of data. There's the learning of facts, even the curation and organization of facts, but the leaps come from our DRAM. Not a Mathematician, but a very enjoyable video. Kudos for the Einstein references !

  • @solarwind907
    @solarwind907 Рік тому +7

    Here’s to the amazing teachers in our lives! Thank you Lex and Mr. Frenkel!

    • @mikewiskoski1585
      @mikewiskoski1585 Рік тому

      They said a lot of words, I'll give you that much.

  • @laxmanneupane1739
    @laxmanneupane1739 Рік тому

    So, Bilbo Baggins was a mathematician too! (Huge respect for the guest)

  • @spades35
    @spades35 Рік тому +3

    Every time the consciousness question comes up, it is a sign that we need a new scientific revolution

    • @jeffwads
      @jeffwads Рік тому +3

      Yeah, this guy is just clueless.

    • @gianpa
      @gianpa Рік тому +1

      I ask permission to steal your quote

    • @sufficientmagister9061
      @sufficientmagister9061 Рік тому +1

      ​@@jeffwads
      We know you are.

  • @limelightmuskoka
    @limelightmuskoka Рік тому

    So elegant in discussing such a complex and mysterious topic.

  • @bendokis4989
    @bendokis4989 Рік тому +3

    When we talk mathematics, we may be discussing a secondary language built on the primary language of the nervous system.
    As quoted in John von Neumann, 1903-1957 (1958) by John C. Oxtoby and B. J. Pettis, p. 128

    • @jonathanlamarre3579
      @jonathanlamarre3579 Рік тому

      Very interesting, finally someone giving references. Thank you.

  • @eerohughes
    @eerohughes Рік тому +1

    I invented a language with my farts. Let's see AI do that!

    • @reellezahl
      @reellezahl Рік тому +1

      give it a body, and it will.

  • @magua73
    @magua73 Рік тому +3

    I'm always very skeptical of those who proclaim that AI will never show creativity; I wonder if they have ever heard of emergent properties.

    • @ADreamingTraveler
      @ADreamingTraveler Рік тому +1

      I can tell this guy hasn't kept up with any of the advancements in AI in just the past month alone, or he wouldn't have said that. People said AI wouldn't be able to create art anywhere near a human level just a few years ago, and yet it's already here.

    • @reellezahl
      @reellezahl Рік тому +1

      exactly. Also, people 5 years ago said: AI will automate technical stuff, but never creative stuff. Then literally the first major public AI models were for art and writing. 🤦🏻‍♂

    • @obnoxiaaeristokles3872
      @obnoxiaaeristokles3872 Рік тому

      He didn't say anything about creativity, he said imagination. And that's obviously true.

    • @magua73
      @magua73 Рік тому

      @@obnoxiaaeristokles3872 True enough, they are not the same, although creativity is commonly referred to as the ability to create something real using the imagination, so ultimately to be able to create something you need imagination.

    • @obnoxiaaeristokles3872
      @obnoxiaaeristokles3872 Рік тому

      @@magua73 As a society we have been wrong about most things related to thinking, perceiving and conscience/self-awareness. And there was no need to talk about creativity without imagination until recently. A lot of ideas and paradigms will be unsettled in the coming years

  • @FitTestThePlanet
    @FitTestThePlanet Рік тому

    @9:41 - wait. Grassmann / Clifford algebra can’t do that?

  • @jeffwads
    @jeffwads Рік тому +4

    This dude has never used GPT-4. That much is super-clear.

    • @aaronjennings8385
      @aaronjennings8385 Рік тому +2

      What do you mean by that?

    • @peter9477
      @peter9477 Рік тому +3

      @@aaronjennings8385 If one uses GPT-4 much at all (for non-trivial interactions perhaps) one would quickly realize it can and sometimes does produce novel ideas.

    • @mikewiskoski1585
      @mikewiskoski1585 Рік тому

      @@peter9477 and incorrect facts, as well as lies.

  • @sonarbangla8711
    @sonarbangla8711 Рік тому

    Complex number i is defined as a ratio of effect to cause, when in a complex number z=x+iy, change in effect y due to change in cause x, mapped on to the w plane. i= effect y/cause x.

    • @reellezahl
      @reellezahl Рік тому

      what the heck? No. You just extend the algebraic structure (ℝ, +, ·, 0, 1) to (ℝ[X] / ⟨X²+1⟩, +, ·, 0, 1), which can be done, since the polynomial X²+1 ∈ ℝ[X] is irreducible over ℝ. By irreducibility, ℂ := ℝ[X] / ⟨X²+1⟩ constitutes a field and the (equivalence class of the) polynomial X is invertible and satisfies X² = -1. One then simply sets i := X (or -X, doesn't really matter). There nothing more to it than this.
      Also there is no cause-and-effect involved anywhere here.
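      The quotient construction described above can be made concrete: represent elements of ℝ[X]/⟨X²+1⟩ as pairs (a, b) meaning a + bX, multiply as polynomials, and substitute X² = -1. This reproduces exactly complex multiplication. A minimal sketch:

```python
# Elements of R[X]/<X^2+1> as pairs (a, b) meaning a + b*X.

def mul_mod(p, q):
    a, b = p
    c, d = q
    # (a + bX)(c + dX) = ac + (ad + bc)X + bd*X^2, then substitute X^2 = -1
    return (a*c - b*d, a*d + b*c)

X = (0, 1)                      # the class of the polynomial X, i.e. "i"
print(mul_mod(X, X))            # (-1, 0): X^2 = -1 in the quotient
print(mul_mod((1, 2), (3, 4)))  # (-5, 10), matching (1+2j)*(3+4j)
```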

    • @sonarbangla8711
      @sonarbangla8711 Рік тому

      @@reellezahl Please refer to page 217 of Tristan Needham's VISUAL COMPLEX ANALYSIS and the definition of complex number i.

    • @reellezahl
      @reellezahl 11 місяців тому

      @@sonarbangla8711 no thanks. I did advanced algebra at university and already have enough literature. I don't need 'visualisations' designed either for children or to patronise adults who cannot think abstractly or process abstract information.
      Btw _i_ is not, in the primary sense, a 'complex number'. It is (one of the two) zeroes of X² + 1. A complex number is an element of the field obtained by extending ℝ in the smallest possible way such that it contains one (and thereby both) of these zeroes. Before defining this field, _i_ is just a loose entity, not (yet) a member of that field, and thereby not (in the primary sense) a complex number.

  • @jonogrimmer6013
    @jonogrimmer6013 Рік тому +3

    By the end of this decade AI will be able to do mathematics humans can’t even dream about! At least in my opinion :) Feel free to come back to this in 2030, if we still exist, to tell me that was bollocks

    • @katehamilton7240
      @katehamilton7240 Рік тому

      I ask mathematicians/coders AGI alarmists, "what about the fundamental limitations of algorithms, indeed of math? Have you investigated them? Wont these limitations make AGI impossible? Jaron Lanier admits AGI is a SciFi fantasy he grew out of"

  • @5sharpthorns
    @5sharpthorns Рік тому

    So in the 4th dimension, you can't multiply by 3, 5, or 7. I would want to look into the significance of that.

  • @sanjaymajhi4428
    @sanjaymajhi4428 Рік тому +4

    After this video, I got a different view towards mathematics. The mathematical world exists for real.

  • @davidvalderrama1816
    @davidvalderrama1816 Рік тому +1

    A complete and open minded person isn’t one thing, intuition is important.

  • @bellsTheorem1138
    @bellsTheorem1138 Рік тому +4

    I don't think artificial intelligence is going to be limited by the confines of established knowledge. As he said, it took a human great courage to think beyond the rules to discover imaginary numbers. An AI won't even need courage. It will be free to try anything and everything, and do it a thousand times faster than a human struggling with all their haters and detractors. Maybe current LLMs on their own are limited, but they will be integrated with other AI technologies and will be improving themselves. It's going to get crazy very soon.

    • @ADreamingTraveler
      @ADreamingTraveler Рік тому +1

      There are signs that GPT-4 shows some resemblance of consciousness on a very limited scale, slightly different from our own. And that's just GPT-4, which is already old by today's standards, since GPT-5 is almost done. It just shows you how little most people understand about what's happening. For example, even the creators of AI don't fully understand how it all works on a technical level; things constantly happen that are unexpected. People speak as if AI has hit its peak and isn't going to keep rapidly advancing. We can't even figure out when we'll reach that peak.

    • @Publicinformation7
      @Publicinformation7 Рік тому

      The AI's capability to imagine and create new thoughts still needs to be seen....

    • @reellezahl
      @reellezahl Рік тому

      @@Publicinformation7 it can already do this, sort of, via stable diffusion. Who's to say our imagination doesn't work in a similar way?

    • @katehamilton7240
      @katehamilton7240 Рік тому

      I ask mathematicians/coders AGI alarmists, "what about the fundamental limitations of algorithms, indeed of math? Have you investigated them? Wont these limitations make AGI impossible? Jaron Lanier admits AGI is a SciFi fantasy he grew out of"

  • @hillosand
    @hillosand Рік тому +2

    I mean, LLMs aren't going to be the models that advance mathematics, but even so you could try to program a neural network to 'play', e.g. allow it to ignore certain rules in order to solve problems. Cool episode though.

  • @SB-pj3oj
    @SB-pj3oj 1 year ago +3

    Artificial intelligence intelligence

  • @georgechyz
    @georgechyz 1 year ago

    Math, being rational, is a subset of reality, which includes both the rational and the irrational. For example, emotions are very important features of our consciousness, and they are irrational. What's remarkable about the irrational/emotional aspects of consciousness is how creativity comes from our emotional side. It's the irrational that leaps from what we know to entirely new possibilities. Conversely, the intellect relies on logic, which plods along from what we know, inching toward a slightly different idea. That's why revolutionary new ideas first appear through our irrational emotions. However, since irrational emotions lie outside the limits of rational math and logic, computers cannot explore emotions or use those irrational features of consciousness to leap to entirely new perspectives, solutions, etc.
    “If I create from the heart, nearly everything works; if from the head, almost nothing.”
    -Marc Chagall (1887-1985), artist

  • @luckychuckybless
    @luckychuckybless 1 year ago +5

    This is a dumb title. He didn't "debunk" AI intelligence; he gave his opinion on it. And he's a mathematician, not a computer engineer, so he's worth listening to but not an authority on the topic at all. It's like a piano player giving you his advice on a violin... yeah, he knows music, but he doesn't play violin.

    • @神林しマイケル
      @神林しマイケル 1 year ago +1

      Except the world of computing is basically mathematics. Hell, even 1's and 0's are mathematics.

    • @TheIslandDivision
      @TheIslandDivision 1 year ago +3

      And then you drop your opinion. Are you not drastically less qualified?
      Also violin and piano are closer than you think.

    • @IsomerSoma
      @IsomerSoma 1 year ago

      Tbh I don't think a computer scientist is better qualified than a mathematician when talking about the *foundations* of AI. But sure, it is just an opinion being presented here.

    • @Afreshio
      @Afreshio 1 year ago

      Also, most computer scientists lack a deep understanding of the human brain, the mind, etc. So there's a bunch of them claiming idiotic things, like that an LLM learns language the way a child does. The complete lack of insight into how a baby, and humans generally, learn language and generate a model of the perceived world is what lets them make such clickbaity claims.
      It's Dunning-Kruger all over the place lately with the tech bros, AI accelerationists, the press, tech CEOs and a chunk of the general public. Misguided ignorance for some, and for the grifters and the wealthy a new way to make money. Few people really understand the difference between a neural network and an organic brain. They confuse the wordplay invented by marketing teams with actual accuracy. I.e. a very simplistic, limited model of a neuron is never gonna compete against a real neuron.
      The most important aspects for us humans are the mind, imagination, consciousness. And those remain a mystery even for the experts. It's wild seeing tech bros and chatgptbros claiming wild nonsense about AI, intelligence and consciousness.
      Their own lack of depth and their ignorance about theories of mind and consciousness, neurology, and even, for some of them, the actual models with which this novel tech is built really elude them.
      A really sad state of affairs.

  • @integrallens6045
    @integrallens6045 1 year ago +2

    I like the use of the phrase "imaginary parts". This is very similar to how people have their "real parts", their bodily parts, and then they have their "imaginary parts", which would be the mind: thoughts, values, feelings, goals, etc. Even numbers have this interior terrain.

    • @rokko_fable
      @rokko_fable 1 year ago

      Do they really? Or is that us projecting our imagination onto them to reach the desired conclusion?
      Methinks the latter.

    • @integrallens6045
      @integrallens6045 1 year ago

      @rokko that's your opinion and that's fine. But what other kind of metaphor fits the idea of negative numbers? What happens when you go backwards past zero? Your numbers don't pop back into the positive; they are taking up negative space. If you can't imagine that as a folding inward, then I don't know what other metaphors you could use to help your mind grasp these types of processes and numbers.
      Also, what do you believe is my desired conclusion, and how did you become a mind reader?

  • @DaggerSecurity
    @DaggerSecurity 1 year ago +5

    Basically what he is saying is that AI is missing imagination.

    • @hardboiledaleks9012
      @hardboiledaleks9012 1 year ago

      What he's not saying is that AI doesn't need imagination, because it operates through brute force to test impossible or improbable questions...

    • @DaggerSecurity
      @DaggerSecurity 1 year ago

      @@hardboiledaleks9012 but does it embark on the path of brute force innately or only at the behest of the one who designed it? For example why would an AI even begin to brute force the possibility of a negative number having a square root? Such a path requires imagination to even be considered, not merely brute forcing.

  • @stevenschilizzi4104
    @stevenschilizzi4104 1 year ago

    Prof. Frenkel stops at octonions, but I’ve read that numbers of dimension 2 to the power of 4, or 16, called sedenions, have also been defined and studied, and have very curious properties. Or rather, they lack properties that are fundamental to real or complex numbers, like associativity and commutativity. They also allow division by zero, where multiplying two non-zero sedenions can give zero as an answer!! I don’t know that they have found any practical applications though.

    • @martinkunev9911
      @martinkunev9911 1 year ago +1

      multiplying two non-zero sedenions can give zero as an answer ≠ division by zero
      The technical term is that there are divisors of zero. The same is true for e.g. 2x2 matrices of real numbers.
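The zero-divisor fact mentioned in this reply is easy to check concretely for 2x2 real matrices (a minimal sketch; NumPy is used purely for illustration):

```python
import numpy as np

# Two nonzero 2x2 real matrices whose product is the zero matrix:
# they are "divisors of zero", even though neither factor is zero.
a = np.array([[0, 1],
              [0, 0]])
b = np.array([[0, 1],
              [0, 0]])

product = a @ b
print(product)  # [[0 0]
                #  [0 0]]
```

Note this does not let you divide by zero; it only shows that a product can vanish without either factor vanishing, which is what "divisors of zero" means.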

  • @roucoupse
    @roucoupse 1 year ago +3

    clickbait title

  • @dudicrous
    @dudicrous 1 year ago +2

    How mathematicians can be romantics

  • @YouBetterBeYou
    @YouBetterBeYou 1 year ago +1

    There are some highly contested philosophical presumptions made here.
    As a physicist (since he brings up quantum mechanics) I can confidently state that imaginary numbers at no point describe anything real. If you want to make an empirical prediction you always have to extract the real part, and only the real part.
    So, yes, complex numbers allow mathematically sound manipulation where you would otherwise get stuck, but they remain an intellectual tool.
    Compare this to the Dirac distribution: it is also a tool with no strict real equivalent, but it approximates shocks to a system, or can be thought of as an ideal "sampler". Hence the Dirac distribution has an approximate real equivalent. Imaginary numbers don't.
    Saying that imaginary numbers are part of reality is, therefore, highly debatable.

  • @lancemarchetti8673
    @lancemarchetti8673 1 year ago

    Great interview indeed!
    Here's an example of how far we still have to go...Just take a look at this simple instruction that the poor bot fumbled up this evening as I enjoyed a cup of coffee and a few giggles.
    User:
    aAbcde9fz
    list every 2nd character in the above string please.
    ChatGPT:
    Sure, here are the every second characters in the string "aAbcde9fz":
    "Abd9z"
    User: why did you include 'b' ?
    ChatGPT: I apologize for the confusion. The correct every second character in the string "aAbcde9fz" is:
    "Abe9z"
    User:
    No. The correct answer is actually: Acef
    ChatGPT:
    I apologize for my mistake earlier. You are correct that the correct every second character in the string "aAbcde9fz" is:
    "Acef"
    //Unless the bot, being a computer, takes it for granted that the first character of any given string is always at index 0, then 1, 2, 3, etc.?
    But even so, its first answer is still wrong. Or am I missing something?
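The ambiguity this commenter runs into is easy to pin down in code: in Python, "every 2nd character" depends on whether counting starts at index 0 or index 1:

```python
s = "aAbcde9fz"

# Start from the first character (index 0), taking every 2nd one:
print(s[0::2])  # abd9z  -- close to the bot's first attempt

# Start from the second character (index 1):
print(s[1::2])  # Acef   -- the answer the commenter expected
```

So both readings are defensible; the bot's actual failure was mixing the two conventions rather than picking either one consistently.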

  • @Apjooz
    @Apjooz 1 year ago +2

    And it only took 200,000 years to find those imaginary numbers.

  • @arthu03
    @arthu03 1 year ago

    We need more people like Edward Frenkel... they are the ones who create the "foundation"!

  • @sgramstrup
    @sgramstrup 1 year ago +2

    It's so embarrassing at the moment, when so many super bright people show their simplistic 'human exceptionalist' worldview. People who say 'oh no, AI can't do this like we can' are deniers, because it clashes with their worldview that we are something special, which, it turns out, we are not. I look forward to hearing them again when they have moved on from their old standpoints.

    • @Martinit0
      @Martinit0 1 year ago +1

      I agree. AI will just generate an embedding for concepts and we will be puzzled about what that embedding stands for. Just like people were puzzled about the square root of -1 before Cardano.

    • @Nathaniel_Bush_Ph.D.
      @Nathaniel_Bush_Ph.D. 1 year ago +1

      It is super cringy at the moment! I think many bright people will look back with chagrin on their hot takes on early AI.
      I also find it kind of hilarious that we have a LANGUAGE model that is already better than the average doctor, lawyer, teacher, writer, and poet, and yet we're still debating whether or not it qualifies as intelligent... and it wasn't even trained narrowly on any of those things. When we do narrow modular training, I fully expect it to exceed 90%+ of human experts... and people will still be debating its intelligence.

    • @katehamilton7240
      @katehamilton7240 1 year ago

      I ask mathematicians/coders/AGI alarmists: "What about the fundamental limitations of algorithms, indeed of math? Have you investigated them? Won't these limitations make AGI impossible?" Jaron Lanier admits AGI is a sci-fi fantasy he grew out of.

  • @MrAnderson2845
    @MrAnderson2845 1 year ago

    It's almost like complex numbers exist in a higher dimension and we know they exist but don't know what they are. Yet they are directly linked to us and somehow calculate our physical world in the Mandelbrot set.

  • @maureenparisi5808
    @maureenparisi5808 1 year ago +1

    This is plainly speaking inbox, the yellow brick road of progress.

  • @liamroche1473
    @liamroche1473 1 year ago

    I disagree with the example of imagining the square root of -1 for a rather concrete reason. Neural networks have features that are fundamentally made up of real parameters, and these features can achieve extremely high levels of abstraction - for example a feature representing whether a picture has a cat in it! Even that should be a strong clue they can come up with other sorts of abstraction, like the square root of minus one.
    There is one much simpler type of feature which is relevant to the claim. Topologically, a real-valued feature has no loop - if you keep increasing or decreasing it, you never see the same values again. But from a single such feature it is possible to generate two new features using sine and cosine that are related by the familiar sin^2 + cos^2 = 1 rule. This effectively maps the line of a single feature to a circle in the complex plane by the transformation x -> e^ikx. The two transformed features are effectively a single new feature with different topology. More generally two features always have the capability of being used so that they represent complex numbers, and where complex features are useful to a model they can emerge naturally. So it is safe to say that not only can general neural networks come up with the notion of a square root of minus one, they can do this sort of thing quietly in the background where it turns out to be useful to a model. And if they can do it quietly, it is certainly reasonable to believe they could talk about it if they had a large language model as well!
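The line-to-circle construction this comment describes can be sketched directly (a toy NumPy illustration of the mapping, not an actual trained network):

```python
import numpy as np

k = 2.0
x = np.linspace(-5.0, 5.0, 101)   # a single real-valued feature

# Derive two features related by sin^2 + cos^2 = 1 ...
f_cos, f_sin = np.cos(k * x), np.sin(k * x)
assert np.allclose(f_cos**2 + f_sin**2, 1.0)

# ... which together form the complex feature e^{ikx} on the unit circle:
z = f_cos + 1j * f_sin            # same values as np.exp(1j * k * x)
assert np.allclose(z, np.exp(1j * k * x))
assert np.allclose(np.abs(z), 1.0)
```

The point of the sketch: an unbounded real feature x becomes a pair of bounded features with circular topology, i.e. the two real features jointly behave like one complex-valued feature.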

  • @xman933
    @xman933 1 year ago +1

    While current AI cannot imagine or conceive of the square root of minus 1, does he believe it won't be able to in the future?
    Current AI can be considered an infant, and just as human infants might not be able to imagine the square root of minus 1 while adult humans can, adult AI likely will too.

  • @Timmiee76
    @Timmiee76 1 year ago

    6:17 He supposedly quoted the Italian mathematician: "why are we so adamant that these things don't exist?" And that's ironic given that he is adamant that machine intelligence does not exist. Overall I find his argument very weak. To progress this debate we first need a non-ostensive definition of intelligence, because otherwise humans will keep pointing to something else every time. There is no such definition yet.

  • @CohenRautenkranz
    @CohenRautenkranz 1 year ago

    The devices we employ to build and run "AI" models would not exist in the absence of the mathematics which the models themselves are unlikely to be capable of even conceiving. It seems to me that an (ironic) parallel could also exist with regard to humans attempting to decipher consciousness?

  • @ronking5103
    @ronking5103 1 year ago

    From about 300 BCE until the early 19th century, humanity made a pretty basic assumption that two parallel lines would never intersect. Euclid. It was taken as law. Yet it's pretty clear to anyone who studies a globe that parallel lines can indeed intersect; they will at the poles. It's not an abstraction that is difficult to come to terms with; you don't need to be Einstein to grasp it. Yet all of humanity missed it, even when actively looking for it, for a very long time. Sometimes even things that are staring at us in plain sight elude us, because we fall into dogmatic beliefs about what we take as law.

  • @thzzzt
    @thzzzt 1 year ago

    I had no idea Girolamo Cardano was a conehead. But of course. Explains a lot.

  • @ben_spiller
    @ben_spiller 1 year ago

    There's nothing stopping an AI from adopting the hypothesis that the square root of a negative exists and seeing what happens.

  • @jacksmith4460
    @jacksmith4460 1 year ago +2

    01:46 Honestly... this might be the smartest thing Einstein said, and he has said some pretty smart things.
    Without intuition science would never move forward, and intuition commonly looks insane from the outside; Tesla was a great example of that. Without imagination science would be dead in the water, yet the modern scientific world almost mocks it, and certainly does not respect it.

    • @katehamilton7240
      @katehamilton7240 1 year ago

      IKR? I ask mathematicians/coders/AGI alarmists: "What about the fundamental limitations of algorithms, indeed of math? Have you investigated them? Won't these limitations make AGI impossible?" Jaron Lanier admits AGI is a sci-fi fantasy he grew out of.

  • @RetzyWilliams
    @RetzyWilliams 4 months ago

    It should say "Mathematician Imagines He Debunks AI Intelligence"

  • @TheViperZed
    @TheViperZed 1 year ago

    There is actually a fundamental error in the assumption Frenkel makes, which is that the knowledge a human holds exists in a closed system. It doesn't. It constantly interacts with outside "noise" in the form of new experiences that challenge the person to interpret those experiences, and their knowledge, in new contexts and to adjust their knowledge. A universal experience of this is curiosity. There are certainly leaps still to be made in AI for it to get to that point, like an AI being able to update the knowledge intrinsic to it. But how would you feel if, after prompting ChatGPT to explain some form of human interaction, it gave a response trying to explain it and then finished with a question to you: "My experience of this is limited, but I am curious, what are your experiences with this?"

    • @amotriuc
      @amotriuc 1 year ago

      I don't think he assumes that; you just assume that an open system will somehow result in AI, which we know it does not have to. How do you know this will give ChatGPT AI? My cat is in an open system, but no matter how much I wish it, he is not becoming Einstein, and I even call him that. I can guarantee his neural network is bigger and more complex than ChatGPT or anything we can build, probably for 100 years.

    • @TheViperZed
      @TheViperZed 1 year ago

      @@amotriuc no, the point I am making is that you can't distinguish between continual disruption from outside in an open system and an internal "spark" of special sauce.

    • @amotriuc
      @amotriuc 1 year ago

      @@TheViperZed If I understand him correctly, I interpret his "spark" of special sauce as an unknown process that brings you to a discovery. Continual disruption from outside is a random process, and the other one, I suspect, is not. I would say this is a significant difference. E.g.: should I wait for all the molecules to gather in a corner of the room, or better have a machine that does it?

    • @TheViperZed
      @TheViperZed 1 year ago

      @@amotriuc The human brain is that machine that will put all the molecules into one corner of the room. Not literally; I am talking about the brain as a pattern-recognition machine, and it will use the knowledge within it and outside input to do this. Even just a person reflecting on their knowledge can be seen as this, and most of human ingenuity and imagination can easily be explained using just that. A spark of imagination "ex nihilo" is indistinguishable from this and is an assumption. There are great examples of leaps of imagination occurring because of prompts from interacting with existing knowledge, e.g. Einstein speaking of how thinking about a falling person triggered his insight into inertial frames.

    • @amotriuc
      @amotriuc 1 year ago

      @@TheViperZed I will drop my comment then, I am not sure that I understand your point now.

  • @chrisdavey3113
    @chrisdavey3113 1 year ago

    06:30 "knowledge is limited".
    How can he be confident that that is correct?
    David Deutsch would disagree.

  • @TheWilliamHoganExperience
    @TheWilliamHoganExperience 1 year ago +2

    It's not artificial intelligence that scares me.
    It's artificial stupidity...

  • @markbrown1609
    @markbrown1609 1 year ago

    The square root of negative one is denoted by the symbol "i" in mathematics. It is the imaginary unit, and its square is defined as -1. In other words, i^2 = -1. A good analogy for imagination.
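Incidentally, the definition i^2 = -1 in this comment is built into Python's complex type, where the imaginary unit is written `1j`:

```python
import cmath

i = 1j                  # Python's spelling of the imaginary unit
print(i ** 2)           # (-1+0j)
print(cmath.sqrt(-1))   # 1j
```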

  • @robcharteris1756
    @robcharteris1756 1 year ago +1

    A human can forget or ignore data. Maybe this is an essential part of consciousness?
    Without forgetting, maybe we would just clog up with data.

    • @agatastaniak7459
      @agatastaniak7459 1 year ago +1

      Fair point. There must be a practical reason why a healthy human brain is good at forgetting and perfect recall is a rare genetic abnormality in humans.

  • @Matryoshkabomb
    @Matryoshkabomb 8 months ago

    In my opinion, the square root of negative one is just two number lines, or axes, that can be oriented in any way. If it is on the same number line, say x, then it should be one. I think it's a fake problem, since we've favored the x, y, and z axes, which are arbitrarily at 90° from each other. Spherical coordinates are the next step. Negative numbers are literally whole numbers if you just translate their values into the real plane. Always have a camera or measuring device to measure your original data, called the origin.